From report at bugs.python.org Thu Feb 1 00:44:53 2018 From: report at bugs.python.org (Yang Yu) Date: Thu, 01 Feb 2018 05:44:53 +0000 Subject: [New-bugs-announce] [issue32739] collections.deque rotate(n=1) default value not documented Message-ID: <1517463893.34.0.467229070634.issue32739@psf.upfronthosting.co.za> New submission from Yang Yu : https://docs.python.org/3/library/collections.html#collections.deque rotate() works the same as rotate(1). The documentation did not mention the default for n. ---------- assignee: docs at python components: Documentation messages: 311403 nosy: docs at python, yuy priority: normal severity: normal status: open title: collections.deque rotate(n=1) default value not documented type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 1 04:10:58 2018 From: report at bugs.python.org (Vishal Kushwaha) Date: Thu, 01 Feb 2018 09:10:58 +0000 Subject: [New-bugs-announce] [issue32740] test_calendar and test_re fail with unknown locale: UTF-8 in _parse_localename Message-ID: <1517476258.4.0.467229070634.issue32740@psf.upfronthosting.co.za> New submission from Vishal Kushwaha : Fresh build on MacOS 10.13.2 ====================================================================== ERROR: test_locale_flag (test.test_re.ReTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/vikramsingh/Desktop/projects/cpython/Lib/test/test_re.py", line 1520, in test_locale_flag _, enc = locale.getlocale(locale.LC_CTYPE) File "/Users/vikramsingh/Desktop/projects/cpython/Lib/locale.py", line 587, in getlocale return _parse_localename(localename) File "/Users/vikramsingh/Desktop/projects/cpython/Lib/locale.py", line 495, in _parse_localename raise ValueError('unknown locale: %s' % localename) ValueError: unknown locale: UTF-8 ---------------------------------------------------------------------- ====================================================================== ERROR: test_option_locale (test.test_calendar.CommandLineTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/vikramsingh/Desktop/projects/cpython/Lib/test/test_calendar.py", line 838, in test_option_locale lang, enc = locale.getdefaultlocale() File "/Users/vikramsingh/Desktop/projects/cpython/Lib/locale.py", line 568, in getdefaultlocale return _parse_localename(localename) File "/Users/vikramsingh/Desktop/projects/cpython/Lib/locale.py", line 495, in _parse_localename raise ValueError('unknown locale: %s' % localename) ValueError: unknown locale: UTF-8 ---------------------------------------------------------------------- ---------- components: Library (Lib) messages: 311407 nosy: vishalsingh priority: normal severity: normal status: open title: test_calendar and test_re fail with unknown locale: UTF-8 in _parse_localename type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 1 07:08:36 2018 From: report at bugs.python.org (Andrew Svetlov) Date: Thu, 01 Feb 2018 12:08:36 +0000 Subject: [New-bugs-announce] [issue32741] Add asyncio.TimerHandle.when() function Message-ID: <1517486916.43.0.467229070634.issue32741@psf.upfronthosting.co.za> New submission from Andrew Svetlov : Should just return self._when attribute. I need it for testing purposes. 
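To illustrate the request, a rough usage sketch (the when() call is hypothetical since the accessor does not exist yet; today only the private attribute is available):

import asyncio

loop = asyncio.get_event_loop()
handle = loop.call_later(5.0, print, "fired")   # returns an asyncio.TimerHandle

# Today a test has to peek at the private attribute to learn the scheduled time:
print(handle._when - loop.time())               # roughly 5.0, in loop-clock seconds

# With the proposed accessor this would simply become:
#     print(handle.when() - loop.time())
handle.cancel()
loop.close()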
Without the method there is no way to figure out the scheduled wakeup time. An alternative is waiting for the callback, but that approach increases unittest execution time and is not very reliable. I don't like looking into the `handler._when` private property, and uvloop does not expose it. Ned, is it possible to include the feature in Python 3.7? The change is safe and tiny, no backward compatibility problems etc. ---------- components: asyncio messages: 311421 nosy: asvetlov, ned.deily, yselivanov priority: normal severity: normal status: open title: Add asyncio.TimerHandle.when() function versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 1 10:10:53 2018 From: report at bugs.python.org (Peter Bengtsson) Date: Thu, 01 Feb 2018 15:10:53 +0000 Subject: [New-bugs-announce] [issue32742] zipfile extractall needlessly re-wraps ZipInfo instances Message-ID: <1517497853.08.0.467229070634.issue32742@psf.upfronthosting.co.za> New submission from Peter Bengtsson : The ZipFile class has an extractall method [0] that allows you to leave the 'members' argument empty. If empty, 'members' becomes a list of all the *names* of files in the zip. Then it iterates over the names and sends each to `self._extract_member`. But that method needs it to be a ZipInfo object instead of a file name, so it re-wraps it [2]. Instead we can use `self.infolist()` to avoid that re-wrapping inside each `self._extract_member` call. [0] https://github.com/python/cpython/blob/12e7cd8a51956a5ce373aac692ae6366c5f86584/Lib/zipfile.py#L1579 [1] https://github.com/python/cpython/blob/12e7cd8a51956a5ce373aac692ae6366c5f86584/Lib/zipfile.py#L1586 [2] https://github.com/python/cpython/blob/12e7cd8a51956a5ce373aac692ae6366c5f86584/Lib/zipfile.py#L1615-L1616 ---------- components: Library (Lib) messages: 311434 nosy: peterbe priority: normal severity: normal status: open title: zipfile extractall needlessly re-wraps ZipInfo instances type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 1 12:16:51 2018 From: report at bugs.python.org (Dmitry Alimov) Date: Thu, 01 Feb 2018 17:16:51 +0000 Subject: [New-bugs-announce] [issue32743] Typo in hamt.c comments Message-ID: <1517505411.15.0.467229070634.issue32743@psf.upfronthosting.co.za> New submission from Dmitry Alimov : In the comments to the `hamt_node_collision_without` function in the hamt.c module, I think it should be `so convert` instead of `co convert`: ``` if (new_count == 1) { /* The node has two keys, and after deletion the new Collision node would have one. Collision nodes - with one key shouldn't exist, co convert it to a + with one key shouldn't exist, so convert it to a Bitmap node. 
*/ ``` ---------- assignee: docs at python components: Documentation messages: 311452 nosy: delimitry, docs at python, yselivanov priority: normal severity: normal status: open title: Typo in hamt.c comments type: enhancement versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 1 12:43:36 2018 From: report at bugs.python.org (Dmitry Alimov) Date: Thu, 01 Feb 2018 17:43:36 +0000 Subject: [New-bugs-announce] [issue32744] PEP 342 double colons typos in code Message-ID: <1517507016.51.0.467229070634.issue32744@psf.upfronthosting.co.za> New submission from Dmitry Alimov : I've found "double colons" typos in examples: @consumer def jpeg_writer(dirname):: # 1) here fileno = 1 and try: while self.running and self.queue:: # 2) here func = self.queue.popleft() ---------- assignee: docs at python components: Documentation messages: 311458 nosy: delimitry, docs at python priority: normal severity: normal status: open title: PEP 342 double colons typos in code type: enhancement versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 1 14:54:50 2018 From: report at bugs.python.org (Thomas Heller) Date: Thu, 01 Feb 2018 19:54:50 +0000 Subject: [New-bugs-announce] [issue32745] ctypes string pointer fields should accept embedded null characters Message-ID: <1517514890.15.0.467229070634.issue32745@psf.upfronthosting.co.za> New submission from Thomas Heller : ctypes Structure fields of type c_char_p or c_wchar_p used to accept strings with embedded null characters. I noticed that Python 3.6.4 does refuse them. It seems this has been changed in recent version(s). There ARE use-cases for this: The Windows-API OPENFILENAME structure is one example. The Microsoft docs for the lpstrFilter field: """ lpstrFilter Type: LPCTSTR A buffer containing pairs of null-terminated filter strings. The last string in the buffer must be terminated by two NULL characters. """ I have attached a simple script which demonstrates this new behaviour; the output with Python 3.6.4 is this: Traceback (most recent call last): File "nullchars.py", line 8, in t.unicode = u"foo\0bar" ValueError: embedded null character ---------- components: ctypes files: nullchars.py keywords: 3.6regression messages: 311462 nosy: theller priority: normal severity: normal status: open title: ctypes string pointer fields should accept embedded null characters versions: Python 3.6 Added file: https://bugs.python.org/file47420/nullchars.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 2 00:55:26 2018 From: report at bugs.python.org (Terry J. Reedy) Date: Fri, 02 Feb 2018 05:55:26 +0000 Subject: [New-bugs-announce] [issue32746] More misspellings, mostly in source code. Message-ID: <1517550926.68.0.467229070634.issue32746@psf.upfronthosting.co.za> New submission from Terry J. Reedy : This is similar to #32297. I intend to backport, if not too much problem, for the reason Victor gave on the PR 4803: "to reduce conflicts when backporting fixes into Python 3.6." I propose to not backport to Python 2.7, again for the reason Victor gave: "since Python 2.7 is dying and minor fixes are no more backported to 2.7." 
---------- assignee: terry.reedy messages: 311476 nosy: terry.reedy priority: normal severity: normal stage: commit review status: open title: More misspellings, mostly in source code. versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 2 02:49:51 2018 From: report at bugs.python.org (Qian Yun) Date: Fri, 02 Feb 2018 07:49:51 +0000 Subject: [New-bugs-announce] [issue32747] remove trailing spaces in docstring Message-ID: <1517557791.36.0.467229070634.issue32747@psf.upfronthosting.co.za> New submission from Qian Yun : This is a simple PR that removes trailing spaces in docstring, which are found by: grep -R ' \\n\\$' . ---------- assignee: docs at python components: Documentation messages: 311484 nosy: Qian Yun, docs at python priority: normal severity: normal status: open title: remove trailing spaces in docstring versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 2 04:43:39 2018 From: report at bugs.python.org (Andrew Svetlov) Date: Fri, 02 Feb 2018 09:43:39 +0000 Subject: [New-bugs-announce] [issue32748] Improve _asyncio.TaskStepMethWrapper and TaskWakeupMethWrapper reprs Message-ID: <1517564619.34.0.467229070634.issue32748@psf.upfronthosting.co.za> New submission from Andrew Svetlov : Currently both helper classes have no custom tp_repr slot, it leads to autogenerated values. Both helpers are private but in debug mode asyncio loop reports about slow callbacks, the message doesn't point on executed coroutine -- it just prints 'Executing took 0.203 seconds'. The only way to figure out what coroutine is slow is monkey-patching CTask implementation back to PyTask usage. Sure, the method is too obscure for newbies. ---------- components: asyncio messages: 311486 nosy: asvetlov, yselivanov priority: normal severity: normal status: open title: Improve _asyncio.TaskStepMethWrapper and TaskWakeupMethWrapper reprs versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 2 10:16:17 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Fri, 02 Feb 2018 15:16:17 +0000 Subject: [New-bugs-announce] [issue32749] Remove dbm.dumb behavior deprecated in 3.6 Message-ID: <1517584577.48.0.467229070634.issue32749@psf.upfronthosting.co.za> New submission from Serhiy Storchaka : Some behavior of dbm.dumb databases which was different from the behavior of other dbm databases was deprecated in 3.6 (issue21708). Now it is a time to remove it. 
---------- components: Library (Lib) messages: 311497 nosy: serhiy.storchaka priority: normal severity: normal status: open title: Remove dbm.dumb behavior deprecated in 3.6 type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 2 13:37:47 2018 From: report at bugs.python.org (Nick Smith) Date: Fri, 02 Feb 2018 18:37:47 +0000 Subject: [New-bugs-announce] [issue32750] lib2to3 log_error method behavior is inconsistent with documentation Message-ID: <1517596667.28.0.467229070634.issue32750@psf.upfronthosting.co.za> New submission from Nick Smith : The log_error method in refactor.RefactoringTool raises the exception: def log_error(self, msg, *args, **kwds): """Called when an error occurs.""" raise but every usage of it implies that it does not, e.g.: def refactor_string(self, data, name): """Refactor a given input string. Args: data: a string holding the code to be refactored. name: a human-readable name for use in error/log messages. Returns: An AST corresponding to the refactored input stream; None if there were errors during the parse. """ # [..] try: tree = self.driver.parse_string(data) except Exception as err: self.log_error("Can't parse %s: %s: %s", name, err.__class__.__name__, err) return finally: # [..] This is the only explicit conflict I found in the documentation. From looking at the refactor_string function, it seems it should never raise on parse errors. Other uses of it are followed immediately by a return. I'd like to see log_error only log the exception and continue. ---------- components: 2to3 (2.x to 3.x conversion tool) messages: 311507 nosy: soupytwist priority: normal severity: normal status: open title: lib2to3 log_error method behavior is inconsistent with documentation type: behavior versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 2 15:01:15 2018 From: report at bugs.python.org (Nathaniel Smith) Date: Fri, 02 Feb 2018 20:01:15 +0000 Subject: [New-bugs-announce] [issue32751] wait_for(future, ...) should wait for the future (even if a timeout occurs) Message-ID: <1517601675.34.0.467229070634.issue32751@psf.upfronthosting.co.za> New submission from Nathaniel Smith : Currently, if you use asyncio.wait_for(future, timeout=....) and the timeout expires, then it (a) cancels the future, and then (b) returns. This is fine if the future is a Future, because Future.cancel is synchronous and completes immediately. But if the future is a Task, then Task.cancel merely requests cancellation, and it will complete later (or not). In particular, this means that wait_for(coro, ...) can return with the coroutine still running, which is surprising. (Originally encountered by Alex Grönholm, who was using code like async with aclosing(agen): await wait_for(agen.asend(...), timeout=...) and was then confused about why the call to agen.aclose was raising an error complaining that agen.asend was still running. Currently this requires an async_generator based async generator to trigger; with a native async generator, the problem is masked by bpo-32526.) ---------- components: asyncio messages: 311509 nosy: asvetlov, giampaolo.rodola, njs, yselivanov priority: normal severity: normal status: open title: wait_for(future, ...) 
should wait for the future (even if a timeout occurs) versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 2 17:59:06 2018 From: report at bugs.python.org (Paul Pinterits) Date: Fri, 02 Feb 2018 22:59:06 +0000 Subject: [New-bugs-announce] [issue32752] no information about accessing typing.Generic type arguments Message-ID: <1517612346.62.0.467229070634.issue32752@psf.upfronthosting.co.za> New submission from Paul Pinterits : The documentation of the typing module explains how to instantiate generic types, but there is no information about how to extract the type arguments from a generic type. Example: >>> list_of_ints = typing.List[int] >>> >>> # how do we get out of list_of_ints? >>> list_of_ints.??? Through trial and error I've discovered list_of_ints.__args__, which *seems* to be what I'm looking for, but since it's never mentioned in the docs, it's unclear whether this __args__ attribute is an implementation detail or not. Please document the official/intended way to extract type arguments from a Generic. ---------- assignee: docs at python components: Documentation messages: 311520 nosy: Paul Pinterits, docs at python priority: normal severity: normal status: open title: no information about accessing typing.Generic type arguments type: enhancement versions: Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 2 18:07:56 2018 From: report at bugs.python.org (Yarko Tymciurak) Date: Fri, 02 Feb 2018 23:07:56 +0000 Subject: [New-bugs-announce] [issue32753] ssl.SSLError exceptions in test_poplib Message-ID: <1517612876.52.0.467229070634.issue32753@psf.upfronthosting.co.za> New submission from Yarko Tymciurak : Just built v3.7.0b1, and have the following test hangs (see attached). My build is on Ubuntu 16.04; lsb_release -a output: LSB Version: core-9.20160110ubuntu0.2-amd64:core-9.20160110ubuntu0.2-noarch:printing-9.20160110ubuntu0.2-amd64:printing-9.20160110ubuntu0.2-noarch:security-9.20160110ubuntu0.2-amd64:security-9.20160110ubuntu0.2-noarch Distributor ID: Ubuntu Description: Ubuntu 16.04.3 LTS Release: 16.04 Codename: xenial My "usual" config / build process: $ make distclean $ ./configure --enable-shared --enable-loadable-sqlite-extensions --with-system-ffi --with-ensurepip=upgrade --enable-optimizations $ make -j I have been building 3.7 weekly, from master, and I've never seen anything like this before. ---------- components: asyncio files: v3.7.0b1-test-failure.txt messages: 311521 nosy: asvetlov, yarkot, yselivanov priority: normal severity: normal status: open title: ssl.SSLError exceptions in test_poplib type: crash versions: Python 3.7 Added file: https://bugs.python.org/file47421/v3.7.0b1-test-failure.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 2 18:29:02 2018 From: report at bugs.python.org (Alexander Mohr) Date: Fri, 02 Feb 2018 23:29:02 +0000 Subject: [New-bugs-announce] [issue32754] feature request: asyncio.gather/wait cancel children on first exception Message-ID: <1517614142.6.0.467229070634.issue32754@psf.upfronthosting.co.za> New submission from Alexander Mohr : currently gather/wait allow you to return on the first exception and leave the children executing. 
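For context: with the default return_exceptions=False, gather() propagates the first exception but leaves the sibling tasks running. A wrapper that adds the requested cancellation could look roughly like this (a sketch only, not the gist referenced below):

import asyncio

async def gather_cancel_on_error(*coros):
    """Like gather(), but cancel the remaining tasks as soon as one fails."""
    loop = asyncio.get_event_loop()
    tasks = [loop.create_task(c) for c in coros]
    try:
        return await asyncio.gather(*tasks)
    except Exception:
        for t in tasks:
            t.cancel()   # stop the siblings instead of leaving them running
        raise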
A very common use case that I have is launching multiple tasks, and if any of them fail, then all should fail... otherwise the other tasks would continue running w/o anyone listening for the results. To accomplish this I wrote a method like the following: https://gist.github.com/thehesiod/524a1f005d0f3fb61a8952f272d8709e. I think it would be useful to many others, perhaps as an optional parameter to each of these methods. What do you guys think? ---------- components: asyncio messages: 311527 nosy: asvetlov, thehesiod, yselivanov priority: normal severity: normal status: open title: feature request: asyncio.gather/wait cancel children on first exception versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 3 05:03:48 2018 From: report at bugs.python.org (=?utf-8?b?0K7RgNC40Lkg0J/Rg9GF0LDQu9GM0YHQutC40Lk=?=) Date: Sat, 03 Feb 2018 10:03:48 +0000 Subject: [New-bugs-announce] [issue32755] Several cookies with the same name get intermixed Message-ID: <1517652228.92.0.467229070634.issue32755@psf.upfronthosting.co.za> New submission from Юрий Пухальский : I'm using Python 3.5.4. The site gives me two headers: I'm using aiohttp, which iterates over the headers and, if one is Set-Cookie, calls SimpleCookie.load(). The latter maintains an internal dict keyed by cookie name. So that's what happens: first we add a dict entry with LOGIN_SESSION=deleted and a phony expiration date. The next cookie, the valid one, goes into the same dict entry and updates the value to the right one, but the expiration date remains in the past. The result is that this cookie is not used. I don't know the good way of handling it. Maybe clear the cookie fields before updating the dict? Or is this behaviour intended? I think the situation itself is wrong, the site shouldn't be sending this, but how to cope with it? ---------- components: Extension Modules messages: 311544 nosy: Юрий Пухальский 
priority: normal severity: normal status: open title: Several cookies with the same name get intermixed type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 3 07:31:44 2018 From: report at bugs.python.org (Yauhen) Date: Sat, 03 Feb 2018 12:31:44 +0000 Subject: [New-bugs-announce] [issue32756] argparse: parse_known_args: raising exception on unknown arg following known one Message-ID: <1517661104.06.0.467229070634.issue32756@psf.upfronthosting.co.za> New submission from Yauhen : steps to reproduce: import argparse import sys parser = argparse.ArgumentParser(prog=sys.argv[0], add_help=False) parser.add_argument('-a', action='store_true') parsed_args, unknown_args = parser.parse_known_args(sys.argv[1:]) print(parsed_args) print(unknown_args) Expected result: $ python arparse_test.py -ab Namespace(a=True) ['b'] Actual result: $ python arparse_test.py -ab usage: arparse_test.py [-a] arparse_test.py: error: argument -a: ignored explicit argument 'b' ---------- components: Library (Lib) messages: 311546 nosy: actionless priority: normal pull_requests: 5345 severity: normal status: open title: argparse: parse_known_args: raising exception on unknown arg following known one versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 3 12:09:12 2018 From: report at bugs.python.org (hadimene) Date: Sat, 03 Feb 2018 17:09:12 +0000 Subject: [New-bugs-announce] [issue32757] Python 2.7 : Buffer Overflow vulnerability in exec() function Message-ID: <1517677752.38.0.467229070634.issue32757@psf.upfronthosting.co.za> New submission from hadimene : Hello ! Recently while debugging my python code I discovered an stack-based Buffer overflow Vulnerability in Python 2.7 and lower versions . This vulnerability is caused by exec() builtin function when we create "recursive" function using exec() ... Example : We want to Print "hello World !" 
str and we encode print "hello world" ) using chr() or unichr() print "hello World " becomes exec(chr(112)+chr(114)+chr(105)+chr(110)+chr(116)+chr(40)+chr(39)+chr(104)+chr(101)+chr(108)+chr(108)+chr(111)+chr(32)+chr(119)+chr(111)+chr(114)+chr(108)+chr(100)+chr(32)+chr(33)+chr(32)+chr(39)+chr(41)+chr(10)+chr(35)) and if we re-encode the result : exec() the result would be exec(chr(101)+chr(120)+chr(101)+chr(99)+chr(40)+chr(99)+chr(104)+chr(114)+chr(40)+chr(49)+chr(49)+chr(50)+chr(41)+chr(43)+chr(99)+chr(104)+chr(114)+chr(40)+chr(49)+chr(49)+chr(52)+chr(41)+chr(43)+chr(99)+chr(104)+chr(114)+chr(40)+chr(49)+chr(48)+chr(53)+chr(41)+chr(43)+chr(99)+chr(104)+chr(114)+chr(40)+chr(49)+chr(49)+chr(48)+chr(41)+chr(43)+chr(99)+chr(104)+chr(114)+chr(40)+chr(49)+chr(49)+chr(54)+chr(41)+chr(43)+chr(99)+chr(104)+chr(114)+chr(40)+chr(52)+chr(48)+chr(41)+chr(43)+chr(99)+chr(104)+chr(114)+chr(40)+chr(51)+chr(57)+chr(41)+chr(43)+chr(99)+chr(104)+chr(114)+chr(40)+chr(49)+chr(48)+chr(52)+chr(41)+chr(43)+chr(99)+chr(104)+chr(114)+chr(40)+chr(49)+chr(48)+chr(49)+chr(41)+chr(43)+chr(99)+chr(104)+chr(114)+chr(40)+chr(49)+chr(48)+chr(56)+chr(41)+chr(43)+chr(99)+chr(104)+chr(114)+chr(40)+chr(49)+chr(48)+chr(56)+chr(41)+chr(43)+chr(99)+chr(104)+chr(114)+chr(40)+chr(49)+chr(49)+chr(49)+chr(41)+chr(43)+chr(99)+chr(104)+chr(114)+chr(40)+chr(51)+chr(50)+chr(41)+chr(43)+chr(99)+chr(104)+chr(114)+chr(40)+chr(49)+chr(49)+chr(57)+chr(41)+chr(43)+chr(99)+chr(104)+chr(114)+chr(40)+chr(49)+chr(49)+chr(49)+chr(41)+chr(43)+chr(99)+chr(104)+chr(114)+chr(40)+chr(49)+chr(49)+chr(52)+chr(41)+chr(43)+chr(99)+chr(104)+chr(114)+chr(40)+chr(49)+chr(48)+chr(56)+chr(41)+chr(43)+chr(99)+chr(104)+chr(114)+chr(40)+chr(49)+chr(48)+chr(48)+chr(41)+chr(43)+chr(99)+chr(104)+chr(114)+chr(40)+chr(51)+chr(50)+chr(41)+chr(43)+chr(99)+chr(104)+chr(114)+chr(40)+chr(51)+chr(51)+chr(41)+chr(43)+chr(99)+chr(104)+chr(114)+chr(40)+chr(51)+chr(50)+chr(41)+chr(43)+chr(99)+chr(104)+chr(114)+chr(40)+chr(51)+chr(57)+chr(41)+chr(43)+chr(99)+chr(104)+chr(114)+chr(40)+chr(52)+chr(49)+chr(41)+chr(43)+chr(99)+chr(104)+chr(114)+chr(40)+chr(49)+chr(48)+chr(41)+chr(43)+chr(99)+chr(104)+chr(114)+chr(40)+chr(51)+chr(53)+chr(41)+chr(41)+chr(35)) If you do this manipulation 6-7 times and you run the encoded script then the Python Interpreter program will crash with a Segmentation Fault as error : (https://lepetithacker.files.wordpress.com/2018/01/capture-dc3a9cran-2018-01-31-191359.png) We can check the Segmentation Fault using gdb ( GNU Debugger ) https://lepetithacker.files.wordpress.com/2018/01/capture-dc3a9cran-2018-01-31-202241.png ) To get an Segmentation Fault error you can just run poc.py ! Conclusion In my opinion , to patch this vulnerability developers need to give more memory/buffer to the exec() arguments , and verify if the buffer can contains exec() arguments in integrality without any overflow ! 
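For reference, the repeated re-encoding described above can be generated mechanically; a sketch of the construction follows (this is not the attached poc.py):

# Build the nested exec(chr(...)+chr(...)+...) payload described above.
# Each round wraps the previous source in another exec() of chr() concatenations,
# so the '+' chain the compiler must recurse through grows enormously.
src = 'print("hello world !")'
for _ in range(6):                             # "6-7 times", as in the report
    src = 'exec(' + '+'.join('chr(%d)' % ord(c) for c in src) + ')'
# exec(src)   # left commented out: on Python 2.7 compiling the huge '+' chain
#             # overflows the C stack and the interpreter segfaults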
An attacker could control the memory of a server written in Python if the builtin function exec() is used and the Python version of the server is 2.7 or lower (every version of Python 2 could be vulnerable, like Python 2.9, but I haven't tried yet). ---------- components: Interpreter Core files: poc.py messages: 311561 nosy: hadimene priority: normal severity: normal status: open title: Python 2.7 : Buffer Overflow vulnerability in exec() function type: security versions: Python 2.7 Added file: https://bugs.python.org/file47422/poc.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 3 13:22:51 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 03 Feb 2018 18:22:51 +0000 Subject: [New-bugs-announce] [issue32758] Stack overflow when parse long expression to AST Message-ID: <1517682171.17.0.467229070634.issue32758@psf.upfronthosting.co.za> New submission from Serhiy Storchaka : Python 2 can crash when compiling a long expression. >>> x = eval('""' + '+chr(33)'*100000) Segmentation fault (core dumped) This was fixed in Python 3. RecursionError is raised now. >>> x = eval('""' + '+chr(33)'*100000) Traceback (most recent call last): File "<stdin>", line 1, in <module> RecursionError: maximum recursion depth exceeded during compilation >>> x = eval('+chr(33)'*1000000) Traceback (most recent call last): File "<stdin>", line 1, in <module> RecursionError: maximum recursion depth exceeded during compilation But compiling to AST can still crash. >>> import ast >>> x = ast.parse('+chr(33)'*1000000) Segmentation fault (core dumped) ---------- components: Interpreter Core messages: 311568 nosy: benjamin.peterson, brett.cannon, ncoghlan, serhiy.storchaka, yselivanov priority: normal severity: normal status: open title: Stack overflow when parse long expression to AST type: crash versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 3 16:14:22 2018 From: report at bugs.python.org (OO O) Date: Sat, 03 Feb 2018 21:14:22 +0000 Subject: [New-bugs-announce] [issue32759] multiprocessing.Array do not release shared memory Message-ID: <1517692462.01.0.467229070634.issue32759@psf.upfronthosting.co.za> New submission from OO O : OS: Win10 / 8.1 Python: 3.5 / 3.6 My program uses mp.Array to share huge data, but suffers from out-of-memory errors after running for a while. Windows task manager didn't show which process uses that huge memory, and pympler showed nothing for my Python memory usage either. So I used RamMap to check, and it shows a huge amount of shared memory in use. I can reproduce the case with this simple test code: #------------------------------------------------------- import numpy as np import multiprocessing as mp import gc def F (): a = mp.Array ( 'I', 1800000000, lock = False ) # F () gc.collect () #------------------------------------------------------- No matter how hard I tried, the memory is not released. I put what I tried in the attachment picture. 
---------- components: Windows files: result.jpg messages: 311571 nosy: OO O, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: multiprocessing.Array do not release shared memory type: resource usage versions: Python 3.6 Added file: https://bugs.python.org/file47424/result.jpg _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 3 16:24:00 2018 From: report at bugs.python.org (JamesDinh) Date: Sat, 03 Feb 2018 21:24:00 +0000 Subject: [New-bugs-announce] [issue32760] [Python Shell command issue] Message-ID: <1517693040.31.0.467229070634.issue32760@psf.upfronthosting.co.za> New submission from JamesDinh : Hi Python dev. team, I would like to report below error: 1) Tittle: Running Linux shell command from python file always leads to reset config error. 2) Environment: + Linux distro: Both Ubuntu 16.04 64b and Fedora 25 happen this issue + Python: Python 2.7.12 (default, Dec 4 2017, 14:50:18) [GCC 5.4.0 20160609] on linux2 3) Reproduce route: These commands can be run normally from Linux terminal: [I use buildroot for my embedded project] ./configure O=output project_name debug initramfs nofirewall make O=output But when I tried to call them from Python file, the configure command always lead to Restart config... - and all the new configuration values are discarded. For your information, I tried these options: a) spkConfigureCmd = ["./configure","O=output", "project_name","debug","initramfs","nofirewall"] subprocess.check_call(spkConfigureCmd) spkBuildCmd = ["make","O=output"] subprocess.check_call(spkBuildCmd) b) os.system("./configure O=output project_name debug initramfs nofirewall && make O=output") c) fid = open('ax2spkbuild.sh','w') fid.write('./configure O=output project_name debug initramfs nofirewall\n') fid.write('make O=output\n') fid.close() os.system('chmod +x ax2spkbuild.sh') os.system('./ax2spkbuild.sh') Actually I tried with another simple command like 'pwd', 'cat', 'echo' and they are working well. I wonder how come Python executes the Linux shell commands, which are slightly different to the Terminal typed commands. ---------- components: Build messages: 311572 nosy: JamesDinhBugPython priority: normal severity: normal status: open title: [Python Shell command issue] type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 3 17:24:20 2018 From: report at bugs.python.org (Raymond Hettinger) Date: Sat, 03 Feb 2018 22:24:20 +0000 Subject: [New-bugs-announce] [issue32761] IDLE Keymap for Cntl-A Message-ID: <1517696660.49.0.467229070634.issue32761@psf.upfronthosting.co.za> New submission from Raymond Hettinger : The default keymap for Cntl-A should be , the same as Cntl-KeyLeft. This is consistent with how Cntl-A behaves at the bash prompt and in Emacs. >>> print('Hello World') ^--- Cntl-A should take you here ^------- Cntl-A currently takes you here (which is never helpful). 
---------- assignee: terry.reedy components: IDLE messages: 311575 nosy: rhettinger, terry.reedy priority: normal severity: normal status: open title: IDLE Keymap for Cntl-A _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 4 06:55:23 2018 From: report at bugs.python.org (Andrew Svetlov) Date: Sun, 04 Feb 2018 11:55:23 +0000 Subject: [New-bugs-announce] [issue32762] Choose protocol implementation on transport.set_protocol() Message-ID: <1517745323.72.0.467229070634.issue32762@psf.upfronthosting.co.za> New submission from Andrew Svetlov : New buffered transports was introduced in Python 3.7. Actual transport implementation (get_buffer() or data_received()) is determined in transport constructor. Protocol can be changed by `set_protocol()` method, the implementation should be reselected again. Both selector-based and proactor transports are affected. ---------- components: asyncio messages: 311598 nosy: asvetlov, yselivanov priority: normal severity: normal status: open title: Choose protocol implementation on transport.set_protocol() versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 4 09:30:37 2018 From: report at bugs.python.org (Boss Kwei) Date: Sun, 04 Feb 2018 14:30:37 +0000 Subject: [New-bugs-announce] [issue32763] write() method in Transport should not buffer data Message-ID: <1517754637.18.0.467229070634.issue32763@psf.upfronthosting.co.za> New submission from Boss Kwei : write() method implemented in https://github.com/python/cpython/blob/master/Lib/asyncio/selector_events.py#L830 is not stable in somecases. If this method was called too quickly, separate data will be packed and sent in same tcp package, which may be considered as unexpected behavior. ---------- components: asyncio messages: 311600 nosy: Boss Kwei, asvetlov, yselivanov priority: normal severity: normal status: open title: write() method in Transport should not buffer data versions: Python 3.4, Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 4 11:01:42 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 04 Feb 2018 16:01:42 +0000 Subject: [New-bugs-announce] [issue32764] Popen doesn't work on Windows when args is a list Message-ID: <1517760102.46.0.467229070634.issue32764@psf.upfronthosting.co.za> New submission from Serhiy Storchaka : test_subprocess is failing on Windows. C:\py\cpython3.7>./python -m test -uall -v -m test_nonexisting_with_pipes test_subprocess Running Debug|Win32 interpreter... == CPython 3.7.0b1+ (heads/3.7:1a0239e, Feb 4 2018, 16:19:37) [MSC v.1911 32 bit (Intel)] == Windows-10-10.0.16299-SP0 little-endian == cwd: C:\py\cpython3.7\build\test_python_11092 == CPU count: 2 == encodings: locale=cp1251, FS=utf-8 Run tests sequentially 0:00:00 [1/1] test_subprocess test_nonexisting_with_pipes (test.test_subprocess.ProcessTestCase) ... FAIL test_nonexisting_with_pipes (test.test_subprocess.ProcessTestCaseNoPoll) ... 
skipped 'Test needs selectors.PollSelector' ====================================================================== FAIL: test_nonexisting_with_pipes (test.test_subprocess.ProcessTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\py\cpython3.7\lib\test\test_subprocess.py", line 1194, in test_nonexisting_with_pipes self.assertEqual(stderr, "") AssertionError: 'Traceback (most recent call last):\n Fil[923 chars]le\n' != '' Diff is 965 characters long. Set self.maxDiff to None to see it. ---------------------------------------------------------------------- Ran 2 tests in 0.171s FAILED (failures=1, skipped=1) test test_subprocess failed test_subprocess failed 1 test failed: test_subprocess Total duration: 391 ms Tests result: FAILURE Here stderr is: Traceback (most recent call last): File "C:\py\cpython3.7\lib\subprocess.py", line 1101, in _execute_child args = os.fsdecode(args) # os.PathLike -> str File "C:\py\cpython3.7\\lib\os.py", line 821, in fsdecode filename = fspath(filename) # Does type-checking of `filename`. TypeError: expected str, bytes or os.PathLike object, not list During handling of the above exception, another exception occurred: Traceback (most recent call last): File "", line 16, in File "C:\py\cpython3.7\lib\subprocess.py", line 756, in __init__ restore_signals, start_new_session) File "C:\py\cpython3.7\lib\subprocess.py", line 1104, in _execute_child args[0] = os.fsdecode(args[0]) # os.PathLike -> str File "C:\py\cpython3.7\\lib\os.py", line 821, in fsdecode filename = fspath(filename) # Does type-checking of `filename`. TypeError: expected str, bytes or os.PathLike object, not tuple In _execute_child args is passed to os.fsdecode() unless it is a string. In this case args is a list. os.fsdecode() doesn't accept a list. The regression was introduced in issue31961. ---------- components: Library (Lib), Windows messages: 311603 nosy: Phaqui, gregory.p.smith, paul.moore, serhiy.storchaka, steve.dower, tim.golden, zach.ware priority: normal severity: normal stage: needs patch status: open title: Popen doesn't work on Windows when args is a list type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 4 11:55:44 2018 From: report at bugs.python.org (Cheryl Sabella) Date: Sun, 04 Feb 2018 16:55:44 +0000 Subject: [New-bugs-announce] [issue32765] IDLE: Update configdialog docstrings to reflect extension integration Message-ID: <1517763344.13.0.467229070634.issue32765@psf.upfronthosting.co.za> New submission from Cheryl Sabella : The layout of the general tab changed with #27099, but the docstrings weren't updated with the new widgets. ---------- assignee: terry.reedy components: IDLE messages: 311606 nosy: csabella, terry.reedy priority: normal severity: normal status: open title: IDLE: Update configdialog docstrings to reflect extension integration type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 4 12:57:24 2018 From: report at bugs.python.org (John Hossbach) Date: Sun, 04 Feb 2018 17:57:24 +0000 Subject: [New-bugs-announce] [issue32766] 4.7.7. 
Function Annotations Message-ID: <1517767044.74.0.467229070634.issue32766@psf.upfronthosting.co.za> New submission from John Hossbach : https://docs.python.org/3.5/tutorial/controlflow.html#function-annotations The end of the first paragraph states, "The following example has a positional argument, a keyword argument, and the return value annotated:" However, the only function call is f('spam'), which has a single positional argument. I believe the author was referencing the output of print("Annotations:", f.__annotations__) which was: Annotations: {'ham': <class 'str'>, 'return': <class 'str'>, 'eggs': <class 'str'>} and then confused it with 4.7.2. Keyword Arguments (https://docs.python.org/3.5/tutorial/controlflow.html#keyword-arguments), where it points out that keyword arguments follow positional arguments. However, the distinction between a positional argument and a keyword argument is made at the function CALL, not the function DEFINITION, since any argument can be either positional or keyword, depending on how it is referenced. Moreover, the last sentence in 4.7.2. Keyword Arguments points out that the order of unsorted sequences is undefined, which would then explain why 'return' appears in the middle here instead of at the end. ---------- assignee: docs at python components: Documentation messages: 311612 nosy: John Hossbach, docs at python priority: normal severity: normal status: open title: 4.7.7. Function Annotations versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 4 14:27:00 2018 From: report at bugs.python.org (Tim Peters) Date: Sun, 04 Feb 2018 19:27:00 +0000 Subject: [New-bugs-announce] [issue32767] Mutating a list while iterating: clarify the docs Message-ID: <1517772420.88.0.467229070634.issue32767@psf.upfronthosting.co.za> New submission from Tim Peters : This has come up repeatedly, and the docs should be updated to resolve it: https://stackoverflow.com/questions/48603998/python-iterating-over-a-list-but-i-want-to-add-to-that-list-while-in-the-loop/48604036#48604036 Seemingly the only relevant documentation is in the reference manual, but it's flawed: https://docs.python.org/3.7/reference/compound_stmts.html#the-for-statement - The behavior it's describing is specific to list iterators, but it pretends to apply to "mutable sequences" in general (which may or may not mimic list iterators in relevant respects). - It's not clear that the "length of the sequence" (list!) is evaluated anew on each iteration (not, e.g., captured once at the start of the `for` loop). - While it describes things that can go wrong, it doesn't describe the common useful case: appending to a list during iteration (for example, in a breadth-first search). 
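A small illustration of that useful case (a sketch of what the docs could show, not taken from the report):

# Appending to a list while a for-loop walks it is well defined: the loop's
# index test is re-evaluated on every step, so items appended during the
# iteration are visited too -- exactly what a breadth-first traversal needs.
tree = {'a': ['b', 'c'], 'b': ['d'], 'c': [], 'd': []}

order = ['a']                 # worklist that doubles as the visit order
for node in order:            # len(order) keeps growing while we iterate
    order.extend(tree[node])
print(order)                  # ['a', 'b', 'c', 'd']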
---------- assignee: docs at python components: Documentation messages: 311614 nosy: docs at python, tim.peters priority: normal severity: normal stage: needs patch status: open title: Mutating a list while iterating: clarify the docs type: enhancement versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 4 17:05:12 2018 From: report at bugs.python.org (VA) Date: Sun, 04 Feb 2018 22:05:12 +0000 Subject: [New-bugs-announce] [issue32768] object.__new__ does not accept arguments if __bases__ is changed Message-ID: <1517781912.98.0.467229070634.issue32768@psf.upfronthosting.co.za> New submission from VA : object.__new__ takes only the class argument, but it still accepts extra arguments if a class doesn't override __new__, and rejects them otherwise. (This is because __new__ will receive the same arguments as __init__ but __new__ shouldn't need to be overridden just to remove args) However, if a class has a custom __new__ at one point (in a parent class), and at a later point __bases__ is changed, object.__new__ will still reject arguments, although __new__ may not be overridden anymore at that point. See attached file. I can't check with all Python 3 versions, but the same code works fine in Python 2. ---------- files: bases.py messages: 311622 nosy: VA priority: normal severity: normal status: open title: object.__new__ does not accept arguments if __bases__ is changed type: behavior versions: Python 3.6 Added file: https://bugs.python.org/file47425/bases.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 4 18:13:13 2018 From: report at bugs.python.org (Cheryl Sabella) Date: Sun, 04 Feb 2018 23:13:13 +0000 Subject: [New-bugs-announce] [issue32769] Add 'annotations' to the glossary Message-ID: <1517785993.54.0.467229070634.issue32769@psf.upfronthosting.co.za> Change by Cheryl Sabella : ---------- assignee: docs at python components: Documentation keywords: easy nosy: csabella, docs at python priority: normal severity: normal status: open title: Add 'annotations' to the glossary type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 4 19:34:34 2018 From: report at bugs.python.org (Anthony Flury) Date: Mon, 05 Feb 2018 00:34:34 +0000 Subject: [New-bugs-announce] [issue32770] collections.counter examples are misleading Message-ID: <1517790874.74.0.467229070634.issue32770@psf.upfronthosting.co.za> New submission from Anthony Flury : The first example given for collections.Counter is misleading - the documentation ideally should show the 'best' (one and only one) way to do something and the example is this : >>> # Tally occurrences of words in a list >>> cnt = Counter() >>> for word in ['red', 'blue', 'red', 'green', 'blue', 'blue']: ... cnt[word] += 1 >>> cnt Counter({'blue': 3, 'red': 2, 'green': 1}) clearly this could simply be : >>> # Tally occurrences of words in a list >>> cnt = Counter(['red', 'blue', 'red', 'green', 'blue', 'blue']) >>> cnt Counter({'blue': 3, 'red': 2, 'green': 1}) (i.e. the iteration through the array is unneeded in this example). The 2nd example is better in showing the 'entry-level' use of the Counter class. 
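For reference, that second example already passes the words straight to the constructor; a condensed, self-contained variant (the sample text here is made up):

import re
from collections import Counter

text = "the quick brown fox jumps over the lazy dog and the cat"
words = re.findall(r'\w+', text.lower())
print(Counter(words).most_common(2))   # [('the', 3), ...] -- ties may appear in any order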
There possibly does need to be a simple example of when you might manually increment the Counter class - but I don't think that the examples given illustrate that in a useful way; and I personally haven't come across a use-case for manually incrementing the Counter class entires that couldn't be accomplished with a comprehension or generator expression passed directly to the Counter constructor. ---------- assignee: docs at python components: Documentation messages: 311630 nosy: anthony-flury, docs at python priority: normal severity: normal status: open title: collections.counter examples are misleading versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 4 21:19:59 2018 From: report at bugs.python.org (Benjamin Peterson) Date: Mon, 05 Feb 2018 02:19:59 +0000 Subject: [New-bugs-announce] [issue32771] merge the underlying data stores of unicodedata and the str type Message-ID: <1517797199.65.0.467229070634.issue32771@psf.upfronthosting.co.za> New submission from Benjamin Peterson : Both Objects/unicodeobject.c and Modules/unicodedatamodule.c rely on large generated databases (Objects/unicodetype_db.h, Modules/unicodename_db.h, Modules/unicodedata_db.h). This separation made sense in Python 2 where Unicode was less of an important part of the language than Python3-recall Python 2's configure script has --without-unicode!. However, in Python 3, Unicode is a core language concept and literally baked into the syntax of the language. I therefore propose moving all of unicodedata's tables and algorithms into the interpreter core proper and converting Modules/unicodedata.c into a facade. This will remove awkward maneuvers like ast.c importing unicodedata in order to perform normalization. Having unicodedata readily accessible to the str type would also permit higher a fidelity unicode implementation. For example, implementing language-tailored str.lower() requires having canonical combining class of a character available. This data lives only in unicodedata currently. ---------- components: Unicode messages: 311634 nosy: benjamin.peterson, ezio.melotti, vstinner priority: normal severity: normal stage: needs patch status: open title: merge the underlying data stores of unicodedata and the str type type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 5 06:07:49 2018 From: report at bugs.python.org (Narendra L) Date: Mon, 05 Feb 2018 11:07:49 +0000 Subject: [New-bugs-announce] [issue32772] lstrip not working when string has =e in it Message-ID: <1517828869.78.0.467229070634.issue32772@psf.upfronthosting.co.za> New submission from Narendra L : Lstrip not working as expected when the string has "=e" in it. 
Python 2.7.11 (default, Jan 22 2016, 08:28:37) >>> test = "Cookie: test-Debug=edttrace=expires=1517828996" >>> test.lstrip('Cookie: test-Debug=') 'dttrace=expires=1517828996' ---------- components: Library (Lib) messages: 311659 nosy: Narendra L priority: normal severity: normal status: open title: lstrip not working when string has =e in it type: behavior versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 5 11:01:16 2018 From: report at bugs.python.org (Jeroen Demeyer) Date: Mon, 05 Feb 2018 16:01:16 +0000 Subject: [New-bugs-announce] [issue32773] distutils should NOT preserve timestamps Message-ID: <1517846476.27.0.467229070634.issue32773@psf.upfronthosting.co.za> New submission from Jeroen Demeyer : When a Python project is installed, distutils copies the files from the build to install directory using copy_file(). In this copy operation, timestamps are preserved. In other words, the timestamp of the installed file equals the timestamp of the source file. By contrast, autotools does not preserve timestamps: the timestamp of the installed files equals the time of installation. This makes more sense because of dependency checking: if you reinstall a package, you typically want to rebuild everything depending on that package. This issue is mostly relevant for installing .h files: most build systems (including distutils itself) provide a way to recompile C/C++ source files if they depend on a changed header file. But that only works if the timestamp of the header is updated when it is installed. Note that ./command/build_py.py contains a comment # XXX copy_file by default preserves atime and mtime. IMHO this is # the right thing to do, but perhaps it should be an option -- in # particular, a site administrator might want installed files to # reflect the time of installation rather than the last # modification time before the installed release. but without justification. 
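The knob already exists on distutils' copy_file(); a small standalone demonstration of the two behaviours (a sketch, not a patch):

import os, time, tempfile
from distutils.file_util import copy_file

with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, 'pkg.h')
    open(src, 'w').close()
    os.utime(src, (0, 0))                        # pretend the header is ancient

    copy_file(src, os.path.join(d, 'kept.h'), preserve_times=1)   # current install behaviour
    copy_file(src, os.path.join(d, 'fresh.h'), preserve_times=0)  # behaviour the report asks for

    print(os.path.getmtime(os.path.join(d, 'kept.h')))                       # 0.0 -- old mtime kept
    print(os.path.getmtime(os.path.join(d, 'fresh.h')) >= time.time() - 60)  # True -- install time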
---------- components: Distutils messages: 311673 nosy: dstufft, eric.araujo, erik.bray, jdemeyer priority: normal severity: normal status: open title: distutils should NOT preserve timestamps versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 5 11:35:56 2018 From: report at bugs.python.org (=?utf-8?q?St=C3=A9phane_Wirtel?=) Date: Mon, 05 Feb 2018 16:35:56 +0000 Subject: [New-bugs-announce] [issue32774] distutils: cyclic reference in the documentation Message-ID: <1517848556.04.0.467229070634.issue32774@psf.upfronthosting.co.za> Change by St?phane Wirtel : ---------- assignee: docs at python components: Documentation nosy: docs at python, matrixise priority: normal severity: normal status: open title: distutils: cyclic reference in the documentation versions: Python 2.7, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 5 15:07:22 2018 From: report at bugs.python.org (Tim Graham) Date: Mon, 05 Feb 2018 20:07:22 +0000 Subject: [New-bugs-announce] [issue32775] fnmatch.translate() can produce a pattern which emits a nested set warning Message-ID: <1517861242.52.0.467229070634.issue32775@psf.upfronthosting.co.za> New submission from Tim Graham : As discussed in issue30349#msg311684, fnmatch.translate() can produce a pattern which emits a nested set warning: >>> import fnmatch, re >>> re.compile(fnmatch.translate('[[]foo]')) __main__:1: FutureWarning: Possible nested set at position 10 re.compile('(?s:\\(.s:[[]foo\\\\\\]\\)\\\\Z)\\Z') ---------- components: Library (Lib) messages: 311687 nosy: Tim.Graham, serhiy.storchaka priority: normal severity: normal status: open title: fnmatch.translate() can produce a pattern which emits a nested set warning type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 5 16:22:42 2018 From: report at bugs.python.org (holger) Date: Mon, 05 Feb 2018 21:22:42 +0000 Subject: [New-bugs-announce] [issue32776] asyncio SIGCHLD scalability problems Message-ID: <1517865762.95.0.467229070634.issue32776@psf.upfronthosting.co.za> New submission from holger : I intended to use the asyncio framework for building an end-to-end test for our software. In the test I would spawn somewhere between 5k to 10k processes and have the same number of sockets to manage. When I built a prototype I ran into some scaling issues. Instead of launching our real software I tested it with calls to sleep 30. 
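A hypothetical sketch of that kind of load test (not the reporter's code; file-descriptor and process limits usually need raising for thousands of children):

import asyncio

async def one_child():
    proc = await asyncio.create_subprocess_exec('sleep', '30')
    return await proc.wait()

async def main(n=5000):
    # thousands of concurrent children; every exit delivers a SIGCHLD
    await asyncio.gather(*(one_child() for _ in range(n)))

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
loop.close()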
At some point started processes would finish, a SIGCHLD would be delivered to python and then it would fail: Exception ignored when trying to write to the signal wakeup fd: BlockingIOError: [Errno 11] Resource temporarily unavailable Using strace I saw something like: send(5, "\0", 1, 0) = -1 EAGAIN (Resource temporarily unavailable) waitpid(12218, 0xbf8592d8, WNOHANG) = 0 waitpid(12219, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], WNOHANG) = 12219 send(5, "\0", 1, 0) = -1 EAGAIN (Resource temporarily unavailable) waitpid(12220, 0xbf8592d8, WNOHANG) = 0 waitpid(12221, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], WNOHANG) = 12221 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=12293, si_uid=1001, si_status=0, si_utime=0, si_stime= 0} --- getpid() = 11832 write(5, "\21", 1) = -1 EAGAIN (Resource temporarily unavailable) sigreturn({mask=[]}) = 12221 write(2, "Exception ignored when trying to"..., 64) = 64 write(2, "BlockingIOError: [Errno 11] Reso"..., 61) = 61 Looking at the code I see that si_pid of the signal will be ignored and instead wait(2) will be called for all processes. This doesn't seem to scale well enough for my intended use case. I think what could be done is one of the following: * Switch to signalfd for the event notification? * Take si_pid and instead of just notifying that work is there.. inform about the PID that exited? * Use wait(-1,... if there can be only one SIGCHLD handler to collect any dead child ---------- components: asyncio messages: 311692 nosy: asvetlov, holger+lp, yselivanov priority: normal severity: normal status: open title: asyncio SIGCHLD scalability problems _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 5 20:34:03 2018 From: report at bugs.python.org (Alexey Izbyshev) Date: Tue, 06 Feb 2018 01:34:03 +0000 Subject: [New-bugs-announce] [issue32777] subprocess: child_exec() uses _Py_set_inheritable() which is not async-signal-safe Message-ID: <1517880843.05.0.467229070634.issue32777@psf.upfronthosting.co.za> New submission from Alexey Izbyshev : _Py_set_inheritable() raises a Python-level exception on error and thus is not async-signal-safe, but child_exec() must use only async-signal-safe functions because it's executed between fork() and exec(). Since a non-raising version is already implemented in Python/fileutils.c for internal use (set_inheritable), I suggest to simply expose it via another public function (similar to _Py_open_noraise(), etc.). 
---------- components: Library (Lib) messages: 311699 nosy: izbyshev, vstinner priority: normal severity: normal status: open title: subprocess: child_exec() uses _Py_set_inheritable() which is not async-signal-safe type: behavior versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 5 21:16:15 2018 From: report at bugs.python.org (REBECCA VICKERS) Date: Tue, 06 Feb 2018 02:16:15 +0000 Subject: [New-bugs-announce] [issue32778] Hi Message-ID: <6B48E606-FEC1-421A-994D-3C03D696E1DF@icloud.com> New submission from REBECCA VICKERS : Sent from my iPhone ---------- messages: 311701 nosy: REBECCAVICKERS0000000 priority: normal severity: normal status: open title: Hi _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 5 23:48:49 2018 From: report at bugs.python.org (Paul Fisher) Date: Tue, 06 Feb 2018 04:48:49 +0000 Subject: [New-bugs-announce] [issue32779] urljoining an empty query string doesn't clear query string Message-ID: <1517892529.44.0.467229070634.issue32779@psf.upfronthosting.co.za> New submission from Paul Fisher : urljoining with '?' will not clear a query string: ACTUAL: >>> import urllib.parse >>> urllib.parse.urljoin('http://a/b/c?d=e', '?') 'http://a/b/c?d=e' EXPECTED: 'http://a/b/c' (optionally, with a ? at the end) WhatWG's URL standard expects a relative URL consisting of only a ? to replace a query string: https://url.spec.whatwg.org/#relative-state Seen in versions 3.6 and 2.7, but probably also affects later versions. ---------- components: Library (Lib) messages: 311704 nosy: Paul Fisher priority: normal severity: normal status: open title: urljoining an empty query string doesn't clear query string type: behavior versions: Python 2.7, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 6 00:27:59 2018 From: report at bugs.python.org (Eric Wieser) Date: Tue, 06 Feb 2018 05:27:59 +0000 Subject: [New-bugs-announce] [issue32780] ctypes: memoryview gives incorrect PEP3118 format strings for both packed and unpacked structs Message-ID: <1517894879.39.0.467229070634.issue32780@psf.upfronthosting.co.za> New submission from Eric Wieser : Discovered [here](https://github.com/numpy/numpy/issues/10528) Consider the following structure, and a memoryview created from it: class foo(ctypes.Structure): _fields_ = [('one', ctypes.c_uint8), ('two', ctypes.c_uint32)] f = foo() mf = memoryview(f) We'd expect this to insert padding, and it does: >>> mf.itemsize 8 But that padding doesn't show up in the format string: >>> mf.format 'T{ No padding is added when using non-native size and alignment, e.g. with ?? But ctypes doesn't even get it right for packed structs: class foop(ctypes.Structure): _fields_ = [('one', ctypes.c_uint8), ('two', ctypes.c_uint32)] _pack_ = 1 f = foo() mf = memoryview(f) The size is what we'd expect: >>> mf.itemsize 5 But the format is garbage: >>> mf.format 'B' # sizeof(byte) == 5!? 
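One way to see the mismatch numerically (a sketch; the corrected format string in the last comment is a guess at what ctypes ought to emit, not current output):

import ctypes, struct

class Foo(ctypes.Structure):
    _fields_ = [('one', ctypes.c_uint8), ('two', ctypes.c_uint32)]

mv = memoryview(Foo())
print(mv.itemsize, ctypes.sizeof(Foo))   # 8 8 -- three padding bytes follow 'one'
# A consumer that trusts the advertised format sees only one B plus one I:
print(struct.calcsize('<BI'))            # 5 -- '<' layouts contain no implicit padding
# so itemsize and format disagree unless the padding is spelled out,
# e.g. something like 'T{<B:one:3x<I:two:}'.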
---------- components: ctypes messages: 311705 nosy: Eric.Wieser priority: normal severity: normal status: open title: ctypes: memoryview gives incorrect PEP3118 format strings for both packed and unpacked structs versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 6 03:17:53 2018 From: report at bugs.python.org (Po-Hsu Lin) Date: Tue, 06 Feb 2018 08:17:53 +0000 Subject: [New-bugs-announce] [issue32781] lzh_tw is missing in locale.py Message-ID: <1517905073.42.0.467229070634.issue32781@psf.upfronthosting.co.za> New submission from Po-Hsu Lin : The lzh_tw locale (Literary Chinese) is not available in Lib/locale.py This issue will cause error like: Traceback (most recent call last): File "/usr/share/apport/apport-gtk", line 598, in app.run_argv() File "/usr/lib/python3/dist-packages/apport/ui.py", line 694, in run_argv return self.run_crashes() File "/usr/lib/python3/dist-packages/apport/ui.py", line 245, in run_crashes logind_session[1] > self.report.get_timestamp(): File "/usr/lib/python3/dist-packages/apport/report.py", line 1684, in get_timestamp orig_ctime = locale.getlocale(locale.LC_TIME) File "/usr/lib/python3.6/locale.py", line 581, in getlocale return _parse_localename(localename) File "/usr/lib/python3.6/locale.py", line 490, in _parse_localename raise ValueError('unknown locale: %s' % localename) ValueError: unknown locale: lzh_TW This can be easily reproduced in Ubuntu 17.10, with English selected as the default language, but Timezone set to Taipei. This will set the locale to: $ locale LANG=en_US.UTF-8 LANGUAGE= LC_CTYPE="en_US.UTF-8" LC_NUMERIC=lzh_TW LC_TIME=lzh_TW LC_COLLATE="en_US.UTF-8" LC_MONETARY=lzh_TW LC_MESSAGES="en_US.UTF-8" LC_PAPER=lzh_TW LC_NAME=lzh_TW LC_ADDRESS=lzh_TW LC_TELEPHONE=lzh_TW LC_MEASUREMENT=lzh_TW LC_IDENTIFICATION=lzh_TW LC_ALL= And when running some python script to call locale.py, you will see the error message above. ---------- components: Library (Lib) messages: 311717 nosy: cypressyew priority: normal severity: normal status: open title: lzh_tw is missing in locale.py versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 6 13:53:21 2018 From: report at bugs.python.org (Eric Wieser) Date: Tue, 06 Feb 2018 18:53:21 +0000 Subject: [New-bugs-announce] [issue32782] ctypes: memoryview gives incorrect PEP3118 itemsize for array of length zero Message-ID: <1517943201.04.0.467229070634.issue32782@psf.upfronthosting.co.za> New submission from Eric Wieser : Take the following simple structure: class Foo(ctypes.Structure): _fields_ = [('f', ctypes.uint32_t)] And construct some arrays with it: def get_array_view(N): return memoryview((Foo * N)()) In most cases, this works as expected, returning the size of one item: >>> get_array_view(10).itemsize 4 >>> get_array_view(1).itemsize 4 But when N=0, it returns the wrong result >>> get_array_view(0).itemsize 0 Which contradicts its `.format`, which still describes a 4-byte struct >>> get_array_view(0).format 'T{>I:one:}' This causes a downstream problem in numpy: >>> np.array(get_array_view(0)) RuntimeWarning: Item size computed from the PEP 3118 buffer format string does not match the actual item size. 
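A self-contained version of the repro, without the numpy step (a minimal sketch; it spells the field type as ctypes.c_uint32):

import ctypes

class Foo(ctypes.Structure):
    _fields_ = [('f', ctypes.c_uint32)]

one = memoryview((Foo * 1)())
empty = memoryview((Foo * 0)())
# Reported behavior: the zero-length array claims itemsize 0 even though
# its format string still describes the 4-byte struct.
print(one.itemsize, empty.itemsize, empty.format)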
---------- components: ctypes messages: 311740 nosy: Eric.Wieser priority: normal severity: normal status: open title: ctypes: memoryview gives incorrect PEP3118 itemsize for array of length zero type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 6 15:47:53 2018 From: report at bugs.python.org (Matanya Stroh) Date: Tue, 06 Feb 2018 20:47:53 +0000 Subject: [New-bugs-announce] [issue32783] ln(2) isn't accurate in _math.c in cpython Message-ID: <1517950073.47.0.467229070634.issue32783@psf.upfronthosting.co.za> New submission from Matanya Stroh : In cpython/Modules/_math.c there is a definition of the const ln2, and the value of that const isn't correct (at least not accurate). This is the value in the file: ln2 = 0.693147180559945286227 (cpython) but when calculating the value in wolframalpha, this is the value we get: ln2 = 0.6931471805599453094172321214581 (wolframalpha) and this is the value from Wikipedia: ln2 = 0.693147180559945309417232121458 (wikipedia) There is also a thread on Stack Overflow regarding this issue: https://stackoverflow.com/questions/48644767/ln2-const-value-in-math-c-in-cpython ---------- components: Library (Lib) messages: 311749 nosy: matanya.stroh priority: normal severity: normal status: open title: ln(2) isn't accurate in _math.c in cpython type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 6 20:51:05 2018 From: report at bugs.python.org (cowlinator) Date: Wed, 07 Feb 2018 01:51:05 +0000 Subject: [New-bugs-announce] [issue32784] Wrong argument name for csv.DictReader in documentation Message-ID: <1517968265.54.0.467229070634.issue32784@psf.upfronthosting.co.za> New submission from cowlinator : The documentation for the csv.DictReader constructor (and presumably csv.DictWriter also) has the wrong name written for the first argument. This prevents the argument from being called by name. >>> file = open("file.txt", 'a') >>> csv.DictReader(csvfile=file) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: __init__() takes at least 2 arguments (1 given) >>> csv.DictReader(f=file) >>> # how could I have known it was named 'f'? Please change the documentation. ---------- messages: 311759 nosy: cowlinator priority: normal severity: normal status: open title: Wrong argument name for csv.DictReader in documentation versions: Python 2.7, Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 7 01:56:02 2018 From: report at bugs.python.org (=?utf-8?q?St=C3=A9phane_Wirtel?=) Date: Wed, 07 Feb 2018 06:56:02 +0000 Subject: [New-bugs-announce] [issue32785] Change the name of the 'f' argument of csv.DictReader and csv.DictWriter Message-ID: <1517986562.49.0.467229070634.issue32785@psf.upfronthosting.co.za> New submission from Stéphane Wirtel : The name of the first argument of csv.DictReader and csv.DictWriter is 'f'; I propose to rename it to 'csvfile' to be more consistent.
It's just a small refactoring based on the issue #32784 ---------- messages: 311766 nosy: matrixise priority: normal severity: normal status: open title: Change the name of the 'f' argument of csv.DictReader and csv.DictWriter versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 7 02:37:54 2018 From: report at bugs.python.org (Myungseo Kang) Date: Wed, 07 Feb 2018 07:37:54 +0000 Subject: [New-bugs-announce] [issue32786] Didnot work strftime() when hangeul in format sting Message-ID: <1517989074.34.0.467229070634.issue32786@psf.upfronthosting.co.za> New submission from Myungseo Kang : Good morning. I have a question while writing Python code in Django. I got empty string when I ran code in a screenshot. But it works in python interprerter(python manage.py shell). I don't know why it works or didn't work. I think maybe hangeul is related. Please confirm this issue. I used Django 1.11, Python 3.4.4 version. ---------- components: Argument Clinic files: ?????????? 2018-02-07 ???? 3.24.05.png messages: 311771 nosy: larry, leop0ld priority: normal severity: normal status: open title: Didnot work strftime() when hangeul in format sting type: behavior versions: Python 3.4 Added file: https://bugs.python.org/file47427/?????????? 2018-02-07 ???? 3.24.05.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 7 06:46:10 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Wed, 07 Feb 2018 11:46:10 +0000 Subject: [New-bugs-announce] [issue32787] Better error handling in ctypes Message-ID: <1518003970.21.0.467229070634.issue32787@psf.upfronthosting.co.za> New submission from Serhiy Storchaka : The proposed patch makes unexpected errors raised when look up an attribute or a key in a dict (like MemoryError, KeyboardInterrupt, etc) be leaked to a user instead of be overridden by TypeError or AttributeError. ---------- components: Extension Modules, ctypes messages: 311785 nosy: serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Better error handling in ctypes type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 7 07:01:54 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Wed, 07 Feb 2018 12:01:54 +0000 Subject: [New-bugs-announce] [issue32788] Better error handling in sqlite3 Message-ID: <1518004914.65.0.467229070634.issue32788@psf.upfronthosting.co.za> New submission from Serhiy Storchaka : The proposed patch makes unexpected errors raised when look up an attribute or a key in a dict (like MemoryError, KeyboardInterrupt, etc) be leaked to a user instead of be overridden by TypeError or AttributeError. 
---------- components: Extension Modules messages: 311786 nosy: ghaering, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Better error handling in sqlite3 type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 7 10:53:39 2018 From: report at bugs.python.org (Pedro) Date: Wed, 07 Feb 2018 15:53:39 +0000 Subject: [New-bugs-announce] [issue32789] Note missing from logging.debug() docs Message-ID: <1518018819.58.0.467229070634.issue32789@psf.upfronthosting.co.za> New submission from Pedro : The docs for Logger.debug() (https://docs.python.org/3/library/logging.html#logging.Logger.debug) specify that exc_info can take an exception instance as of version 3.5. However, the docs for logging.debug() (https://docs.python.org/3/library/logging.html#logging.debug) do not, even though logging.debug() redirects to an instance of Logger and so can take the same types of arguments. ---------- assignee: docs at python components: Documentation messages: 311792 nosy: docs at python, pgacv2 priority: normal severity: normal status: open title: Note missing from logging.debug() docs type: enhancement versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 7 21:08:36 2018 From: report at bugs.python.org (=?utf-8?q?Severin_W=C3=BCnsch?=) Date: Thu, 08 Feb 2018 02:08:36 +0000 Subject: [New-bugs-announce] [issue32790] Keep trailing zeros in precision for string format option g Message-ID: <1518055716.96.0.467229070634.issue32790@psf.upfronthosting.co.za> New submission from Severin Wünsch : The documentation says the following about the string format parameter 'g': General format. For a given precision p >= 1, this rounds the number to **p significant digits** and then formats the result in either fixed-point format or in scientific notation, depending on its magnitude. I think the behavior of format is inconsistent here: >>> format(0.1949, '.2g') returns '0.19' as expected, but >>> format(0.1950, '.2g') returns '0.2' instead of '0.20' The behavior of the float format is, in my opinion, the correct one here: >>> format(0.1950, '.2f') returns '0.20' ---------- messages: 311813 nosy: sk1d priority: normal severity: normal status: open title: Keep trailing zeros in precision for string format option g type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 7 22:25:27 2018 From: report at bugs.python.org (WilliamM) Date: Thu, 08 Feb 2018 03:25:27 +0000 Subject: [New-bugs-announce] [issue32791] Add\Remove Programs entry is not created. Message-ID: <1518060327.94.0.467229070634.issue32791@psf.upfronthosting.co.za> New submission from WilliamM : I've been writing a script for the Python 3.X install package and encountering some issues due to that. I'm using the installation property InstallAllUsers=1 but find that Python is still placing its Add/Remove Programs entry in user context.
Applications usually write system level installs to HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall or HKLM:\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall What I'm encountering with Python is that it's writing them to HKCU:\Software\Microsoft\Windows\CurrentVersion\Uninstall This is causing some difficulty as it prevents software such as SCCM, KACE, Altiris from being able to detect the main program install or a user with elevated privileges from being able to remove it. Deploying the software will end up with the entry located the System Account's registry within HKU\S-5-1-18\Software\Microsoft\WIndows\CurrentVersion\Uninstall\{Prodct GUID} To add a bit more info, only the dependencies show up in HKLM:\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall Python 3.6.4 TCL/TK support (32-Bit) Python 3.6.4 Development Libraries (32-Bit) Python 3.6.5 Documentation (32-Bit) Python 3.6.4 Utility Scripts (32-Bit) Python 3.6.4 Executables (32-Bit) Python launcher (32-Bit) Python 3.6.4 Test Suite (32-Bit) Python 3.6.4 Core Interpreter (32-Bit) Python 3.6.4 Standard Library (32-Bit) However the "Python 3.6.4 (32-Bit)" Install that has the installation and bundled uninstall package is located in HKU\S-1-5-18\Software\Microsoft\WIndows\CurrentVersion\Uninstall\{9218130b-5ad0-4cf7-82be-6993cfd6cb84} Is there a known workaround/solution for this or something that can be resolved in a later build? ---------- components: Windows messages: 311814 nosy: WilliamM, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Add\Remove Programs entry is not created. type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 8 03:46:17 2018 From: report at bugs.python.org (Raymond Hettinger) Date: Thu, 08 Feb 2018 08:46:17 +0000 Subject: [New-bugs-announce] [issue32792] ChainMap should preserve order of underlying mappings Message-ID: <1518079577.65.0.467229070634.issue32792@psf.upfronthosting.co.za> New submission from Raymond Hettinger : This also applies to 3.6 because ChainMap can be used with OrderedDict. ---------- components: Library (Lib) messages: 311817 nosy: rhettinger priority: normal severity: normal status: open title: ChainMap should preserve order of underlying mappings type: behavior versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 8 04:05:18 2018 From: report at bugs.python.org (TaoQingyun) Date: Thu, 08 Feb 2018 09:05:18 +0000 Subject: [New-bugs-announce] [issue32793] smtplib: duplicated debug message Message-ID: <1518080718.92.0.467229070634.issue32793@psf.upfronthosting.co.za> New submission from TaoQingyun <845767657 at qq.com>: ``` if self.debuglevel > 0: self._print_debug('connect:', (host, port)) ``` The above both in _get_socket and connect method, and connect also invoke _get_socket. 
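A minimal way to observe the duplicate message (a sketch; it assumes a local debugging SMTP server is listening on port 1025, for example one started with `python -m smtpd -n -c DebuggingServer localhost:1025`):

import smtplib

server = smtplib.SMTP()
server.set_debuglevel(1)
# Because both connect() and _get_socket() contain the debug print shown
# above, the "connect: ('localhost', 1025)" line appears twice.
server.connect('localhost', 1025)
server.quit()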
---------- components: Library (Lib) files: smtp.patch keywords: patch messages: 311818 nosy: qingyunha priority: normal severity: normal status: open title: smtplib: duplicated debug message versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 Added file: https://bugs.python.org/file47428/smtp.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 8 04:07:52 2018 From: report at bugs.python.org (Mark Dickinson) Date: Thu, 08 Feb 2018 09:07:52 +0000 Subject: [New-bugs-announce] [issue32794] PyNumber_Float counterpart that doesn't accept strings Message-ID: <1518080872.68.0.467229070634.issue32794@psf.upfronthosting.co.za> New submission from Mark Dickinson : Our abstract objects layer in the C-API includes PyNumber_Float [1], which is equivalent to a Python-level `float` call, including the behaviour of accepting strings. I'm not convinced that it's a particularly useful function: I suspect that it's more common to need to either convert a string to a float, or to convert a float-y object (i.e., something whose type implements __float__) to a float, but not both at once. The second need is precisely the one that most of the math module has: accept anything that implements __float__, but don't accept strings. Yesterday I found myself writing the following code for a 3rd-party extension module: static PyObject * _validate_float(PyObject *value) { double value_as_double; /* Fast path: avoid creating a new object if it's not necessary. */ if (PyFloat_CheckExact(value)) { Py_INCREF(value); return value; } value_as_double = PyFloat_AsDouble(value); if (value_as_double == -1.0 && PyErr_Occurred()) { return NULL; } return PyFloat_FromDouble(value_as_double); } Would it be worth adding a new C-API level function that does essentially the above? The semantics of such a function seem clear cut. The major problem would be figuring out what to call it, since to me PyNumber_Float is the one obvious name for such behaviour, but it's already taken. :-) Please note that I'm not suggesting removing / deprecating / changing the existing PyNumber_Float. That would amount to gratuitous breakage. [1] https://docs.python.org/3.6/c-api/number.html#c.PyNumber_Float ---------- components: Interpreter Core messages: 311819 nosy: mark.dickinson priority: normal severity: normal status: open title: PyNumber_Float counterpart that doesn't accept strings type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 8 04:11:07 2018 From: report at bugs.python.org (Mike Lewis) Date: Thu, 08 Feb 2018 09:11:07 +0000 Subject: [New-bugs-announce] [issue32795] subprocess.check_output() with timeout does not exit if child process does not generate output after timeout Message-ID: <1518081067.06.0.467229070634.issue32795@psf.upfronthosting.co.za> New submission from Mike Lewis : When using a timeout with check_output(), the call does not terminate unless the child process generates output after the timeout. Looking at the code, it appears there is a second call to communicate() after the timeout has happened, presumably to retrieve any remaining output. This call appears to hang until the child process generates output. I have two test cases (for Python 2.7 / subprocess32 and Python 3 / subprocess respectively). 
They show the same behaviour, the Python 2.7 version has been reproduced on Ubuntu 16.04.3 and Centos 7 and the Python 3 version on Ubuntu 16.043. Each test case has a first example where bash executes a long sleep before generating output and where the timeout is not respected, and a second example that generates output at intervals and the timeout is respected. Relevant code also attached below for reference: then = time.time() print("Subprocess with idle stdout at timeout: start at {}".format(then)) try: output = subprocess.check_output(["bash", "-c", "echo Subcall; sleep 5; echo Done;"], stderr=subprocess.STDOUT, timeout=1) now = time.time() print("Finish at: {}, {:.0f} seconds".format(now, now-then)) print(output) except subprocess.TimeoutExpired as te: now = time.time() print("Timed out at: {}, {:.0f} seconds".format(now, now-then)) then = time.time() print("Generating stdout from subprocess: start at ", then) try: output = subprocess.check_output(["bash", "-c", "echo Subcall; for i in 1 2 3 4 5; do sleep 1; echo $i; done; echo Done;"], stderr=subprocess.STDOUT, timeout=1) now = time.time() print("Finish at: {}, {:.0f} seconds".format(now, now-then)) print(output) except subprocess.TimeoutExpired as te: now = time.time() print("Timed out at: {}, {:.0f} seconds".format(now, now-then)) ---------- components: Library (Lib) files: timeout_examples.tgz messages: 311820 nosy: Mike Lewis priority: normal severity: normal status: open title: subprocess.check_output() with timeout does not exit if child process does not generate output after timeout type: behavior versions: Python 2.7, Python 3.5 Added file: https://bugs.python.org/file47429/timeout_examples.tgz _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 8 05:00:42 2018 From: report at bugs.python.org (Jensen Taylor) Date: Thu, 08 Feb 2018 10:00:42 +0000 Subject: [New-bugs-announce] [issue32796] 3.8-dev definition not available Message-ID: <1518084042.14.0.467229070634.issue32796@psf.upfronthosting.co.za> New submission from Jensen Taylor : I can't build with Python 3.8 yet on Travis-CI because of this ---------- messages: 311825 nosy: Jensen Taylor priority: normal severity: normal status: open title: 3.8-dev definition not available versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 8 06:55:24 2018 From: report at bugs.python.org (Jeroen Demeyer) Date: Thu, 08 Feb 2018 11:55:24 +0000 Subject: [New-bugs-announce] [issue32797] Tracebacks from Cython modules no longer work Message-ID: <1518090924.55.0.467229070634.issue32797@psf.upfronthosting.co.za> New submission from Jeroen Demeyer : Displaying the source code in tracebacks for Cython-compiled extension modules in IPython no longer works due to PEP 302. Various fixes are possible, the two most obvious ones are: 1. linecache should continue searching for the source file even if loader.get_source() returns None. 2. the method ExtensionFileLoader.get_source() should be removed (instead of being implemented and returning None). Now why was this broken and how do the above fix that? When IPython needs to display a traceback, it uses linecache.getlines() to get the source code to display. For Cython extensions, the filename is a correct *relative* filename (it must be a relative filename since Cython does not know where the sources will be after installation). 
Since the filename is relative, linecache does not immediately find it, so it looks for a PEP 302 loader. For extension modules (Cython or not), it finds an instance of ExtensionFileLoader. If the loader has a get_source() method, then it uses that to get the sources. Since ExtensionFileLoader.get_source() returns None, linecache stops looking further for sources. Instead, what should happen is that linecache continues looking for the sources in sys.path where it has a chance of finding them (if they are installed somewhere in sys.path). ---------- components: Library (Lib) messages: 311829 nosy: erik.bray, jdemeyer priority: normal severity: normal status: open title: Tracebacks from Cython modules no longer work _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 8 10:06:59 2018 From: report at bugs.python.org (Byron Hawkins) Date: Thu, 08 Feb 2018 15:06:59 +0000 Subject: [New-bugs-announce] [issue32798] mmap.flush() on Linux does not accept the "offset" and "size" args Message-ID: <1518102419.01.0.467229070634.issue32798@psf.upfronthosting.co.za> New submission from Byron Hawkins : open_file = open("file.txt", "r+b") file_map = mmap.mmap(open_file, 0) file_map.seek(offset) file_map.write("foobar") # success file_map.flush() # success file_map.flush(offset, len("foobar")) # Fails with "errno 22" This example passes correct arguments to mmap.flush(), yet it fails with errno 22. So the arguments are not valid on linux. If the bug cannot be fixed, then all reference to those two arguments should be removed from the code and documentation related to Linux. ---------- components: Library (Lib) messages: 311832 nosy: byronhawkins priority: normal severity: normal status: open title: mmap.flush() on Linux does not accept the "offset" and "size" args type: crash versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 8 10:28:27 2018 From: report at bugs.python.org (Eli Ribble) Date: Thu, 08 Feb 2018 15:28:27 +0000 Subject: [New-bugs-announce] [issue32799] returned a result with an error set Message-ID: <1518103707.85.0.467229070634.issue32799@psf.upfronthosting.co.za> New submission from Eli Ribble : We've had about 200 occurrences of this error in various parts of our code. I have captured stack traces using sentry so I may be able to provide more detail if requested. The ultimate problem is that a SystemError is raised from within contextlib. 
The message of the SystemError is: " returned a result with an error set" The code, according to sentry, that is emitting this error is: python3.6/contextlib.py in helper at line 159 """ @wraps(func) def helper(*args, **kwds): > return _GeneratorContextManager(func, args, kwds) return helper I'm reporting this bug primarily because of the documentation at https://docs.python.org/3/library/exceptions.html#SystemError and I'm using CPython ---------- components: Library (Lib) files: Screen Shot 2018-02-08 at 8.28.00 AM.png messages: 311836 nosy: EliRibble priority: normal severity: normal status: open title: returned a result with an error set type: behavior versions: Python 3.6 Added file: https://bugs.python.org/file47430/Screen Shot 2018-02-08 at 8.28.00 AM.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 8 10:33:14 2018 From: report at bugs.python.org (sblondon) Date: Thu, 08 Feb 2018 15:33:14 +0000 Subject: [New-bugs-announce] [issue32800] Replace deprecated link to new page on w3c site Message-ID: <1518103994.08.0.467229070634.issue32800@psf.upfronthosting.co.za> New submission from sblondon : The documentation about namespace of ElemenTree (https://docs.python.org/3/library/xml.etree.elementtree.html#parsing-xml-with-namespaces) has a link for 'default namespace' to 'https://www.w3.org/TR/2006/REC-xml-names-20060816/#defaulting'. This page displays a big red message because the page is outdated. It can be replaced by the new one: https://www.w3.org/TR/xml-names/#defaulting The content of the paragraph in the new version is equivalent to the old one. I can provide a PR if you are interested. ---------- assignee: docs at python components: Documentation messages: 311837 nosy: docs at python, sblondon priority: normal severity: normal status: open title: Replace deprecated link to new page on w3c site type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 8 11:09:49 2018 From: report at bugs.python.org (=?utf-8?b?0JTQuNC70Y/QvSDQn9Cw0LvQsNGD0LfQvtCy?=) Date: Thu, 08 Feb 2018 16:09:49 +0000 Subject: [New-bugs-announce] [issue32801] Lib/_strptime.py: utilize all() Message-ID: <1518106189.47.0.467229070634.issue32801@psf.upfronthosting.co.za> New submission from ????? ???????? 
: diff --git a/Lib/_strptime.py b/Lib/_strptime.py --- a/Lib/_strptime.py +++ b/Lib/_strptime.py @@ -238,10 +238,7 @@ class TimeRE(dict): """ to_convert = sorted(to_convert, key=len, reverse=True) - for value in to_convert: - if value != '': - break - else: + if all(value == '' for value in to_convert): return '' regex = '|'.join(re_escape(stuff) for stuff in to_convert) regex = '(?P<%s>%s' % (directive, regex) ---------- components: Library (Lib) messages: 311838 nosy: dilyan.palauzov priority: normal severity: normal status: open title: Lib/_strptime.py: utilize all() versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 8 13:44:47 2018 From: report at bugs.python.org (=?utf-8?q?St=C3=A9phane_Wirtel?=) Date: Thu, 08 Feb 2018 18:44:47 +0000 Subject: [New-bugs-announce] [issue32802] Travis does not compile Python if there is one change in the Documentation Message-ID: <1518115487.35.0.467229070634.issue32802@psf.upfronthosting.co.za> New submission from St?phane Wirtel : If there is one .rst file in a commit, Travis does not compile Python, since this commit https://github.com/python/cpython/commit/b2ec3615c81ca4f3c938245842a45956da8d5acb Here is a fix. ---------- messages: 311841 nosy: matrixise priority: normal severity: normal status: open title: Travis does not compile Python if there is one change in the Documentation versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 8 18:25:09 2018 From: report at bugs.python.org (jasen betts) Date: Thu, 08 Feb 2018 23:25:09 +0000 Subject: [New-bugs-announce] [issue32803] smtplib: LMTP broken in the case of multiple RCPT Message-ID: <1518132309.75.0.467229070634.issue32803@psf.upfronthosting.co.za> New submission from jasen betts : smtplib's LMTP support is broken, LMTP returns multiple responses at and of data if there have been multiple successful RCPT TO commands, but smtplib::data() only looks for a single response. see the example conversation on page 3 of RFC2033 This makes LMTP unusable if there is more than one RCPT ---------- components: Library (Lib) messages: 311854 nosy: jasen betts priority: normal severity: normal status: open title: smtplib: LMTP broken in the case of multiple RCPT type: behavior versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 8 20:33:21 2018 From: report at bugs.python.org (Julian O) Date: Fri, 09 Feb 2018 01:33:21 +0000 Subject: [New-bugs-announce] [issue32804] urllib.retrieve documentation doesn't mention context parameter Message-ID: <1518140001.56.0.467229070634.issue32804@psf.upfronthosting.co.za> New submission from Julian O : In 2014, urlretrieve was changed to accept a context parameter to configure, for example, SSL settings. Ref: https://github.com/python/cpython/commit/b206473ef8a7abe9abf5ab8776ea3bcb90adc747 However, 2.7.14 documentation does not mention this parameter. 
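A call such as the following works on 2.7.9 and later, even though the keyword is absent from the documented signature quoted below (a minimal sketch; the URL and output filename are placeholders):

import ssl
import urllib

context = ssl.create_default_context()
# The context keyword was added by the 2014 change referenced above but is
# not mentioned in the documentation.
urllib.urlretrieve('https://example.org/', 'out.html', context=context)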
> urllib.urlretrieve(url[, filename[, reporthook[, data]]]) Ref: https://docs.python.org/2/library/urllib.html ---------- assignee: docs at python components: Documentation messages: 311861 nosy: docs at python, julian_o priority: normal severity: normal status: open title: urllib.retrieve documentation doesn't mention context parameter versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 9 04:27:39 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Fri, 09 Feb 2018 09:27:39 +0000 Subject: [New-bugs-announce] [issue32805] Possible integer overflow when call PyDTrace_GC_DONE() Message-ID: <1518168459.08.0.467229070634.issue32805@psf.upfronthosting.co.za> New submission from Serhiy Storchaka : PyDTrace_GC_DONE() accepts the argument of type int. But it is called with the sum of collected and uncollectable objects which has type Py_ssize_t and can be larger that maximal int. This produces a compiler warning on Windows: ..\Modules\gcmodule.c(978): warning C4244: 'function': conversion from 'Py_ssize_t' to 'int', possible loss of data [D:\buildarea\3.x.bolen-windows10\build\PCbuild\pythoncore.vcxproj] and looks as not false alarm. ---------- components: Interpreter Core messages: 311870 nosy: lukasz.langa, serhiy.storchaka priority: normal severity: normal status: open title: Possible integer overflow when call PyDTrace_GC_DONE() type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 9 04:47:46 2018 From: report at bugs.python.org (Yuri Kanivetsky) Date: Fri, 09 Feb 2018 09:47:46 +0000 Subject: [New-bugs-announce] [issue32806] locally imported modules are unaccessible in lambdas in pdb Message-ID: <1518169666.43.0.467229070634.issue32806@psf.upfronthosting.co.za> New submission from Yuri Kanivetsky : Consider the following script: # import pdb; pdb.set_trace() # import re def f(): import re print((lambda: re.findall('a', 'aaa'))()) import pdb; pdb.set_trace() print('test') f() When you run it and try to evaluate `(lambda: re.findall('a', 'aaa'))()`, you get: ['a', 'a', 'a'] > /home/yuri/_/1.py(7)f() -> print('test') (Pdb) (lambda: re.findall('a', 'aaa'))() *** NameError: name 're' is not defined (Pdb) import re (Pdb) (lambda: re.findall('a', 'aaa'))() *** NameError: name 're' is not defined (Pdb) With the commented out breakpoint it works: > /home/yuri/_/a.py(3)() -> def f(): (Pdb) import re (Pdb) (lambda: re.findall('a', 'aaa'))() ['a', 'a', 'a'] (Pdb) Also it works with uncommented global import and second breakpoint: ['a', 'a', 'a'] > /srv/http/sl/makosh/a.py(7)f() -> print('test') (Pdb) (lambda: re.findall('a', 'aaa'))() ['a', 'a', 'a'] (Pdb) >From what I can see the issue occurs when there's no `re` in `globals` argument here: https://github.com/python/cpython/blob/v3.6.4/Lib/pdb.py#L376 I've run into it when trying to grep some object's attribute names, like: !list(filter(lambda x: re.search('class', x), dir(__name__))) ---------- messages: 311871 nosy: Yuri Kanivetsky priority: normal severity: normal status: open title: locally imported modules are unaccessible in lambdas in pdb type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 9 05:44:05 2018 From: report at bugs.python.org (Arka) Date: Fri, 09 Feb 2018 10:44:05 +0000 Subject: 
[New-bugs-announce] [issue32807] Add Message-ID: <1518173045.02.0.467229070634.issue32807@psf.upfronthosting.co.za> Change by Arka : ---------- nosy: Arka priority: normal severity: normal status: open title: Add _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 9 05:52:11 2018 From: report at bugs.python.org (Christian Heigele) Date: Fri, 09 Feb 2018 10:52:11 +0000 Subject: [New-bugs-announce] [issue32808] subprocess.check_output opens an unwanted command line window after fall creator update Message-ID: <1518173531.96.0.467229070634.issue32808@psf.upfronthosting.co.za> New submission from Christian Heigele : Hi, I have two machines, both Windows 10, both with python 2.7.12 (bug is also reproducible with 2.7.14), one of them has the Fall creator update (-> version 1709) and one doesn't (->version 1607). When I execute the checkout on some executable that is available on both machines, I get different behaviours: One the one without the fall creator update I get the output of that executable as expected as the return value. One the machine with the update I see a new command line window popping up, the executable runs through, and the return value of check_output is an empty string. I'll use it like follow: f = subprocess.check_output(['svn.exe', '--help']) ---------- components: Windows messages: 311875 nosy: Christian Heigele, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: subprocess.check_output opens an unwanted command line window after fall creator update versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 9 08:20:30 2018 From: report at bugs.python.org (Michal Niklas) Date: Fri, 09 Feb 2018 13:20:30 +0000 Subject: [New-bugs-announce] [issue32809] Python 3.6 on Windows problem with source encoding header Message-ID: <1518182430.26.0.467229070634.issue32809@psf.upfronthosting.co.za> New submission from Michal Niklas : I have strange error with source encoding header. I usually use it from template which looks like: #!/usr/bin/env python # -*- coding: utf8 -*- This works well on Linux machines with Python 2.x and 3.x, but on Windows machines it works well only with Python 2.x. When I use Python 3.6 it often works, but for some sources interpreter reports: SyntaxError: encoding problem: utf8 It is easy to "correct": you can change "utf8" to "utf-8". Strange thing is that even on Windows with Python 3.6 it works well with the same source encoding header but with little edited source file. I tried to create simplest file that shows this error but works well when I delete one empty line. Error on Windows with Python 3.6: c:\temp>C:\Users\mn\AppData\Local\Programs\Python\Python36-32\python.exe zz2_err .py File "zz2_err.py", line 2 SyntaxError: encoding problem: utf8 Works well with Python 2.7: c:\temp>zz2_err.py Python: 2.7.14 (v2.7.14:84471935ed, Sep 16 2017, 20:19:30) [MSC v.1500 32 bit (Intel)] ver: $Id: zz.py 3367 2018-02-07 07:26:19Z mn $ Works well when I delete one empty line: c:\temp>C:\Users\mn\AppData\Local\Programs\Python\Python36-32\python.exe zz2_err.py Python: 3.6.4 (v3.6.4:d48eceb, Dec 19 2017, 06:04:45) [MSC v.1900 32 bit (Intel)] ver: $Id: zz.py 3367 2018-02-07 07:26:19Z mn $ SHA1 sum of source that breaks Python 3.6: c:\temp>fciv -sha1 zz2_err.py // // File Checksum Integrity Verifier version 2.05. 
// 4176921690c9ea9259c57c9fcc3cda84aa51015e zz2_err.py The same source on Linux works well with both Python 2.7 and Python 3.6: [mn:] sha1sum zz2_err.py 4176921690c9ea9259c57c9fcc3cda84aa51015e zz2_err.py [mn:] python zz2_err.py Python: 2.7.13 (default, Dec 1 2017, 09:21:53) [GCC 6.4.1 20170727 (Red Hat 6.4.1-1)] ver: $Id: zz.py 3367 2018-02-07 07:26:19Z mn $ [mn:] python3 zz2_err.py Python: 3.5.4 (default, Oct 9 2017, 12:07:29) [GCC 6.4.1 20170727 (Red Hat 6.4.1-1)] ver: $Id: zz.py 3367 2018-02-07 07:26:19Z mn $ [mn:] ll zz2_err.py -rw-rw-r-- 1 mn mn 266 02-09 14:12 zz2_err.py ---------- components: Interpreter Core files: zz2_err.zip messages: 311883 nosy: mniklas priority: normal severity: normal status: open title: Python 3.6 on Windows problem with source encoding header type: compile error versions: Python 3.6 Added file: https://bugs.python.org/file47432/zz2_err.zip _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 9 14:49:33 2018 From: report at bugs.python.org (David Beazley) Date: Fri, 09 Feb 2018 19:49:33 +0000 Subject: [New-bugs-announce] [issue32810] Expose ags_gen and agt_gen in asynchronous generators Message-ID: <1518205773.11.0.467229070634.issue32810@psf.upfronthosting.co.za> New submission from David Beazley : Libraries such as Curio and asyncio provide a debugging facility that allows someone to view the call stack of generators/coroutines. For example, the _task_get_stack() function in asyncio/base_tasks.py. This works by manually walking up the chain of coroutines (by following cr_frame and gi_frame links as appropriate). The only problem is that it doesn't work if control flow falls into an async generator because an "async_generator_asend" instance is encountered and there is no meaningful way to proceed any further with stack inspection. This problem could be fixed if "async_generator_asend" and "async_generator_athrow" instances exposed the underlying "ags_gen" and "agt_gen" attribute that's held inside the corresponding C structures in Objects/genobject.c. Note: I made a quick and dirty "hack" to Python to extract "ags_gen" and verified that having this information would allow me to get complete stack traces in Curio. ---------- messages: 311906 nosy: dabeaz priority: normal severity: normal status: open title: Expose ags_gen and agt_gen in asynchronous generators type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 9 19:29:54 2018 From: report at bugs.python.org (Alexander Mohr) Date: Sat, 10 Feb 2018 00:29:54 +0000 Subject: [New-bugs-announce] [issue32811] test_os.py fails when run in docker container on OSX host Message-ID: <1518222594.14.0.467229070634.issue32811@psf.upfronthosting.co.za> New submission from Alexander Mohr : This test fails when run in a debian docker container from a OSX host with the following error: test test_os failed -- Traceback (most recent call last): File "/build/Python-3.6.3/Lib/test/test_os.py", line 3273, in test_attributes self.check_entry(entry, 'dir', True, False, False) File "/build/Python-3.6.3/Lib/test/test_os.py", line 3228, in check_entry os.stat(entry.path, follow_symlinks=False).st_ino) AssertionError: 3018467 != 1419357 works fine when run on ubuntu host. If this is a docker problem I'd be happy to report there. 
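For reference, the failing assertion boils down to the following check (a sketch mirroring the check_entry() comparison in the traceback above; it fails whenever the filesystem reports different inode numbers through scandir() and an lstat() of the same path):

import os

for entry in os.scandir('.'):
    st = os.stat(entry.path, follow_symlinks=False)
    # test_os expects DirEntry.inode() to match st_ino from lstat().
    assert entry.inode() == st.st_ino, (entry.path, entry.inode(), st.st_ino)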
---------- components: Build messages: 311921 nosy: thehesiod priority: normal severity: normal status: open title: test_os.py fails when run in docker container on OSX host type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 9 21:45:17 2018 From: report at bugs.python.org (john) Date: Sat, 10 Feb 2018 02:45:17 +0000 Subject: [New-bugs-announce] [issue32812] edited code only runs after closing and re-opening Python. Message-ID: <1518230717.87.0.467229070634.issue32812@psf.upfronthosting.co.za> New submission from john : For some reason, whenever I make a change to my code in Python, I have to restart Python for the change to take effect. Otherwise Python just runs whatever the code was when I opened the .py file. Worth noting is that this has not always been true, and is not always true. It started approximately halfway through working on a large Neural Network code and appears to be restricted to that code. (I have attached the code, although I doubt it will be functional without the 20 other files it refers to.) This has become a huge pain in the neck, wasting an estimated 23.7% of my life. Please send help. Thanks! ^.^ ---------- files: train.py messages: 311927 nosy: johnschwarcz priority: normal severity: normal status: open title: edited code only runs after closing and re-opening Python. versions: Python 3.6 Added file: https://bugs.python.org/file47433/train.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 10 02:04:15 2018 From: report at bugs.python.org (Steffen Ullrich) Date: Sat, 10 Feb 2018 07:04:15 +0000 Subject: [New-bugs-announce] [issue32813] SSL shared_ciphers implementation wrong - returns configured but not shared ciphers Message-ID: <1518246255.91.0.467229070634.issue32813@psf.upfronthosting.co.za> New submission from Steffen Ullrich : The current implementation of shared_ciphers uses the SSL_get_ciphers method. This method returns the list of configured ciphers (i.e. from the context) and not the list of ciphers shared between client and server. To get this list one can use the documented SSL_get_client_ciphers for OpenSSL >= 1.1.0, access ssl->sessions->ciphers directly or parse the result from the undocumented SSL_get_shared_ciphers for older versions of OpenSSL. See also https://stackoverflow.com/questions/48717497/python-ssl-shared-ciphers-not-as-documented/48718081#48718081 ---------- messages: 311940 nosy: noxxi priority: normal severity: normal status: open title: SSL shared_ciphers implementation wrong - returns configured but not shared ciphers type: behavior versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 10 05:46:21 2018 From: report at bugs.python.org (Segev Finer) Date: Sat, 10 Feb 2018 10:46:21 +0000 Subject: [New-bugs-announce] [issue32814] smtplib.send_message mishandles 8BITMIME RFC 6152 Message-ID: <1518259581.24.0.467229070634.issue32814@psf.upfronthosting.co.za> New submission from Segev Finer : According to https://tools.ietf.org/html/rfc6152 you should only send an 8bit email body when the server advertises 8BITMIME in the EHLO, and you should declare it with BODY=8BITMIME in the MAIL command. 
The default cte_type is 8bit for the new default email policy which is what will be used by smtplib.send_message when it calls BytesGenerator.flatten, but it will not set BODY=8BITMIME even if the message has any 8-bit characters. It will only set BODY=8BITMIME if the message uses any utf-8 mail/from addresses. This means that a message with ASCII addresses and an 8-bit utf-8 body will be sent without BODY=8BITMIME (I think that's actually quite common). What should be done is checking if the server advertises 8BITMIME, and if it doesn't use cte_type=7bit; if it does we should set BODY=8BITMIME if the policy is using cte_type=8bit. I don't know whether any real email server chokes on this. But it's best to follow the RFC, I think. ---------- components: Library (Lib), email messages: 311947 nosy: Segev Finer, barry, r.david.murray priority: normal severity: normal status: open title: smtplib.send_message mishandles 8BITMIME RFC 6152 type: behavior versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 10 08:23:01 2018 From: report at bugs.python.org (Segev Finer) Date: Sat, 10 Feb 2018 13:23:01 +0000 Subject: [New-bugs-announce] [issue32815] Document text parameter to subprocess.Popen Message-ID: <1518268981.78.0.467229070634.issue32815@psf.upfronthosting.co.za> New submission from Segev Finer : The new text parameter in subprocess was documented for subprocess.run but not for subprocess.Popen in the Sphinx documentation. It is documented in the docstring for both though. ---------- assignee: docs at python components: Documentation messages: 311949 nosy: Segev Finer, docs at python priority: normal severity: normal status: open title: Document text parameter to subprocess.Popen type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 10 10:59:52 2018 From: report at bugs.python.org (Korabelnikov Aleksandr) Date: Sat, 10 Feb 2018 15:59:52 +0000 Subject: [New-bugs-announce] [issue32816] Python's json dumps/loads make integer keys of the dict str Message-ID: <1518278392.29.0.467229070634.issue32816@psf.upfronthosting.co.za> New submission from Korabelnikov Aleksandr : when i serialize and deserialize python built-in structure I'm expect output same as input arr2 = [1,2,'3'] arr2_json = json.dumps(arr2) json.loads(arr2_json) Out[16]: [1, 2, '3'] BUT when I'm tring do it with dict I got str keys instead of integer dict1 = {0: 'object0', '1': 'object2'} json1 = json.dumps(dict1) json.loads(json1) Out[6]: {'0': 'object0', '1': 'object2'} Notice keys must be [0, '1'] but actually are ['0', '1'] ---------- components: Library (Lib) messages: 311951 nosy: solin priority: normal severity: normal status: open title: Python's json dumps/loads make integer keys of the dict str type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 10 14:40:51 2018 From: report at bugs.python.org (LaurentiuS) Date: Sat, 10 Feb 2018 19:40:51 +0000 Subject: [New-bugs-announce] [issue32817] MethodType ImportError, can't use random.py Message-ID: <1518291651.45.0.467229070634.issue32817@psf.upfronthosting.co.za> New submission from LaurentiuS : Whenever I'd convert .py to .exe with pyinstaller, this error would occur. 
I have already reinstalled Python, and yet the error remains. When i import random to Python IDLE, it runs it without any problems. ---------- components: Windows files: qwerqrwe.PNG messages: 311955 nosy: LaurentiuS, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: MethodType ImportError, can't use random.py type: behavior versions: Python 3.6 Added file: https://bugs.python.org/file47435/qwerqrwe.PNG _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 10 23:27:54 2018 From: report at bugs.python.org (Ma Lin) Date: Sun, 11 Feb 2018 04:27:54 +0000 Subject: [New-bugs-announce] [issue32818] multiprocessing segmentfault under Windows compatibility mode Message-ID: <1518323274.11.0.467229070634.issue32818@psf.upfronthosting.co.za> New submission from Ma Lin : Reproduce: Right click python.exe -> properties -> compatibility tab -> enable compatibility mode (e.g. Windows 7) Then run this test will get a segmentfault: test_multiprocessing_main_handling.py CPython 3.6 is Ok. ---------- components: Windows messages: 311980 nosy: Ma Lin, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: multiprocessing segmentfault under Windows compatibility mode type: crash versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 11 08:16:21 2018 From: report at bugs.python.org (Christian Heimes) Date: Sun, 11 Feb 2018 13:16:21 +0000 Subject: [New-bugs-announce] [issue32819] match_hostname() error reporting bug Message-ID: <1518354981.31.0.467229070634.issue32819@psf.upfronthosting.co.za> New submission from Christian Heimes : Since bpo #23033, ssl.match_hostname() no longer supports partial wildcard matching, e.g. "www*.example.org". In case of a partial match, _dnsname_match() fails with a confusing/wrong error message: >>> import ssl >>> ssl._dnsname_match('www*.example.com', 'www1.example.com') Traceback (most recent call last): File "", line 1, in File ".../cpython/Lib/ssl.py", line 198, in _dnsname_match "wildcard can only be present in the leftmost segment: " + repr(dn)) ssl.SSLCertVerificationError: ("wildcard can only be present in the leftmost segment: 'www*.example.com'",) The wildcard *is* in the leftmost segment. But it's not a full match but a partial match. 
The error message applies to a SAN dNSName like "*.*.example.org" or "www.*.example.com", however the function does not raise an error for multiple or non left-most wildcards: # multiple wildcards return None >>> ssl._dnsname_match('*.*.example.com', 'www.sub.example.com') # single wildcard in another label returns False >>> ssl._dnsname_match('www.*.example.com', 'www.sub.example.com') False ---------- assignee: christian.heimes components: SSL messages: 311996 nosy: christian.heimes priority: normal severity: normal status: open title: match_hostname() error reporting bug type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 11 10:45:54 2018 From: report at bugs.python.org (Eric Osborne) Date: Sun, 11 Feb 2018 15:45:54 +0000 Subject: [New-bugs-announce] [issue32820] Add binary methods to ipaddress Message-ID: <1518363954.67.0.467229070634.issue32820@psf.upfronthosting.co.za> New submission from Eric Osborne : This issue adds two things to the ipaddress lib: * an __index__ method to allow bin(), oct(), hex() to work * a property method to get the fully padded (32-bit or 128-bit) address as a string ---------- components: Library (Lib) messages: 311997 nosy: ewosborne priority: normal severity: normal status: open title: Add binary methods to ipaddress type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 11 12:08:39 2018 From: report at bugs.python.org (Mario Corchero) Date: Sun, 11 Feb 2018 17:08:39 +0000 Subject: [New-bugs-announce] [issue32821] Add snippet on how to configure a "split" stream for console Message-ID: <1518368919.27.0.467229070634.issue32821@psf.upfronthosting.co.za> New submission from Mario Corchero : As discussed in python-ideas, it would be good to have a recipe on how to configure the logging stack to be able to log ``ERROR`` and above to stderr and ``INFO`` and below to stdout. It was proposed to add it into the cookbook rather than on the standard library. An alternative proposal was a "ConsoleHandler" that would do this by default. (https://github.com/mariocj89/cpython/commit/501cfcd0f4cad1e04d87b89784988c52a77a80ad) ---------- assignee: docs at python components: Documentation messages: 312003 nosy: docs at python, mariocj89 priority: normal severity: normal status: open title: Add snippet on how to configure a "split" stream for console versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 11 17:18:17 2018 From: report at bugs.python.org (David Rebbe) Date: Sun, 11 Feb 2018 22:18:17 +0000 Subject: [New-bugs-announce] [issue32822] finally block doesn't re-raise exception if return statement exists inside Message-ID: <1518387497.4.0.467229070634.issue32822@psf.upfronthosting.co.za> New submission from David Rebbe : According to the docs: "When an exception has occurred in the try clause and has not been handled by an except clause (or it has occurred in an except or else clause), it is re-raised after the finally clause has been executed." https://docs.python.org/2/tutorial/errors.html#defining-clean-up-actions This seems to not be the case if return inside a finally block, the exception needing to be thrown looks like its tossed out. I'm not sure if this is intended behavior and the docs need to be updated or python isn't doing the correct behavior. 
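A minimal illustration of the behavior being described (a sketch, independent of the attached finally_test.py):

def swallow():
    try:
        raise ValueError('boom')
    finally:
        # The return here discards the ValueError that is propagating,
        # so the caller never sees it.
        return 'done'

print(swallow())  # prints 'done'; no exception reaches the caller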
Python 3.4.4 (v3.4.4:737efcadf5a6, Dec 20 2015, 19:28:18) [MSC v.1600 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. ---------- components: Interpreter Core files: finally_test.py messages: 312014 nosy: David Rebbe priority: normal severity: normal status: open title: finally block doesn't re-raise exception if return statement exists inside type: behavior versions: Python 3.4 Added file: https://bugs.python.org/file47438/finally_test.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 11 18:30:54 2018 From: report at bugs.python.org (Terry J. Reedy) Date: Sun, 11 Feb 2018 23:30:54 +0000 Subject: [New-bugs-announce] [issue32823] Regression in test -j behavior and time in 3.7.0b1 Message-ID: <1518391854.25.0.467229070634.issue32823@psf.upfronthosting.co.za> New submission from Terry J. Reedy : For me, on Windows, recent changes, since 3.7.0a4, have greatly reduced the benefit of j0 (= j14 on my system). For 3.6.4 (installed), the times are around 13:30 and 2:20. The times for 3.7.0a4 (installed) are about the same parallel and would have been for serial if not for a crash. For 3.7.0b1 and master, compiled with debug on, the times are around 24:00 and 11:10. As shown by the increase for serial from 13:30 to 24:00, debug by itself slows tests down by nearly half. For parallel, the slowdown from 2:20 to 11:40, is increased by an extra factor of more than 2. The reason is that parallel tests are done in batches of n (at least on windows). Before, a new test started in one of the 14 processes whenever the test in a process finished. All 12 cpus were kept busy until less than 12 tests remained. Now, 14 new tests are started when all the previous 14 finish. Therefore, cpus wait while the slowest test in a batch of 14 finishes. Beginning of old log: Run tests in parallel using 14 child processes 0:00:00 [ 1/413] test__opcode passed 0:00:00 [ 2/413] test__locale passed 0:00:00 [ 3/413] test__osx_support passed 0:00:00 [ 4/413] test_abc passed 0:00:01 [ 5/413] test_abstract_numbers passed 0:00:01 [ 6/413] test_aifc passed 0:00:02 [ 7/413] test_array passed 0:00:02 [ 8/413] test_asdl_parser skipped test_asdl_parser skipped -- test irrelevant for an installed Python 0:00:03 [ 9/413] test_argparse passed 0:00:04 [ 10/413] test_ast passed 0:00:04 [ 11/413] test_asyncgen passed 0:00:05 [ 12/413] test_unittest passed 0:00:06 [ 13/413] test_asynchat passed 0:00:06 [ 14/413] test_atexit passed 0:00:06 [ 15/413] test_audioop passed 0:00:06 [ 16/413] test_augassign passed 0:00:07 [ 17/413] test_asyncore passed 0:00:07 [ 18/413] test_baseexception passed 0:00:07 [ 19/413] test_base64 passed ... 
0:00:29 [107/413] test_enum passed 0:00:30 [108/413] test_enumerate passed 0:00:30 [109/413] test_eof passed 0:00:30 [110/413] test_epoll skipped test_epoll skipped -- test works only on Linux 2.6 0:00:30 [111/413] test_errno passed Beginning of new log: Run tests in parallel using 14 child processes running: test_grammar (30 sec), test_opcodes (30 sec), test_dict (30 sec), test_builtin (30 sec), test_exceptions (30 sec), test_types (30 sec), test_unittest (30 sec), test_doctest (30 sec), test_doctest2 (30 sec), test_support (30 sec), test___all__ (30 sec), test___future__ (30 sec), test__locale (30 sec), test__opcode (30 sec) 0:00:41 [ 1/414] test_support passed -- running: test_grammar (41 sec), test_opcodes (41 sec), test_dict (41 sec), test_builtin (41 sec), test_exceptions (41 sec), test_types (41 sec), test_doctest (41 sec), test___all__ (41 sec), test___future__ (41 sec), test__locale (41 sec), test__opcode (41 sec) 0:00:41 [ 2/414] test_doctest2 passed -- running: test_grammar (41 sec), test___all__ (41 sec), test__locale (41 sec) 0:00:41 [ 3/414] test_unittest passed -- running: test___all__ (41 sec), test__locale (41 sec) 0:00:41 [ 4/414] test__opcode passed 0:00:41 [ 5/414] test_dict passed 0:00:41 [ 6/414] test_types passed 0:00:41 [ 7/414] test___future__ passed 0:00:41 [ 8/414] test_builtin passed 0:00:41 [ 9/414] test_doctest passed 0:00:41 [ 10/414] test_opcodes passed 0:00:41 [ 11/414] test_exceptions passed 0:00:41 [ 12/414] test_grammar passed 0:00:41 [ 13/414] test___all__ passed (40 sec) 0:00:41 [ 14/414] test__locale passed The slowest test took 40 sec, and the rest of the cpus waited. [snip list of running tests] 0:01:25 [ 15/414] test_audioop passed 0:01:25 [ 17/414] test_abstract_numbers passed 0:01:25 [ 18/414] test_abc passed 0:01:25 [ 19/414] test_aifc passed 0:01:25 [ 20/414] test_asdl_parser passed 0:01:25 [ 21/414] test_asyncgen passed 0:01:25 [ 22/414] test_atexit passed 0:01:25 [ 23/414] test_asyncio passed (42 sec) [snip output] 0:01:25 [ 25/414] test_ast passed 0:01:25 [ 26/414] test_asynchat passed 0:01:25 [ 27/414] test_array passed 0:01:25 [ 28/414] test__osx_support passed 28 tests done in 85 seconds versus 111 in 30 seconds. I think whatever caused this should be reversed. ---------- components: Tests messages: 312020 nosy: ncoghlan, terry.reedy, vstinner priority: normal severity: normal status: open title: Regression in test -j behavior and time in 3.7.0b1 type: performance versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 11 18:42:34 2018 From: report at bugs.python.org (Frank Griswold) Date: Sun, 11 Feb 2018 23:42:34 +0000 Subject: [New-bugs-announce] [issue32824] Docs: Using Python on a Macintosh has bad info per Apple site Message-ID: <1518392554.24.0.467229070634.issue32824@psf.upfronthosting.co.za> New submission from Frank Griswold : This chunk of docs has bad info in both Python2 and Python3 docs: 4.1.3. Configuration Python on OS X honors all standard Unix environment variables such as PYTHONPATH, but setting these variables for programs started from the Finder is non-standard as the Finder does not read your .profile or .cshrc at startup. You need to create a file ~/.MacOSX/environment.plist. See Apple?s Technical Document QA1067 for details. If you search for QA1067, you are informed that the document is legacy and unsupported, with a suggestion for where to look now. That suggested link leads to a 404. 
Searching the apple site, I find that at least some thoughtful developers think that configuring the environment isn't even possible, generally; and isn't considered good form even if so. Here: https://forums.developer.apple.com/message/217422 I have no problem setting things for my terminal, as a longtime (unix) user, but for others, this section probably needs a complete examination with an eye toward making it current. quite possibly by reorganizing it. ---------- assignee: docs at python components: Documentation messages: 312023 nosy: docs at python, griswolf priority: normal severity: normal status: open title: Docs: Using Python on a Macintosh has bad info per Apple site type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 12 05:14:02 2018 From: report at bugs.python.org (mps) Date: Mon, 12 Feb 2018 10:14:02 +0000 Subject: [New-bugs-announce] [issue32825] warn user of creation of multiple Tk instances Message-ID: <1518430442.75.0.467229070634.issue32825@psf.upfronthosting.co.za> New submission from mps : tkinter is the first GUI interface used by novices. They often get in trouble when they create a new Tk instance instead of a Toplevel. It would be helpful to output a warning message in this case (i.e. checking _default_root is not None and _support_default_root is True in the Tk initialization). Thank for your attention and best regards. - mps. ---------- components: Tkinter messages: 312036 nosy: mps priority: normal severity: normal status: open title: warn user of creation of multiple Tk instances type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 12 05:43:23 2018 From: report at bugs.python.org (Matthias Klose) Date: Mon, 12 Feb 2018 10:43:23 +0000 Subject: [New-bugs-announce] [issue32826] idle test fails at least on the 3.6 branch Message-ID: <1518432203.1.0.467229070634.issue32826@psf.upfronthosting.co.za> New submission from Matthias Klose : seen with the 3.6 branch 20180212, last known succeeding test is the 3.6.4 release. test test_idle failed -- Traceback (most recent call last): File "/home/packages/python/3.6/python3.6-3.6.4/Lib/idlelib/idle_test/test_help_about.py", line 78, in test_file_buttons self.assertEqual(f.readline().strip(), get('1.0', '1.end')) File "/home/packages/python/3.6/python3.6-3.6.4/build-debug/../Lib/encodings/ascii.py", line 26, in decode return codecs.ascii_decode(input, self.errors)[0] UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 1540: ordinal not in range(128) ---------- assignee: terry.reedy components: IDLE messages: 312037 nosy: doko, terry.reedy priority: normal severity: normal status: open title: idle test fails at least on the 3.6 branch versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 12 06:47:03 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Mon, 12 Feb 2018 11:47:03 +0000 Subject: [New-bugs-announce] [issue32827] Fix incorrect usage of _PyUnicodeWriter_Prepare() Message-ID: <1518436023.7.0.467229070634.issue32827@psf.upfronthosting.co.za> New submission from Serhiy Storchaka : Based on discussion in PR 660. _PyUnicodeWriter_Prepare() is used incorrectly in unicode_decode_call_errorhandler_writer(), _PyUnicode_DecodeUnicodeEscape() and PyUnicode_DecodeRawUnicodeEscape() in Objects/unicodeobject.c. 
The second argument is the number of characters that should be reserved after the current position. But in these places the total minimal size is passed to _PyUnicodeWriter_Prepare(). This can lead to allocating more memory than necessary. ---------- components: Interpreter Core, Unicode messages: 312038 nosy: ezio.melotti, serhiy.storchaka, vstinner, xiang.zhang priority: normal severity: normal status: open title: Fix incorrect usage of _PyUnicodeWriter_Prepare() type: resource usage versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 12 06:54:27 2018 From: report at bugs.python.org (=?utf-8?b?0JTQuNC70Y/QvSDQn9Cw0LvQsNGD0LfQvtCy?=) Date: Mon, 12 Feb 2018 11:54:27 +0000 Subject: [New-bugs-announce] [issue32828] compress "True if bool(x) else False" expressions Message-ID: <1518436467.62.0.467229070634.issue32828@psf.upfronthosting.co.za> New submission from ????? ???????? : diff --git a/Lib/_strptime.py b/Lib/_strptime.py --- a/Lib/_strptime.py +++ b/Lib/_strptime.py @@ -525,7 +525,7 @@ def _strptime(data_string, format="%a %b %d %H:%M:%S %Y"): # out the Julian day of the year. if julian is None and weekday is not None: if week_of_year is not None: - week_starts_Mon = True if week_of_year_start == 0 else False + week_starts_Mon = week_of_year_start == 0 julian = _calc_julian_from_U_or_W(year, week_of_year, weekday, week_starts_Mon) elif iso_year is not None and iso_week is not None: diff --git a/Lib/email/generator.py b/Lib/email/generator.py --- a/Lib/email/generator.py +++ b/Lib/email/generator.py @@ -59,7 +59,7 @@ class Generator: """ if mangle_from_ is None: - mangle_from_ = True if policy is None else policy.mangle_from_ + mangle_from_ = policy is None or policy.mangle_from_ self._fp = outfp self._mangle_from_ = mangle_from_ self.maxheaderlen = maxheaderlen diff --git a/Lib/test/test_buffer.py b/Lib/test/test_buffer.py --- a/Lib/test/test_buffer.py +++ b/Lib/test/test_buffer.py @@ -576,7 +576,7 @@ def rand_aligned_slices(maxdim=5, maxshape=16): minshape = 0 elif n >= 90: minshape = 1 - all_random = True if randrange(100) >= 80 else False + all_random = randrange(100) >= 80 lshape = [0]*ndim; rshape = [0]*ndim lslices = [0]*ndim; rslices = [0]*ndim diff --git a/Lib/test/test_decimal.py b/Lib/test/test_decimal.py --- a/Lib/test/test_decimal.py +++ b/Lib/test/test_decimal.py @@ -117,7 +117,7 @@ skip_expected = not os.path.isdir(directory) EXTENDEDERRORTEST = False # Test extra functionality in the C version (-DEXTRA_FUNCTIONALITY). 
-EXTRA_FUNCTIONALITY = True if hasattr(C, 'DecClamped') else False +EXTRA_FUNCTIONALITY = hasattr(C, 'DecClamped') requires_extra_functionality = unittest.skipUnless( EXTRA_FUNCTIONALITY, "test requires build with -DEXTRA_FUNCTIONALITY") skip_if_extra_functionality = unittest.skipIf( @@ -1455,7 +1455,7 @@ class ArithmeticOperatorsTest(unittest.TestCase): for x, y in qnan_pairs + snan_pairs: for op in order_ops + equality_ops: got = op(x, y) - expected = True if op is operator.ne else False + expected = op is operator.ne self.assertIs(expected, got, "expected {0!r} for operator.{1}({2!r}, {3!r}); " "got {4!r}".format( @@ -1468,7 +1468,7 @@ class ArithmeticOperatorsTest(unittest.TestCase): for x, y in qnan_pairs: for op in equality_ops: got = op(x, y) - expected = True if op is operator.ne else False + expected = op is operator.ne self.assertIs(expected, got, "expected {0!r} for " "operator.{1}({2!r}, {3!r}); " diff --git a/Lib/test/test_winreg.py b/Lib/test/test_winreg.py --- a/Lib/test/test_winreg.py +++ b/Lib/test/test_winreg.py @@ -20,13 +20,13 @@ except (IndexError, ValueError): # tuple of (major, minor) WIN_VER = sys.getwindowsversion()[:2] # Some tests should only run on 64-bit architectures where WOW64 will be. -WIN64_MACHINE = True if machine() == "AMD64" else False +WIN64_MACHINE = machine() == "AMD64" # Starting with Windows 7 and Windows Server 2008 R2, WOW64 no longer uses # registry reflection and formerly reflected keys are shared instead. # Windows 7 and Windows Server 2008 R2 are version 6.1. Due to this, some # tests are only valid up until 6.1 -HAS_REFLECTION = True if WIN_VER < (6, 1) else False +HAS_REFLECTION = WIN_VER < (6, 1) # Use a per-process key to prevent concurrent test runs (buildbot!) from # stomping on each other. ---------- components: Build messages: 312039 nosy: dilyan.palauzov priority: normal severity: normal status: open title: compress "True if bool(x) else False" expressions type: performance versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 12 07:06:40 2018 From: report at bugs.python.org (=?utf-8?b?0JTQuNC70Y/QvSDQn9Cw0LvQsNGD0LfQvtCy?=) Date: Mon, 12 Feb 2018 12:06:40 +0000 Subject: [New-bugs-announce] [issue32829] Lib/ be more pythonic Message-ID: <1518437200.03.0.467229070634.issue32829@psf.upfronthosting.co.za> New submission from ????? ???????? 
: diff --git a/Lib/distutils/command/sdist.py b/Lib/distutils/command/sdist.py --- a/Lib/distutils/command/sdist.py +++ b/Lib/distutils/command/sdist.py @@ -251,14 +251,11 @@ class sdist(Command): for fn in standards: if isinstance(fn, tuple): alts = fn - got_it = False for fn in alts: if self._cs_path_exists(fn): - got_it = True self.filelist.append(fn) break - - if not got_it: + else: self.warn("standard file not found: should have one of " + ', '.join(alts)) else: diff --git a/Lib/email/_header_value_parser.py b/Lib/email/_header_value_parser.py --- a/Lib/email/_header_value_parser.py +++ b/Lib/email/_header_value_parser.py @@ -567,14 +567,7 @@ class DisplayName(Phrase): @property def value(self): - quote = False - if self.defects: - quote = True - else: - for x in self: - if x.token_type == 'quoted-string': - quote = True - if quote: + if self.defects or any(x.token_type == 'quoted-string' for x in self): pre = post = '' if self[0].token_type=='cfws' or self[0][0].token_type=='cfws': pre = ' ' diff --git a/Lib/idlelib/config.py b/Lib/idlelib/config.py --- a/Lib/idlelib/config.py +++ b/Lib/idlelib/config.py @@ -402,7 +402,7 @@ class IdleConf: because setting 'name' to a builtin not defined in older IDLEs to display multiple error messages or quit. See https://bugs.python.org/issue25313. - When default = True, 'name2' takes precedence over 'name', + When default is True, 'name2' takes precedence over 'name', while older IDLEs will just use name. When default = False, 'name2' may still be set, but it is ignored. """ ---------- components: Build messages: 312040 nosy: dilyan.palauzov priority: normal severity: normal status: open title: Lib/ be more pythonic type: performance versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 12 07:51:45 2018 From: report at bugs.python.org (ChrisRands) Date: Mon, 12 Feb 2018 12:51:45 +0000 Subject: [New-bugs-announce] [issue32830] tkinter documentation suggests "from tkinter import *", contradicting PEP8 Message-ID: <1518439905.14.0.467229070634.issue32830@psf.upfronthosting.co.za> New submission from ChrisRands : Issue arose from this SO post: https://stackoverflow.com/questions/48746351/documentation-is-contradicting-pep8 The tkinter documentation suggests: from tkinter import * https://docs.python.org/3/library/tkinter.html But this obviously contradicts PEP8: "Wildcard imports (from import *) should be avoided" https://www.python.org/dev/peps/pep-0008/#imports Is tkinter a valid exception or is this a documentation bug? The commit of this line to the documentation is >10 years old (at least Python 2.4 I think): https://github.com/python/cpython/commit/116aa62bf54a39697e25f21d6cf6799f7faa1349#diff-05a258c160de90c51c1948689f788ef7R53 ---------- assignee: docs at python components: Documentation messages: 312045 nosy: ChrisRands, docs at python priority: normal severity: normal status: open title: tkinter documentation suggests "from tkinter import *", contradicting PEP8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 12 08:08:11 2018 From: report at bugs.python.org (Cheryl Sabella) Date: Mon, 12 Feb 2018 13:08:11 +0000 Subject: [New-bugs-announce] [issue32831] IDLE: Add docstrings and tests for codecontext Message-ID: <1518440891.7.0.467229070634.issue32831@psf.upfronthosting.co.za> New submission from Cheryl Sabella : Add docstrings and tests for codecontext.py. 
---------- assignee: terry.reedy components: IDLE messages: 312046 nosy: csabella, terry.reedy priority: normal severity: normal status: open title: IDLE: Add docstrings and tests for codecontext type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 12 08:49:39 2018 From: report at bugs.python.org (Sergey B Kirpichev) Date: Mon, 12 Feb 2018 13:49:39 +0000 Subject: [New-bugs-announce] [issue32832] doctest should support custom ps1/ps2 prompts Message-ID: <1518443379.42.0.467229070634.issue32832@psf.upfronthosting.co.za> New submission from Sergey B Kirpichev : The Python stdlib allows override of sys.ps1/ps2 (to make IPython-like dynamic prompts and so on). I believe it would be a good idea to support this in doctest too, to cover cases when given application uses different from defaults settings for the interpreter. Probably, we could add ps1/2 optional arguments for DoctestParser.__init__(). Some projects already patch doctest module for this, e.g. IPython: https://github.com/ipython/ipython/blob/master/IPython/testing/plugin/ipdoctest.py It shouldn't be too hard to port this feature. ---------- components: Library (Lib) messages: 312053 nosy: Sergey.Kirpichev priority: normal severity: normal status: open title: doctest should support custom ps1/ps2 prompts _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 12 10:07:34 2018 From: report at bugs.python.org (=?utf-8?q?Krzysztof_Leszczy=C5=84ski?=) Date: Mon, 12 Feb 2018 15:07:34 +0000 Subject: [New-bugs-announce] [issue32833] argparse doesn't recognise two option aliases as equal Message-ID: <1518448054.3.0.467229070634.issue32833@psf.upfronthosting.co.za> Change by Krzysztof Leszczy?ski : ---------- components: Library (Lib) nosy: Krzysztof Leszczy?ski priority: normal severity: normal status: open title: argparse doesn't recognise two option aliases as equal type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 12 14:43:13 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Mon, 12 Feb 2018 19:43:13 +0000 Subject: [New-bugs-announce] [issue32834] test_gdb fails with Posix locale in 3.7 Message-ID: <1518464593.71.0.467229070634.issue32834@psf.upfronthosting.co.za> New submission from Serhiy Storchaka : $ LC_ALL=C ./python -We -m test -vuall -m test_strings test_gdb ... ====================================================================== FAIL: test_strings (test.test_gdb.PrettyPrintTests) Verify the pretty-printing of unicode strings ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/serhiy/py/cpython3.7/Lib/test/test_gdb.py", line 331, in test_strings check_repr('\u2620') File "/home/serhiy/py/cpython3.7/Lib/test/test_gdb.py", line 323, in check_repr self.assertGdbRepr(text) File "/home/serhiy/py/cpython3.7/Lib/test/test_gdb.py", line 271, in assertGdbRepr % (gdb_repr, exp_repr, gdb_output))) AssertionError: "'\\u2620'" != "'?'" - '\u2620' + '?' : "'\\u2620'" did not equal expected "'?'"; full output was: Breakpoint 1 at 0x11a4df: file Python/bltinmodule.c, line 1215. [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1". 
Breakpoint 1, builtin_id (self=self at entry=, v='\u2620') at Python/bltinmodule.c:1215 1215 { #0 builtin_id (self=, v='\u2620') at Python/bltinmodule.c:1215 ---------------------------------------------------------------------- This looks to be related to PEP 538. ---------- messages: 312069 nosy: ncoghlan, serhiy.storchaka priority: normal severity: normal status: open title: test_gdb fails with Posix locale in 3.7 type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 12 16:12:23 2018 From: report at bugs.python.org (Jay Yin) Date: Mon, 12 Feb 2018 21:12:23 +0000 Subject: [New-bugs-announce] [issue32835] Add documention mentioning that Cygwin isn't fully compatible Message-ID: <1518469943.71.0.467229070634.issue32835@psf.upfronthosting.co.za> New submission from Jay Yin : I didn't find any documentation stating that Cygwin isn't currently compatible with building. I was wondering if it would be good to add documentation stating this and that it would be an area requiring help. ---------- assignee: docs at python components: Documentation messages: 312080 nosy: docs at python, jayyin11043 priority: normal severity: normal status: open title: Add documention mentioning that Cygwin isn't fully compatible versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 12 16:33:51 2018 From: report at bugs.python.org (Martijn Pieters) Date: Mon, 12 Feb 2018 21:33:51 +0000 Subject: [New-bugs-announce] [issue32836] Symbol table for comprehensions (list, dict, set) still includes temporary _[1] variable Message-ID: <1518471231.56.0.467229070634.issue32836@psf.upfronthosting.co.za> New submission from Martijn Pieters : In Python 2.6, a list comprehension was implemented in the current scope using a temporary _[1] variable to hold the list object: >>> import dis >>> dis.dis(compile('[x for x in y]', '?', 'exec')) 1 0 BUILD_LIST 0 3 DUP_TOP 4 STORE_NAME 0 (_[1]) 7 LOAD_NAME 1 (y) 10 GET_ITER >> 11 FOR_ITER 13 (to 27) 14 STORE_NAME 2 (x) 17 LOAD_NAME 0 (_[1]) 20 LOAD_NAME 2 (x) 23 LIST_APPEND 24 JUMP_ABSOLUTE 11 >> 27 DELETE_NAME 0 (_[1]) 30 POP_TOP 31 LOAD_CONST 0 (None) 34 RETURN_VALUE Nick Coghlan moved comprehensions into a separate scope in #1660500, and removed the need for a temporary variable in the process (the list / dict / set lives only on the stack). However, the symbol table generates the _[1] name: >>> import symtable >>> symtable.symtable('[x for x in y]', '?', 'exec').get_children()[0].get_symbols() [<symbol '_[1]'>, <symbol 'x'>, <symbol 'y'>] Can this be dropped? I think all temporary variable handling can be ripped out. ---------- messages: 312081 nosy: mjpieters priority: normal severity: normal status: open title: Symbol table for comprehensions (list, dict, set) still includes temporary _[1] variable _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 12 17:07:39 2018 From: report at bugs.python.org (Terry J. Reedy) Date: Mon, 12 Feb 2018 22:07:39 +0000 Subject: [New-bugs-announce] [issue32837] IDLE: require encoding argument for textview.view_file Message-ID: <1518473259.79.0.467229070634.issue32837@psf.upfronthosting.co.za> New submission from Terry J.
Reedy : Built-in open has an encoding parameter whose default value depends on the system: 'ascii' for some POSIX locales; 'latin1' or similar for most Windows sold in the USA or western Europe; and ???. In idlelib.textview, the signature for view_file currently includes 'encoding=None'. There have been 2 issues, #32826 and another, about tests using the default failing because of 'Löwis' on line 27 of CREDITS.txt. It therefore seems an error for a global cross-platform application to use the default encoding. To prevent this, we should remove '=None' from the encoding part of the view_file definition and make view_file calls explicitly pass an encoding. For IDLE itself, this will be 'ascii' or 'utf-8'. This expands upon a note by Cheryl Sabella in #32826 about one of the three calls that fail with the change until fixed. I will not default to 'utf-8' because 'ascii' catches erroneous non-ascii characters in ascii-only files. For instance, a draft of README.txt was prepared with an editor that replaced ascii " and " with left and right quotes. I will not restrict the encoding otherwise because there might be external uses of the file that use other encodings. PR to follow as soon as I get bpo number. ---------- assignee: terry.reedy components: IDLE messages: 312085 nosy: terry.reedy priority: normal severity: normal stage: commit review status: open title: IDLE: require encoding argument for textview.view_file type: behavior versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 13 06:09:44 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Tue, 13 Feb 2018 11:09:44 +0000 Subject: [New-bugs-announce] [issue32838] Fix Python versions in the table of magic numbers Message-ID: <1518520184.53.0.467229070634.issue32838@psf.upfronthosting.co.za> New submission from Serhiy Storchaka : All changes to the magic number in Lib/importlib/_bootstrap_external.py are attributed with 3.7a0. But they were made at different stages. There is also inconsistency in specifying Python version for older changes. Some authors specify the first Python version after bumping the magic number (i.e. the version that was released with this magic number). Others specify the Python version after which the bumping was made. I think the former is correct. The proposed PR fixes Python versions and adds issue numbers if known. I have tracked only changes in Python 3.x. ---------- messages: 312115 nosy: benjamin.peterson, pitrou, serhiy.storchaka priority: normal severity: normal status: open title: Fix Python versions in the table of magic numbers versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 13 08:38:15 2018 From: report at bugs.python.org (Cheryl Sabella) Date: Tue, 13 Feb 2018 13:38:15 +0000 Subject: [New-bugs-announce] [issue32839] Add after_info as a function to tkinter Message-ID: <1518529095.33.0.467229070634.issue32839@psf.upfronthosting.co.za> New submission from Cheryl Sabella : In tkinter, after_cancel has a call to after info: data = self.tk.call('after', 'info', id) Since this is a supported command, there should be a function to access it directly.
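A sketch of what such a method on tkinter.Misc could look like (hypothetical; the name, docstring and monkeypatch are only illustrative, and simply expose the Tcl 'after info' command):

import tkinter

def after_info(self, id=None):
    """Return information about existing 'after' events.

    Without an argument, return a tuple with the identifiers of all
    pending events.  With an identifier, return the (script, type)
    pair registered for that event, or raise TclError if it does
    not exist.
    """
    return self.tk.splitlist(self.tk.call('after', 'info', id))

# Hypothetical: attach the helper to Misc so every widget gains it.
tkinter.Misc.after_info = after_info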
https://www.tcl.tk/man/tcl8.6/TclCmd/after.htm ---------- components: Tkinter messages: 312119 nosy: csabella priority: normal severity: normal status: open title: Add after_info as a function to tkinter type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 13 15:10:38 2018 From: report at bugs.python.org (J. Morton) Date: Tue, 13 Feb 2018 20:10:38 +0000 Subject: [New-bugs-announce] [issue32840] Must install python 3.6.3 when 3.6.4 already installed Message-ID: <1518552638.75.0.467229070634.issue32840@psf.upfronthosting.co.za> New submission from J. Morton : Got a 0x8070666 "Setup Failed - another version installed" popup message when installing 3.6.3 with 3.6.4 and 3.5.1 already installed (all are "just for me" installs). The problem is independent of word length. It should be possible to install any/every earlier version (within reason) provided the earlier version is "virtualized" by being placed in it's own folder (which I was what I was doing). Please accomodate this capability as it is often needed to support users using earlier versions of Python. Environment: 64 bit Win7 Enterprise SP1 on a IM controlled machine. Possible red herring: IM recently upgraded java on this machine to "1.8.0_151". 32 and 64 bit versions of all releases are installed on this machine. ---------- components: Installation messages: 312138 nosy: NaCl priority: normal severity: normal status: open title: Must install python 3.6.3 when 3.6.4 already installed type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 13 16:30:02 2018 From: report at bugs.python.org (Bar Harel) Date: Tue, 13 Feb 2018 21:30:02 +0000 Subject: [New-bugs-announce] [issue32841] Asyncio.Condition prevents cancellation Message-ID: <1518557402.2.0.467229070634.issue32841@psf.upfronthosting.co.za> New submission from Bar Harel : Hey guys, A week after a serious asyncio.Lock bug, I found another bug that makes asyncio.Condition ignore and prevent cancellation in some cases due to an "except: pass" which tbh is a little embarrassing. What happens is that during the time it takes to get back a conditional lock after notifying it, asyncio completely ignores all cancellations sent to the waiting task. le_bug.py: Contains the bug le_patch.diff: Contains a very simple fix (will send a pull on Github too) le_meme.jpg: Contains my face after debugging this for 4 hours Yuri, I hope you didn't miss me during this week ;-) -- Bar ---------- components: asyncio files: le_bug.py messages: 312140 nosy: asvetlov, bar.harel, yselivanov priority: normal severity: normal status: open title: Asyncio.Condition prevents cancellation type: behavior versions: Python 3.6, Python 3.7, Python 3.8 Added file: https://bugs.python.org/file47439/le_bug.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 13 21:06:41 2018 From: report at bugs.python.org (YoSTEALTH) Date: Wed, 14 Feb 2018 02:06:41 +0000 Subject: [New-bugs-announce] [issue32842] Fixing epoll timeout logics Message-ID: <1518574001.96.0.467229070634.issue32842@psf.upfronthosting.co.za> New submission from YoSTEALTH : # current if timeout is None: timeout = -1 elif timeout <= 0: timeout = 0 # changed if timeout is None: timeout = -1 elif timeout < -1: timeout = 0 what if "timeout=-1" ? 
- currently it would result in being timeout=0 ---------- messages: 312150 nosy: YoSTEALTH priority: normal severity: normal status: open title: Fixing epoll timeout logics type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 14 06:18:06 2018 From: report at bugs.python.org (Cheryl Sabella) Date: Wed, 14 Feb 2018 11:18:06 +0000 Subject: [New-bugs-announce] [issue32843] More revisions to test.support docs Message-ID: <1518607086.66.0.467229070634.issue32843@psf.upfronthosting.co.za> New submission from Cheryl Sabella : Serhiy had made the following comments on the pull request for issue11015, but that PR was merged before applying his requested changes. This issue is to address his concerns. TESTFN_NONASCII - How different from TESTFN_UNICODE? PGO - value? True/False? TEST_SUPPORT_DIR - Not used. max_memuse - Not used. MISSING_C_DOCSTRINGS - Remove `Return` since it's a constant. HAVE_DOCSTRINGS - Remove `Check` since it's a constant. Possibly group the next 3 to avoid repetition: unlink(filename) - Add an explanation why this is needed (due to antiviruses that can hold files open and prevent thy deletion). rmdir(filename) - Add an explanation why this is needed (due to antiviruses that can hold files open and prevent thy deletion). rmtree(path) - Add an explanation why this is needed (due to antiviruses that can hold files open and prevent thy deletion). make_legacy_pyc(source) - Add markup references to PEPs. system_must_validate_cert(f) - This is a decorator and change text to "Skip the test on TLS certificate validation failures.". match_test(test) and set_match_tests(patterns) - Used internally to regrtest. Remove or ask Victor to document as the current documentation is useless. check_impl_detail(**guards) - Document that a boolean is returned and check the docstring to include more information. get_original_stdout - Add (). strip_python_strerr(stderr) - Typo `strip_python_stderr` and make it clear that *stderr* is a byte string. disable_faulthandler() - Wrong description. disable_gc() - Add that it works only if it was enabled. start_threads(threads, unlock=None) - Document that *threads* is a sequence of threads and document what unlock is. calcobjsize(fmt) - Add the purpose of this - calcobjsize() returns the size of Python object (PyObject) whose structure is defined by fmt with account of Python object header. calcvobjsize(fmt) - Same as above for PyVarObject. requires_freebsd_version(*min_version) - Change to skip the test instead of Raise. cpython_only(test) - Remove argument. no_tracing(func) - Remove argument. refcount_test(test) - Remove argument. reap_threads(func) - Remove argument. bigaddrspacetest(f) - Remove argument. import_module(name, deprecated=False, *, required_on()) - Missed = check_free_after_iterating(test, iter, cls, args=()) - This description is misleading. This function doesn't test iter. iter is either iter or reversed, cls is a base iterable class, args are arguments for its constructor. The true description is too complex, I suggest to remove this function from the documentation. missing_compiler_executable(cmd_names=[]) - It is used only in distutils tests and should be in distutils.tests.support. CleanImport(*module_names) - Format DeprecationWarning as a link. DirsOnSysPath(*paths) - Format first sys.path as a link, Keeps the first one (in the above sentence) and format other mentions as ``sys.path`` or :data:`!sys.path`. 
SaveSignals() - How to use it? Matcher() - This class is used only in test_logging, and only with TestHandler. These classes should be documented together and with references to logging. I'm not sure they should be in test.support. BasicTestRunner() - This is an internal class used for implementing run_unittest(). No need to expose it. TestHandler(logging.handlers.BufferingHandler) - I'm not sure it should be in test.support rather of test_logging. assert_python_ok(*args, **env_vars) - Since this is a keyword argument name, it should be formatter as *__cleanenv*. ---------- assignee: docs at python components: Documentation messages: 312170 nosy: csabella, docs at python, ncoghlan, serhiy.storchaka, terry.reedy priority: normal severity: normal status: open title: More revisions to test.support docs type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 14 16:10:48 2018 From: report at bugs.python.org (Alexey Izbyshev) Date: Wed, 14 Feb 2018 21:10:48 +0000 Subject: [New-bugs-announce] [issue32844] subprocess may incorrectly redirect a low fd to stderr if another low fd is closed Message-ID: <1518642648.87.0.467229070634.issue32844@psf.upfronthosting.co.za> New submission from Alexey Izbyshev : When redirecting, subprocess attempts to achieve the following state: each fd to be redirected to is less than or equal to the fd it is redirected from, which is necessary because redirection occurs in the ascending order of destination descriptors. It fails to do so if a low fd (< 2) is redirected to stderr and another low fd is closed, which may lead to an incorrect redirection, for example: $ cat test.py import os import subprocess import sys os.close(0) subprocess.call([sys.executable, '-c', 'import sys; print("Hello", file=sys.stderr)'], stdin=2, stderr=1) $ python3 test.py 2>/dev/null $ python3 test.py >/dev/null Hello Expected behavior: $ python3 test.py >/dev/null $ python3 test.py 2>/dev/null Hello ---------- components: Extension Modules, Library (Lib) messages: 312181 nosy: gregory.p.smith, izbyshev priority: normal severity: normal status: open title: subprocess may incorrectly redirect a low fd to stderr if another low fd is closed type: behavior versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 14 18:11:46 2018 From: report at bugs.python.org (Eric Cousineau) Date: Wed, 14 Feb 2018 23:11:46 +0000 Subject: [New-bugs-announce] [issue32845] Mac: Local modules can shadow builtins (e.g. a local `math.py` can shadow `math`) Message-ID: <1518649906.98.0.467229070634.issue32845@psf.upfronthosting.co.za> New submission from Eric Cousineau : We ran into an issue with our library; we build a submodule named `math.so`, and have a test `math_test` which is ran in the same directory as this module (since this is how we've written it with Bazel's generated Python binaries): https://github.com/RobotLocomotion/drake/issues/8041 This happens for us on HighSierra / Sierra MacOS (for *.py and *.so modules); Ubuntu Linux seems fine. I have a reproduction script just for *.py files here: https://github.com/EricCousineau-TRI/repro/blob/b08bc47/bug/mac_builtin_shadow/test.sh#L38 (attached this as `test.sh`) The module is exposed on the path given that it neighbors the first execution of the `import_test.py` script. 
The second execution (where the `import_test.py` script is moved up one level) succeeds on Mac (s.t. the `math.py` is not in `sys.path`). This behavior seems to violate the documentation: https://docs.python.org/2/tutorial/modules.html#the-module-search-path ---------- components: macOS files: test.sh messages: 312183 nosy: Eric Cousineau, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: Mac: Local modules can shadow builtins (e.g. a local `math.py` can shadow `math`) type: behavior versions: Python 2.7 Added file: https://bugs.python.org/file47442/test.sh _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 14 19:23:33 2018 From: report at bugs.python.org (Terry J. Reedy) Date: Thu, 15 Feb 2018 00:23:33 +0000 Subject: [New-bugs-announce] [issue32846] Deletion of large sets of strings is extra slow Message-ID: <1518654213.84.0.467229070634.issue32846@psf.upfronthosting.co.za> New submission from Terry J. Reedy : https://metarabbit.wordpress.com/2018/02/05/pythons-weak-performance-matters/, a blog post on cpython speed, claims "deleting a set of 1 billion strings takes >12 hours". (No other details provided.) I don't have the 100+ gigabytes of ram needed to confirm this, but with installed 64 bit 3.7.0b1 with Win10 and 12 gigabytes, I confirmed that there is a pronounced super-linear growth in string set deletion (unlike with an integer set). At least half of ram was available.

Seconds to create and delete sets

millions    integers          strings
of items  create  delete   create  delete
   1        .08     .02      .36     .08
   2        .15     .03      .75     .17
   4        .30     .06     1.55     .36
   8        .61     .12     3.18     .76
  16       1.22     .24     6.48    1.80   < slightly more than double
  32       2.4      .50    13.6     5.56   < more than triple
  64       4.9     1.04    28      19      < nearly quadruple
 128      10.9     2.25
 100                        56      80     < quadruple with 1.5 x size

For 100 million strings, I got about the same 56 and 80 seconds when timing with a clock, without the timeit gc suppression. I interrupted the 128M string run after several minutes. Even if there is swapping to disk during creation, I would not expect it during deletion. The timeit code:

import timeit
for i in (1,2,4,8,16,32,64,128):
    print(i, 'int')
    print(timeit.Timer(f's = {{n for n in range({i}*1000000)}}')
          .timeit(number=1))
    print(timeit.Timer('del s', f's = {{n for n in range({i}*1000000)}}')
          .timeit(number=1))
for i in (1,2,4,8,16,32,64,100):
    print(i, 'str')
    print(timeit.Timer(f's = {{str(n) for n in range({i}*1000000)}}')
          .timeit(number=1))
    print(timeit.Timer('del s', f's = {{str(n) for n in range({i}*1000000)}}')
          .timeit(number=1))

Raymond, I believe you monitor the set implementation, and I know Victor is interested in timing and performance. ---------- messages: 312188 nosy: rhettinger, terry.reedy, vstinner priority: normal severity: normal stage: needs patch status: open title: Deletion of large sets of strings is extra slow type: performance versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 14 22:11:57 2018 From: report at bugs.python.org (Barry A. Warsaw) Date: Thu, 15 Feb 2018 03:11:57 +0000 Subject: [New-bugs-announce] [issue32847] Add DirectoryNotEmptyError subclass of OSError Message-ID: <1518664317.8.0.467229070634.issue32847@psf.upfronthosting.co.za> New submission from Barry A. Warsaw : I just ran across errno 39 (Directory not empty) when using Path.rename(newdir) when newdir already existed and was not empty.
I realized that OSError exceptions don't have a DirectoryNotEmptyError subclass. Maybe we should add one? Probably not important or common enough to add for 3.7 given we're in beta now. ---------- messages: 312192 nosy: barry priority: normal severity: normal status: open title: Add DirectoryNotEmptyError subclass of OSError versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 15 02:20:06 2018 From: report at bugs.python.org (Arun Solomon) Date: Thu, 15 Feb 2018 07:20:06 +0000 Subject: [New-bugs-announce] [issue32848] Valgrind error with python Message-ID: <1518679206.96.0.467229070634.issue32848@psf.upfronthosting.co.za> New submission from Arun Solomon : Hi... I am facing a problem with Python and valgrind: I am getting the default valgrind warning messages. I created an empty sample.py (it contains no code) to confirm that my own code does not have any memory leaks. This is the valgrind command that I am using on the command line: valgrind --leak-check=full --show-reachable=yes --error-limit=no --gen-suppressions=all --log-file=msm_suppress.log -v /home/arunspra/py_src/Python-2.7.5/python sample.py I was getting plenty of valgrind warnings. I searched the web and learned that I need to configure Python with pymalloc disabled; with pymalloc disabled there should be no memory-related errors, but I was still getting them. This is the command I used to disable pymalloc: ./configure --without-pymalloc --with-pydebug make Then I ran the valgrind command above and got 1299 valgrind warnings. If I enable pymalloc, I get only 108 valgrind warnings. My software versions: CentOS: 7.3, Python: 2.7.5, Valgrind: 3.12.0. PS: If I configure and build Python myself, I get an ImportError: No module named netifaces (I am using netifaces in my project). If I use the system's built-in Python, I do not get the netifaces import error. Can anyone please advise me on how to resolve this issue? ---------- components: Build files: suppress.zip messages: 312193 nosy: Arun Solomon, vstinner priority: normal severity: normal status: open title: Valgrind error with python type: compile error versions: Python 2.7 Added file: https://bugs.python.org/file47443/suppress.zip _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 15 04:20:30 2018 From: report at bugs.python.org (Rudolph Froger) Date: Thu, 15 Feb 2018 09:20:30 +0000 Subject: [New-bugs-announce] [issue32849] Fatal Python error: Py_Initialize: can't initialize sys standard streams Message-ID: <1518686430.4.0.467229070634.issue32849@psf.upfronthosting.co.za> New submission from Rudolph Froger : Sometimes a new Python 3.6.4 process is aborted by the kernel (FreeBSD 11.1) (before loading my Python files). Found in syslog: kernel: pid 22433 (python3.6), uid 2014: exited on signal 6 (core dumped) Fatal Python error: Py_Initialize: can't initialize sys standard streams OSError: [Errno 9] Bad file descriptor I've been able to run ktrace on such a Python process, see attachment.
See around line 940: "RET fstat -1 errno 9 Bad file descriptor" ---------- files: alles-23978 messages: 312194 nosy: rudolphf priority: normal severity: normal status: open title: Fatal Python error: Py_Initialize: can't initialize sys standard streams versions: Python 3.6 Added file: https://bugs.python.org/file47444/alles-23978 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 15 09:02:32 2018 From: report at bugs.python.org (Matthias Urlichs) Date: Thu, 15 Feb 2018 14:02:32 +0000 Subject: [New-bugs-announce] [issue32850] Run gc_collect() before complaining about dangling threads Message-ID: <1518703352.66.0.467229070634.issue32850@psf.upfronthosting.co.za> New submission from Matthias Urlichs : Lib/test/support/__init__.py::threading_cleanup() complains about dangling threads even if the reference in question would be cleaned up by the garbage collector. This is not useful, esp. when the list of referrers to the "dangling" thread looks like this: [, , ] Thus I propose to check, run gc, check again, and only *then* complain-and-wait. Hence the attached patch for your consideration. ---------- components: Tests files: gc.patch keywords: patch messages: 312206 nosy: smurfix priority: normal severity: normal status: open title: Run gc_collect() before complaining about dangling threads type: resource usage versions: Python 3.7 Added file: https://bugs.python.org/file47445/gc.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 15 11:11:29 2018 From: report at bugs.python.org (TROUVERIE Joachim) Date: Thu, 15 Feb 2018 16:11:29 +0000 Subject: [New-bugs-announce] [issue32851] Complete your registration to Python tracker -- key uxOb1XizINE32OvAnH7tUiKMx4tFqGdK In-Reply-To: <20180215160836.49F95153B4E@psf.upfronthosting.co.za> Message-ID: <8a9a259a27901b596aca1e6bfec0951f@mailserver.linoame.fr> New submission from TROUVERIE Joachim : Thanks Le 15-02-2018 17:08, Python tracker a ?crit : > To complete your registration of the user "joack" with > Python tracker, please do one of the following: > > - send a reply to report at bugs.python.org and maintain the subject line as is (the > reply's additional "Re:" is ok), > > - or visit the following URL: > > https://bugs.python.org/?@action=confrego&otk=uxOb1XizINE32OvAnH7tUiKMx4tFqGdK [1] Links: ------ [1] https://bugs.python.org/?@action=confrego&otk=uxOb1XizINE32OvAnH7tUiKMx4tFqGdK ---------- messages: 312212 nosy: joack priority: normal severity: normal status: open title: Complete your registration to Python tracker -- key uxOb1XizINE32OvAnH7tUiKMx4tFqGdK _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 15 11:52:04 2018 From: report at bugs.python.org (Eli_B) Date: Thu, 15 Feb 2018 16:52:04 +0000 Subject: [New-bugs-announce] [issue32853] struct's docstring implies alignment is always performed Message-ID: <1518713524.32.0.467229070634.issue32853@psf.upfronthosting.co.za> New submission from Eli_B : In module struct's docstring, it says: "... The optional first format char indicates byte order, size and alignment:\n @: native order, size & alignment (default)\n =: native order, std. size & alignment\n <: little-endian, std. size & alignment\n >: big-endian, std. size & alignment\n !: same as >\n ..." 
The wording sounds like either native or standard alignment is performed, regardless of the optional first format char. In comparison, the table in https://docs.python.org/3.8/library/struct.html#byte-order-size-and-alignment states that in all modes other than the default, no alignment is performed. This is the actual behavior of the module. ---------- assignee: docs at python components: Documentation messages: 312214 nosy: Eli_B, docs at python priority: normal severity: normal status: open title: struct's docstring implies alignment is always performed type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 15 11:39:05 2018 From: report at bugs.python.org (Kyle Altendorf) Date: Thu, 15 Feb 2018 16:39:05 +0000 Subject: [New-bugs-announce] [issue32852] trace changes sys.argv from list to tuple Message-ID: <1518712745.86.0.467229070634.issue32852@psf.upfronthosting.co.za> New submission from Kyle Altendorf : Normally sys.argv is a list but when using the trace module sys.argv gets changed to a tuple. In my case this caused an issue with running an entry point due to the line: sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0]) When researching I found: https://stackoverflow.com/questions/47688568/trace-sys-argv-args-typeerror-tuple-object-does-not-support-item-assig They point out where trace assigns a tuple to sys.argv. https://github.com/python/cpython/blob/master/Lib/trace.py#L708 I'll see what I can do to put together a quick patch. $ cat t.py import sys print(sys.version) print(type(sys.argv)) $ /home/altendky/.pyenv/versions/3.7.0a2/bin/python t.py 3.7.0a2 (default, Feb 15 2018, 11:20:36) [GCC 6.3.0 20170516] <class 'list'> $ /home/altendky/.pyenv/versions/3.7.0a2/bin/python -m trace --trace t.py --- modulename: t, funcname: <module> t.py(1): import sys t.py(3): print(sys.version) 3.7.0a2 (default, Feb 15 2018, 11:20:36) [GCC 6.3.0 20170516] t.py(5): print(type(sys.argv)) <class 'tuple'> --- modulename: trace, funcname: _unsettrace trace.py(71): sys.settrace(None) ---------- components: Library (Lib) messages: 312213 nosy: altendky priority: normal severity: normal status: open title: trace changes sys.argv from list to tuple versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 15 13:10:32 2018 From: report at bugs.python.org (John Crawford) Date: Thu, 15 Feb 2018 18:10:32 +0000 Subject: [New-bugs-announce] [issue32854] Add ** Map Unpacking Support for namedtuple Message-ID: <1518718232.91.0.467229070634.issue32854@psf.upfronthosting.co.za> New submission from John Crawford : At present, `collections.namedtuple` does not support `**` map unpacking despite being a mapping style data structure. For example: >>> from collections import namedtuple >>> A = namedtuple("A", "a b c") >>> a = A(10, 20, 30) >>> def t(*args, **kwargs): ... print(f'args={args!r}, kwargs={kwargs!r}') ... >>> t(*a) args=(10, 20, 30), kwargs={} >>> t(**a) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: t() argument after ** must be a mapping, not A >>> No doubt, the lack of current support is due to namespace conflicts that result from trying to provide a `keys` method amidst also supporting attribute-style access to the `namedtuple` class.
As we can see, providing a `keys` attribute in the `namedtuple` produces an interesting result: >>> Record = namedtuple("Record", "title keys description") >>> t(**Record(1, 2, 3)) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: attribute of type 'int' is not callable >>> To me, this calls out a design flaw in the `**` unpacking operator where it depends on a method that uses a non-system naming convention. It would make far more sense for the `**` operator to utilize a `__keys__` method rather than the current `keys` method. After all, the `____` naming convention was introduced to avoid namespace conflict problems like this one. ---------- components: Library (Lib) messages: 312218 nosy: John Crawford priority: normal severity: normal status: open title: Add ** Map Unpacking Support for namedtuple type: enhancement versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 15 21:33:36 2018 From: report at bugs.python.org (Jay Yin) Date: Fri, 16 Feb 2018 02:33:36 +0000 Subject: [New-bugs-announce] [issue32855] Add documention stating supported Platforms Message-ID: <1518748416.8.0.467229070634.issue32855@psf.upfronthosting.co.za> New submission from Jay Yin : we can probably add a section that includes all supported platforms and possibly "partially" supported platforms, and maybe include platforms that currently aren't supported but want/plan to be supported. ---------- assignee: docs at python components: Documentation messages: 312225 nosy: docs at python, jayyin11043 priority: normal severity: normal status: open title: Add documention stating supported Platforms _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 16 03:41:23 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Fri, 16 Feb 2018 08:41:23 +0000 Subject: [New-bugs-announce] [issue32856] Optimize the `for y in [x]` idiom in comprehensions Message-ID: <1518770483.24.0.467229070634.issue32856@psf.upfronthosting.co.za> New submission from Serhiy Storchaka : There were a number of discussions about adding new syntax for temporary variables in comprehensions. The last was started yesterday on Python-Ideas (https://mail.python.org/pipermail/python-ideas/2018-February/048971.html). The problem is that all syntaxes proposed before are ugly. There are common solutions to this problem (calculating a common subexpression only once): using an internal comprehension or generator, or refactoring the inner expression as a local function where local variables can be used. For example

    [f(x) + g(f(x)) for x in range(10)]

can be rewritten as

    f_samples = (f(x) for x in range(10))
    [y+g(y) for y in f_samples]

or

    def func(x):
        y = f(x)
        return y + g(y)
    [func(x) for x in range(10)]

Stephan Houben suggested another idea (https://mail.python.org/pipermail/python-ideas/2018-February/048971.html): perform an assignment by iterating a one-element list.

    [y + g(y) for x in range(10) for y in [f(x)]]

I had never seen this idiom before, but it seems to be well known to some other developers, and it looks less clumsy than other solutions with the current syntax. Its advantage over hypothetical syntax ideas is that it is an existing syntax. Its disadvantage over hypothetical syntax ideas is that iterating a one-element list is slightly slower than a simple assignment.
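The relative cost of the extra one-element loop can be seen with a rough measurement such as the following (a sketch; absolute numbers depend on the machine and build):

import timeit

direct   = "[x + 1 for x in range(1000)]"
via_list = "[y + 1 for x in range(1000) for y in [x]]"

print(timeit.timeit(direct, number=10000))    # plain comprehension
print(timeit.timeit(via_list, number=10000))  # same work via 'for y in [x]'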
The proposed PR makes `for y in [f(x)]` in comprehensions as fast as just an assignment `y = f(x)`. This will make this idiom more preferable for performance reasons. Other existing solutions, iterating an inner generator and calling a local function in a loop, have an overhead. ---------- components: Interpreter Core messages: 312228 nosy: benjamin.peterson, brett.cannon, ncoghlan, serhiy.storchaka, yselivanov priority: normal severity: normal status: open title: Optimize the `for y in [x]` idiom in comprehensions type: performance versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 16 08:20:14 2018 From: report at bugs.python.org (Cheryl Sabella) Date: Fri, 16 Feb 2018 13:20:14 +0000 Subject: [New-bugs-announce] [issue32857] tkinter after_cancel does not behave correctly when called with id=None Message-ID: <1518787214.25.0.467229070634.issue32857@psf.upfronthosting.co.za> New submission from Cheryl Sabella : This was discovered while working on issue32839. An old comment added for issue763637 resulted in further research and it was determined that the comment: ! data = self.tk.call('after', 'info', id) ! # In Tk 8.3, splitlist returns: (script, type) ! # In Tk 8.4, splitlist returns: (script,) wasn't correct. The underlying difference is that the call to 'after info' returns different results depending on the way it's called. If it's called with an 'id', it will return a tuple of (script, type) if the id is valid or a TclError if the id isn't valid. However, if id is None, then 'after info' returns a tuple of all the event ids for the widget. In the case of the original bug in issue763637, the reported message shows the return value of `('after#53',)`, which is definitely an after event id and not a (script,) tuple. Serhiy mentions on issue32839 that the current code also deletes the script for the first event if after_cancel is called with None. https://bugs.python.org/issue32839#msg312199 ---------- components: Tkinter messages: 312231 nosy: csabella priority: normal severity: normal status: open title: tkinter after_cancel does not behave correctly when called with id=None type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 16 11:03:17 2018 From: report at bugs.python.org (=?utf-8?q?Stefan_R=C3=BCster?=) Date: Fri, 16 Feb 2018 16:03:17 +0000 Subject: [New-bugs-announce] [issue32858] Improve OpenSSL ECDH support Message-ID: <1518796997.98.0.467229070634.issue32858@psf.upfronthosting.co.za> New submission from Stefan R?ster : Tested with OpenSSL v1.1.0g, Python does not support selection of curve Curve25519 with _ssl.ctx.set_ecdh_curve("X25519"). Additionally the DH key exchange parameters (which curve has been chosen, what DH bit size was used) are not available through any SSL or Context method. 
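For context, the existing API only accepts a named curve, along the lines of the following sketch (whether a particular name is accepted depends on the OpenSSL build):

import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.set_ecdh_curve("prime256v1")    # a NIST curve name that OpenSSL recognizes
# ctx.set_ecdh_curve("X25519")      # as reported above, rejected with OpenSSL 1.1.0g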
---------- components: Library (Lib) messages: 312237 nosy: Stefan R?ster priority: normal severity: normal status: open title: Improve OpenSSL ECDH support type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 16 18:10:17 2018 From: report at bugs.python.org (Alexey Izbyshev) Date: Fri, 16 Feb 2018 23:10:17 +0000 Subject: [New-bugs-announce] [issue32859] os.dup2() tests dup3() availability on each call Message-ID: <1518822617.35.0.467229070634.issue32859@psf.upfronthosting.co.za> New submission from Alexey Izbyshev : os.dup2() tests for dup3() system call availability at runtime, but doesn't remember the result across calls, repeating the test on each call with inheritable=False even if the test fails. Judging by the code, 'dup3_works' was intended to be static (the first time its value is checked it's always an initial one). ---------- components: Extension Modules, Library (Lib) messages: 312256 nosy: benjamin.peterson, izbyshev, vstinner priority: normal severity: normal status: open title: os.dup2() tests dup3() availability on each call type: performance versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 16 19:06:49 2018 From: report at bugs.python.org (Roger Erens) Date: Sat, 17 Feb 2018 00:06:49 +0000 Subject: [New-bugs-announce] [issue32860] Definition of iglob does not mention single star (unlike glob) Message-ID: <1518826009.15.0.467229070634.issue32860@psf.upfronthosting.co.za> New submission from Roger Erens : Although both glob and iglob have the same arity in Lib/glob.py: def glob(pathname, *, recursive=False) def iglob(pathname, *, recursive=False): the documentation only mentions for glob the single star https://docs.python.org/3/library/glob.html#glob.iglob: glob.glob(pathname, *, recursive=False) glob.iglob(pathname, recursive=False) ---------- assignee: docs at python components: Documentation messages: 312258 nosy: Roger Erens, docs at python priority: normal severity: normal status: open title: Definition of iglob does not mention single star (unlike glob) _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 16 19:52:46 2018 From: report at bugs.python.org (Michael Lazar) Date: Sat, 17 Feb 2018 00:52:46 +0000 Subject: [New-bugs-announce] [issue32861] urllib.robotparser: incomplete __str__ methods Message-ID: <1518828766.79.0.467229070634.issue32861@psf.upfronthosting.co.za> New submission from Michael Lazar : Hello, I have stumbled upon a couple of inconsistencies in urllib.robotparser's __str__ methods. These appear to be unintentional omissions; basically the code was modified but the string methods were never updated. 1. The RobotFileParser.__str__ method doesn't include the default (*) User-agent entry. >>> from urllib.robotparser import RobotFileParser >>> parser = RobotFileParser() >>> text = """ ... User-agent: * ... Allow: /some/path ... Disallow: /another/path ... ... User-agent: Googlebot ... Allow: /folder1/myfile.html ... """ >>> parser.parse(text.splitlines()) >>> print(parser) User-agent: Googlebot Allow: /folder1/myfile.html >>> This is *especially* awkward when parsing a valid robots.txt that only contains a wildcard User-agent. >>> from urllib.robotparser import RobotFileParser >>> parser = RobotFileParser() >>> text = """ ... 
User-agent: * ... Allow: /some/path ... Disallow: /another/path ... """ >>> parser.parse(text.splitlines()) >>> print(parser) >>> 2. Support was recently added for `Crawl-delay` and `Request-Rate` lines, but __str__ does not include these. >>> from urllib.robotparser import RobotFileParser >>> parser = RobotFileParser() >>> text = """ ... User-agent: figtree ... Crawl-delay: 3 ... Request-rate: 9/30 ... Disallow: /tmp ... """ >>> parser.parse(text.splitlines()) >>> print(parser) User-agent: figtree Disallow: /tmp >>> 3. Two unnecessary trailing newlines are being appended to the string output (one for the last RuleLine and one for the last Entry) (see above examples) Taken on their own these are all minor issues, but they do make things quite confusing when using robotparser from the REPL! ---------- components: Library (Lib) messages: 312259 nosy: michael-lazar priority: normal severity: normal status: open title: urllib.robotparser: incomplete __str__ methods type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 16 22:24:02 2018 From: report at bugs.python.org (Alexey Izbyshev) Date: Sat, 17 Feb 2018 03:24:02 +0000 Subject: [New-bugs-announce] [issue32862] os.dup2(fd, fd, inheritable=False) behaves inconsistently Message-ID: <1518837842.39.0.467229070634.issue32862@psf.upfronthosting.co.za> New submission from Alexey Izbyshev : os.dup2(fd, fd, inheritable=False) may fail or change fd inheritability in following ways: 1) POSIX without F_DUP2FD_CLOEXEC 1.1) dup3() is available (a common case for Linux): OSError (EINVAL, dup3() doesn't allow equal descriptors) 1.2) dup3() is not available: fd made non-inheritable 2) POSIX with F_DUP2FD_CLOEXEC (FreeBSD): inheritability is not changed 3) Windows: fd made non-inheritable In contrast, os.dup2(fd, fd, inheritable=True) never changes fd inheritability (same as before PEP 446 landed). I suggest to make os.dup2(fd, fd, inheritable=False) behave the same. ---------- components: Extension Modules, Library (Lib) messages: 312266 nosy: benjamin.peterson, izbyshev, vstinner priority: normal severity: normal status: open title: os.dup2(fd, fd, inheritable=False) behaves inconsistently type: behavior versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 17 08:42:30 2018 From: report at bugs.python.org (Victor Domingos) Date: Sat, 17 Feb 2018 13:42:30 +0000 Subject: [New-bugs-announce] [issue32863] Missing support for Emojis in tkinter Message-ID: <1518874950.43.0.467229070634.issue32863@psf.upfronthosting.co.za> New submission from Victor Domingos : In the current Python 3.7.0b1, on macOS 10.12.6 Sierra (also on 10.11 El Capitan), which seems to include a newer Tcl/tk version, it still does not support a variety of UTF characters, including Emoji characters that nowadays are of very common use. A quick search on the web returns some hints that maybe Tcl/tk could be compiled with different options in order to unlock those characters: http://wiki.tcl.tk/515 https://core.tcl.tk/tk/tktview/6c0d7aec6713ab6a7c3e12dff7f26bff4679bc9d I am not sure if it is officially supported by now, but at least for me, as a Python and tkinter user, it would be a great improvement. 
Thanks in advance, With best regards, Victor Domingos My current version: Python 3.7.0b1 (v3.7.0b1:9561d7f501, Jan 30 2018, 19:10:11) [Clang 6.0 (clang-600.0.57)] on darwin Sample code that fails: import tkinter as tk import tkinter.ttk as ttk app = tk.Tk() b = ttk.Button(app, text="?") ------- Traceback (most recent call last): File "", line 1, in File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/tkinter/ttk.py", line 614, in __init__ Widget.__init__(self, master, "ttk::button", kw) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/tkinter/ttk.py", line 559, in __init__ tkinter.Widget.__init__(self, master, widgetname, kw=kw) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/tkinter/__init__.py", line 2293, in __init__ (widgetName, self._w) + extra + self._options(cnf)) _tkinter.TclError: character U+1f4e9 is above the range (U+0000-U+FFFF) allowed by Tcl ---------- components: Tkinter, Unicode, macOS messages: 312279 nosy: Victor Domingos, ezio.melotti, gpolo, ned.deily, ronaldoussoren, serhiy.storchaka, vstinner priority: normal severity: normal status: open title: Missing support for Emojis in tkinter type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 17 09:05:42 2018 From: report at bugs.python.org (Victor Domingos) Date: Sat, 17 Feb 2018 14:05:42 +0000 Subject: [New-bugs-announce] [issue32864] Visual glitches when animating ScrolledText instances using place geometry manager Message-ID: <1518876342.16.0.467229070634.issue32864@psf.upfronthosting.co.za> New submission from Victor Domingos : In the current Python 3.7.0b1, on macOS 10.12.6 Sierra (also on 10.11 El Capitan), which seems to include a newer Tcl/tk version, there seems to be some visual glitches when using the place() geometry manager to animate a ttk.Frame containing ScrolledText widgets. These issues do not happen when running the same code on current Python 3.6. Thanks in advance, With best regards, Victor Domingos My current version: Python 3.7.0b1 (v3.7.0b1:9561d7f501, Jan 30 2018, 19:10:11) [Clang 6.0 (clang-600.0.57)] on darwin -------- Here is an excerpt of my code (animated placement of the container frame): def show_entryform(self, *event): if not self.is_entryform_visible: self.is_entryform_visible = True self.btn_add.configure(state="disabled") # Formul?rio de entrada de dados (fundo da janela) self.my_statusbar.lift() if SLOW_MACHINE: self.entryframe.place( in_=self.my_statusbar, relx=1, y=0, anchor="se", relwidth=1, bordermode="outside") else: for y in range(-30, -12, 6): self.entryframe.update() y = y**2 self.entryframe.place( in_=self.my_statusbar, relx=1, y=y, anchor="se", relwidth=1, bordermode="outside") for y in range(-12, -3, 3): self.entryframe.update() y = y**2 self.entryframe.place( in_=self.my_statusbar, relx=1, y=y, anchor="se", relwidth=1, bordermode="outside") for y in range(-3, 0, 1): self.entryframe.update() y = y**2 self.entryframe.place( in_=self.my_statusbar, relx=1, y=y, anchor="se", relwidth=1, bordermode="outside") self.entryframe.lift() ----------- And the base class definition for the problematic widgets: class LabelText(ttk.Frame): """ Generate an empty tkinter.scrolledtext form field with a text label above it. 
""" def __init__(self, parent, label, style=None, width=0, height=0): ttk.Frame.__init__(self, parent) if style: self.label = ttk.Label(self, text=label, style=style, anchor="w") else: self.label = ttk.Label(self, text=label, anchor="w") self.scrolledtext = ScrolledText(self, font=("Helvetica-Neue", 12), highlightcolor="LightSteelBlue2", wrap='word', width=width, height=height) self.label.pack(side="top", fill="x", expand=False) self.scrolledtext.pack(side="top", fill="both", expand=True) ---------- components: Tkinter, macOS files: issues.jpg messages: 312281 nosy: Victor Domingos, gpolo, ned.deily, ronaldoussoren, serhiy.storchaka priority: normal severity: normal status: open title: Visual glitches when animating ScrolledText instances using place geometry manager type: behavior versions: Python 3.7, Python 3.8 Added file: https://bugs.python.org/file47450/issues.jpg _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 17 10:51:54 2018 From: report at bugs.python.org (Eryk Sun) Date: Sat, 17 Feb 2018 15:51:54 +0000 Subject: [New-bugs-announce] [issue32865] os.pipe creates inheritable FDs with a bad internal state on Windows Message-ID: <1518882714.97.0.467229070634.issue32865@psf.upfronthosting.co.za> New submission from Eryk Sun : File descriptors in Windows are implemented by the C runtime library's low I/O layer. The CRT maps native File handles to Unix-style file descriptors. Additionally, in order to support inheritance for spawn/exec, the CRT passes inheritable FDs in a reserved field of the CreateProcess STARTUPINFO record. A spawned child process uses this information to initialize its FD table at startup. Python's implementation of os.pipe creates File handles directly via CreatePipe. These handles are non-inheritable, which we want. However, it wraps them with inheritable file descriptors via _open_osfhandle, which we don't want. The solution is to include the flag _O_NOINHERIT in the _open_osfhandle call, which works even though this flag isn't explicitly documented as supported for this function. Here's an example of the issue. >>> fdr, fdw = os.pipe() >>> fdr, fdw (3, 4) >>> msvcrt.get_osfhandle(3), msvcrt.get_osfhandle(4) (440, 444) >>> os.get_handle_inheritable(440) False >>> os.get_handle_inheritable(444) False Note that os.get_inheritable assumes that FD and handle heritability are in sync, so it only queries handle information. The following FDs are actually inheritable, while the underlying File handles are not: >>> os.get_inheritable(3) False >>> os.get_inheritable(4) False This is a flawed assumption baked into _Py_get_inheritable and _Py_set_inheritable, especially since the latter has no effect on FD heritability. The CRT has no public functions to query and modify the heritability of existing FDs. Until it does (and maybe Steve Dower can request this from the MSVC devs), I see no point in pretending something works when it doesn't. This only creates problems. Let's spawn a child to show that these FDs are inherited in a bad state, which is a potential source of bugs and data corruption: >>> os.spawnl(os.P_WAIT, sys.executable, 'python') Python 3.6.4 (v3.6.4:d48eceb, Dec 19 2017, 06:54:40) [MSC v.1900 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. 
>>> import os, msvcrt >>> msvcrt.get_osfhandle(3), msvcrt.get_osfhandle(4) (440, 444) As you can see, file descriptors 3 and 4 were inherited with the handle values from the parent process, but the handles themselves were not inherited. We'll be lucky if the handle values happen to be unassigned and remain so. However, they may be assigned or subsequently assigned to random kernel objects (e.g. a File, Job, Process, Thread, etc). On a related note, Python always opens files as non-inheritable, even for os.open (which IMO makes no sense; we have O_NOINHERIT or O_CLOEXEC for that). It's assumed that the FD can be made inheritable via os.set_inheritable, but this does not work on Windows, and as things stand with the public CRT API, it cannot work. For example: >>> os.open('test.txt', os.O_WRONLY) 3 >>> msvcrt.get_osfhandle(3) 460 >>> os.set_inheritable(3, True) >>> os.get_handle_inheritable(460) True >>> os.spawnl(os.P_WAIT, sys.executable, 'python') Python 3.6.4 (v3.6.4:d48eceb, Dec 19 2017, 06:54:40) [MSC v.1900 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import msvcrt >>> msvcrt.get_osfhandle(3) Traceback (most recent call last): File "", line 1, in OSError: [Errno 9] Bad file descriptor The Windows handle was of course inherited, but that's not useful in this scenario, since the CRT didn't know to pass the FD in the process STARTUPINFO. It's essentially a leaked handle in the child. ---------- components: IO, Library (Lib), Windows messages: 312283 nosy: eryksun, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal stage: needs patch status: open title: os.pipe creates inheritable FDs with a bad internal state on Windows type: behavior versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 17 21:49:21 2018 From: report at bugs.python.org (Barry A. Warsaw) Date: Sun, 18 Feb 2018 02:49:21 +0000 Subject: [New-bugs-announce] [issue32866] zipimport loader.get_data() requires absolute zip file path Message-ID: <1518922161.55.0.467229070634.issue32866@psf.upfronthosting.co.za> New submission from Barry A. Warsaw : Over in https://gitlab.com/python-devs/importlib_resources/issues/48 we have a report of a FileNotFoundError when trying to read a resource from a zip file. Upon further debugging, I found that zipimport's loader.get_data() raises an unexpected OSError. Interestingly, if the path to the zip file is absolute, everything works as expected, but if the path is relative, then it fails. There's probably a missing abspath() in there somewhere, but as zipimport is written in C, I really didn't spend much time digging around in gdb. 
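A minimal sketch of the kind of call being discussed; the archive name and contents below are invented for illustration and are not taken from the importlib_resources report:

import zipfile, zipimport

# Build a tiny archive addressed by a *relative* path, as in the report.
with zipfile.ZipFile('relzip.zip', 'w') as zf:
    zf.writestr('pkg/__init__.py', '')
    zf.writestr('pkg/data.txt', 'hello')

loader = zipimport.zipimporter('relzip.zip')
# According to the report, get_data() works when the archive is named by an
# absolute path but raises an unexpected OSError when a relative path is used.
print(loader.get_data('pkg/data.txt'))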
---------- messages: 312296 nosy: barry priority: normal severity: normal status: open title: zipimport loader.get_data() requires absolute zip file path versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 18 00:06:43 2018 From: report at bugs.python.org (MaT1g3R) Date: Sun, 18 Feb 2018 05:06:43 +0000 Subject: [New-bugs-announce] [issue32867] argparse assertion failure with multiline metavars Message-ID: <1518930403.05.0.467229070634.issue32867@psf.upfronthosting.co.za> New submission from MaT1g3R : If I run this script with -h -----8<------------------ from argparse import ArgumentParser mapping = ['123456', '12345', '12345', '123'] p = ArgumentParser('11111111111111') p.add_argument('-v', '--verbose', help='verbose mode', action='store_true') p.add_argument('targets', help='installation targets', nargs='+', metavar='\n'.join(mapping)) p.parse_args() ---------8<-------------------- I get an error: ---------8<-------------------- Traceback (most recent call last): File "tmp.py", line 7, in p.parse_args() File "/usr/lib/python3.6/argparse.py", line 1730, in parse_args args, argv = self.parse_known_args(args, namespace) File "/usr/lib/python3.6/argparse.py", line 1762, in parse_known_args namespace, args = self._parse_known_args(args, namespace) File "/usr/lib/python3.6/argparse.py", line 1968, in _parse_known_args start_index = consume_optional(start_index) File "/usr/lib/python3.6/argparse.py", line 1908, in consume_optional take_action(action, args, option_string) File "/usr/lib/python3.6/argparse.py", line 1836, in take_action action(self, namespace, argument_values, option_string) File "/usr/lib/python3.6/argparse.py", line 1020, in __call__ parser.print_help() File "/usr/lib/python3.6/argparse.py", line 2362, in print_help self._print_message(self.format_help(), file) File "/usr/lib/python3.6/argparse.py", line 2346, in format_help return formatter.format_help() File "/usr/lib/python3.6/argparse.py", line 282, in format_help help = self._root_section.format_help() File "/usr/lib/python3.6/argparse.py", line 213, in format_help item_help = join([func(*args) for func, args in self.items]) File "/usr/lib/python3.6/argparse.py", line 213, in item_help = join([func(*args) for func, args in self.items]) File "/usr/lib/python3.6/argparse.py", line 334, in _format_usage assert ' '.join(pos_parts) == pos_usage AssertionError -----8<------------------ ---------- components: Library (Lib) messages: 312299 nosy: MaT1g3R priority: normal severity: normal status: open title: argparse assertion failure with multiline metavars type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 18 11:27:43 2018 From: report at bugs.python.org (robotwizard) Date: Sun, 18 Feb 2018 16:27:43 +0000 Subject: [New-bugs-announce] [issue32868] python 3 docs' iterator example has python 2 code Message-ID: <1518971263.84.0.467229070634.issue32868@psf.upfronthosting.co.za> New submission from robotwizard : Please note this is my first issue. The documentation at https://docs.python.org/3/tutorial/classes.html#iterators on the second code block gives the example of iterating a string by creating an iterator out of it. However I tried it with python 3.5 and 3.6 and both give me a "str_iterator" which has no "next()" method. When I tried the code with python 2.7, it works. 
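For reference, the Python 3 spelling of that tutorial snippet uses the builtin next(); the next() method was renamed to __next__() in Python 3:

s = 'abc'
it = iter(s)          # a str_iterator in Python 3
print(next(it))       # 'a' -- the builtin next() calls it.__next__()
print(next(it))       # 'b'
print(it.__next__())  # 'c' -- calling the renamed method directly also works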
---------- assignee: docs at python components: Documentation messages: 312312 nosy: docs at python, robotwizard, willingc priority: normal severity: normal status: open title: python 3 docs' iterator example has python 2 code versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 18 11:58:54 2018 From: report at bugs.python.org (Alexey Izbyshev) Date: Sun, 18 Feb 2018 16:58:54 +0000 Subject: [New-bugs-announce] [issue32869] Incorrect dst buffer size for MultiByteToWideChar in _Py_fopen_obj Message-ID: <1518973134.04.0.467229070634.issue32869@psf.upfronthosting.co.za> New submission from Alexey Izbyshev : MultiByteToWideChar expects the destination buffer size to be given in wide characters, not bytes. This is currently not a real issue since _Py_fopen_obj is only used internally with mode being a short constant string in all call sites I've found. ---------- components: IO, Windows messages: 312314 nosy: izbyshev, paul.moore, steve.dower, tim.golden, vstinner, zach.ware priority: normal severity: normal status: open title: Incorrect dst buffer size for MultiByteToWideChar in _Py_fopen_obj type: enhancement versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 18 12:59:36 2018 From: report at bugs.python.org (Andrew Scheller) Date: Sun, 18 Feb 2018 17:59:36 +0000 Subject: [New-bugs-announce] [issue32870] Documentation typo (2.x only) for deque.remove Message-ID: <1518976776.93.0.467229070634.issue32870@psf.upfronthosting.co.za> New submission from Andrew Scheller : https://docs.python.org/2/library/collections.html#collections.deque.remove says "Removed the first occurrence of value." I believe the "Removed" should be changed to just "Remove" ? (this has already been fixed in the 3.x documentation https://docs.python.org/3/library/collections.html#collections.deque.remove ) ---------- assignee: docs at python components: Documentation messages: 312319 nosy: docs at python, lurchman priority: normal severity: normal status: open title: Documentation typo (2.x only) for deque.remove versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 18 16:41:07 2018 From: report at bugs.python.org (Victor Porton) Date: Sun, 18 Feb 2018 21:41:07 +0000 Subject: [New-bugs-announce] [issue32871] Interrupt .communicate() on SIGTERM/INT Message-ID: <1518990067.61.0.467229070634.issue32871@psf.upfronthosting.co.za> New submission from Victor Porton : At https://docs.python.org/3/library/subprocess.html there is said nothing about what happens if our Python program terminates (by SIGTERM or SIGINT) while waiting for .communicate(). I assume to do something in this situation is just forgotten. Usually terminate of our program should also terminate the invoked script. It can be made by re-delivery SIGTERM/SIGINT or (on non-POSIX) by .terminate() method. Probably, it should be done by .terminate() method even on POSIX systems, to handle SIGTERM and SIGINT in the same way. 
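A hedged sketch of the manual workaround implied above -- install our own handlers and forward termination to the child before exiting. The handler name and the child command are illustrative only, not a proposed API:

import signal, subprocess, sys

proc = subprocess.Popen(['sleep', '60'])

def forward_termination(signum, frame):
    proc.terminate()              # or proc.send_signal(signum) on POSIX
    sys.exit(128 + signum)

signal.signal(signal.SIGTERM, forward_termination)
signal.signal(signal.SIGINT, forward_termination)

out, err = proc.communicate()     # without the handlers, only our process dies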
---------- components: Library (Lib) messages: 312328 nosy: porton priority: normal severity: normal status: open title: Interrupt .communicate() on SIGTERM/INT type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 19 04:03:48 2018 From: report at bugs.python.org (Matthias Klose) Date: Mon, 19 Feb 2018 09:03:48 +0000 Subject: [New-bugs-announce] [issue32872] backport of #32305 causes regressions in various packages Message-ID: <1519031028.52.0.467229070634.issue32872@psf.upfronthosting.co.za> New submission from Matthias Klose : The backport of issue #32305 causes regressions in several packaged namespace packages: https://bugs.debian.org/890621 https://bugs.debian.org/890754 while the change is intended, is it appropriate to backport it to 3.6? Please could you have a look, you might still have an appropriate chroot laying around ;) ---------- components: Library (Lib) keywords: 3.6regression messages: 312344 nosy: barry, doko, eric.smith priority: critical severity: normal status: open title: backport of #32305 causes regressions in various packages type: crash versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 19 05:46:45 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Mon, 19 Feb 2018 10:46:45 +0000 Subject: [New-bugs-announce] [issue32873] Pickling of typing types Message-ID: <1519037205.03.0.467229070634.issue32873@psf.upfronthosting.co.za> New submission from Serhiy Storchaka : In 3.6 typing types are pickled by names: >>> import pickle, pickletools, typing >>> pickletools.optimize(pickle.dumps(typing.List)) b'\x80\x03ctyping\nList\n.' >>> pickletools.dis(pickletools.optimize(pickle.dumps(typing.List))) 0: \x80 PROTO 3 2: c GLOBAL 'typing List' 15: . STOP highest protocol among opcodes = 2 The side effect of this is that they are considered atomic by the copy module. In 3.7 the pickle data contains all private attributes. >>> pickletools.optimize(pickle.dumps(typing.List)) b'\x80\x03ctyping\n_GenericAlias\n)\x81}(X\x05\x00\x00\x00_inst\x89X\x08\x00\x00\x00_special\x88X\x05\x00\x00\x00_nameX\x04\x00\x00\x00ListX\n\x00\x00\x00__origin__cbuiltins\nlist\nX\x08\x00\x00\x00__args__ctyping\nTypeVar\n)\x81q\x00}(X\x04\x00\x00\x00nameX\x01\x00\x00\x00TX\x05\x00\x00\x00boundNX\x0b\x00\x00\x00constraints)X\x02\x00\x00\x00co\x89X\x06\x00\x00\x00contra\x89ub\x85X\x0e\x00\x00\x00__parameters__h\x00\x85X\t\x00\x00\x00__slots__Nub.' 
>>> pickletools.dis(pickletools.optimize(pickle.dumps(typing.List))) 0: \x80 PROTO 3 2: c GLOBAL 'typing _GenericAlias' 24: ) EMPTY_TUPLE 25: \x81 NEWOBJ 26: } EMPTY_DICT 27: ( MARK 28: X BINUNICODE '_inst' 38: \x89 NEWFALSE 39: X BINUNICODE '_special' 52: \x88 NEWTRUE 53: X BINUNICODE '_name' 63: X BINUNICODE 'List' 72: X BINUNICODE '__origin__' 87: c GLOBAL 'builtins list' 102: X BINUNICODE '__args__' 115: c GLOBAL 'typing TypeVar' 131: ) EMPTY_TUPLE 132: \x81 NEWOBJ 133: q BINPUT 0 135: } EMPTY_DICT 136: ( MARK 137: X BINUNICODE 'name' 146: X BINUNICODE 'T' 152: X BINUNICODE 'bound' 162: N NONE 163: X BINUNICODE 'constraints' 179: ) EMPTY_TUPLE 180: X BINUNICODE 'co' 187: \x89 NEWFALSE 188: X BINUNICODE 'contra' 199: \x89 NEWFALSE 200: u SETITEMS (MARK at 136) 201: b BUILD 202: \x85 TUPLE1 203: X BINUNICODE '__parameters__' 222: h BINGET 0 224: \x85 TUPLE1 225: X BINUNICODE '__slots__' 239: N NONE 240: u SETITEMS (MARK at 27) 241: b BUILD 242: . STOP highest protocol among opcodes = 2 Unpickling it creates a new object. And I'm not sure all invariants are satisfied. In additional to lesses efficiency and lost of preserving identity, such pickle can be incompatible with old Python versions and future Python versions if the internal representation of typing types will be changed. ---------- components: Library (Lib) messages: 312346 nosy: gvanrossum, levkivskyi, serhiy.storchaka priority: normal severity: normal status: open title: Pickling of typing types type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 19 10:56:06 2018 From: report at bugs.python.org (Cheryl Sabella) Date: Mon, 19 Feb 2018 15:56:06 +0000 Subject: [New-bugs-announce] [issue32874] IDLE: Add tests for pyparse Message-ID: <1519055766.11.0.467229070634.issue32874@psf.upfronthosting.co.za> New submission from Cheryl Sabella : Add unit tests for pyparse.py in IDLE. ---------- assignee: terry.reedy components: IDLE messages: 312352 nosy: csabella, terry.reedy priority: normal severity: normal status: open title: IDLE: Add tests for pyparse type: enhancement versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 19 13:43:31 2018 From: report at bugs.python.org (Victor Porton) Date: Mon, 19 Feb 2018 18:43:31 +0000 Subject: [New-bugs-announce] [issue32875] Add __exit__() method to event loops Message-ID: <1519065811.74.0.467229070634.issue32875@psf.upfronthosting.co.za> New submission from Victor Porton : Please add `__exit__()` method to event loops, to use them with `with`. ---------- components: asyncio messages: 312360 nosy: asvetlov, porton, yselivanov priority: normal severity: normal status: open title: Add __exit__() method to event loops type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 19 14:52:16 2018 From: report at bugs.python.org (Hanno Boeck) Date: Mon, 19 Feb 2018 19:52:16 +0000 Subject: [New-bugs-announce] [issue32876] HTMLParser raises exception on some inputs Message-ID: <1519069936.36.0.467229070634.issue32876@psf.upfronthosting.co.za> New submission from Hanno Boeck : I noticed that the HTMLParser will raise an exception on some inputs. 
I'm not sure what the expectations here are, but given that real-world HTML often contains all kinds of broken content I would assume an HTMLParser to always try to parse a document and not be interrupted by an exception if an error occurs. Here's a minified example: #!/usr/bin/env python3 import html.parser html.parser.HTMLParser().feed(" html.parser.HTMLParser().feed(" _______________________________________ From report at bugs.python.org Mon Feb 19 16:41:08 2018 From: report at bugs.python.org (ruffsl) Date: Mon, 19 Feb 2018 21:41:08 +0000 Subject: [New-bugs-announce] [issue32877] Login to bugs.python.org with Google account NOT working Message-ID: <1519076468.29.0.467229070634.issue32877@psf.upfronthosting.co.za> New submission from ruffsl : I've been unable to login to bugs.python.org using Google's oauth. After clicking the google oauth logo on the sidebar on bugs.python.org, I am either slowly redirected to a google login page (after about 1 min) to select an account, then after selecting the relevant google account or if I'm only active in it, I'm eventually given a error message after about 2 min of loading time: ``` html An error has occurred

A problem was encountered processing your request. The tracker maintainers have been notified of the problem.
``` Other outputs from the browser debug console: ``` Failed to load resource: the server responded with a status of 404 (/tracker/favicon.ico) Failed to load resource: the server responded with a status of 500 (Internal Server Error) ``` This has persisted for about a week now, on multiple machines, bowsers, and ISPs. Presently, the only way I've been able to post this issue at all is due to my first and only login session on one workstation is still active. I suspect this might be a larger issue, only that no one else using google oauth to login has been able to login to file and voise the bug? Related? https://bugs.python.org/issue29544 https://bugs.python.org/issue28887 ---------- messages: 312374 nosy: ruffsl priority: normal severity: normal status: open title: Login to bugs.python.org with Google account NOT working _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 19 17:14:15 2018 From: report at bugs.python.org (Guido van Rossum) Date: Mon, 19 Feb 2018 22:14:15 +0000 Subject: [New-bugs-announce] [issue32878] Document value of st_ino on Windows Message-ID: <1519078455.66.0.467229070634.issue32878@psf.upfronthosting.co.za> New submission from Guido van Rossum : We received a report from a well-meaning security researcher who was confused by the non-zero and arbitrary value of st_ino in stat() results on Windows (where in Python 2 this was always zero). The researcher was worried that this was due to an uninitialized memory read. The actual cause is the way this field is filled with arbitary data: https://github.com/python/cpython/blob/master/Python/fileutils.c#L758 Let's make sure this is documented properly for all versions where we still update the docs. ---------- assignee: steve.dower components: Documentation messages: 312376 nosy: gvanrossum, steve.dower priority: normal severity: normal status: open title: Document value of st_ino on Windows _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 19 20:55:02 2018 From: report at bugs.python.org (Simon Bouchard) Date: Tue, 20 Feb 2018 01:55:02 +0000 Subject: [New-bugs-announce] [issue32879] Race condition in multiprocessing Queue Message-ID: <1519091702.0.0.467229070634.issue32879@psf.upfronthosting.co.za> New submission from Simon Bouchard : The clear list function call in made after the put(data) on the queue. But the data is sometime clear in the queue (randomly). Since both function are call within the same process, a race condition is not expected. ---------- files: code.py messages: 312391 nosy: TwistedSim priority: normal severity: normal status: open title: Race condition in multiprocessing Queue type: behavior versions: Python 3.6 Added file: https://bugs.python.org/file47453/code.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 19 21:19:14 2018 From: report at bugs.python.org (Terry J. Reedy) Date: Tue, 20 Feb 2018 02:19:14 +0000 Subject: [New-bugs-announce] [issue32880] IDLE: Fix and update and cleanup pyparse Message-ID: <1519093154.93.0.467229070634.issue32880@psf.upfronthosting.co.za> New submission from Terry J. Reedy : Pyparse was mostly written in the early 2000s, with the only one code change since 2007 in 2014. #32874 will add tests for pyparse 'as is' (though with some docstring and comment changes). 
This issue will change pyparse code, and change or add tests as needed. Here are two items to fix. More will be added. Some fixes might be separate PRs or spun off into separate issues. def dump: dump this; print does the same, without hardcoding sys.__stdout__, which might be None. _synchre: Only used in find_good_parse_start. Missing 'if' (mentioned in the function docstring as 'popular'), 'for', and new things like 'with'. 'async' and 'await' are not generally popular, but is not any statement beginning a good parse start? ---------- assignee: terry.reedy components: IDLE messages: 312394 nosy: csabella, terry.reedy priority: normal severity: normal stage: test needed status: open title: IDLE: Fix and update and cleanup pyparse type: behavior versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 19 23:48:12 2018 From: report at bugs.python.org (zhaoya) Date: Tue, 20 Feb 2018 04:48:12 +0000 Subject: [New-bugs-announce] [issue32881] pycapsule:PyObject * is NULL pointer Message-ID: <1519102092.04.0.467229070634.issue32881@psf.upfronthosting.co.za> New submission from zhaoya : I have a question about a call chain C --> Python --> C.

1. In the C code, a C pointer (e.g. void* abc = "123") is wrapped in a PyCapsule and passed to Python:

......
void* lpContext = "abc";
PyObject * lpPyContext = PyCapsule_New(lpContext, "Context",NULL);
......
PyTuple_SetItem(pArgs, 1, lpPyContext);
PyObject* pyResult = PyObject_CallObject(pFunc, pArgs);

2. C calls Python; in the Python code:

import ctypes
pycapsule = ctypes.windll.LoadLibrary("C:\Users\zhaoya16975\Documents\Visual Studio 2017\Projects\cpython\Release\pycapsule.dll")
def test( lpContext,lpRequest,lpAnswer):
    print lpContext
    pycapsule.hello()
    pycapsule.GetContext(lpContext)
.........

The lpContext printed here is "", but I cannot use lpContext in pycapsule.GetContext(): the capsule is a NULL pointer, so GetContext() does not work as expected:

void* FUNCTION_CALL_MODE GetContext(PyObject *capsule)
{
    printf(" GetContext......\n");
    //if (!PyCapsule_IsValid((PyObject *)capsule, "Context")) {
    //    printf(" the Capsule Context is no Valid\n");
    //    return NULL;
    //}
    //return PyCapsule_GetPointer((PyObject *)capsule, "Context");
    return NULL;
}

I want C to call Python passing a void*, and Python to call back into C passing that void* on, but the capsule call does not succeed. ---------- components: Windows messages: 312399 nosy: paul.moore, steve.dower, tim.golden, zach.ware, zhaoya priority: normal severity: normal status: open title: pycapsule:PyObject * is NULL pointer type: crash versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 20 04:49:54 2018 From: report at bugs.python.org (sruester) Date: Tue, 20 Feb 2018 09:58:54 +0000 Subject: [New-bugs-announce] [issue32882] SSLContext.set_ecdh_curve() not accepting x25519 Message-ID: <1519120194.11.0.467229070634.issue32882@psf.upfronthosting.co.za> New submission from sruester : Using SSLContext.set_ecdh_curve() it is neither possible to choose X25519, nor to choose a list of curves to be used for key agreement.
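A small sketch of the API shape being complained about: set_ecdh_curve() accepts exactly one named curve, so neither an OpenSSL-style curve list nor X25519 can be configured (curve names below are just examples, and the exact error text may differ):

import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS)
ctx.set_ecdh_curve("secp384r1")              # one traditional curve: accepted
try:
    ctx.set_ecdh_curve("X25519:prime256v1")  # list syntax as used by OpenSSL
except ValueError as exc:
    print(exc)                               # rejected as an unknown curve name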
---------- assignee: christian.heimes components: SSL messages: 312405 nosy: christian.heimes, sruester priority: normal severity: normal status: open title: SSLContext.set_ecdh_curve() not accepting x25519 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 20 04:58:42 2018 From: report at bugs.python.org (sruester) Date: Tue, 20 Feb 2018 09:58:42 +0000 Subject: [New-bugs-announce] [issue32883] Key agreement parameters not accessible Message-ID: <1519120722.76.0.467229070634.issue32883@psf.upfronthosting.co.za> New submission from sruester : Using python it is not possible to retrieve information about the key exchange/agreement method that was used during session setup. A method should be added to a suitable SSL* object that allows to retrieve information such as whether ECDH with which curves, or DH, or neither was used. ---------- assignee: christian.heimes components: SSL messages: 312406 nosy: christian.heimes, sruester priority: normal severity: normal status: open title: Key agreement parameters not accessible type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 20 05:28:53 2018 From: report at bugs.python.org (Matanya Stroh) Date: Tue, 20 Feb 2018 10:28:53 +0000 Subject: [New-bugs-announce] [issue32884] Adding the ability for getpass to print asterisks when passowrd is typed Message-ID: <1519122533.37.0.467229070634.issue32884@psf.upfronthosting.co.za> New submission from Matanya Stroh : I saw some questions about it in stackoverflow (links below), and also find it very useful to have the ability to print asterisks. Some users, find it disturbing when they don't have any indication that password is typed, and it will be helpful to have it. I know that it's have some risks exposing the number of chars to the password, but I think it's worth it. When using Jupyter (notebook server is 4.3.1) the password does echoed as "*", but not in Python IDE in linux and Windows 1) https://stackoverflow.com/questions/10990998/how-to-have-password-echoed-as-asterisks 2) https://stackoverflow.com/questions/7838564/how-to-read-password-with-echo-in-python-console-program ---------- components: Library (Lib) messages: 312410 nosy: matanya.stroh priority: normal severity: normal status: open title: Adding the ability for getpass to print asterisks when passowrd is typed type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 20 06:06:48 2018 From: report at bugs.python.org (=?utf-8?q?Miro_Hron=C4=8Dok?=) Date: Tue, 20 Feb 2018 11:06:48 +0000 Subject: [New-bugs-announce] [issue32885] Tools/scripts/pathfix.py leaves bunch of ~ suffixed files around with no opt-out Message-ID: <1519124808.65.0.467229070634.issue32885@psf.upfronthosting.co.za> New submission from Miro Hron?ok : We (Fedora's Python SIG) would like to promote usage of Tools/scripts/pathfix.py (we've even moved it to $PATH) in Fedora RPM build (a.k.a spec files) instead of various error prone finds + greps + seds. However when running pathfix.py, it creates backup files (with ~ suffix). 
This is mostly unfortunate in RPM build environment, because one needs to clean those up, otherwise one gets warnings and errors like this: error: Installed (but unpackaged) file(s) found: /usr/bin/spam~ Or the file with ~ might even get installed if a more relaxed patter is used in a %files section that lists what is part of the RPM package. %{_bindir}/spam-* We even have shebangs checks/mangling in place and the ~ suffixed file still has the wrong shebang, resulting in warnings like this: *** WARNING: mangling shebang in /usr/bin/spam~ from #!/usr/bin/python -Es to #!/usr/bin/python2 -Es. This will become an ERROR, fix it manually! Steps to Reproduce: 1. $ echo '#!python' > omg 2. $ Tools/scripts/pathfix.py -i "/usr/bin/python3" -p omg # possibly with extra flag, see bellow omg: updating 3. $ ls Actual results: omg omg~ Expected results: omg Since the backup feature was here for ages, instead of changing the behavior, I suggest a flag is added that disables this. 2to3 has exactly the proposed flag as: "-n, --nobackups Don't write backups for modified files". This doesn't necessarily need to go into all versions, but I've selected all that has this problem. Getting it to 3.6+ would be great, however if it goes to anything later, we'll backport it in the Fedora package. I have a patch ready, sill send PR. ---------- messages: 312411 nosy: hroncok priority: normal severity: normal status: open title: Tools/scripts/pathfix.py leaves bunch of ~ suffixed files around with no opt-out versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 20 08:32:01 2018 From: report at bugs.python.org (Sylvain Marie) Date: Tue, 20 Feb 2018 13:32:01 +0000 Subject: [New-bugs-announce] [issue32886] new Boolean ABC in numbers module + Integral>Integer renaming Message-ID: <1519133521.3.0.467229070634.issue32886@psf.upfronthosting.co.za> New submission from Sylvain Marie : This issue is created following the discussion [Python-ideas] Boolean ABC similar to what's provided in the 'numbers' module. The following items are suggested: - adding a 'Boolean' ABC class to the 'numbers' module - register python 'bool' as a virtual subclass of both 'Boolean' and 'Integral' - register numpy bool ('np.bool_') as a virtual subclass of 'Boolean' only - rename 'Integral' 'Integer' and leave 'Integral' as an alias for backwards compatibility Below is a proposal Boolean class: --------------------- class Boolean(metaclass=ABCMeta): """ An abstract base class for booleans. """ __slots__ = () @abstractmethod def __bool__(self): """Return a builtin bool instance. Called for bool(self).""" @abstractmethod def __and__(self, other): """self & other""" @abstractmethod def __rand__(self, other): """other & self""" @abstractmethod def __xor__(self, other): """self ^ other""" @abstractmethod def __rxor__(self, other): """other ^ self""" @abstractmethod def __or__(self, other): """self | other""" @abstractmethod def __ror__(self, other): """other | self""" @abstractmethod def __invert__(self): """~self""" # register bool and numpy bool_ as virtual subclasses # so that issubclass(bool, Boolean) = issubclass(np.bool_, Boolean) = True Boolean.register(bool) try: import numpy as np Boolean.register(np.bool_) except ImportError: # silently escape pass # bool is also a virtual subclass of Integral. np.bool_ is not. 
Integral.register(bool) ---------- components: Library (Lib) messages: 312416 nosy: smarie priority: normal severity: normal status: open title: new Boolean ABC in numbers module + Integral>Integer renaming type: enhancement versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 20 11:24:10 2018 From: report at bugs.python.org (Alexey Izbyshev) Date: Tue, 20 Feb 2018 16:24:10 +0000 Subject: [New-bugs-announce] [issue32887] os: Users of path_converter don't handle fd == -1 properly Message-ID: <1519143850.49.0.467229070634.issue32887@psf.upfronthosting.co.za> New submission from Alexey Izbyshev : Demo: >>> import os >>> os.chdir(-1) Traceback (most recent call last): File "", line 1, in OSError: [Errno 14] Bad address: -1 >>> os.chdir(-2) Traceback (most recent call last): File "", line 1, in OSError: [Errno 9] Bad file descriptor: -2 Functions in os supporting either path or file descriptor argument (os.supports_fd) usually use the following code pattern to distinguish between those cases: if (path->fd != -1) result = fchdir(path->fd); else result = chdir(path->narrow); However, _fd_converter used by path_converter internally doesn't give any special meaning to -1 and allows any negative file descriptors. Therefore, if a user passes -1 to such function, path->narrow, which is NULL, will be used. I see two ways to fix this. 1) Make some flag in path_t indicating that it should be treated as fd and make all users check that flag. 2) Make _fd_converter raise an exception for negative descriptors. Also, I have to mention an inconsistency in reporting of bad descriptors. A handful of os functions uses fildes_converter for descriptors, which uses PyObject_AsFileDescriptor, which in turn is used in other places in Python as well (e.g. in fcntl module). PyObject_AsFileDescriptor raises a ValueError for negative descriptors instead of OSError raised by most os functions in this case. >>> os.fchdir(-1) Traceback (most recent call last): File "", line 1, in ValueError: file descriptor cannot be a negative integer (-1) ---------- messages: 312421 nosy: izbyshev priority: normal severity: normal status: open title: os: Users of path_converter don't handle fd == -1 properly type: behavior versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 20 12:26:21 2018 From: report at bugs.python.org (Chris Angelico) Date: Tue, 20 Feb 2018 17:26:21 +0000 Subject: [New-bugs-announce] [issue32888] Improve exception message in ast.literal_eval Message-ID: <1519147581.54.0.467229070634.issue32888@psf.upfronthosting.co.za> New submission from Chris Angelico : When a non-literal is given to literal_eval, attempt to be more helpful with the message, rather than calling it 'malformed'. 
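For context, the current message that the PR wants to improve looks roughly like this (the exact repr of the offending node varies by version):

import ast

try:
    ast.literal_eval("os.system('true')")    # a call expression, not a literal
except ValueError as exc:
    print(exc)   # e.g. "malformed node or string: <_ast.Call object at 0x...>"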
---------- components: Library (Lib) messages: 312423 nosy: Rosuav priority: normal pull_requests: 5555 severity: normal status: open title: Improve exception message in ast.literal_eval versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 20 16:13:07 2018 From: report at bugs.python.org (Paul Price) Date: Tue, 20 Feb 2018 21:13:07 +0000 Subject: [New-bugs-announce] [issue32889] Valgrind suppressions need updating Message-ID: <1519161187.46.0.467229070634.issue32889@psf.upfronthosting.co.za> New submission from Paul Price : Using the current valgrind suppressions (Misc/valgrind-python.supp) results in a lot of noise, e.g.: ==2662549== Conditional jump or move depends on uninitialised value(s) ==2662549== at 0x4EFD734: address_in_range (obmalloc.c:1200) ==2662549== by 0x4EFD734: _PyObject_Free (obmalloc.c:1467) ==2662549== by 0x4FAA6A3: block_free (pyarena.c:95) ==2662549== by 0x4FAA6A3: PyArena_Free (pyarena.c:169) The suppressions are blocking Py_ADDRESS_IN_RANGE, but this function was renamed in 3.6. ---------- messages: 312436 nosy: Paul Price priority: normal pull_requests: 5563 severity: normal status: open title: Valgrind suppressions need updating versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 20 18:54:16 2018 From: report at bugs.python.org (Alexey Izbyshev) Date: Tue, 20 Feb 2018 23:54:16 +0000 Subject: [New-bugs-announce] [issue32890] os: Some functions may report bogus errors on Windows Message-ID: <1519170856.46.0.467229070634.issue32890@psf.upfronthosting.co.za> New submission from Alexey Izbyshev : Demo: >>> os.execve('', ['a'], {}) Traceback (most recent call last): File "", line 1, in OSError: [WinError 0] The operation completed successfully: '' The reason is that path_error() used throughout os module always uses GetLastError() on Windows, but some functions are implemented via CRT calls which report errors via errno. It seems that commit 292c83554 caused this issue. ---------- messages: 312446 nosy: izbyshev, vstinner priority: normal severity: normal status: open title: os: Some functions may report bogus errors on Windows type: behavior versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 21 00:34:52 2018 From: report at bugs.python.org (Terry J. Reedy) Date: Wed, 21 Feb 2018 05:34:52 +0000 Subject: [New-bugs-announce] [issue32891] Add 'Integer' as synonym for 'Integral' in numbers module. Message-ID: <1519191292.27.0.467229070634.issue32891@psf.upfronthosting.co.za> New submission from Terry J. Reedy : Actually, 'replace' 'Integral' with 'Integer' but keep 'Integral' for back compatibility. >From python-ideas, where Guido said "Looking at https://en.wikipedia.org/wiki/Number it seems that Integer is "special" -- every other number type is listed as " numbers" (e.g. rational numbers, complex numbers) but integers are listed as "Integers". So let's just switch it to that, and keep Integral as an alias for backwards compatibility. I don't think it's a huge problem to fix this in 3.7b2, if someone wants to do the work." PR needs What's New entry. Ned, if you disagree as RM, please say so. 
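What the request boils down to, written as a tiny user-level sketch rather than the real Lib/numbers.py change -- the attribute assignment below is hypothetical and only shows both names resolving to the same ABC:

import numbers

numbers.Integer = numbers.Integral       # simulate the proposed alias

assert numbers.Integer is numbers.Integral
assert issubclass(int, numbers.Integer)
assert isinstance(5, numbers.Integer)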
---------- messages: 312463 nosy: ned.deily, terry.reedy priority: high severity: normal stage: test needed status: open title: Add 'Integer' as synonym for 'Integral' in numbers module. type: enhancement versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 21 04:16:37 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Wed, 21 Feb 2018 09:16:37 +0000 Subject: [New-bugs-announce] [issue32892] Remove specific constant AST types in favor of ast.Constant Message-ID: <1519204597.27.0.467229070634.issue32892@psf.upfronthosting.co.za> New submission from Serhiy Storchaka : Currently different literals are represented by different types in AST: Num -- for int, float and complex Str -- for str Bytes -- for bytes Ellipsis -- for Ellipsis NameConstant -- for True, False and None And Constant (added in 3.6, issue26146) can represent any immutable value, including tuples and frozensets of constants. Instances of Constant are not created by the AST parser, they are created by the builtin AST optimizer and can be created manually. These AST types don't have one-to-one correspondence with Python types, since Num represents three numeric types, NameConstant represents bool and NoneType, and any constant type can be represented as Constant. I propose to get rid of Num, Str, Bytes, Ellipsis and NameConstant and always use Constant. This will simplify the code which currently needs to repeat virtually identical code for all types. I have almost ready patch, the only question is whether it is worth to keep deprecated aliases Num, Str, Bytes, Ellipsis and NameConstant. ---------- components: Interpreter Core messages: 312482 nosy: benjamin.peterson, brett.cannon, gvanrossum, ncoghlan, serhiy.storchaka, vstinner, yselivanov priority: normal severity: normal status: open title: Remove specific constant AST types in favor of ast.Constant type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 21 04:56:21 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Wed, 21 Feb 2018 09:56:21 +0000 Subject: [New-bugs-announce] [issue32893] ast.literal_eval() shouldn't accept booleans as numbers in AST Message-ID: <1519206981.62.0.467229070634.issue32893@psf.upfronthosting.co.za> New submission from Serhiy Storchaka : Currently ast.literal_eval() accepts AST representing expressions like "+True" or "True+2j" if constants are represented as Constant. This is because the type of the value is tested with `isinstance(left, (int, float))` and since bool is a subclass of int it passes this test. The proposed PR makes ast.literal_eval() using tests for exact type. I don't think it is worth backporting since it affects only passing AST to ast.literal_eval(). Usually ast.literal_eval() is used for evaluating strings. 
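A sketch of the case being discussed. It only arises when the boolean is already wrapped in an ast.Constant node (built by hand here), since the 3.7 parser itself still emits NameConstant for True:

import ast

node = ast.Expression(
    body=ast.UnaryOp(op=ast.UAdd(), operand=ast.Constant(value=True)))
# Currently accepted: bool passes the isinstance(..., (int, float)) check, so
# this evaluates to 1.  Under the proposed exact-type test it would raise
# ValueError instead.
print(ast.literal_eval(node))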
---------- components: Library (Lib) messages: 312485 nosy: serhiy.storchaka priority: normal severity: normal status: open title: ast.literal_eval() shouldn't accept booleans as numbers in AST type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 21 07:20:59 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Wed, 21 Feb 2018 12:20:59 +0000 Subject: [New-bugs-announce] [issue32894] AST unparsing of infinity numbers Message-ID: <1519215659.19.0.467229070634.issue32894@psf.upfronthosting.co.za> New submission from Serhiy Storchaka : AST unparsing of infinity numbers produces a string which can't be evaluated because inf and infj are not builtins. >>> from __future__ import annotations >>> def f(x: A[1e1000, 1e1000j]): pass ... >>> f.__annotations__ {'x': 'A[(inf, infj)]'} See how this problem is handled in Tools/parser/unparse.py. There is similar problem with NaN. NaN can't be a result of parsing Python sources, but it can be injected manually in AST, and it can be a result of third-party AST optimizer. ---------- components: Interpreter Core messages: 312492 nosy: gvanrossum, lukasz.langa, serhiy.storchaka priority: normal severity: normal status: open title: AST unparsing of infinity numbers type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 21 09:06:26 2018 From: report at bugs.python.org (Uri Elias) Date: Wed, 21 Feb 2018 14:06:26 +0000 Subject: [New-bugs-announce] [issue32895] Make general function sum() use Numpy's sum when obviously possible Message-ID: <1519221986.73.0.467229070634.issue32895@psf.upfronthosting.co.za> New submission from Uri Elias : True at least to PY2.7 and 3.5 - given x is a numpy array, say np.random.rand(int(1e6)), then sum(x) is much slower (for 1e6 elements - 2 orders of magnitude) than x.sum(). Now, while this is understandable behaviour, I wander how hard it is to add a condition that if argument is a Numpy object then use its own sum. I think many programmers aren't aware of that, so all in all it can improve the performance of a lot of existing code. ---------- components: 2to3 (2.x to 3.x conversion tool) messages: 312495 nosy: urielias priority: normal severity: normal status: open title: Make general function sum() use Numpy's sum when obviously possible type: enhancement versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 21 11:21:46 2018 From: report at bugs.python.org (John Didion) Date: Wed, 21 Feb 2018 16:21:46 +0000 Subject: [New-bugs-announce] [issue32896] Error when subclassing a dataclass with a field that uses a defaultfactory Message-ID: <1519230106.01.0.467229070634.issue32896@psf.upfronthosting.co.za> New submission from John Didion : > @dataclass > class Foo: > x: dict = field(default_factory=dict) > @dataclass > class Bar(Foo): > y: int = 1 > @dataclass > class Baz(Foo): > def blorf(self): > print('hello') > Foo().x {} > Bar().x {} > Baz().x Traceback (most recent call last): File "", line 1, in TypeError: __init__() missing 1 required positional argument: 'x' --- I understand that this is desired behavior when the subclass contains non-default attributes. 
But subclasses that define no additional attributes should work just the same as those that define only additional default attributes. A similar issue was raised and dismissed when dataclasses was in development on GitHub: https://github.com/ericvsmith/dataclasses/issues/112, but that only concerned the case of subclasses defining non-default attributes. ---------- messages: 312496 nosy: John Didion priority: normal severity: normal status: open title: Error when subclassing a dataclass with a field that uses a defaultfactory type: crash versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 21 14:33:22 2018 From: report at bugs.python.org (Amit Ghadge) Date: Wed, 21 Feb 2018 19:33:22 +0000 Subject: [New-bugs-announce] [issue32897] test_gdb failed on Fedora 27 Message-ID: <1519241602.1.0.467229070634.issue32897@psf.upfronthosting.co.za> New submission from Amit Ghadge : Hi, I get latest changes, $ git log -1 Author: Paul Price Date: Wed Feb 21 01:00:01 2018 -0500 compilation done successfully but gdb test is failing, I attached output of test_gdb ---------- components: Tests files: error.txt messages: 312503 nosy: amitg-b14 priority: normal severity: normal status: open title: test_gdb failed on Fedora 27 versions: Python 3.8 Added file: https://bugs.python.org/file47455/error.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 21 14:35:25 2018 From: report at bugs.python.org (Eddie Elizondo) Date: Wed, 21 Feb 2018 19:35:25 +0000 Subject: [New-bugs-announce] [issue32898] [BUILD] Using COUNT_ALLOCS breaks build Message-ID: <1519241725.31.0.467229070634.issue32898@psf.upfronthosting.co.za> New submission from Eddie Elizondo : The following build crashed: mkdir debug && cd debug ../configure --with-pydebug make EXTRA_CFLAGS="-DCOUNT_ALLOCS" The bug was introduced here: https://github.com/python/cpython/commit/25420fe290b98171e6d30edf9350292c21ef700e Fix: 1) s/inter/interp/ 2) Declare PyTypeObject ---------- components: Build messages: 312504 nosy: elizondo93 priority: normal pull_requests: 5578 severity: normal status: open title: [BUILD] Using COUNT_ALLOCS breaks build type: compile error _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 21 15:01:23 2018 From: report at bugs.python.org (xitop) Date: Wed, 21 Feb 2018 20:01:23 +0000 Subject: [New-bugs-announce] [issue32899] Not documented: key in dict test may raise TypeError Message-ID: <1519243283.39.0.467229070634.issue32899@psf.upfronthosting.co.za> New submission from xitop : I'd like to suggest an addition to the documentation of the "key in dict" operation. Current version found at https://docs.python.org/3/library/stdtypes.html#dict says only: --- key in d Return True if d has a key key, else False. --- This is not precise. TypeError is also a possible outcome. It happens when the key is unhashable. Unsure whether the description is incomplete or the membership test has a bug I submitted the issue 32675 in January. The issue was closed with resolution of "not a bug". If it is indded the intended behaviour, I think it needs to be documented in order to prevent further misunderstandings. Before the issue 32675 I believed a membership test is failsafe, because the definition of __contains__ clearly defines the return value as either True or False. 
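The behaviour the requested documentation change would cover, in a few lines (nothing here is specific to any particular dict):

d = {'a': 1}
print('missing' in d)   # False, as currently documented
print([] in d)          # raises TypeError: unhashable type: 'list'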
---------- assignee: docs at python components: Documentation messages: 312506 nosy: docs at python, xitop priority: normal severity: normal status: open title: Not documented: key in dict test may raise TypeError type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 21 16:21:20 2018 From: report at bugs.python.org (Matthias Urlichs) Date: Wed, 21 Feb 2018 21:21:20 +0000 Subject: [New-bugs-announce] [issue32900] Teach pdb to step through asyncio et al. Message-ID: <1519248080.49.0.467229070634.issue32900@psf.upfronthosting.co.za> New submission from Matthias Urlichs : The attached patch is a proof-of-concept implementation of a way to teach pdb to "single-step" through non-interesting code that you can't skip with "n". The prime example for this is asyncio, trio et al., though decorators like @contextlib.contextmanager also benefit. A "real" implementation should allow the user to specify ranges to ignore, on the pdb command line (probably by filename and optional range of line numbers, instead of pattern matching). A visual indication of how much code has been skipped-ahead that way might also be beneficial. ---------- components: asyncio messages: 312509 nosy: asvetlov, giampaolo.rodola, njs, smurfix, yselivanov priority: normal severity: normal status: open title: Teach pdb to step through asyncio et al. type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 21 16:26:55 2018 From: report at bugs.python.org (Terry J. Reedy) Date: Wed, 21 Feb 2018 21:26:55 +0000 Subject: [New-bugs-announce] [issue32901] Update Windows 3.7/8 builds to tcl/tk 8.6.8 Message-ID: <1519248415.02.0.467229070634.issue32901@psf.upfronthosting.co.za> New submission from Terry J. Reedy : This should be done ASAP for most testing. MacOS 64bit 3.7.0b1 links to 8.6.7. Whether that should be upgraded to 8.6.8 is up to Ned. I raised question on #15663. ---------- messages: 312512 nosy: serhiy.storchaka, terry.reedy, zach.ware priority: normal severity: normal status: open title: Update Windows 3.7/8 builds to tcl/tk 8.6.8 versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 21 19:01:28 2018 From: report at bugs.python.org (mj4int) Date: Thu, 22 Feb 2018 00:01:28 +0000 Subject: [New-bugs-announce] [issue32902] FileInput "inplace" redirects output of other threads Message-ID: <1519257688.59.0.467229070634.issue32902@psf.upfronthosting.co.za> New submission from mj4int : A pool of threads exists, all of which have started executing. Thread A has a fileinput object and is currently iterating over the files in "edit in place mode". For each file, stdout is redirected to the file. Thread A can call print and write to the file. Thread B just wants to log some things in the console. Thread B calls print and... writes to the file thread A is processing. stdout is hijacked by thread A's fileinput loop. Whether or not every thread should have an independent evaluation of stdout, certainly a fileinput object shouldn't silently redirect the prints of an innocent bystander thread? May exist in other python versions, but not checked. 
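A hedged sketch of the scenario described; the file name, sleeps and messages are invented for illustration. While the editor thread iterates FileInput(..., inplace=True), sys.stdout is swapped process-wide, so the logger thread's print() lands in the file being rewritten:

import fileinput, threading, time

with open('data.txt', 'w') as f:
    f.write('line 1\nline 2\n')

def editor():                                # "thread A" from the report
    for line in fileinput.input('data.txt', inplace=True):
        time.sleep(0.1)                      # widen the window
        print(line.rstrip().upper())         # intentionally goes to data.txt

def logger():                                # "thread B" from the report
    time.sleep(0.05)
    print('logger: just a console message')  # ends up inside data.txt instead

a = threading.Thread(target=editor)
b = threading.Thread(target=logger)
a.start(); b.start(); a.join(); b.join()
print(open('data.txt').read())               # shows the logger line mixed in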
---------- components: IO, Library (Lib) messages: 312516 nosy: mj4int priority: normal severity: normal status: open title: FileInput "inplace" redirects output of other threads type: behavior versions: Python 2.7, Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 21 19:37:42 2018 From: report at bugs.python.org (Alexey Izbyshev) Date: Thu, 22 Feb 2018 00:37:42 +0000 Subject: [New-bugs-announce] [issue32903] os.chdir() may leak memory on Windows Message-ID: <1519259862.55.0.467229070634.issue32903@psf.upfronthosting.co.za> New submission from Alexey Izbyshev : 'new_path' is not freed if the new directory is a UNC path longer than MAX_PATH. ---------- components: Extension Modules, Windows messages: 312522 nosy: izbyshev, paul.moore, steve.dower, tim.golden, vstinner, zach.ware priority: normal severity: normal status: open title: os.chdir() may leak memory on Windows type: resource usage versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 21 20:00:15 2018 From: report at bugs.python.org (Alexey Izbyshev) Date: Thu, 22 Feb 2018 01:00:15 +0000 Subject: [New-bugs-announce] [issue32904] os.chdir() may crash on Windows in presence of races Message-ID: <1519261215.82.0.467229070634.issue32904@psf.upfronthosting.co.za> New submission from Alexey Izbyshev : win32_wchdir() retries GetCurrentDirectory() with a larger buffer if necessary, but doesn't check whether the new buffer is large enough. Another thread could change the current directory in meanwhile, so the buffer could turn out to be still not large enough, left in an uninitialized state and passed to SetEnvironmentVariable() afterwards. ---------- components: Extension Modules, Windows messages: 312524 nosy: izbyshev, paul.moore, steve.dower, tim.golden, vstinner, zach.ware priority: normal severity: normal status: open title: os.chdir() may crash on Windows in presence of races type: crash versions: Python 2.7, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 22 00:06:20 2018 From: report at bugs.python.org (Terry J. Reedy) Date: Thu, 22 Feb 2018 05:06:20 +0000 Subject: [New-bugs-announce] [issue32905] IDLE pyparse: fix initialization and remove unused code Message-ID: <1519275980.67.0.467229070634.issue32905@psf.upfronthosting.co.za> Change by Terry J. 
Reedy : ---------- assignee: terry.reedy components: IDLE nosy: terry.reedy priority: normal severity: normal stage: test needed status: open title: IDLE pyparse: fix initialization and remove unused code type: behavior versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 22 01:26:34 2018 From: report at bugs.python.org (Varalakshmi) Date: Thu, 22 Feb 2018 06:26:34 +0000 Subject: [New-bugs-announce] [issue32906] AttributeError: 'NoneType' object has no attribute 'sendall' Message-ID: <1519280794.48.0.467229070634.issue32906@psf.upfronthosting.co.za> New submission from Varalakshmi : Hi , I have requirement to simulate http server on particular port we have an application which sends req to that simulated server for that port I need to get that request read the request body Validate the request body and send response back to the appln from the simulated server based on the validation I want to do all this in robot framework I'm using BaseHTTPServer module to simulate the server my code is as follows #!/usr/bin/python import sys simulator_server_ip=sys.argv[1] simulator_server_port=sys.argv[2] from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler from SocketServer import ThreadingMixIn import threading class Handler(BaseHTTPRequestHandler): def do_POST(self): print "getting data section" content_len = int(self.headers.getheader('content-length', 0)) self.req_body = self.rfile.read(content_len) return self.req_body def sendresponse_code(self,code): print "response section" self.send_response(code) self.end_headers() message = threading.currentThread().getName() self.wfile.write(message) self.wfile.write('\n') return class ThreadedHTTPServer(ThreadingMixIn, HTTPServer): """Handle requests in a separate thread.""" if __name__ == '__main__': server = ThreadedHTTPServer((simulator_server_ip,int(simulator_server_port)), Handler) req,c_a=server.get_request() hlr=Handler(req,c_a,server.server_address) print hlr.req_body # this is the request body which needs to be validated outside the code resp_code=input("send the code:") # based on the validation I need to send the response which is handled from outside hlr.sendresponse_code(resp_code) when I run the above code , in send_response section it is failing and I'm getting the following error getting data section {"push-message":"json_data_too_many_parameters_so_not_pasting_the_Data"} send the code:200 response section 10.10.30.50 - - [22/Feb/2018 19:14:44] "POST /pushnotification/v1.0/message HTTP/1.1" 200 - Traceback (most recent call last): File "PNS_200Resp.py", line 38, in hlr.sendresponse_code(resp_code) File "PNS_200Resp.py", line 22, in sendresponse_code self.send_response(code) File "/usr/lib64/python2.6/BaseHTTPServer.py", line 383, in send_response (self.protocol_version, code, message)) File "/usr/lib64/python2.6/socket.py", line 324, in write self.flush() File "/usr/lib64/python2.6/socket.py", line 303, in flush self._sock.sendall(buffer(data, write_offset, buffer_size)) AttributeError: 'NoneType' object has no attribute 'sendall' can some one Please check and let me know what is wrong with the code ---------- components: Library (Lib) messages: 312537 nosy: blvaralakshmi at gmail.com priority: normal severity: normal status: open title: AttributeError: 'NoneType' object has no attribute 'sendall' type: compile error versions: Python 2.7 _______________________________________ Python tracker 
_______________________________________ From report at bugs.python.org Thu Feb 22 04:40:02 2018 From: report at bugs.python.org (Alexey Izbyshev) Date: Thu, 22 Feb 2018 09:40:02 +0000 Subject: [New-bugs-announce] [issue32907] pathlib: test_resolve_common fails on Windows Message-ID: <1519292402.67.0.467229070634.issue32907@psf.upfronthosting.co.za> New submission from Alexey Izbyshev : ====================================================================== FAIL: test_resolve_common (test.test_pathlib.PathTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\workspace\cpython-3.8a\lib\test\test_pathlib.py", line 1538, in test_resolve_common self._check_resolve_relative(p, P(d, 'foo', 'in', 'spam'), False) File "C:\workspace\cpython-3.8a\lib\test\test_pathlib.py", line 1477, in _check_resolve self.assertEqual(q, expected) AssertionError: WindowsPath('C:/Users/longusername/AppData/Local/Temp/tmpbenaiqaa-[ 13 chars]pam') != WindowsPath('C:/Users/LONGUS~1/AppData/Local/Temp/tmpbenaiqaa- dirD/foo/in/spam') ====================================================================== The problem is that the temporary directory path returned by tempfile.mkdtemp() contains the username in "short" (8.3) format, but Path.resolve() converts short names to long ones (thanks to ntpath._getfinalpathname()). Since os.path.realpath() still doesn't resolve symlinks on Windows (#9949, #14094), and users of ntpath._getfinalpathname() have to deal with '\\?\' prefix, I think I'll just use Path.resolve() for the tmp dir path as a workaround. ---------- components: Tests, Windows messages: 312545 nosy: izbyshev, paul.moore, pitrou, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: pathlib: test_resolve_common fails on Windows type: behavior versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 22 05:02:31 2018 From: report at bugs.python.org (felix.engelmann) Date: Thu, 22 Feb 2018 10:02:31 +0000 Subject: [New-bugs-announce] [issue32908] decimal ROUND_HALF_UP not according to spec for 9.95 to 10.0 Message-ID: <1519293751.66.0.467229070634.issue32908@psf.upfronthosting.co.za> New submission from felix.engelmann : As described in https://www.python.org/dev/peps/pep-0327/#rounding-algorithms round-half-up: If the discarded digits represent greater than or equal to half (0.5) then the result should be incremented by 1; otherwise the discarded digits are ignored. 
Rounding 9.95 to 1 decimal with ROUND_HALF_UP results in 9.9 instead of 10.0: Decimal(9.95).quantize(Decimal('1.1'),ROUND_HALF_UP) Out[49]: Decimal('9.9') It does not matter at which position this rounding that influences another digit happens: Decimal(9.995).quantize(Decimal('1.11'),ROUND_HALF_UP) Out[50]: Decimal('9.99') It is a specific problem with the 5, because 9.96 works as expected Decimal(9.96).quantize(Decimal('1.1'),ROUND_HALF_UP) Out[40]: Decimal('10.0') System: Python 3.6.4 import decimal decimal.__version__ : '1.70' ---------- components: Library (Lib) messages: 312546 nosy: felix.engelmann priority: normal severity: normal status: open title: decimal ROUND_HALF_UP not according to spec for 9.95 to 10.0 type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 22 05:55:16 2018 From: report at bugs.python.org (cbrnr) Date: Thu, 22 Feb 2018 10:55:16 +0000 Subject: [New-bugs-announce] [issue32909] ApplePersistenceIgnoreState warning on macOS Message-ID: <1519296916.2.0.467229070634.issue32909@psf.upfronthosting.co.za> New submission from cbrnr : There seems to be a problem with using certain Python packages and the application resume feature of recent macOS versions. Specifically, whenever I "import matplotlib.pyplot" or run the magic command "%matplotlib" in IPython, I get the following warning message: 2018-02-22 10:35:38.287 Python[4145:281298] ApplePersistenceIgnoreState: Existing state will not be touched. New state will be written to (null) There's an issue in the matplotlib repo (https://github.com/matplotlib/matplotlib/issues/6242), but I don't think this problem can be fixed by matplotlib. Instead, according to this SO post (https://stackoverflow.com/a/21567601/1112283), the following command fixes the behavior: defaults write org.python.python ApplePersistenceIgnoreState NO Since this problem also comes up with Homebrew, I created an issue (https://github.com/Homebrew/homebrew-core/issues/24424), but the maintainers indicated that (1) this might be a Python issue and should be addressed upstream, and (2) the solution above is not a real fix and the correct behavior should be implemented programmatically by Python itself. ---------- components: macOS messages: 312550 nosy: cbrnr, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: ApplePersistenceIgnoreState warning on macOS type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 22 06:38:08 2018 From: report at bugs.python.org (Kirill Balunov) Date: Thu, 22 Feb 2018 11:38:08 +0000 Subject: [New-bugs-announce] [issue32910] venv: Deactivate.ps1 is not created when Activate.ps1 was used Message-ID: <1519299488.62.0.467229070634.issue32910@psf.upfronthosting.co.za> New submission from Kirill Balunov : There was a related issue, which was closed https://bugs.python.org/issue26715. If the virtual environment was activated using the Powershell script - Activate.ps1, the Deactivate.ps1 was not created, while the documentation says that it should. "You can deactivate a virtual environment by typing 'deactivate' in your shell. The exact mechanism is platform-specific: for example, the Bash activation script defines a 'deactivate' function, whereas on Windows there are separate scripts called deactivate.bat and Deactivate.ps1 which are installed when the virtual environment is created."
Way to reproduce under Windows 10, Python 3.6.4 1. Open elevated Powershell (Administrator access). 2. Activate virtual environment using Activate.ps1. 3. There is no Deactivate.ps1 Also, when the environment was activated with Activate.ps1, `deactivate` will not work. On the other hand, if the environment was activated simply with `activate` (it works) in Powershell, `deactivate` will also work. ---------- components: Windows messages: 312551 nosy: godaygo, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: venv: Deactivate.ps1 is not created when Activate.ps1 was used type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 22 07:20:07 2018 From: report at bugs.python.org (Mark Shannon) Date: Thu, 22 Feb 2018 12:20:07 +0000 Subject: [New-bugs-announce] [issue32911] Doc strings omitted from AST Message-ID: <1519302007.93.0.467229070634.issue32911@psf.upfronthosting.co.za> New submission from Mark Shannon : Python 3.7.0b1+ (heads/3.7:dfa1144, Feb 22 2018, 12:10:59) >>> m = ast.parse("def foo():\n 'help'") >>> m.body[0].body [] Correct behaviour (3.6 or earlier) >>> m = ast.parse("def foo():\n 'help'") >>> m.body[0].body [<_ast.Expr object at 0x7fb9eeb1d4e0>] ---------- components: Library (Lib) keywords: 3.7regression messages: 312557 nosy: Mark.Shannon priority: normal severity: normal status: open title: Doc strings omitted from AST type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 22 13:34:57 2018 From: report at bugs.python.org (Emanuel Barry) Date: Thu, 22 Feb 2018 18:34:57 +0000 Subject: [New-bugs-announce] [issue32912] Raise non-silent warning for invalid escape sequences Message-ID: <1519324497.82.0.467229070634.issue32912@psf.upfronthosting.co.za> New submission from Emanuel Barry : This is a follow-up to Issue27364. Back in Python 3.6, a silent warning was added for all invalid escape sequences in str and bytes. It was suggested that it would remain a silent warning (which still impacts tests, while not visually annoying the average user) for two releases (3.6 and 3.7), then would be upgraded to a non-silent warning for two subsequent releases (3.8 and 3.9) before becoming a full-on syntax error. With the 3.7 feature freeze on and going, I think it's time to evaluate the approach we take for 3.8 :) I suggest upgrading the DeprecationWarning to a SyntaxWarning, which is visible by default, for 3.8 and 3.9. I have cross-linked #27364 to this issue as well. -Em ---------- components: Interpreter Core messages: 312575 nosy: ebarry priority: normal severity: normal stage: needs patch status: open title: Raise non-silent warning for invalid escape sequences type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 22 15:10:00 2018 From: report at bugs.python.org (Joshua Li) Date: Thu, 22 Feb 2018 20:10:00 +0000 Subject: [New-bugs-announce] [issue32913] Improve regular expression HOWTO Message-ID: <1519330200.78.0.467229070634.issue32913@psf.upfronthosting.co.za> New submission from Joshua Li : "Python HOWTOs are documents that cover a single, specific topic, and attempt to cover it fairly completely." 
It would be quite helpful if the section "non-capturing-and-named-groups" in the regex HOWTO contained at least a mention and short usage example of the re.match.groupdict method, something I have found to be pythonic and useful, yet it does not appear frequently in the docs. I will be submitting a PR for this. ---------- assignee: docs at python components: Documentation messages: 312592 nosy: JoshuaRLi, docs at python priority: normal severity: normal status: open title: Improve regular expression HOWTO type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 22 18:21:15 2018 From: report at bugs.python.org (=?utf-8?q?Stig-=C3=98rjan_Smelror?=) Date: Thu, 22 Feb 2018 23:21:15 +0000 Subject: [New-bugs-announce] [issue32914] python3-config --ldflags gives a CMP0004 error due to a whitespace Message-ID: <1519341675.65.0.467229070634.issue32914@psf.upfronthosting.co.za> New submission from Stig-?rjan Smelror : Hi. I bumped into an interesting compilation issue when I was compiling ecFlow with Python 3 support. It turns out that python3-config --ldflags gave me this: " -L/usr/lib64 -lpython3.6m -lpthread -ldl -lutil -lm -Xlinker -export-dynamic" This caused a CMP0004 error due to the space before -L. With this patch applied, the command gives me: "-L/usr/lib64 -lpython3.6m -lpthread -ldl -lutil -lm -Xlinker -export-dynamic" Attached is the patch I made to fix this issue. It's as simple as moving $LIBPLUSED one place so that -L$libdir is first. ---------- components: Library (Lib) files: python3-3.6.2-python3-config-LIBPLUSED-cmp0004-error.patch keywords: patch messages: 312602 nosy: kekePower priority: normal severity: normal status: open title: python3-config --ldflags gives a CMP0004 error due to a whitespace type: compile error versions: Python 3.6 Added file: https://bugs.python.org/file47458/python3-3.6.2-python3-config-LIBPLUSED-cmp0004-error.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 22 18:41:55 2018 From: report at bugs.python.org (Trey Hunner) Date: Thu, 22 Feb 2018 23:41:55 +0000 Subject: [New-bugs-announce] [issue32915] Running Python 2 with -3 flag doesn't complain about cmp/__cmp__ Message-ID: <1519342915.53.0.467229070634.issue32915@psf.upfronthosting.co.za> New submission from Trey Hunner : I might be misunderstanding the use of the -3 flag, but it seems like cmp() and __cmp__ should result in warnings being displayed. ---------- components: 2to3 (2.x to 3.x conversion tool) files: caseless.py messages: 312604 nosy: trey priority: normal severity: normal status: open title: Running Python 2 with -3 flag doesn't complain about cmp/__cmp__ versions: Python 2.7 Added file: https://bugs.python.org/file47459/caseless.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 22 19:56:01 2018 From: report at bugs.python.org (Terry J. Reedy) Date: Fri, 23 Feb 2018 00:56:01 +0000 Subject: [New-bugs-announce] [issue32916] IDLE: change 'str' to 'code' in idlelib.pyparse.PyParse and users Message-ID: <1519347361.47.0.467229070634.issue32916@psf.upfronthosting.co.za> New submission from Terry J. Reedy : Change 'str' to 'code' in pyparse and code that uses it. 'str' conflicts with the built-in name and it too general for 'the block of python code being processed'. 'code' is what the string is. 
The change applies to local 'str', 'self.str' references, and the 'set_str' method. The latter requires renames in other modules. From grep: F:\dev\3x\lib\idlelib\editor.py: 1305: y.set_str(rawtext) F:\dev\3x\lib\idlelib\editor.py: 1319: y.set_str(rawtext) F:\dev\3x\lib\idlelib\hyperparser.py: 47: parser.set_str(text.get(startatindex, stopatindex)+' \n') F:\dev\3x\lib\idlelib\hyperparser.py: 63: parser.set_str(text.get(startatindex, stopatindex)+' \n') editor imports pyparse and calls Parser once in y = pyparse.Parser... and never references y.str hyperparser imports pyparse and calls Parser once in parser = pyparse.Parser... and does reference the modifies parser.str once in line 67 self.rawtext = parser.str[:-2] set_str is not called within pyparse itself The existing pyparse tests are sufficient for pyparse since they execute every line containig 'str'. The hyperparser test covers the above lines in Hyperparser.__init__, but test_editor covers almost nothing and would miss the editor lines. The two files access various methods and the editor code, the C_ constants, so I am not inclined to change names that are not so actively obnoxious. Since this will impact other pyparse changes, I think it should be next. Cheryl, respond here if you want to do the PR. ---------- assignee: terry.reedy components: IDLE messages: 312606 nosy: csabella, terry.reedy priority: normal severity: normal stage: needs patch status: open title: IDLE: change 'str' to 'code' in idlelib.pyparse.PyParse and users versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 22 20:29:50 2018 From: report at bugs.python.org (TitanSnow) Date: Fri, 23 Feb 2018 01:29:50 +0000 Subject: [New-bugs-announce] [issue32917] ConfigParser writes a superfluous final blank line Message-ID: <1519349390.35.0.467229070634.issue32917@psf.upfronthosting.co.za> New submission from TitanSnow : ``ConfigParser.write()`` writes a superfluous final blank line. Example:: import configparser cp = configparser.ConfigParser() cp['section1'] = {'key': 'value'} cp['section2'] = {'key': 'value'} with open('configparser.ini', 'w') as f: cp.write(f) The output file 'configparser.ini' will be:: (I added line number) 1 [section1] 2 key = value 3 4 [section2] 5 key = value 6 with a superfluous final blank line. Compare to ``GLib.KeyFile``:: import gi gi.require_version('GLib', '2.0') from gi.repository import GLib kf = GLib.KeyFile() kf.set_string('section1', 'key', 'value') kf.set_string('section2', 'key', 'value') kf.save_to_file('glib.ini') The output file 'glib.ini' will be:: (I added line number) 1 [section1] 2 key=value 3 4 [section2] 5 key=value without a superfluous final blank line. ---------- components: Library (Lib) files: final_blank_line.patch keywords: patch messages: 312608 nosy: tttnns priority: normal severity: normal status: open title: ConfigParser writes a superfluous final blank line type: behavior Added file: https://bugs.python.org/file47460/final_blank_line.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 22 22:41:55 2018 From: report at bugs.python.org (Terry J. Reedy) Date: Fri, 23 Feb 2018 03:41:55 +0000 Subject: [New-bugs-announce] [issue32918] IDLE: make smart indent after comment line consistent Message-ID: <1519357315.65.0.467229070634.issue32918@psf.upfronthosting.co.za> New submission from Terry J. 
Reedy : https://docs.python.org/3/reference/lexical_analysis.html#blank-lines says "A logical line that contains only spaces, tabs, formfeeds and possibly a comment, is ignored" but notes that REPLs might not ignore them during interactive input. The same is true of editors as lines are typed. In particular, even a no-code comment line can suggest where a smart indenter should indent. In any case, Python does not care what follows '#', and IDLE should not either (except when uncommenting-out lines). Suppose one types the following: if 1: if 2: print(3) # #x # x Currently, IDLE indents 4 columns after '#' and '# x' and 8 columns after '#x'. If one moves the comments to the margin, the indents are 0 and 8 columns. This is because pyparse.Parser._study2 ignores lines that match _junkre and _junkre only matches comments with '#' followed by a non-space (/B) character. I think that IDLE should generally assume that the comment starts on a legal column and that the comment will apply to the next line typed, which will have the same indent, and that the lack of space after '#' (there have been many such in idlelib) is preference, indifference, or error. The only exception relevant to IDLE is '##' inserted at the beginning of code lines to (temporarily) make them ignored. If one places the cursor at the end of such a line and hits return to insert new lines, some indent is likely wanted if the line above is indented. Matching '##.*\n' is easy enough. Note that smart indent always uses the line above, if there is one, because there typically is not one below, and if there is, either choice is arbitrary and being smarter would cost time. (This should be in revised doc.) ---------- messages: 312613 nosy: csabella, terry.reedy priority: normal severity: normal stage: test needed status: open title: IDLE: make smart indent after comment line consistent type: behavior versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 22 23:55:23 2018 From: report at bugs.python.org (Pavel Shpilev) Date: Fri, 23 Feb 2018 04:55:23 +0000 Subject: [New-bugs-announce] [issue32919] csv.reader() to support QUOTE_ALL Message-ID: <1519361723.9.0.467229070634.issue32919@psf.upfronthosting.co.za> New submission from Pavel Shpilev : It appears that in current implementation csv.QUOTE_ALL has no effect on csv. reader(), it only affects csv.writer(). I know that csv is a poorly defined format and all, but I think this might be useful to distinguish None and '' values for the sources that use such quoting. Example: "1","Noneval",,"9" "2","Emptystr","","10" "3","somethingelse","","8" Reader converts all values in the third column to empty strings. The suggestion is to adjust reader's behaviour so when quoting=csv.QUOTE_ALL that would instruct reader to convert empty values (like the one in the first row) to None instead. 
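For illustration, this is what the reader does with the sample above today -- the unquoted empty field and the quoted empty string come back as the same value, which is exactly the information the proposal would preserve:

    >>> import csv, io
    >>> data = '"1","Noneval",,"9"\n"2","Emptystr","","10"\n'
    >>> for row in csv.reader(io.StringIO(data)):
    ...     print(row)
    ...
    ['1', 'Noneval', '', '9']
    ['2', 'Emptystr', '', '10']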
---------- components: Extension Modules messages: 312617 nosy: Pavel Shpilev priority: normal severity: normal status: open title: csv.reader() to support QUOTE_ALL type: enhancement versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 23 01:44:28 2018 From: report at bugs.python.org (Eryk Sun) Date: Fri, 23 Feb 2018 06:44:28 +0000 Subject: [New-bugs-announce] [issue32920] Implement PEP 529 for os.getcwdb on Windows Message-ID: <1519368268.76.0.467229070634.issue32920@psf.upfronthosting.co.za> New submission from Eryk Sun : When reviewing issue 32904 I noticed that os.getcwdb still calls the CRT _getcwd function. Apparently this was overlooked when implementing PEP 529. For example: >>> os.getcwd() 'C:\\Temp\\Lang\\????' >>> os.getcwdb() b'C:\\Temp\\Lang\\a\xdf?d' Not only is the encoding wrong, but because the CRT uses GetFullPathNameA (the CRT's implementation of _getcwd is convoluted, IMO), the call fails if the current directory exceeds MAX_PATH. Python 3.6+ on Windows 10 otherwise supports long paths. ---------- components: Library (Lib), Unicode, Windows messages: 312620 nosy: eryksun, ezio.melotti, paul.moore, steve.dower, tim.golden, vstinner, zach.ware priority: normal severity: normal stage: needs patch status: open title: Implement PEP 529 for os.getcwdb on Windows type: behavior versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 23 08:03:18 2018 From: report at bugs.python.org (Einar Fredriksen) Date: Fri, 23 Feb 2018 13:03:18 +0000 Subject: [New-bugs-announce] [issue32921] .pth files cannot contain folders with utf-8 names Message-ID: <1519390998.11.0.467229070634.issue32921@psf.upfronthosting.co.za> New submission from Einar Fredriksen : Add "G:\??????? ????" to a pth file and start python. 
it fails with -------------- Failed to import the site module Traceback (most recent call last): File "C:\Program Files\ROXAR\RMS dev_release\windows-amd64-vc_14_0-release\bin\lib\site.py", line 546, in main() File "C:\Program Files\ROXAR\RMS dev_release\windows-amd64-vc_14_0-release\bin\lib\site.py", line 532, in main known_paths = addusersitepackages(known_paths) File "C:\Program Files\ROXAR\RMS dev_release\windows-amd64-vc_14_0-release\bin\lib\site.py", line 287, in addusersitepackages addsitedir(user_site, known_paths) File "C:\Program Files\ROXAR\RMS dev_release\windows-amd64-vc_14_0-release\bin\lib\site.py", line 209, in addsitedir addpackage(sitedir, name, known_paths) File "C:\Program Files\ROXAR\RMS dev_release\windows-amd64-vc_14_0-release\bin\lib\site.py", line 165, in addpackage for n, line in enumerate(f): File "C:\Program Files\ROXAR\RMS dev_release\windows-amd64-vc_14_0-release\bin\lib\encodings\cp1252.py", line 23, in decode return codecs.charmap_decode(input,self.errors,decoding_table)[0] UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 8: character maps to ---------------- This might very well have sideeffects, but adding "encoding='utf-8'" to the open() call in site.py def addpackage seems to fix the issue for me ---------- components: Unicode messages: 312635 nosy: einaren, ezio.melotti, vstinner priority: normal severity: normal status: open title: .pth files cannot contain folders with utf-8 names type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 23 10:11:08 2018 From: report at bugs.python.org (Josh Friend) Date: Fri, 23 Feb 2018 15:11:08 +0000 Subject: [New-bugs-announce] [issue32922] dbm.open() encodes filename with default encoding rather than the filesystem encoding Message-ID: <1519398668.71.0.467229070634.issue32922@psf.upfronthosting.co.za> New submission from Josh Friend : Armin Rigo from the PyPy project pointed this out to me: https://bitbucket.org/pypy/pypy/issues/2755/dbmopen-expects-a-str-for-filename-throws This could probably lead to some weird behavior given the right filename ---------- components: Library (Lib) messages: 312636 nosy: Josh Friend priority: normal severity: normal status: open title: dbm.open() encodes filename with default encoding rather than the filesystem encoding type: behavior versions: Python 2.7, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 23 10:30:13 2018 From: report at bugs.python.org (=?utf-8?b?0KLQuNC80L7RhNC10Lkg0KXQuNGA0YzRj9C90L7Qsg==?=) Date: Fri, 23 Feb 2018 15:30:13 +0000 Subject: [New-bugs-announce] [issue32923] Typo in documentation of unittest: whilst instead of while Message-ID: <1519399813.0.0.467229070634.issue32923@psf.upfronthosting.co.za> New submission from ??????? ???????? : Typo is on the https://docs.python.org/3/library/unittest.html page. At bottom of the page you can see text: "unittest.removeHandler(function=None) When called without arguments this function removes the control-c handler if it has been installed. This function can also be used as a test decorator to temporarily remove the handler whilst the test is being executed:" Typo is: ST instead of E in the word "whilst". ---------- assignee: docs at python components: Documentation messages: 312637 nosy: docs at python, ??????? ???????? 
priority: normal severity: normal status: open title: Typo in documentation of unittest: whilst instead of while versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 23 11:57:38 2018 From: report at bugs.python.org (Mariatta Wijaya) Date: Fri, 23 Feb 2018 16:57:38 +0000 Subject: [New-bugs-announce] [issue32924] Python 3.7 docs in docs.p.o points to GitHub's master branch Message-ID: <1519405058.43.0.467229070634.issue32924@psf.upfronthosting.co.za> New submission from Mariatta Wijaya : When viewing Python 3.7 docs in docs.python.org, the show source link is pointing to the master branch on GitHub. It should point to the 3.7 branch. I'm working on a fix, however this is something to we should remember doing when we go to 3.9+. Adding Python 3.8 and 3.9's release manager :) ---------- assignee: Mariatta components: Documentation messages: 312645 nosy: Mariatta, lukasz.langa priority: normal severity: normal status: open title: Python 3.7 docs in docs.p.o points to GitHub's master branch versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 23 16:44:18 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Fri, 23 Feb 2018 21:44:18 +0000 Subject: [New-bugs-announce] [issue32925] AST optimizer: Change a list into tuple in iterations and containment tests Message-ID: <1519422258.29.0.467229070634.issue32925@psf.upfronthosting.co.za> New submission from Serhiy Storchaka : Currently a list of constants is replaced with constant tuple in `x in [1, 2]` and `for x in [1, 2]`. The resulted AST is the same as for `x in (1, 2)` and `for x in (1, 2)`. The proposed simple PR extends this optimization to lists containing non-constants. `x in [a, b]` will be changed into `x in (a, b)` and `for x in [a, b]` will be changed into `for x in (a, b)`. Since creating a tuple is faster than creating a list the latter form is a tiny bit faster. $ ./python -m timeit -s 'a, b = 1, 2' -- 'for x in [a, b]: pass' 5000000 loops, best of 5: 93.6 nsec per loop $ ./python -m timeit -s 'a, b = 1, 2' -- 'for x in (a, b): pass' 5000000 loops, best of 5: 74.3 nsec per loop ./python -m timeit -s 'a, b = 1, 2' -- '1 in [a, b]' 5000000 loops, best of 5: 58.9 nsec per loop $ ./python -m timeit -s 'a, b = 1, 2' -- '1 in (a, b)' 10000000 loops, best of 5: 39.3 nsec per loop ---------- components: Interpreter Core messages: 312670 nosy: benjamin.peterson, brett.cannon, ncoghlan, serhiy.storchaka, yselivanov priority: normal severity: normal status: open title: AST optimizer: Change a list into tuple in iterations and containment tests type: performance versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 23 17:08:37 2018 From: report at bugs.python.org (Victor Engmark) Date: Fri, 23 Feb 2018 22:08:37 +0000 Subject: [New-bugs-announce] [issue32926] Add public TestCase method/property to get result of current test Message-ID: <1519423717.02.0.467229070634.issue32926@psf.upfronthosting.co.za> New submission from Victor Engmark : The community has come up with multiple hacks [1] to be able to inspect the current test result in tearDown (to collect more expensive diagnostics only when the test fails). It would be great to have a documented and simple way to check whether the test currently in play was successful. 
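For context, the hacks referred to above usually reach into private TestCase internals from tearDown(); a rough sketch of the common recipe (the _outcome and _feedErrorsToResult names are undocumented and tied to the CPython 3.4+ implementation, so this can break at any time):

    import unittest

    class DiagnosticCase(unittest.TestCase):
        def tearDown(self):
            # Private API: copy the errors recorded so far into a throwaway
            # result object and inspect it.
            result = self.defaultTestResult()
            self._feedErrorsToResult(result, self._outcome.errors)
            if result.errors or result.failures:
                print('collecting expensive diagnostics for', self.id())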
To be clear, TestCase.wasSuccessful() is not useful in this case, since it only reports whether all tests run so far were successful. [1] https://stackoverflow.com/q/4414234/96588 ---------- messages: 312671 nosy: Victor Engmark priority: normal severity: normal status: open title: Add public TestCase method/property to get result of current test type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 23 17:18:22 2018 From: report at bugs.python.org (Victor Engmark) Date: Fri, 23 Feb 2018 22:18:22 +0000 Subject: [New-bugs-announce] [issue32927] Add typeshed documentation for unittest.TestCase._feedErrorsToResult and ._outcome Message-ID: <1519424302.16.0.467229070634.issue32927@psf.upfronthosting.co.za> New submission from Victor Engmark : Until and unless #32926 is fixed it would be good to be able to type inspect code which uses the currently least hacky way to inspect the result of the current test [1]. This requires the use of TestCase._feedErrorsToResult and TestCase._outcome. [1]: https://stackoverflow.com/a/39606065/96588 ---------- messages: 312672 nosy: Victor Engmark priority: normal severity: normal status: open title: Add typeshed documentation for unittest.TestCase._feedErrorsToResult and ._outcome type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 23 18:10:15 2018 From: report at bugs.python.org (William Pickard) Date: Fri, 23 Feb 2018 23:10:15 +0000 Subject: [New-bugs-announce] [issue32928] _findvs failing on Windows 10 (Release build only) Message-ID: <1519427415.69.0.467229070634.issue32928@psf.upfronthosting.co.za> New submission from William Pickard : The distutils module _findvs is failing on my Windows 10 PRO machine with the following error: OSError: Error 80070002 Note: Building Python 3.6 in debug for some reason doesn't cause the error. ---------- components: Distutils, Extension Modules, Library (Lib), Windows messages: 312673 nosy: WildCard65, dstufft, eric.araujo, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: _findvs failing on Windows 10 (Release build only) type: crash versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 23 20:44:53 2018 From: report at bugs.python.org (Eric V. Smith) Date: Sat, 24 Feb 2018 01:44:53 +0000 Subject: [New-bugs-announce] [issue32929] Change dataclasses hashing to use unsafe_hash boolean (default to False) Message-ID: <1519436693.93.0.467229070634.issue32929@psf.upfronthosting.co.za> New submission from Eric V. Smith : See https://mail.python.org/pipermail/python-dev/2018-February/152150.html for a discussion. This will remove the @dataclass hash= tri-state parameter, and replace it with an unsafe_hash= boolean-only parameter. >From that thread: """ I propose that with @dataclass(unsafe_hash=False) (the default), a __hash__ method is added when the following conditions are true: - frozen=True (not the default) - compare=True (the default) - no __hash__ method is defined in the class If we instead use @dataclass(unsafe_hash=True), a __hash__ will be added regardless of the other flags, but if a __hash__ method is present, an exception is raised. Other values (e.g. unsafe_hash=None) are illegal for this flag. 
""" ---------- assignee: eric.smith components: Library (Lib) messages: 312686 nosy: eric.smith, ned.deily priority: release blocker severity: normal status: open title: Change dataclasses hashing to use unsafe_hash boolean (default to False) type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 23 21:17:13 2018 From: report at bugs.python.org (Tommy) Date: Sat, 24 Feb 2018 02:17:13 +0000 Subject: [New-bugs-announce] [issue32930] [help][webbrowser always opens new tab. I want to open in the same tab] Message-ID: <1519438633.35.0.467229070634.issue32930@psf.upfronthosting.co.za> New submission from Tommy : Dear manager, I'm just starting python and trying to make simple web application. As above title, I just want to open webbrowser at first and then just change web-address in the same tab, but whenever I try webbrowser.open, it always open new tab or new window. When I checked webbrowser-control options on python help page, it says webbrowser.open's 2nd argument can control window opening. but it did not work well in my tries. following is my test code. #1st try import webbrowser url = 'www.google.com' webbrowser.open(url, new=0) # this open new explorer window. #....wait #2nd try webbrowser.open(url, new=0) # this open new explorer window again. In my test, I test 2nd argument from 0 to 2, but there seems no change. always opens new tab. Is there any way to change address and open in the current opened browser window? best Regards, Tommy ---------- components: Windows messages: 312689 nosy: paul.moore, steve.dower, tim.golden, tommylim1018 at naver.com, zach.ware priority: normal severity: normal status: open title: [help][webbrowser always opens new tab. I want to open in the same tab] versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 23 23:03:24 2018 From: report at bugs.python.org (Marc Culler) Date: Sat, 24 Feb 2018 04:03:24 +0000 Subject: [New-bugs-announce] [issue32931] Python 3.70b1 specifies non-existent compiler gcc++ Message-ID: <1519445004.46.0.467229070634.issue32931@psf.upfronthosting.co.za> New submission from Marc Culler : Compiling an external module in the 3.7.0b1 prerelease on macOS High Sierra failed for me because a compiler named "gcc++" was not found. As far as I can tell there is no such compiler in the current XCode release. I don't know if there ever was one. 
The culprit file is: /Library/Frameworks/Python.framework//Versions/3.7/lib/python3.7/_sysconfigdata_m_darwin_darwin.py The following patch fixed the problem for me: 38c38 < 'CXX': 'gcc++', --- > 'CXX': 'g++', 484c484 < 'LDCXXSHARED': 'gcc++ -bundle -undefined dynamic_lookup', --- > 'LDCXXSHARED': 'g++ -bundle -undefined dynamic_lookup', ---------- components: macOS messages: 312697 nosy: culler, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: Python 3.70b1 specifies non-existent compiler gcc++ versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 24 03:51:48 2018 From: report at bugs.python.org (Xiang Zhang) Date: Sat, 24 Feb 2018 08:51:48 +0000 Subject: [New-bugs-announce] [issue32932] better error message when __all__ contains non-str objects Message-ID: <1519462308.25.0.467229070634.issue32932@psf.upfronthosting.co.za> New submission from Xiang Zhang : I see people wrongly write non-str objects in __all__ and the error message for this case is simply a AttributeError which doesn't reveal the cause directly. >>> from test import * Traceback (most recent call last): File "", line 1, in TypeError: attribute name must be string, not 'C' It would be better to make the cause more obvious, like importlib._bootstrap._handle_fromlist does: Traceback (most recent call last): File "/root/cpython/Lib/test/test_importlib/import_/test_fromlist.py", line 166, in test_invalid_type_in_all self.__import__('pkg', fromlist=['*']) File "/root/cpython/Lib/importlib/_bootstrap.py", line 1094, in __import__ return _handle_fromlist(module, fromlist, _gcd_import) File "/root/cpython/Lib/importlib/_bootstrap.py", line 1019, in _handle_fromlist recursive=True) File "/root/cpython/Lib/importlib/_bootstrap.py", line 1014, in _handle_fromlist raise TypeError(f"Item in {where} must be str, " TypeError: Item in pkg.__all__ must be str, not bytes ---------- components: Interpreter Core messages: 312704 nosy: xiang.zhang priority: normal severity: normal status: open title: better error message when __all__ contains non-str objects type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 24 03:58:38 2018 From: report at bugs.python.org (Anthony Flury) Date: Sat, 24 Feb 2018 08:58:38 +0000 Subject: [New-bugs-announce] [issue32933] mock_open does not support iteration around text files. Message-ID: <1519462718.87.0.467229070634.issue32933@psf.upfronthosting.co.za> New submission from Anthony Flury : Using the unittest.mock helper mock_open with multi-line read data, although readlines method will work on the mocked open data, the commonly used iterator idiom on an open file returns the equivalent of an empty file. from unittest.mock import mock_open read_data = 'line 1\nline 2\nline 3\nline 4\n' with patch('builtins.open', mock_open) as mocked: with open('a.txt', 'r') as fp: assert [l for l in StringIO(read_data)] == [l for l in fp] will fail although it will work on a normal file with the same data, and using [l for l in fp.readlines()] will also work. There is a relatively simple fix which I have a working local version - but I don't know how to provide that back to the library - or even if i should. 
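Until the library itself handles iteration, one workaround that is often seen is to hand the mocked handle an explicit iterator (single pass only, and it leans on the mock's magic-method support rather than anything documented):

    from unittest.mock import mock_open, patch

    read_data = 'line 1\nline 2\nline 3\nline 4\n'
    m = mock_open(read_data=read_data)
    # The handle returned by open() is m.return_value; give its __iter__
    # something to yield.
    m.return_value.__iter__.return_value = iter(read_data.splitlines(True))

    with patch('builtins.open', m):
        with open('a.txt') as fp:
            assert [line for line in fp] == read_data.splitlines(True)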
---------- components: Library (Lib) messages: 312706 nosy: anthony-flury priority: normal severity: normal status: open title: mock_open does not support iteration around text files. versions: Python 3.4, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 24 05:24:49 2018 From: report at bugs.python.org (Enrico Zini) Date: Sat, 24 Feb 2018 10:24:49 +0000 Subject: [New-bugs-announce] [issue32934] logging.handlers.BufferingHandler capacity is unclearly specified Message-ID: <1519467889.68.0.467229070634.issue32934@psf.upfronthosting.co.za> New submission from Enrico Zini : BufferingHandler's documentation says "Initializes the handler with a buffer of the specified capacity." but it does not specify what capacity means. One would assume the intention is to give a bound to memory usage, and that capacity is bytes. Looking at the source instead, the check is: return (len(self.buffer) >= self.capacity) and self.buffer is initialised with an empty list, so capacity is a number of records, which cannot be used to constrain memory usage, and for which I struggle to see a use case. I believe that the current behaviour is counterintuitive enough to deserve, if not changing, at least documenting. ---------- components: Library (Lib) messages: 312709 nosy: enrico priority: normal severity: normal status: open title: logging.handlers.BufferingHandler capacity is unclearly specified versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 24 05:31:19 2018 From: report at bugs.python.org (Enrico Zini) Date: Sat, 24 Feb 2018 10:31:19 +0000 Subject: [New-bugs-announce] [issue32935] Documentation falsely leads to believe that MemoryHandler can be used to wrap SMTPHandler to send multiple messages per email Message-ID: <1519468279.94.0.467229070634.issue32935@psf.upfronthosting.co.za> New submission from Enrico Zini : In the handlers documentation, MemoryHandler directly follows SMTPHandler. SMTPHandler does not document that it is sending an email for every logging invocation, but one can sort of guess it. Right afterwards, there is the documentation of MemoryHandler, which seems to hint that one can use it to buffer up log lines and send all of them with SMTPHandler at flush time, by using it as a target. What really happens when trying to do that, is that at flush time an email per buffered log line is sent instead. It would have saved me significant time and frustration if I had found in SMTPHandler a note saying that in order to buffer up all log messages and send them as a single email, one needs to reimplement BufferingHandler and all the email composition/sending logic, and the existing handlers provide no built-in facility for doing that.
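For readers hitting the same wall: the handler the report wishes existed can be sketched by overriding BufferingHandler.flush(); the class below is illustrative only (not stdlib API), and the mail host, addresses and subject are placeholders:

    import logging.handlers
    import smtplib
    from email.message import EmailMessage

    class BufferingSMTPHandler(logging.handlers.BufferingHandler):
        """Send every buffered record in a single email when flushing."""

        def __init__(self, capacity, mailhost, fromaddr, toaddrs, subject):
            super().__init__(capacity)
            self.mailhost = mailhost
            self.fromaddr = fromaddr
            self.toaddrs = toaddrs
            self.subject = subject

        def flush(self):
            self.acquire()
            try:
                if not self.buffer:
                    return
                msg = EmailMessage()
                msg['From'] = self.fromaddr
                msg['To'] = ', '.join(self.toaddrs)
                msg['Subject'] = self.subject
                msg.set_content('\n'.join(self.format(r) for r in self.buffer))
                with smtplib.SMTP(self.mailhost) as smtp:
                    smtp.send_message(msg)
                self.buffer.clear()
            finally:
                self.release()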
---------- components: Library (Lib) messages: 312710 nosy: enrico priority: normal severity: normal status: open title: Documentation falsely leads to believe that MemoryHandler can be used to wrap SMTPHandler to send multiple messages per email versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 24 05:53:13 2018 From: report at bugs.python.org (Oudin) Date: Sat, 24 Feb 2018 10:53:13 +0000 Subject: [New-bugs-announce] [issue32936] RobotFileParser.parse() should raise an exception when the robots.txt file is invalid Message-ID: <1519469593.78.0.467229070634.issue32936@psf.upfronthosting.co.za> New submission from Oudin : When processing an ill-formed robots.txt file (like https://tiny.tobast.fr/robots-file ), the RobotFileParser.parse method does not instantiate the entries or the default_entry attributes. In my opinion, the method should raise an exception when no valid User-agent entry (or if there exists an invalid User-agent entry) is found in the robots.txt file. Otherwise, the only method available is to check the None-liness of default_entry, which is not documented in the documentation (https://docs.python.org/dev/library/urllib.robotparser.html). According to your opinion on this, I can implement what is necessary and create a PR on Github. ---------- components: Library (Lib) messages: 312711 nosy: Guinness priority: normal severity: normal status: open title: RobotFileParser.parse() should raise an exception when the robots.txt file is invalid type: behavior versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 24 09:17:03 2018 From: report at bugs.python.org (Eric Gorr) Date: Sat, 24 Feb 2018 14:17:03 +0000 Subject: [New-bugs-announce] [issue32937] Multiprocessing worker functions not terminating with a large number of processes and a manager Message-ID: <1519481823.24.0.467229070634.issue32937@psf.upfronthosting.co.za> New submission from Eric Gorr : I have the following code: import multiprocessing from multiprocessing import Pool, Manager import time import random def worker_function( index, messages ): print( "%d: Entered" % index ) time.sleep( random.randint( 3, 15 ) ) messages.put( "From: %d" % index ) print( "%d: Exited" % index ) manager = Manager() messages = manager.Queue() with Pool( processes = None ) as pool: for x in range( 30 ): pool.apply_async( worker_function, [ x, messages ] ) pool.close() pool.join() It does not terminate -- all entered messages are printed, but not all exited messages are printed. If I remove all the code related to the Manager and Queue, it will terminate properly with all messages printed. If I assign processes explicitly, I can continue to increase the number assigned to processes and have it continue to work until I reach a value of 20 or 21. > 20, it fails all of the time. With a value == 20 it fails some of the time. With a value of < 20, it always succeeds. multiprocessing.cpu_count() returns 24 for my MacPro. 
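Not a fix for the hang itself, but keeping the AsyncResult objects and calling get() with a timeout makes hidden worker failures visible, since apply_async otherwise swallows exceptions until get() is called; a trimmed-down variant of the script above:

    import multiprocessing
    import random
    import time

    def worker(index, messages):
        time.sleep(random.randint(1, 3))
        messages.put('From: %d' % index)
        return index

    if __name__ == '__main__':
        with multiprocessing.Manager() as manager:
            messages = manager.Queue()
            with multiprocessing.Pool() as pool:
                results = [pool.apply_async(worker, (x, messages)) for x in range(30)]
                pool.close()
                for r in results:
                    # Re-raises worker exceptions; raises TimeoutError instead
                    # of hanging silently in join().
                    print(r.get(timeout=60))
                pool.join()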
---------- components: Library (Lib), macOS messages: 312718 nosy: Eric Gorr, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: Multiprocessing worker functions not terminating with a large number of processes and a manager versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 24 09:21:03 2018 From: report at bugs.python.org (Cheryl Sabella) Date: Sat, 24 Feb 2018 14:21:03 +0000 Subject: [New-bugs-announce] [issue32938] webbrowser: Add options for private mode Message-ID: <1519482063.67.0.467229070634.issue32938@psf.upfronthosting.co.za> New submission from Cheryl Sabella : When looking at the command line option page for Mozilla (https://developer.mozilla.org/en-US/docs/Mozilla/Command_Line_Options), I noticed options for opening a private mode window (-private-window or -private-window URL). Chrome also has an --incognito switch. (https://peter.sh/experiments/chromium-command-line-switches/) Maybe it would be nice to add a flag to allow for options? ---------- components: Library (Lib) messages: 312719 nosy: csabella priority: normal severity: normal status: open title: webbrowser: Add options for private mode type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 24 10:52:53 2018 From: report at bugs.python.org (Cheryl Sabella) Date: Sat, 24 Feb 2018 15:52:53 +0000 Subject: [New-bugs-announce] [issue32939] IDLE: self.use_context_ps1 defined in editor, but always False Message-ID: <1519487573.87.0.467229070634.issue32939@psf.upfronthosting.co.za> New submission from Cheryl Sabella : In the EditorWindow in editor.py, there is an attribute called `self.context_use_ps1` that is only set to False. Changed to an instance variable in: https://github.com/python/cpython/commit/6af44986029c84c4c5df62a64c60a6ed978a3693 Removed from pyshell in: https://github.com/python/cpython/commit/e86172d63af5827a3c2b55b80351cb38a26190eb ---------- assignee: terry.reedy components: IDLE messages: 312727 nosy: csabella, terry.reedy priority: normal severity: normal status: open title: IDLE: self.use_context_ps1 defined in editor, but always False type: enhancement versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 24 12:38:40 2018 From: report at bugs.python.org (Cheryl Sabella) Date: Sat, 24 Feb 2018 17:38:40 +0000 Subject: [New-bugs-announce] [issue32940] IDLE: pyparse - replace StringTranslatePseudoMappingTest with defaultdict Message-ID: <1519493920.19.0.467229070634.issue32940@psf.upfronthosting.co.za> New submission from Cheryl Sabella : Based on timing test on msg312733, StringTranslatePseudoMappingTest and a defaultdict have about the same performance, so this replaces the custom class with the stdlib functionality. 
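Until such a flag exists, the effect can be approximated by registering a GenericBrowser that carries the extra argument; the browser name and option below are assumptions about the local setup (Firefox on PATH, Unix-style invocation):

    import webbrowser

    # '%s' is replaced by the URL; '--private-window' is Firefox-specific.
    webbrowser.register(
        'firefox-private', None,
        webbrowser.GenericBrowser(['firefox', '--private-window', '%s']))

    webbrowser.get('firefox-private').open('https://www.python.org/')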
---------- assignee: terry.reedy components: IDLE messages: 312734 nosy: csabella, terry.reedy priority: normal severity: normal status: open title: IDLE: pyparse - replace StringTranslatePseudoMappingTest with defaultdict type: enhancement versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 24 16:15:23 2018 From: report at bugs.python.org (Antoine Pitrou) Date: Sat, 24 Feb 2018 21:15:23 +0000 Subject: [New-bugs-announce] [issue32941] mmap should expose madvise() Message-ID: <1519506923.52.0.467229070634.issue32941@psf.upfronthosting.co.za> New submission from Antoine Pitrou : On POSIX, mmap objects could expose a method wrapping the madvise() library call. I suggest the following API mmap_object.madvise(option[, start[, length]]) If omitted, *start* and *length* would span the whole memory area described by the mmap object. *option* must be a recognized OS option for the madvise() library call. The mmap module would expose the various MADV_* options available on the current platform. Open question: should we expose madvise() or posix_madvise()? (these are two different calls, at least on Linux) posix_madvise() is arguably more portable, but madvise() is much more powerful, so I'd lean towards madvise(). ---------- components: Library (Lib) messages: 312758 nosy: larry, ned.deily, neologix, pitrou, ronaldoussoren priority: normal severity: normal status: open title: mmap should expose madvise() type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 24 18:07:36 2018 From: report at bugs.python.org (Andrew Brezovsky) Date: Sat, 24 Feb 2018 23:07:36 +0000 Subject: [New-bugs-announce] [issue32942] Regression: test_script_helper fails Message-ID: <1519513656.28.0.467229070634.issue32942@psf.upfronthosting.co.za> New submission from Andrew Brezovsky : Test test_script_helper fails, details: Running Debug|Win32 interpreter... == CPython 3.8.0a0 (heads/master:6cdb7954b0, Feb 24 2018, 17:25:46) [MSC v.1912 32 bit (Intel)] == Windows-10-10.0.16299-SP0 little-endian == cwd: \cpython\build\test_python_7920 == CPU count: 4 == encodings: locale=cp1252, FS=utf-8 Run tests sequentially 0:00:00 [1/1] test_script_helper test_assert_python_failure (test.test_script_helper.TestScriptHelper) ... ok test_assert_python_failure_raises (test.test_script_helper.TestScriptHelper) ... ok test_assert_python_isolated_when_env_not_required (test.test_script_helper.TestScriptHelper) ... ok test_assert_python_not_isolated_when_env_is_required (test.test_script_helper.TestScriptHelper) Ensure that -I is not passed when the environment is required. ... ok test_assert_python_ok (test.test_script_helper.TestScriptHelper) ... ok test_assert_python_ok_raises (test.test_script_helper.TestScriptHelper) ... ok test_interpreter_requires_environment_details (test.test_script_helper.TestScriptHelperEnvironment) ... FAIL test_interpreter_requires_environment_false (test.test_script_helper.TestScriptHelperEnvironment) ... FAIL test_interpreter_requires_environment_true (test.test_script_helper.TestScriptHelperEnvironment) ... 
FAIL ====================================================================== FAIL: test_interpreter_requires_environment_details (test.test_script_helper.TestScriptHelperEnvironment) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Users\andor\Documents\Projects\cpython\lib\unittest\mock.py", line 1191, in patched return func(*args, **keywargs) File "C:\Users\andor\Documents\Projects\cpython\lib\test\test_script_helper.py", line 101, in test_interpreter_requires_environment_details self.assertFalse(script_helper.interpreter_requires_environment()) AssertionError: True is not false ====================================================================== FAIL: test_interpreter_requires_environment_false (test.test_script_helper.TestScriptHelperEnvironment) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Users\andor\Documents\Projects\cpython\lib\unittest\mock.py", line 1191, in patched return func(*args, **keywargs) File "C:\Users\andor\Documents\Projects\cpython\lib\test\test_script_helper.py", line 95, in test_interpreter_requires_environment_false self.assertFalse(script_helper.interpreter_requires_environment()) AssertionError: True is not false ====================================================================== FAIL: test_interpreter_requires_environment_true (test.test_script_helper.TestScriptHelperEnvironment) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Users\andor\Documents\Projects\cpython\lib\unittest\mock.py", line 1191, in patched return func(*args, **keywargs) File "C:\Users\andor\Documents\Projects\cpython\lib\test\test_script_helper.py", line 89, in test_interpreter_requires_environment_true self.assertEqual(1, mock_check_call.call_count) AssertionError: 1 != 0 ---------------------------------------------------------------------- Ran 9 tests in 0.240s FAILED (failures=3) test test_script_helper failed test_script_helper failed 1 test failed: test_script_helper Total duration: 281 ms Tests result: FAILURE ---------- components: Tests messages: 312765 nosy: abrezovsky priority: normal severity: normal status: open title: Regression: test_script_helper fails versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 25 00:22:05 2018 From: report at bugs.python.org (Xiang Zhang) Date: Sun, 25 Feb 2018 05:22:05 +0000 Subject: [New-bugs-announce] [issue32943] confusing error message for rot13 codec Message-ID: <1519536125.53.0.467229070634.issue32943@psf.upfronthosting.co.za> New submission from Xiang Zhang : rot13 codec does a str translate operation. 
But it doesn't check the input type and then the error message would be quite confusing, especially for bytes: >>> codecs.encode(b'abc', 'rot13') Traceback (most recent call last): File "/usr/local/Cellar/python3/3.6.4_2/Frameworks/Python.framework/Versions/3.6/lib/python3.6/encodings/rot_13.py", line 15, in encode return (input.translate(rot13_map), len(input)) TypeError: a bytes-like object is required, not 'dict' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "", line 1, in TypeError: encoding with 'rot13' codec failed (TypeError: a bytes-like object is required, not 'dict') ---------- messages: 312775 nosy: serhiy.storchaka, xiang.zhang priority: normal severity: normal status: open title: confusing error message for rot13 codec type: behavior versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 25 00:41:15 2018 From: report at bugs.python.org (LBC2525) Date: Sun, 25 Feb 2018 05:41:15 +0000 Subject: [New-bugs-announce] [issue32944] Need Guidance on Solving the Tcl problem Message-ID: <1519537275.89.0.467229070634.issue32944@psf.upfronthosting.co.za> New submission from LBC2525 : Python 3.6.4 (v3.6.4:d48ecebad5, Dec 18 2017, 21:07:28) [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin Type "copyright", "credits" or "license()" for more information. >>> WARNING: The version of Tcl/Tk (8.5.9) in use may be unstable. Visit http://www.python.org/download/mac/tcltk/ for current information. I have downloaded the Tcl and Tk files from http://www.python.org/download/mac/tcltk/ and I am still getting the error. Suggestions are appreciated. ---------- assignee: terry.reedy components: IDLE messages: 312776 nosy: LBC2525, terry.reedy priority: normal severity: normal status: open title: Need Guidance on Solving the Tcl problem versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 25 03:22:50 2018 From: report at bugs.python.org (Antony Lee) Date: Sun, 25 Feb 2018 08:22:50 +0000 Subject: [New-bugs-announce] [issue32945] sorted(generator) is slower than sorted(list-comprehension) Message-ID: <1519546970.72.0.467229070634.issue32945@psf.upfronthosting.co.za> New submission from Antony Lee : Consider e.g. In [2]: %timeit sorted([i for i in range(100)]) 4.74 ?s ? 24.3 ns per loop (mean ? std. dev. of 7 runs, 100000 loops each) In [3]: %timeit sorted(i for i in range(100)) 7.05 ?s ? 25.7 ns per loop (mean ? std. dev. of 7 runs, 100000 loops each) In [4]: %timeit sorted([i for i in range(1000)]) 47.2 ?s ? 1.2 ?s per loop (mean ? std. dev. of 7 runs, 10000 loops each) In [5]: %timeit sorted(i for i in range(1000)) 78.7 ?s ? 288 ns per loop (mean ? std. dev. of 7 runs, 10000 loops each) In [6]: %timeit sorted([i for i in range(10000)]) 582 ?s ? 8.29 ?s per loop (mean ? std. dev. of 7 runs, 1000 loops each) In [7]: %timeit sorted(i for i in range(10000)) 807 ?s ? 5.92 ?s per loop (mean ? std. dev. of 7 runs, 1000 loops each) It appears that sorting a generator is slower than sorting the corresponding list comprehension, by a ~constant factor. Given that the former can trivially be converted into the latter (i.e. 
`sorted` could just check whether its argument is a generator, and, if so, convert it to a list first), it would seem that sorting the generator should *not* be slower than sorting a list (except perhaps by a small constant). ---------- messages: 312780 nosy: Antony.Lee priority: normal severity: normal status: open title: sorted(generator) is slower than sorted(list-comprehension) versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 25 03:46:54 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 25 Feb 2018 08:46:54 +0000 Subject: [New-bugs-announce] [issue32946] Speed up import from non-packages Message-ID: <1519548414.92.0.467229070634.issue32946@psf.upfronthosting.co.za> New submission from Serhiy Storchaka : The proposed PR optimizes "from ... import ..." from non-package modules. $ ./python -m perf timeit 'from locale import getlocale' Unpatched: Mean +- std dev: 811 ns +- 27 ns Patched: Mean +- std dev: 624 ns +- 17 ns Currently _bootstrap._handle_fromlist() is called which does nothing if the module is not a package, but adds an overhead of calling a Python function. The PR moves this check out of _handle_fromlist and avoid calling it if not needed. ---------- components: Interpreter Core messages: 312781 nosy: brett.cannon, eric.snow, ncoghlan, serhiy.storchaka priority: normal severity: normal status: open title: Speed up import from non-packages type: performance versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 25 06:54:40 2018 From: report at bugs.python.org (Christian Heimes) Date: Sun, 25 Feb 2018 11:54:40 +0000 Subject: [New-bugs-announce] [issue32947] Support OpenSSL 1.1.1 Message-ID: <1519559680.67.0.467229070634.issue32947@psf.upfronthosting.co.za> New submission from Christian Heimes : I'm using this ticket as an epos to track commits and required changes for OpenSSL 1.1.1 and TLS 1.3. Fixes need to be backported to 2.7 and 3.6 to 3.8. We might have to consider backports to 3.4 and 3.5, too. If all goes to plan, OpenSSL 1.1.1 final is scheduled for 8th May 2018, https://www.openssl.org/policies/releasestrat.html . It will contain support for TLS 1.3. Python should either support TLS 1.3 by then or disable TLS 1.3 by default. Fixes: * #20995 added TLS 1.3 cipher suite support * #29136 added OP_NO_TLSv1_3 * #30622 fixes NPN guard for OpenSSL 1.1.1 Issues: * A new option OP_ENABLE_MIDDLEBOX_COMPAT is enabled by default. We need to expose the flag to make test pass. * TLS 1.3 has changed session handling. The current session code cannot handle TLS 1.3 session resumption. * Threaded echo server and asynchat based tests are failing with TLS 1.3. I haven't analyzed the issue properly. It looks like the server thread dies when a handshake error occurs. 
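Until those issues are resolved, a minimal sketch of probing for TLS 1.3 support and opting a context out of it; ssl.HAS_TLSv1_3 and ssl.OP_NO_TLSv1_3 may be missing on older builds, so they are looked up defensively here:
---
import ssl

# Report the linked OpenSSL and whether this build exposes TLS 1.3.
print(ssl.OPENSSL_VERSION)
print("TLS 1.3 supported:", getattr(ssl, "HAS_TLSv1_3", False))

ctx = ssl.create_default_context()
no_tls13 = getattr(ssl, "OP_NO_TLSv1_3", 0)
if no_tls13:
    # Opt this context out of TLS 1.3 until session handling etc. are sorted out.
    ctx.options |= no_tls13
---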
---------- assignee: christian.heimes components: SSL messages: 312804 nosy: christian.heimes priority: normal severity: normal status: open title: Support OpenSSL 1.1.1 type: enhancement versions: Python 2.7, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 25 08:28:39 2018 From: report at bugs.python.org (Christian Heimes) Date: Sun, 25 Feb 2018 13:28:39 +0000 Subject: [New-bugs-announce] [issue32948] clang compiler warnings on Travis Message-ID: <1519565319.33.0.467229070634.issue32948@psf.upfronthosting.co.za> New submission from Christian Heimes : I'm seeing a bunch of compile errors on 2.7 branch, https://travis-ci.org/python/cpython/jobs/345906584 /home/travis/build/python/cpython/Modules/_heapqmodule.c:600:21: warning: illegal character encoding in string literal [-Winvalid-source-encoding] [explanation by Franois Pinard]\n\ ^~~~ Include/Python.h:174:60: note: expanded from macro 'PyDoc_STRVAR' #define PyDoc_STRVAR(name,str) PyDoc_VAR(name) = PyDoc_STR(str) ^ Include/Python.h:176:24: note: expanded from macro 'PyDoc_STR' #define PyDoc_STR(str) str ^ 1 warning generated. clang -pthread -fno-strict-aliasing -OPT:Olimit=0 -g -O2 -g -O0 -Wall -Wstrict-prototypes -I. -IInclude -I./Include -DPy_BUILD_CORE -c ./Modules/pwdmodule.c -o Modules/pwdmodule.o ./Modules/posixmodule.c:363:13: warning: comparison of constant 9223372036854775807 with expression of type 'uid_t' (aka 'unsigned int') is always true [-Wtautological-constant-out-of-range-compare] if (uid <= LONG_MAX) ~~~ ^ ~~~~~~~~ ./Modules/posixmodule.c:371:13: warning: comparison of constant 9223372036854775807 with expression of type 'gid_t' (aka 'unsigned int') is always true [-Wtautological-constant-out-of-range-compare] if (gid <= LONG_MAX) ~~~ ^ ~~~~~~~~ clang -pthread -fno-strict-aliasing -OPT:Olimit=0 -g -O2 -g -O0 -Wall -Wstrict-prototypes -I. -IInclude -I./Include -DPy_BUILD_CORE -c ./Modules/_sre.c -o Modules/_sre.o clang -pthread -fno-strict-aliasing -OPT:Olimit=0 -g -O2 -g -O0 -Wall -Wstrict-prototypes -I. -IInclude -I./Include -DPy_BUILD_CORE -c ./Modules/_codecsmodule.c -o Modules/_codecsmodule.o ./Modules/pwdmodule.c:115:17: warning: comparison of unsigned expression < 0 is always false [-Wtautological-compare] if (uid < 0) ~~~ ^ ~ 1 warning generated. /home/travis/build/python/cpython/Modules/grpmodule.c:102:17: warning: comparison of unsigned expression < 0 is always false [-Wtautological-compare] if (gid < 0) ~~~ ^ ~ 1 warning generated. /cpython/Modules/nismodule.c -o build/temp.linux-x86_64-2.7-pydebug/home/travis/build/python/cpython/Modules/nismodule.o /home/travis/build/python/cpython/Modules/nismodule.c:404:15: warning: explicitly assigning value of variable of type 'nismaplist *' (aka 'struct nismaplist *') to itself [-Wself-assign] for (maps = maps; maps; maps = maps->next) { ~~~~ ^ ~~~~ 1 warning generated. /cpython/Modules/_cursesmodule.c -o build/temp.linux-x86_64-2.7-pydebug/home/travis/build/python/cpython/Modules/_cursesmodule.o /home/travis/build/python/cpython/Modules/_cursesmodule.c:1082:15: warning: implicit conversion from 'chtype' (aka 'unsigned long') to 'int' changes value from 18446744073709551615 to -1 [-Wconstant-conversion] rtn = mvwinch(self->win,y,x); ~ ^~~~~~~~~~~~~~~~~~~~~~ /usr/include/curses.h:1247:58: note: expanded from macro 'mvwinch' ...(wmove((win),(y),(x)) == ERR ? 
NCURSES_CAST(chtype, ERR) : winch(win)) ^~~~~~~~~~~~~~~~~~~~~~~~~ /usr/include/curses.h:222:34: note: expanded from macro 'NCURSES_CAST' #define NCURSES_CAST(type,value) (type)(value) ^~~~~~~~~~~~~ 1 warning generated. ---------- components: Extension Modules messages: 312810 nosy: christian.heimes priority: low severity: normal stage: needs patch status: open title: clang compiler warnings on Travis type: compile error versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 25 08:43:14 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 25 Feb 2018 13:43:14 +0000 Subject: [New-bugs-announce] [issue32949] Simplify "with"-related opcodes Message-ID: <1519566194.29.0.467229070634.issue32949@psf.upfronthosting.co.za> New submission from Serhiy Storchaka : There are some issues with "with"-related opcodes. All other opcodes have a constant stack effect for a particular control flow. For example FOR_ITER always has the stack effect 1 if not jump (pushes the next item) and -1 if jumps (pops the iterator). The only exceptions are WITH_CLEANUP_START, which pushes 1 or 2 values depending on TOS, and WITH_CLEANUP_FINISH, which pops 2 or 3 values depending on the values pushed by the preceding WITH_CLEANUP_START. This breaks consistency and may make debugging harder. WITH_CLEANUP_START duplicates one of the values on the stack without good reason. Even the comment in the initial commit exposed uncertainty about this. The proposed PR simplifies WITH_CLEANUP_START and WITH_CLEANUP_FINISH. They will now be executed only when an exception is raised. In the normal case they will be replaced with calling `__exit__(None, None, None)`: LOAD_CONST 0 ((None, None, None)) CALL_FUNCTION_EX 0 POP_TOP WITH_CLEANUP_FINISH will be merged with the following END_FINALLY. This PR is inspired by PR 5112 by Mark Shannon, but Mark goes further. In addition to simplifying the implementation and the mental model, the PR adds a tiny bit of performance gain. $ ./python -m perf timeit -s 'class CM:' -s ' def __enter__(s): pass' -s ' def __exit__(*args): pass' -s 'cm = CM()' -- 'with cm: pass' Unpatched: Mean +- std dev: 227 ns +- 6 ns Patched: Mean +- std dev: 205 ns +- 10 ns ---------- components: Interpreter Core messages: 312813 nosy: Mark.Shannon, benjamin.peterson, pitrou, serhiy.storchaka priority: normal severity: normal status: open title: Simplify "with"-related opcodes type: performance versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 25 09:36:47 2018 From: report at bugs.python.org (=?utf-8?b?6rCV7ISx7J28IChMdWF2aXMp?=) Date: Sun, 25 Feb 2018 14:36:47 +0000 Subject: [New-bugs-announce] [issue32950] profiling python gc Message-ID: <1519569407.24.0.467229070634.issue32950@psf.upfronthosting.co.za> New submission from 강성일 (Luavis) : There is a way to log Python garbage collection events, but no way to profile them. ---------- messages: 312816 nosy: 강성일
(Luavis) priority: normal severity: normal status: open title: profiling python gc type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 25 12:18:04 2018 From: report at bugs.python.org (Christian Heimes) Date: Sun, 25 Feb 2018 17:18:04 +0000 Subject: [New-bugs-announce] [issue32951] Prohibit direct instantiation of SSLSocket and SSLObject Message-ID: <1519579084.45.0.467229070634.issue32951@psf.upfronthosting.co.za> New submission from Christian Heimes : The constructors of SSLObject and SSLSocket were never documented, tested, or meant to be used directly. Instead users were suppose to use ssl.wrap_socket or an SSLContext object. The ssl.wrap_socket() function and direct instantiation of SSLSocket has multiple issues. From my mail "No hostname matching with ssl.wrap_socket() and SSLSocket() constructor" to PSRT: The ssl module has three ways to create a SSLSocket object: 1) ssl.wrap_socket() [1] 2) ssl.SSLSocket() can be instantiated directly without a context [2] 3) SSLContext.wrap_socket() [3] Variant (1) and (2) are old APIs with insecure default settings. Variant (3) is the new and preferred way. With ssl.create_default_context() or ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) the socket is configured securely with hostname matching and cert validation enabled. While Martin Panter was reviewing my documentation improvements for the ssl module, he pointed out an issue, https://github.com/python/cpython/pull/3530#discussion_r170407478 . ssl.wrap_socket() and ssl.SSLSocket() default to CERT_NONE but PROTOCOL_TLS_CLIENT is documented to set CERT_REQUIRED. After a closer look, it turned out that the code is robust and refuses to accept PROTOCOL_TLS_CLIENT + default values with "Cannot set verify_mode to CERT_NONE when check_hostname is enabled.". I consider the behavior a feature. However ssl.SSLSocket() constructor and ssl.wrap_socket() have more fundamental security issues. I haven't looked at the old legacy APIs in a while and only concentrated on SSLContext. To my surprise both APIs do NOT perform or allow hostname matching. The wrap_socket() function does not even take a server_hostname argument, so it doesn't send a SNI TLS extension either. These bad default settings can lead to suprising security bugs in 3rd party code. This example doesn't fail although the hostname doesn't match the certificate: --- import socket import ssl cafile = ssl.get_default_verify_paths().cafile with socket.socket() as sock: ssock = ssl.SSLSocket( sock, cert_reqs=ssl.CERT_REQUIRED, ca_certs=cafile, server_hostname='www.python.org' ) ssock.connect(('www.evil.com', 443)) --- I don't see a way to fix the issue in a secure way while keeping backwards compatibility. We could either modify the default behavior of ssl.wrap_socket() and SSLSocket() constructor, or drop both features completely. Either way it's going to break software that uses them. Since I like to get rid of variants (1) and (2), I would favor to remove them in favor of SSLContext.wrap_socket(). At least we should implement my documentation bug 28124 [4] and make ssl.wrap_socket() less prominent. I'd appreciate any assistance. By the way, SSLObject is sane because it always goes through SSLContext.wrap_bio(). Thanks Benjamin! 
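For contrast, a minimal sketch of variant (3), the SSLContext idiom recommended above, which does perform hostname matching (www.python.org here is just a placeholder host):
---
import socket
import ssl

context = ssl.create_default_context()   # CERT_REQUIRED and check_hostname=True
with socket.create_connection(('www.python.org', 443)) as sock:
    with context.wrap_socket(sock, server_hostname='www.python.org') as ssock:
        # The handshake has validated the certificate against the given hostname.
        print(ssock.version())
---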
Regards, Christian [1] https://docs.python.org/3/library/ssl.html#ssl.wrap_socket [2] https://docs.python.org/3/library/ssl.html#ssl.SSLSocket [3] https://docs.python.org/3/library/ssl.html#ssl.SSLContext.wrap_socket [4] https://bugs.python.org/issue28124 ---------- assignee: christian.heimes components: SSL messages: 312823 nosy: alex, christian.heimes, dstufft, janssen, pitrou priority: normal severity: normal stage: needs patch status: open title: Prohibit direct instantiation of SSLSocket and SSLObject type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 25 14:52:11 2018 From: report at bugs.python.org (Sergey Kostyuk) Date: Sun, 25 Feb 2018 19:52:11 +0000 Subject: [New-bugs-announce] [issue32952] Add __qualname__ for attributes of Mock instances Message-ID: <1519588331.93.0.467229070634.issue32952@psf.upfronthosting.co.za> New submission from Sergey Kostyuk : Good day. I have a question (or proposal, if you like). For now, Mocks from the unittest.mock module allow mimicking the interface of some class or object instance. They pass isinstance checks and allow wrapping callables with respect to their arguments. But there is one thing they don't mimic: the value of the __qualname__ attribute for the mock itself and its mocked attributes. So here is the proposal: copy the value of the __qualname__ attribute from the wrapped (mocked) instance for all of the attributes of a Mock. I don't know if it's reasonable enough to be implemented at all, but it can be handy in some situations. An example of the current and desired behaviour is provided in the attached file. And sorry for my English ---------- components: Tests files: qualname_for_mocks.py messages: 312849 nosy: s_kostyuk priority: normal severity: normal status: open title: Add __qualname__ for attributes of Mock instances type: enhancement versions: Python 3.8 Added file: https://bugs.python.org/file47462/qualname_for_mocks.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 25 17:23:51 2018 From: report at bugs.python.org (Eric V. Smith) Date: Sun, 25 Feb 2018 22:23:51 +0000 Subject: [New-bugs-announce] [issue32953] Dataclasses: frozen should not be inherited Message-ID: <1519597431.47.0.467229070634.issue32953@psf.upfronthosting.co.za> New submission from Eric V. Smith : Reported by Raymond Hettinger: When working on the docs for dataclasses, something unexpected came up.
If a dataclass is specified to be frozen, that characteristic is inherited by subclasses, which prevents them from assigning additional attributes: >>> @dataclass(frozen=True) class D: x: int = 10 >>> class S(D): pass >>> s = S() >>> s.cached = True Traceback (most recent call last): File "", line 1, in s.cached = True File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/dataclasses.py", line 448, in _frozen_setattr raise FrozenInstanceError(f'cannot assign to field {name!r}') dataclasses.FrozenInstanceError: cannot assign to field 'cached' Other immutable classes in Python don't behave the same way: >>> class T(tuple): pass >>> t = T([10, 20, 30]) >>> t.cached = True >>> class F(frozenset): pass >>> f = F([10, 20, 30]) >>> f.cached = True >>> class B(bytes): pass >>> b = B() >>> b.cached = True Raymond ---------- assignee: eric.smith components: Library (Lib) messages: 312866 nosy: eric.smith, rhettinger priority: normal severity: normal status: open title: Dataclasses: frozen should not be inherited type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 26 05:19:19 2018 From: report at bugs.python.org (Arcadiy Ivanov) Date: Mon, 26 Feb 2018 10:19:19 +0000 Subject: [New-bugs-announce] [issue32954] Lazy Literal String Interpolation (PEP-498-based fl-strings) Message-ID: <1519640359.67.0.467229070634.issue32954@psf.upfronthosting.co.za> New submission from Arcadiy Ivanov : I'd like to start a discussion on/gauge interest in introducing an enhancement to PEP-498 in the form of a delayed/lazy/lambda f-string. The proposed change is, conceptually, minor and would not differ from PEP-498 syntactically at all except for the string prefix. E.g. extra = f'{extra},waiters:{len(self._waiters)}' becomes extra = fl'{extra},waiters:{len(self._waiters)}' The proposal would produce a lambda-like object x('format'), where x.__str__() == f'format'. This should prove extremely useful in all cases where delayed or conditional string formatting and concatenation are desired, such as in cases of logging. As an example, currently logger.debug("Format %s string", value) cannot be written with an f-string as logger.debug(f"Format {value} string") without unconditional evaluation of all parameters, because the f-string is evaluated before logging checks whether debug is even enabled: >>> b = 1 >>> def a(x): ... return f"Foo {x} bar {b}" ... >>> dis.dis(a) 2 0 LOAD_CONST 1 ('Foo ') 2 LOAD_FAST 0 (x) 4 FORMAT_VALUE 0 6 LOAD_CONST 2 (' bar ') 8 LOAD_GLOBAL 0 (b) 10 FORMAT_VALUE 0 12 BUILD_STRING 4 14 RETURN_VALUE Additional great optimizations may be rendered by introducing fl-strings: a case where foo = "foo" s1 = fl"S1 value {foo}" s2 = fl"S2 value of {s1}" print(s2) may produce only one BUILD_STRING instruction, potentially dramatically increasing performance in heavy string-concat based applications. Even when a compiler is not able to statically prove that a particular value is in fact an fl-string, an interpreter-level check or a JIT-based Python implementation may be able to optimize such concats ad nauseam (with trap-based deopt), allowing extremely long chains of formats to be collapsed into a single BUILD_STRING op without any intermediary string allocation/concatenation (very useful in cases such as web servers, templating engines, etc.).
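For illustration only, the delayed evaluation can be approximated today with a small wrapper; LazyStr is an invented name and merely sketches the semantics an fl-string would provide implicitly:
---
import logging

class LazyStr:
    """Defers formatting until the object is actually rendered as a string."""
    def __init__(self, make):
        self._make = make        # zero-argument callable, e.g. a lambda over an f-string
    def __str__(self):
        return self._make()

logging.basicConfig(level=logging.INFO)
value = 42
# The f-string inside the lambda is evaluated only if DEBUG is enabled,
# which is roughly what fl"Format {value} string" is proposed to do implicitly.
logging.debug("%s", LazyStr(lambda: f"Format {value} string"))
---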
I'll be happy to produce patches against 3.7/3.8 for this if a general concept is found useful/desirable by the community. ---------- components: Library (Lib) messages: 312909 nosy: arcivanov priority: normal severity: normal status: open title: Lazy Literal String Interpolation (PEP-498-based fl-strings) type: enhancement versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 26 09:18:31 2018 From: report at bugs.python.org (zaphod424) Date: Mon, 26 Feb 2018 14:18:31 +0000 Subject: [New-bugs-announce] [issue32955] IDLE crashes when trying to save a file Message-ID: <1519654711.02.0.467229070634.issue32955@psf.upfronthosting.co.za> New submission from zaphod424 : when I click the save as button or use the keyboard shortcut, the save window appears but if I click the drop down to choose the save location, it crashes, using a Mac ---------- assignee: terry.reedy components: IDLE messages: 312926 nosy: terry.reedy, zaphod424 priority: normal severity: normal status: open title: IDLE crashes when trying to save a file type: crash versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 26 10:23:55 2018 From: report at bugs.python.org (M Hsia) Date: Mon, 26 Feb 2018 15:23:55 +0000 Subject: [New-bugs-announce] [issue32956] python 3 round bug Message-ID: <1519658635.56.0.467229070634.issue32956@psf.upfronthosting.co.za> New submission from M Hsia : import sys print(sys.version) for i in range(10): test=i+0.5 print (test,round(test,0)) ---------------------------- 3.6.3 |Anaconda custom (64-bit)| (default, Nov 8 2017, 15:10:56) [MSC v.1900 64 bit (AMD64)] 0.5 0.0 1.5 2.0 2.5 2.0 3.5 4.0 4.5 4.0 5.5 6.0 6.5 6.0 7.5 8.0 8.5 8.0 9.5 10.0 ------------------------- 2.7.14 (v2.7.14:84471935ed, Sep 16 2017, 20:19:30) [MSC v.1500 32 bit (Intel)] (0.5, 1.0) (1.5, 2.0) (2.5, 3.0) (3.5, 4.0) (4.5, 5.0) (5.5, 6.0) (6.5, 7.0) (7.5, 8.0) (8.5, 9.0) (9.5, 10.0) ---------- messages: 312931 nosy: MJH priority: normal severity: normal status: open title: python 3 round bug type: behavior versions: Python 3.4, Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 26 10:24:42 2018 From: report at bugs.python.org (Korijn Van Golen) Date: Mon, 26 Feb 2018 15:24:42 +0000 Subject: [New-bugs-announce] [issue32957] distutils.command.install checks truthiness of .ext_modules instead of calling .has_ext_modules() Message-ID: <1519658682.25.0.467229070634.issue32957@psf.upfronthosting.co.za> New submission from Korijn Van Golen : distutils' Distribution class has a method has_ext_modules() that is used to determine if any extension modules are included in a distribution. There remains a call site in distutils.command.install where self.distribution.ext_modules is directly tested for truthiness, rather than calling has_ext_modules. This causes inconsistent behavior, e.g. when overriding has_ext_modules in a Distribution subclass. 
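A minimal sketch of the kind of override meant here (BinaryDistribution and the setup() metadata are illustrative, not from the report); because install tests .ext_modules directly, the overridden method is ignored by that command:
---
from distutils.core import setup
from distutils.dist import Distribution

class BinaryDistribution(Distribution):
    # Claim extension modules are present (e.g. a prebuilt shared library
    # shipped as package data) even though ext_modules stays empty.
    def has_ext_modules(self):
        return True

setup(
    name='example',
    version='0.1',
    distclass=BinaryDistribution,
)
---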
---------- components: Distutils messages: 312932 nosy: Korijn Van Golen, dstufft, eric.araujo priority: normal severity: normal status: open title: distutils.command.install checks truthiness of .ext_modules instead of calling .has_ext_modules() type: behavior versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 26 14:52:35 2018 From: report at bugs.python.org (Aaron Black) Date: Mon, 26 Feb 2018 19:52:35 +0000 Subject: [New-bugs-announce] [issue32958] Urllib proxy_bypass crashes for urls containing long basic auth strings Message-ID: <1519674755.43.0.467229070634.issue32958@psf.upfronthosting.co.za> New submission from Aaron Black : While working on a custom conda channel with authentication, I ran into the following UnicodeError: Traceback (most recent call last): File "/Users/ablack/miniconda3/lib/python3.6/site-packages/conda/core/repodata.py", line 402, in fetch_repodata_remote_request timeout=timeout) File "/Users/ablack/miniconda3/lib/python3.6/site-packages/requests/sessions.py", line 521, in get return self.request('GET', url, **kwargs) File "/Users/ablack/miniconda3/lib/python3.6/site-packages/requests/sessions.py", line 499, in request prep.url, proxies, stream, verify, cert File "/Users/ablack/miniconda3/lib/python3.6/site-packages/requests/sessions.py", line 672, in merge_environment_settings env_proxies = get_environ_proxies(url, no_proxy=no_proxy) File "/Users/ablack/miniconda3/lib/python3.6/site-packages/requests/utils.py", line 692, in get_environ_proxies if should_bypass_proxies(url, no_proxy=no_proxy): File "/Users/ablack/miniconda3/lib/python3.6/site-packages/requests/utils.py", line 676, in should_bypass_proxies bypass = proxy_bypass(netloc) File "/Users/ablack/miniconda3/lib/python3.6/urllib/request.py", line 2612, in proxy_bypass return proxy_bypass_macosx_sysconf(host) File "/Users/ablack/miniconda3/lib/python3.6/urllib/request.py", line 2589, in proxy_bypass_macosx_sysconf return _proxy_bypass_macosx_sysconf(host, proxy_settings) File "/Users/ablack/miniconda3/lib/python3.6/urllib/request.py", line 2562, in _proxy_bypass_macosx_sysconf hostIP = socket.gethostbyname(hostonly) UnicodeError: encoding with 'idna' codec failed (UnicodeError: label empty or too long) The error can be consistently reproduced when the first substring of the url hostname is greater than 64 characters long, as in "0123456789012345678901234567890123456789012345678901234567890123.example.com". This wouldn't be a problem, except that it doesn't seem to separate out credentials from the first substring of the hostname so the entire "[user]:[secret]@XXX" section must be less than 65 characters long. This is problematic for services that use longer API keys and expect their submission over basic auth. ---------- components: Library (Lib) messages: 312947 nosy: ablack priority: normal severity: normal status: open title: Urllib proxy_bypass crashes for urls containing long basic auth strings type: crash versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 26 16:08:51 2018 From: report at bugs.python.org (Mariano M. 
Chouza) Date: Mon, 26 Feb 2018 21:08:51 +0000 Subject: [New-bugs-announce] [issue32959] zipimport fails when the ZIP archive contains more than 65535 files Message-ID: <1519679331.45.0.467229070634.issue32959@psf.upfronthosting.co.za> New submission from Mariano M. Chouza : When trying to import a module from a ZIP archive containing more than 65535 files, the import process fails: $ python3 -VV Python 3.6.4 (default, Jan 6 2018, 11:49:38) [GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.42.1)] $ cat create_zips.py from zipfile import ZipFile with ZipFile('below.zip', 'w') as zfp: for i in range(65535): zfp.writestr('m%d.py' % i, '') with ZipFile('over.zip', 'w') as zfp: for i in range(65536): zfp.writestr('m%d.py' % i, '') $ python3 create_zips.py $ python -m zipfile -l below.zip | (head -2 && tail -2) File Name Modified Size m0.py 2018-02-26 20:57:32 0 m65533.py 2018-02-26 20:57:36 0 m65534.py 2018-02-26 20:57:36 0 $ python -m zipfile -l over.zip | (head -2 && tail -2) File Name Modified Size m0.py 2018-02-26 20:57:36 0 m65534.py 2018-02-26 20:57:40 0 m65535.py 2018-02-26 20:57:40 0 $ PYTHONPATH=below.zip python3 -c 'import m0' $ PYTHONPATH=over.zip python3 -c 'import m0' Traceback (most recent call last): File "", line 1, in ModuleNotFoundError: No module named 'm0' I think the problem is related to the zipimport module not handling the 'zip64 end of central directory record'. ---------- components: Library (Lib) messages: 312957 nosy: mchouza priority: normal severity: normal status: open title: zipimport fails when the ZIP archive contains more than 65535 files type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 26 18:41:26 2018 From: report at bugs.python.org (Eric V. Smith) Date: Mon, 26 Feb 2018 23:41:26 +0000 Subject: [New-bugs-announce] [issue32960] dataclasses: disallow inheritance between frozen and non-frozen classes Message-ID: <1519688486.17.0.467229070634.issue32960@psf.upfronthosting.co.za> New submission from Eric V. Smith : This is a temporary measure until we can better define how frozen inheritance should work. 
In the meantime, disallow: - frozen inherited from non-frozen - non-frozen inherited from frozen ---------- assignee: eric.smith components: Library (Lib) messages: 312972 nosy: eric.smith priority: normal severity: normal status: open title: dataclasses: disallow inheritance between frozen and non-frozen classes versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 27 03:55:10 2018 From: report at bugs.python.org (poornaprudhvi) Date: Tue, 27 Feb 2018 08:55:10 +0000 Subject: [New-bugs-announce] [issue32961] namedtuple displaying the internal code Message-ID: <1519721710.54.0.467229070634.issue32961@psf.upfronthosting.co.za> New submission from poornaprudhvi : >>> from collections import namedtuple >>> sample = namedtuple('Name','a','b','c') This is returning the following code as output: class Name(tuple): 'Name(a,)' __slots__ = () _fields = ('a',) def __new__(_cls, a,): 'Create new instance of Name(a,)' return _tuple.__new__(_cls, (a,)) @classmethod def _make(cls, iterable, new=tuple.__new__, len=len): 'Make a new Name object from a sequence or iterable' result = new(cls, iterable) if len(result) != 1: raise TypeError('Expected 1 arguments, got %d' % len(result)) return result def __repr__(self): 'Return a nicely formatted representation string' return 'Name(a=%r)' % self def _asdict(self): 'Return a new OrderedDict which maps field names to their values' return OrderedDict(zip(self._fields, self)) def _replace(_self, **kwds): 'Return a new Name object replacing specified fields with new values' result = _self._make(map(kwds.pop, ('a',), _self)) if kwds: raise ValueError('Got unexpected field names: %r' % kwds.keys()) return result def __getnewargs__(self): 'Return self as a plain tuple. Used by copy and pickle.' return tuple(self) __dict__ = _property(_asdict) def __getstate__(self): 'Exclude the OrderedDict from pickling' pass a = _property(_itemgetter(0), doc='Alias for field number 0') ---------- assignee: terry.reedy components: IDLE messages: 312984 nosy: poornaprudhvi, terry.reedy priority: normal severity: normal status: open title: namedtuple displaying the internal code type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 27 04:02:47 2018 From: report at bugs.python.org (Iryna Shcherbina) Date: Tue, 27 Feb 2018 09:02:47 +0000 Subject: [New-bugs-announce] [issue32962] test_gdb fails in debug build with `-mcet -fcf-protection -O0` Message-ID: <1519722167.1.0.467229070634.issue32962@psf.upfronthosting.co.za> New submission from Iryna Shcherbina : test_gdb fails on Fedora 28. This happens only in debug build, and only if built with control flow protection flags: `-mcet -fcf-protection` AND optimization `-O0`. Reproduction steps on Fedora 28 (x86_64): ./configure --with-pydebug make 'EXTRA_CFLAGS=-mcet -fcf-protection -O0' make test TESTOPTS='-v test_gdb' Actual result: Re-running test 'test_gdb' in verbose mode GDB version 8.1: GNU gdb (GDB) Fedora 8.1-8.fc28 Copyright (C) 2018 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "x86_64-redhat-linux-gnu". Type "show configuration" for configuration details. 
For bug reporting instructions, please see: . Find the GDB manual and other documentation resources online at: . For help, type "help". Type "apropos word" to search for commands related to "word". test_NULL_ob_type (test.test_gdb.PrettyPrintTests) Ensure that a PyObject* with NULL ob_type is handled gracefully ... ok test_NULL_ptr (test.test_gdb.PrettyPrintTests) Ensure that a NULL PyObject* is handled gracefully ... ok test_builtin_method (test.test_gdb.PrettyPrintTests) ... FAIL test_builtins_help (test.test_gdb.PrettyPrintTests) Ensure that the new-style class _Helper in site.py can be handled ... FAIL test_bytes (test.test_gdb.PrettyPrintTests) Verify the pretty-printing of bytes ... FAIL test_corrupt_ob_type (test.test_gdb.PrettyPrintTests) Ensure that a PyObject* with a corrupt ob_type is handled gracefully ... ok test_corrupt_tp_flags (test.test_gdb.PrettyPrintTests) Ensure that a PyObject* with a type with corrupt tp_flags is handled ... ok test_corrupt_tp_name (test.test_gdb.PrettyPrintTests) Ensure that a PyObject* with a type with corrupt tp_name is handled ... ok test_dicts (test.test_gdb.PrettyPrintTests) Verify the pretty-printing of dictionaries ... FAIL test_exceptions (test.test_gdb.PrettyPrintTests) ... FAIL test_frames (test.test_gdb.PrettyPrintTests) ... FAIL test_frozensets (test.test_gdb.PrettyPrintTests) Verify the pretty-printing of frozensets ... FAIL test_getting_backtrace (test.test_gdb.PrettyPrintTests) ... ok test_int (test.test_gdb.PrettyPrintTests) Verify the pretty-printing of various int values ... FAIL test_lists (test.test_gdb.PrettyPrintTests) Verify the pretty-printing of lists ... FAIL test_modern_class (test.test_gdb.PrettyPrintTests) Verify the pretty-printing of new-style class instances ... FAIL test_selfreferential_dict (test.test_gdb.PrettyPrintTests) Ensure that a reference loop involving a dict doesn't lead proxyval ... FAIL test_selfreferential_list (test.test_gdb.PrettyPrintTests) Ensure that a reference loop involving a list doesn't lead proxyval ... FAIL test_selfreferential_new_style_instance (test.test_gdb.PrettyPrintTests) ... FAIL test_selfreferential_old_style_instance (test.test_gdb.PrettyPrintTests) ... FAIL test_sets (test.test_gdb.PrettyPrintTests) Verify the pretty-printing of sets ... FAIL test_singletons (test.test_gdb.PrettyPrintTests) Verify the pretty-printing of True, False and None ... FAIL test_strings (test.test_gdb.PrettyPrintTests) Verify the pretty-printing of unicode strings ... FAIL test_subclassing_list (test.test_gdb.PrettyPrintTests) Verify the pretty-printing of an instance of a list subclass ... FAIL test_subclassing_tuple (test.test_gdb.PrettyPrintTests) Verify the pretty-printing of an instance of a tuple subclass ... FAIL test_truncation (test.test_gdb.PrettyPrintTests) Verify that very long output is truncated ... FAIL test_tuples (test.test_gdb.PrettyPrintTests) Verify the pretty-printing of tuples ... FAIL test_basic_command (test.test_gdb.PyListTests) Verify that the "py-list" command works ... FAIL test_one_abs_arg (test.test_gdb.PyListTests) Verify the "py-list" command with one absolute argument ... FAIL test_two_abs_args (test.test_gdb.PyListTests) Verify the "py-list" command with two absolute arguments ... FAIL test_down_at_bottom (test.test_gdb.StackNavigationTests) Verify handling of "py-down" at the bottom of the stack ... FAIL test_pyup_command (test.test_gdb.StackNavigationTests) Verify that the "py-up" command works ... 
FAIL test_up_at_top (test.test_gdb.StackNavigationTests) Verify handling of "py-up" at the top of the stack ... FAIL test_up_then_down (test.test_gdb.StackNavigationTests) Verify "py-up" followed by "py-down" ... FAIL test_bt (test.test_gdb.PyBtTests) Verify that the "py-bt" command works ... FAIL test_bt_full (test.test_gdb.PyBtTests) Verify that the "py-bt-full" command works ... FAIL test_gc (test.test_gdb.PyBtTests) Verify that "py-bt" indicates if a thread is garbage-collecting ... ok test_pycfunction (test.test_gdb.PyBtTests) Verify that "py-bt" displays invocations of PyCFunction instances ... ok test_threads (test.test_gdb.PyBtTests) Verify that "py-bt" indicates threads that are waiting for the GIL ... ok test_wrapper_call (test.test_gdb.PyBtTests) ... FAIL test_basic_command (test.test_gdb.PyPrintTests) Verify that the "py-print" command works ... FAIL test_print_after_up (test.test_gdb.PyPrintTests) ... FAIL test_printing_builtin (test.test_gdb.PyPrintTests) ... FAIL test_printing_global (test.test_gdb.PyPrintTests) ... FAIL test_basic_command (test.test_gdb.PyLocalsTests) ... FAIL test_locals_after_up (test.test_gdb.PyLocalsTests) ... FAIL ====================================================================== FAIL: test_builtin_method (test.test_gdb.PrettyPrintTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 617, in test_builtin_method (gdb_repr, gdb_output)) AssertionError: None is not true : Unexpected gdb representation: '' Breakpoint 1 (builtin_id) pending. [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib64/libthread_db.so.1". Breakpoint 1, builtin_id (self=, v=) at /builddir/build/BUILD/Python-3.6.4/Python/bltinmodule.c:1120 1120 { #0 builtin_id (self=, v=) at /builddir/build/BUILD/Python-3.6.4/Python/bltinmodule.c:1120 ====================================================================== FAIL: test_builtins_help (test.test_gdb.PrettyPrintTests) Ensure that the new-style class _Helper in site.py can be handled ---------------------------------------------------------------------- Traceback (most recent call last): File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 523, in test_builtins_help msg='Unexpected rendering %r' % gdb_repr) AssertionError: None is not true : Unexpected rendering '' ====================================================================== FAIL: test_bytes (test.test_gdb.PrettyPrintTests) Verify the pretty-printing of bytes ---------------------------------------------------------------------- Traceback (most recent call last): File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 307, in test_bytes self.assertGdbRepr(b'') File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 277, in assertGdbRepr % (gdb_repr, exp_repr, gdb_output))) AssertionError: "" != "b''" - , decode=, incrementalencoder=, incrementaldecoder=, streamwriter=, streamreader=) at remote 0x7ffff7e7e3b8> + b'' : ", decode=, incrementalencoder=, incrementaldecoder=, streamwriter=, streamreader=) at remote 0x7ffff7e7e3b8>" did not equal expected "b''"; full output was: Breakpoint 1 (builtin_id) pending. [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib64/libthread_db.so.1". 
Breakpoint 1, builtin_id (self=, v=, decode=, incrementalencoder=, incrementaldecoder=, streamwriter=, streamreader=) at remote 0x7ffff7e7e3b8>) at /builddir/build/BUILD/Python-3.6.4/Python/bltinmodule.c:1120 1120 { #0 builtin_id (self=, v=, decode=, incrementalencoder=, incrementaldecoder=, streamwriter=, streamreader=) at remote 0x7ffff7e7e3b8>) at /builddir/build/BUILD/Python-3.6.4/Python/bltinmodule.c:1120 ====================================================================== FAIL: test_dicts (test.test_gdb.PrettyPrintTests) Verify the pretty-printing of dictionaries ---------------------------------------------------------------------- Traceback (most recent call last): File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 295, in test_dicts self.assertGdbRepr({}) File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 277, in assertGdbRepr % (gdb_repr, exp_repr, gdb_output))) AssertionError: "" != '{}' - , decode=, incrementalencoder=, incrementaldecoder=, streamwriter=, streamreader=) at remote 0x7ffff7e7e3b8> + {} : ", decode=, incrementalencoder=, incrementaldecoder=, streamwriter=, streamreader=) at remote 0x7ffff7e7e3b8>" did not equal expected '{}'; full output was: Breakpoint 1 (builtin_id) pending. [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib64/libthread_db.so.1". Breakpoint 1, builtin_id (self=, v=, decode=, incrementalencoder=, incrementaldecoder=, streamwriter=, streamreader=) at remote 0x7ffff7e7e3b8>) at /builddir/build/BUILD/Python-3.6.4/Python/bltinmodule.c:1120 1120 { #0 builtin_id (self=, v=, decode=, incrementalencoder=, incrementaldecoder=, streamwriter=, streamreader=) at remote 0x7ffff7e7e3b8>) at /builddir/build/BUILD/Python-3.6.4/Python/bltinmodule.c:1120 ====================================================================== FAIL: test_exceptions (test.test_gdb.PrettyPrintTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 393, in test_exceptions ''') File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 241, in get_gdb_repr import_site=import_site) File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 219, in get_stack_trace self.assertEqual(unexpected_errlines, []) AssertionError: Lists differ: ["Python Exception 'utf-8' codec can't decode byte 0xb0 in position 0: invalid start byte: " + [] - ["Python Exception 'utf-8' codec can't decode " - 'byte 0xb0 in position 0: invalid start byte: ', - "Python Exception 'utf-8' codec can't decode " - 'byte 0xb0 in position 0: invalid start byte: '] ====================================================================== FAIL: test_frames (test.test_gdb.PrettyPrintTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 627, in test_frames cmds_after_breakpoint=['print (PyFrameObject*)(((PyCodeObject*)v)->co_zombieframe)'] File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 219, in get_stack_trace self.assertEqual(unexpected_errlines, []) AssertionError: Lists differ: ['Cannot access memory at address 0x90'] != [] First list contains 1 additional elements. 
First extra element 0: 'Cannot access memory at address 0x90' - ['Cannot access memory at address 0x90'] + [] ====================================================================== FAIL: test_frozensets (test.test_gdb.PrettyPrintTests) Verify the pretty-printing of frozensets ---------------------------------------------------------------------- Traceback (most recent call last): File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 379, in test_frozensets self.assertGdbRepr(frozenset(), "frozenset()") File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 277, in assertGdbRepr % (gdb_repr, exp_repr, gdb_output))) AssertionError: '()' != 'frozenset()' - () + frozenset() : '()' did not equal expected 'frozenset()'; full output was: Breakpoint 1 (builtin_id) pending. [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib64/libthread_db.so.1". Breakpoint 1, builtin_id (self=, v=()) at /builddir/build/BUILD/Python-3.6.4/Python/bltinmodule.c:1120 1120 { #0 builtin_id (self=, v=()) at /builddir/build/BUILD/Python-3.6.4/Python/bltinmodule.c:1120 ====================================================================== FAIL: test_int (test.test_gdb.PrettyPrintTests) Verify the pretty-printing of various int values ---------------------------------------------------------------------- Traceback (most recent call last): File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 281, in test_int self.assertGdbRepr(42) File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 277, in assertGdbRepr % (gdb_repr, exp_repr, gdb_output))) AssertionError: "" != '42' - , decode=, incrementalencoder=, incrementaldecoder=, streamwriter=, streamreader=) at remote 0x7ffff7e7e3b8> + 42 : ", decode=, incrementalencoder=, incrementaldecoder=, streamwriter=, streamreader=) at remote 0x7ffff7e7e3b8>" did not equal expected '42'; full output was: Breakpoint 1 (builtin_id) pending. [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib64/libthread_db.so.1". Breakpoint 1, builtin_id (self=, v=, decode=, incrementalencoder=, incrementaldecoder=, streamwriter=, streamreader=) at remote 0x7ffff7e7e3b8>) at /builddir/build/BUILD/Python-3.6.4/Python/bltinmodule.c:1120 1120 { #0 builtin_id (self=, v=, decode=, incrementalencoder=, incrementaldecoder=, streamwriter=, streamreader=) at remote 0x7ffff7e7e3b8>) at /builddir/build/BUILD/Python-3.6.4/Python/bltinmodule.c:1120 ====================================================================== FAIL: test_lists (test.test_gdb.PrettyPrintTests) Verify the pretty-printing of lists ---------------------------------------------------------------------- Traceback (most recent call last): File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 302, in test_lists self.assertGdbRepr([]) File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 277, in assertGdbRepr % (gdb_repr, exp_repr, gdb_output))) AssertionError: "" != '[]' - , decode=, incrementalencoder=, incrementaldecoder=, streamwriter=, streamreader=) at remote 0x7ffff7e7e3b8> + [] : ", decode=, incrementalencoder=, incrementaldecoder=, streamwriter=, streamreader=) at remote 0x7ffff7e7e3b8>" did not equal expected '[]'; full output was: Breakpoint 1 (builtin_id) pending. [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib64/libthread_db.so.1". 
Breakpoint 1, builtin_id (self=, v=, decode=, incrementalencoder=, incrementaldecoder=, streamwriter=, streamreader=) at remote 0x7ffff7e7e3b8>) at /builddir/build/BUILD/Python-3.6.4/Python/bltinmodule.c:1120 1120 { #0 builtin_id (self=, v=, decode=, incrementalencoder=, incrementaldecoder=, streamwriter=, streamreader=) at remote 0x7ffff7e7e3b8>) at /builddir/build/BUILD/Python-3.6.4/Python/bltinmodule.c:1120 ====================================================================== FAIL: test_modern_class (test.test_gdb.PrettyPrintTests) Verify the pretty-printing of new-style class instances ---------------------------------------------------------------------- Traceback (most recent call last): File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 418, in test_modern_class msg='Unexpected new-style class rendering %r' % gdb_repr) AssertionError: None is not true : Unexpected new-style class rendering '' ====================================================================== FAIL: test_selfreferential_dict (test.test_gdb.PrettyPrintTests) Ensure that a reference loop involving a dict doesn't lead proxyval ---------------------------------------------------------------------- Traceback (most recent call last): File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 542, in test_selfreferential_dict self.assertEqual(gdb_repr, "{'foo': {'bar': {...}}}") AssertionError: "" != "{'foo': {'bar': {...}}}" - , decode=, incrementalencoder=, incrementaldecoder=, streamwriter=, streamreader=) at remote 0x7ffff7e7e3b8> + {'foo': {'bar': {...}}} ====================================================================== FAIL: test_selfreferential_list (test.test_gdb.PrettyPrintTests) Ensure that a reference loop involving a list doesn't lead proxyval ---------------------------------------------------------------------- Traceback (most recent call last): File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 530, in test_selfreferential_list self.assertEqual(gdb_repr, '[3, 4, 5, [...]]') AssertionError: '' != '[3, 4, 5, [...]]' - + [3, 4, 5, [...]] ====================================================================== FAIL: test_selfreferential_new_style_instance (test.test_gdb.PrettyPrintTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 568, in test_selfreferential_new_style_instance (gdb_repr, gdb_output)) AssertionError: None is not true : Unexpected gdb representation: '' Breakpoint 1 (builtin_id) pending. [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib64/libthread_db.so.1". Breakpoint 1, builtin_id (self=, v=) at /builddir/build/BUILD/Python-3.6.4/Python/bltinmodule.c:1120 1120 { #0 builtin_id (self=, v=) at /builddir/build/BUILD/Python-3.6.4/Python/bltinmodule.c:1120 ====================================================================== FAIL: test_selfreferential_old_style_instance (test.test_gdb.PrettyPrintTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 555, in test_selfreferential_old_style_instance (gdb_repr, gdb_output)) AssertionError: None is not true : Unexpected gdb representation: '' Breakpoint 1 (builtin_id) pending. [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib64/libthread_db.so.1". 
Breakpoint 1, builtin_id (self=, v=) at /builddir/build/BUILD/Python-3.6.4/Python/bltinmodule.c:1120 1120 { #0 builtin_id (self=, v=) at /builddir/build/BUILD/Python-3.6.4/Python/bltinmodule.c:1120 ====================================================================== FAIL: test_sets (test.test_gdb.PrettyPrintTests) Verify the pretty-printing of sets ---------------------------------------------------------------------- Traceback (most recent call last): File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 361, in test_sets self.assertGdbRepr(set(), "set()") File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 277, in assertGdbRepr % (gdb_repr, exp_repr, gdb_output))) AssertionError: '()' != 'set()' - () + set() : '()' did not equal expected 'set()'; full output was: Breakpoint 1 (builtin_id) pending. [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib64/libthread_db.so.1". Breakpoint 1, builtin_id (self=, v=()) at /builddir/build/BUILD/Python-3.6.4/Python/bltinmodule.c:1120 1120 { #0 builtin_id (self=, v=()) at /builddir/build/BUILD/Python-3.6.4/Python/bltinmodule.c:1120 ====================================================================== FAIL: test_singletons (test.test_gdb.PrettyPrintTests) Verify the pretty-printing of True, False and None ---------------------------------------------------------------------- Traceback (most recent call last): File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 289, in test_singletons self.assertGdbRepr(True) File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 277, in assertGdbRepr % (gdb_repr, exp_repr, gdb_output))) AssertionError: "" != 'True' - , decode=, incrementalencoder=, incrementaldecoder=, streamwriter=, streamreader=) at remote 0x7ffff7e7e3b8> + True : ", decode=, incrementalencoder=, incrementaldecoder=, streamwriter=, streamreader=) at remote 0x7ffff7e7e3b8>" did not equal expected 'True'; full output was: Breakpoint 1 (builtin_id) pending. [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib64/libthread_db.so.1". Breakpoint 1, builtin_id (self=, v=, decode=, incrementalencoder=, incrementaldecoder=, streamwriter=, streamreader=) at remote 0x7ffff7e7e3b8>) at /builddir/build/BUILD/Python-3.6.4/Python/bltinmodule.c:1120 1120 { #0 builtin_id (self=, v=, decode=, incrementalencoder=, incrementaldecoder=, streamwriter=, streamreader=) at remote 0x7ffff7e7e3b8>) at /builddir/build/BUILD/Python-3.6.4/Python/bltinmodule.c:1120 ====================================================================== FAIL: test_strings (test.test_gdb.PrettyPrintTests) Verify the pretty-printing of unicode strings ---------------------------------------------------------------------- Traceback (most recent call last): File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 331, in test_strings self.assertGdbRepr('') File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 277, in assertGdbRepr % (gdb_repr, exp_repr, gdb_output))) AssertionError: "" != "''" - , decode=, incrementalencoder=, incrementaldecoder=, streamwriter=, streamreader=) at remote 0x7ffff7e7e3b8> + '' : ", decode=, incrementalencoder=, incrementaldecoder=, streamwriter=, streamreader=) at remote 0x7ffff7e7e3b8>" did not equal expected "''"; full output was: Breakpoint 1 (builtin_id) pending. [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib64/libthread_db.so.1". 
Breakpoint 1, builtin_id (self=, v=, decode=, incrementalencoder=, incrementaldecoder=, streamwriter=, streamreader=) at remote 0x7ffff7e7e3b8>) at /builddir/build/BUILD/Python-3.6.4/Python/bltinmodule.c:1120 1120 { #0 builtin_id (self=, v=, decode=, incrementalencoder=, incrementaldecoder=, streamwriter=, streamreader=) at remote 0x7ffff7e7e3b8>) at /builddir/build/BUILD/Python-3.6.4/Python/bltinmodule.c:1120 ====================================================================== FAIL: test_subclassing_list (test.test_gdb.PrettyPrintTests) Verify the pretty-printing of an instance of a list subclass ---------------------------------------------------------------------- Traceback (most recent call last): File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 432, in test_subclassing_list msg='Unexpected new-style class rendering %r' % gdb_repr) AssertionError: None is not true : Unexpected new-style class rendering '' ====================================================================== FAIL: test_subclassing_tuple (test.test_gdb.PrettyPrintTests) Verify the pretty-printing of an instance of a tuple subclass ---------------------------------------------------------------------- Traceback (most recent call last): File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 447, in test_subclassing_tuple msg='Unexpected new-style class rendering %r' % gdb_repr) AssertionError: None is not true : Unexpected new-style class rendering '' ====================================================================== FAIL: test_truncation (test.test_gdb.PrettyPrintTests) Verify that very long output is truncated ---------------------------------------------------------------------- Traceback (most recent call last): File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 588, in test_truncation "[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, " AssertionError: '' != '[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12[993 chars]ted)' Diff is 1079 characters long. Set self.maxDiff to None to see it. ====================================================================== FAIL: test_tuples (test.test_gdb.PrettyPrintTests) Verify the pretty-printing of tuples ---------------------------------------------------------------------- Traceback (most recent call last): File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 353, in test_tuples self.assertGdbRepr(tuple(), '()') File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 277, in assertGdbRepr % (gdb_repr, exp_repr, gdb_output))) AssertionError: "" != '()' - , decode=, incrementalencoder=, incrementaldecoder=, streamwriter=, streamreader=) at remote 0x7ffff7e7e3b8> + () : ", decode=, incrementalencoder=, incrementaldecoder=, streamwriter=, streamreader=) at remote 0x7ffff7e7e3b8>" did not equal expected '()'; full output was: Breakpoint 1 (builtin_id) pending. [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib64/libthread_db.so.1". 
Breakpoint 1, builtin_id (self=, v=, decode=, incrementalencoder=, incrementaldecoder=, streamwriter=, streamreader=) at remote 0x7ffff7e7e3b8>) at /builddir/build/BUILD/Python-3.6.4/Python/bltinmodule.c:1120 1120 { #0 builtin_id (self=, v=, decode=, incrementalencoder=, incrementaldecoder=, streamwriter=, streamreader=) at remote 0x7ffff7e7e3b8>) at /builddir/build/BUILD/Python-3.6.4/Python/bltinmodule.c:1120 ====================================================================== FAIL: test_basic_command (test.test_gdb.PyListTests) Verify that the "py-list" command works ---------------------------------------------------------------------- Traceback (most recent call last): File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 643, in test_basic_command cmds_after_breakpoint=['py-list']) File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 219, in get_stack_trace self.assertEqual(unexpected_errlines, []) AssertionError: Lists differ: ["Python Exception 'utf-8' codec can't decode byte 0xf3 in position 0: invalid continuation byte: " + [] - ["Python Exception 'utf-8' codec can't decode " - 'byte 0xf3 in position 0: invalid continuation byte: '] ====================================================================== FAIL: test_one_abs_arg (test.test_gdb.PyListTests) Verify the "py-list" command with one absolute argument ---------------------------------------------------------------------- Traceback (most recent call last): File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 658, in test_one_abs_arg cmds_after_breakpoint=['py-list 9']) File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 219, in get_stack_trace self.assertEqual(unexpected_errlines, []) AssertionError: Lists differ: ["Python Exception 'utf-8' codec can't decode byte 0xf3 in position 0: invalid continuation byte: " + [] - ["Python Exception 'utf-8' codec can't decode " - 'byte 0xf3 in position 0: invalid continuation byte: '] ====================================================================== FAIL: test_two_abs_args (test.test_gdb.PyListTests) Verify the "py-list" command with two absolute arguments ---------------------------------------------------------------------- Traceback (most recent call last): File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 669, in test_two_abs_args cmds_after_breakpoint=['py-list 1,3']) File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 219, in get_stack_trace self.assertEqual(unexpected_errlines, []) AssertionError: Lists differ: ["Python Exception 'utf-8' codec can't decode byte 0xf3 in position 0: invalid continuation byte: " + [] - ["Python Exception 'utf-8' codec can't decode " - 'byte 0xf3 in position 0: invalid continuation byte: '] ====================================================================== FAIL: test_down_at_bottom (test.test_gdb.StackNavigationTests) Verify handling of "py-down" at the bottom of the stack ---------------------------------------------------------------------- Traceback (most recent call last): File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 694, in test_down_at_bottom cmds_after_breakpoint=['py-down']) File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 219, in get_stack_trace self.assertEqual(unexpected_errlines, []) AssertionError: Lists differ: ["Python Exception 'utf-8' codec can't decode byte 0xf3 in position 0: invalid continuation byte: " + [] - ["Python Exception 'utf-8' codec can't decode " - 'byte 0xf3 in 
position 0: invalid continuation byte: '] ====================================================================== FAIL: test_pyup_command (test.test_gdb.StackNavigationTests) Verify that the "py-up" command works ---------------------------------------------------------------------- Traceback (most recent call last): File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 683, in test_pyup_command cmds_after_breakpoint=['py-up', 'py-up']) File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 219, in get_stack_trace self.assertEqual(unexpected_errlines, []) AssertionError: Lists differ: ["Python Exception 'utf-8' codec can't decode byte 0xf3 in position 0: invalid continuation byte: " + [] - ["Python Exception 'utf-8' codec can't decode " - 'byte 0xf3 in position 0: invalid continuation byte: '] ====================================================================== FAIL: test_up_at_top (test.test_gdb.StackNavigationTests) Verify handling of "py-up" at the top of the stack ---------------------------------------------------------------------- Traceback (most recent call last): File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 702, in test_up_at_top cmds_after_breakpoint=['py-up'] * 5) File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 219, in get_stack_trace self.assertEqual(unexpected_errlines, []) AssertionError: Lists differ: ["Python Exception 'utf-8' codec can't decode byte 0xf3 in position 0: invalid continuation byte: " + [] - ["Python Exception 'utf-8' codec can't decode " - 'byte 0xf3 in position 0: invalid continuation byte: '] ====================================================================== FAIL: test_up_then_down (test.test_gdb.StackNavigationTests) Verify "py-up" followed by "py-down" ---------------------------------------------------------------------- Traceback (most recent call last): File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 712, in test_up_then_down cmds_after_breakpoint=['py-up', 'py-up', 'py-down']) File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 219, in get_stack_trace self.assertEqual(unexpected_errlines, []) AssertionError: Lists differ: ["Python Exception 'utf-8' codec can't decode byte 0xf3 in position 0: invalid continuation byte: " + [] - ["Python Exception 'utf-8' codec can't decode " - 'byte 0xf3 in position 0: invalid continuation byte: '] ====================================================================== FAIL: test_bt (test.test_gdb.PyBtTests) Verify that the "py-bt" command works ---------------------------------------------------------------------- Traceback (most recent call last): File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 727, in test_bt cmds_after_breakpoint=['py-bt']) File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 219, in get_stack_trace self.assertEqual(unexpected_errlines, []) AssertionError: Lists differ: ["Python Exception 'utf-8' codec can't decode byte 0xf3 in position 0: invalid continuation byte: " + [] - ["Python Exception 'utf-8' codec can't decode " - 'byte 0xf3 in position 0: invalid continuation byte: '] ====================================================================== FAIL: test_bt_full (test.test_gdb.PyBtTests) Verify that the "py-bt-full" command works ---------------------------------------------------------------------- Traceback (most recent call last): File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 747, in test_bt_full 
cmds_after_breakpoint=['py-bt-full']) File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 219, in get_stack_trace self.assertEqual(unexpected_errlines, []) AssertionError: Lists differ: ["Python Exception 'utf-8' codec can't decode byte 0xf3 in position 0: invalid continuation byte: " + [] - ["Python Exception 'utf-8' codec can't decode " - 'byte 0xf3 in position 0: invalid continuation byte: '] ====================================================================== FAIL: test_wrapper_call (test.test_gdb.PyBtTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 866, in test_wrapper_call r") at /builddir/build/BUILD/Python-3.6.4/Python/bltinmodule.c:1120\n1120\t{\nBreakpoint 2: file /builddir/build/BUILD/Python-3.6.4/Objects/descrobject.c, line 1166.\n\nBreakpoint 2, wrapper_call (wp=, args=0x0, kwds=) at /builddir/build/BUILD/Python-3.6.4/Objects/descrobject.c:1166\n1166\t{\nTraceback (most recent call first):\n \n File "", line 4, in __init__\n File "", line 7, in \n' ====================================================================== FAIL: test_basic_command (test.test_gdb.PyPrintTests) Verify that the "py-print" command works ---------------------------------------------------------------------- Traceback (most recent call last): File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 875, in test_basic_command cmds_after_breakpoint=['py-up', 'py-print args']) File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 219, in get_stack_trace self.assertEqual(unexpected_errlines, []) AssertionError: Lists differ: ["Python Exception 'utf-8' codec can't decode byte 0xf3 in position 0: invalid continuation byte: " + [] - ["Python Exception 'utf-8' codec can't decode " - 'byte 0xf3 in position 0: invalid continuation byte: '] ====================================================================== FAIL: test_print_after_up (test.test_gdb.PyPrintTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 884, in test_print_after_up cmds_after_breakpoint=['py-up', 'py-up', 'py-print c', 'py-print b', 'py-print a']) File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 219, in get_stack_trace self.assertEqual(unexpected_errlines, []) AssertionError: Lists differ: ["Python Exception 'utf-8' codec can't decode byte 0xf3 in position 0: invalid continuation byte: " + [] - ["Python Exception 'utf-8' codec can't decode " - 'byte 0xf3 in position 0: invalid continuation byte: '] ====================================================================== FAIL: test_printing_builtin (test.test_gdb.PyPrintTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 900, in test_printing_builtin cmds_after_breakpoint=['py-up', 'py-print len']) File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 219, in get_stack_trace self.assertEqual(unexpected_errlines, []) AssertionError: Lists differ: ["Python Exception 'utf-8' codec can't decode byte 0xf3 in position 0: invalid continuation byte: " + [] - ["Python Exception 'utf-8' codec can't decode " - 'byte 0xf3 in position 0: invalid continuation byte: '] ====================================================================== FAIL: 
test_printing_global (test.test_gdb.PyPrintTests)test test_gdb failed ---------------------------------------------------------------------- Traceback (most recent call last): File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 892, in test_printing_global cmds_after_breakpoint=['py-up', 'py-print __name__']) File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 219, in get_stack_trace self.assertEqual(unexpected_errlines, []) AssertionError: Lists differ: ["Python Exception 'utf-8' codec can't decode byte 0xf3 in position 0: invalid continuation byte: " + [] - ["Python Exception 'utf-8' codec can't decode " - 'byte 0xf3 in position 0: invalid continuation byte: '] ====================================================================== FAIL: test_basic_command (test.test_gdb.PyLocalsTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 909, in test_basic_command cmds_after_breakpoint=['py-up', 'py-locals']) File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 219, in get_stack_trace self.assertEqual(unexpected_errlines, []) AssertionError: Lists differ: ["Python Exception 'utf-8' codec can't decode byte 0xf3 in position 0: invalid continuation byte: " + [] - ["Python Exception 'utf-8' codec can't decode " - 'byte 0xf3 in position 0: invalid continuation byte: '] ====================================================================== FAIL: test_locals_after_up (test.test_gdb.PyLocalsTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 918, in test_locals_after_up cmds_after_breakpoint=['py-up', 'py-up', 'py-locals']) File "/builddir/build/BUILD/Python-3.6.4/Lib/test/test_gdb.py", line 219, in get_stack_trace self.assertEqual(unexpected_errlines, []) AssertionError: Lists differ: ["Python Exception 'utf-8' codec can't decode byte 0xf3 in position 0: invalid continuation byte: " + [] - ["Python Exception 'utf-8' codec can't decode " - 'byte 0xf3 in position 0: invalid continuation byte: '] ---------------------------------------------------------------------- Ran 46 tests in 20.175s FAILED (failures=37) 1 test failed again: test_gdb Total duration: 29 min 42 sec Tests result: FAILURE Expected result: no failures Original bug report: https://bugzilla.redhat.com/show_bug.cgi?id=1541967 ---------- components: Tests messages: 312985 nosy: cstratak, ishcherb priority: normal severity: normal status: open title: test_gdb fails in debug build with `-mcet -fcf-protection -O0` versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 27 04:04:23 2018 From: report at bugs.python.org (Martijn Pieters) Date: Tue, 27 Feb 2018 09:04:23 +0000 Subject: [New-bugs-announce] [issue32963] Python 2.7 tutorial claims source code is UTF-8 encoded Message-ID: <1519722263.23.0.467229070634.issue32963@psf.upfronthosting.co.za> New submission from Martijn Pieters : Issue #29381 updated the tutorial to clarify #! use, but the 2.7 patch re-used Python 3 material that doesn't apply. See r40ba60f6 at https://github.com/python/cpython/commit/40ba60f6bf2f7192f86da395c71348d0fa24da09 It now reads: "By default, Python source files are treated as encoded in UTF-8." 
and " To display all these characters properly, your editor must recognize that the file is UTF-8, and it must use a font that supports all the characters in the file." This is a huge deviation from the previous text, and confusing and wrong to people new to Python 2. ---------- assignee: docs at python components: Documentation messages: 312986 nosy: docs at python, mjpieters priority: normal severity: normal status: open title: Python 2.7 tutorial claims source code is UTF-8 encoded versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 27 14:40:07 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Tue, 27 Feb 2018 19:40:07 +0000 Subject: [New-bugs-announce] [issue32964] Reuse a testing implementation of the path protocol in tests Message-ID: <1519760407.45.0.467229070634.issue32964@psf.upfronthosting.co.za> New submission from Serhiy Storchaka : In a number of tests there are simple implementations of the path protocol. Unlike to pathlib.Path they implement only necessary minimum of methods. The proposed PR moves this implementation into test.support. It also fixes some misuses of it. I have named it SimplePath. Are there better ideas? PathLike was rejected due to possible confusion with os.PathLike. ---------- components: Tests messages: 313018 nosy: brett.cannon, ezio.melotti, serhiy.storchaka priority: normal severity: normal status: open title: Reuse a testing implementation of the path protocol in tests type: enhancement versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 27 16:25:48 2018 From: report at bugs.python.org (Erik Johnson) Date: Tue, 27 Feb 2018 21:25:48 +0000 Subject: [New-bugs-announce] [issue32965] Passing a bool to io.open() should raise a TypeError, not read from stdin Message-ID: <1519766748.3.0.467229070634.issue32965@psf.upfronthosting.co.za> New submission from Erik Johnson : When you open a filehandle using either True or False as the file, the open succeeds. This has been reproduced using op/io.open on Python 3.6.4, 3.5.2, and 3.4.5, as well as with io.open() on Python 2.7.14. This can be easily demonstrated in the REPL: >>> f = open(False) >>> f.read(10) Lorem ipsum dolor sit amet 'Lorem ipsu' >>> f.read(10) 'm dolor si' >>> f.read(10) 't amet\n' >>> f.read(10) '' >>> f.close() >>> % After the first read, I pasted in enough to stdin to exceed 10 bytes, and hit Enter. After the 2nd and third reads I had to hit Ctrl-d to exit back to the REPL. And, as a fun bonus, closing the filehandle quits the REPL. This doesn't look like intended behavior. It doesn't make logical sense (why not just use sys.stdin if you want to read from stdin?), and isn't documented. This should either raise a TypeError, or the behavior should be documented. From the docs: file is a path-like object giving the pathname (absolute or relative to the current working directory) of the file to be opened or an integer file descriptor of the file to be wrapped. (If a file descriptor is given, it is closed when the returned I/O object is closed, unless closefd is set to False.) 
Moreover, when you pass other values that don't match the above description, a TypeError is raised:

>>> open(123.456)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: integer argument expected, got float
>>> open(None)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: expected str, bytes or os.PathLike object, not NoneType
>>> open(['wtf', 'am', 'i', 'doing???'])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: expected str, bytes or os.PathLike object, not list

So, one of str, bytes, and os.PathLike is expected, right? And yet...

>>> isinstance(True, (str, bytes, os.PathLike))
False
>>> isinstance(False, (str, bytes, os.PathLike))
False

It should also be noted that when using the open() builtin on Python 2 instead of io.open(), you get a more logical result:

>>> open(False)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: coercing to Unicode: need string or buffer, bool found

---------- messages: 313025 nosy: terminalmage priority: normal severity: normal status: open title: Passing a bool to io.open() should raise a TypeError, not read from stdin type: behavior versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 27 16:33:46 2018 From: report at bugs.python.org (Scott) Date: Tue, 27 Feb 2018 21:33:46 +0000 Subject: [New-bugs-announce] [issue32966] Python 3.6.4 - 0x80070643 - Fatal Error during installation Message-ID: <1519767226.7.0.467229070634.issue32966@psf.upfronthosting.co.za> New submission from Scott : Installing Python 3.6.4 on Windows 10 64-bit exits the installer and dumps the following error code to the log. [18B8:4394][2018-02-27T15:41:06]i399: Apply complete, result: 0x80070643, restart: None, ba requested restart: No 0x80070643 - Fatal Error during installation Attempted command-line installation as Administrator and the standard installation methods as Administrator. ---------- components: Installation files: Python 3.6.4 (64-bit)_20180227154057.log messages: 313027 nosy: exceltw priority: normal severity: normal status: open title: Python 3.6.4 - 0x80070643 - Fatal Error during installation type: crash versions: Python 3.6 Added file: https://bugs.python.org/file47464/Python 3.6.4 (64-bit)_20180227154057.log _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 27 18:08:36 2018 From: report at bugs.python.org (Neeraj Badlani) Date: Tue, 27 Feb 2018 23:08:36 +0000 Subject: [New-bugs-announce] [issue32967] make check in devguide failing Message-ID: <1519772916.66.0.467229070634.issue32967@psf.upfronthosting.co.za> Change by Neeraj Badlani : ---------- assignee: docs at python components: Documentation nosy: docs at python, neerajbadlani priority: normal severity: normal status: open title: make check in devguide failing versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 27 22:10:51 2018 From: report at bugs.python.org (Elias Zamaria) Date: Wed, 28 Feb 2018 03:10:51 +0000 Subject: [New-bugs-announce] [issue32968] Fraction modulo infinity should behave consistently with other numbers Message-ID: <1519787451.54.0.467229070634.issue32968@psf.upfronthosting.co.za> New submission from Elias Zamaria : Usually, a positive finite number modulo infinity is itself.
But modding a positive fraction by infinity produces nan:

>>> from fractions import Fraction
>>> from math import inf
>>> 3 % inf
3.0
>>> 3.5 % inf
3.5
>>> Fraction('1/3') % inf
nan

Likewise, a positive number modulo negative infinity is usually negative infinity, a negative number modulo infinity is usually infinity, and a negative number modulo negative infinity is usually itself, unless the number doing the modding is a fraction, in which case it produces nan. I think fractions should behave like other numbers in cases like these. I don't think this comes up very often in practical situations, but it is inconsistent behavior that may surprise people. I looked at the fractions module. It seems like this can be fixed by putting the following lines at the top of the __mod__ method of the Fraction class:

if b == math.inf:
    if a >= 0:
        return a
    else:
        return math.inf
elif b == -math.inf:
    if a >= 0:
        return -math.inf
    else:
        return a

If that is too verbose, it can also be fixed with these lines, although this is less understandable IMO:

if math.isinf(b):
    return a if (a >= 0) == (b > 0) else math.copysign(math.inf, b)

I noticed this in Python 3.6.4 on OS X 10.12.6. If anyone wants, I can come up with a patch with some tests. ---------- components: Interpreter Core messages: 313045 nosy: elias priority: normal severity: normal status: open title: Fraction modulo infinity should behave consistently with other numbers versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 28 03:59:00 2018 From: report at bugs.python.org (Xiang Zhang) Date: Wed, 28 Feb 2018 08:59:00 +0000 Subject: [New-bugs-announce] [issue32969] Add more constants to zlib module Message-ID: <1519808340.69.0.467229070634.issue32969@psf.upfronthosting.co.za> New submission from Xiang Zhang : Inspired by https://github.com/python/cpython/pull/5511, the zlib module in Python lacks some constants exposed by the C zlib library, and some constants are not documented. ---------- assignee: docs at python components: Documentation, Library (Lib) messages: 313053 nosy: docs at python, xiang.zhang priority: normal severity: normal status: open title: Add more constants to zlib module type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 28 11:31:31 2018 From: report at bugs.python.org (Serhiy Storchaka) Date: Wed, 28 Feb 2018 16:31:31 +0000 Subject: [New-bugs-announce] [issue32970] Improve disassembly of the MAKE_FUNCTION instruction Message-ID: <1519835491.83.0.467229070634.issue32970@psf.upfronthosting.co.za> New submission from Serhiy Storchaka : The proposed PR adds decoding of the MAKE_FUNCTION argument (it is a bit set) in the disassembler output. For example:

$ echo 'def f(x, y=1, *, z=2): ...' | ./python -m dis
  1           0 LOAD_CONST               6 ((1,))
              2 LOAD_CONST               1 (2)
              4 LOAD_CONST               2 (('z',))
              6 BUILD_CONST_KEY_MAP      1
              8 LOAD_CONST               3 (<code object f at 0x..., file "<stdin>", line 1>)
             10 LOAD_CONST               4 ('f')
             12 MAKE_FUNCTION            3 (defaults, kwdefaults)
             14 STORE_NAME               0 (f)
             16 LOAD_CONST               5 (None)
             18 RETURN_VALUE

Disassembly of <code object f at 0x..., file "<stdin>", line 1>:
  1           0 LOAD_CONST               0 (None)
              2 RETURN_VALUE

---------- components: Library (Lib) messages: 313060 nosy: ncoghlan, serhiy.storchaka, yselivanov priority: normal severity: normal status: open title: Improve disassembly of the MAKE_FUNCTION instruction type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 28 12:58:02 2018 From: report at bugs.python.org (NODA, Kai) Date: Wed, 28 Feb 2018 17:58:02 +0000 Subject: [New-bugs-announce] [issue32971] unittest.TestCase.assertRaises Message-ID: <1519840682.18.0.467229070634.issue32971@psf.upfronthosting.co.za> New submission from NODA, Kai : https://docs.python.org/dev/library/unittest.html#unittest.TestCase.assertRaises

> If only the exception and possibly the msg arguments are given, return a context manager so that the code under test can be written inline rather than as a function:
>
>     with self.assertRaises(SomeException):
>         do_something()
>
> When used as a context manager, assertRaises() accepts the additional keyword argument msg.

Perhaps we don't need the second sentence on the `msg` argument, which isn't adding anything new. Ideally it should be clearer when the method operates in context manager mode. ("If only" and "possibly" don't play nicely together.) Maybe along the lines of "If no callable was passed as an argument ..." ? I haven't looked into the actual implementation yet... ---------- assignee: docs at python components: Documentation messages: 313061 nosy: docs at python, nodakai priority: normal severity: normal status: open title: unittest.TestCase.assertRaises versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 28 13:41:57 2018 From: report at bugs.python.org (John Andersen) Date: Wed, 28 Feb 2018 18:41:57 +0000 Subject: [New-bugs-announce] [issue32972] unittest.TestCase coroutine support Message-ID: <1519843317.02.0.467229070634.issue32972@psf.upfronthosting.co.za> New submission from John Andersen : This makes unittest TestCase classes run `test_` methods that are async. ---------- components: Tests, asyncio messages: 313063 nosy: asvetlov, pdxjohnny, yselivanov priority: normal pull_requests: 5708 severity: normal status: open title: unittest.TestCase coroutine support type: behavior versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 28 13:51:00 2018 From: report at bugs.python.org (Thomas Wouters) Date: Wed, 28 Feb 2018 18:51:00 +0000 Subject: [New-bugs-announce] [issue32973] Importing the same extension module under multiple names breaks non-reinitialisable extension modules Message-ID: <1519843860.12.0.467229070634.issue32973@psf.upfronthosting.co.za> New submission from Thomas Wouters : This is a continuation, of sorts, of issue16421; adding most of that issue's audience to the nosy list. When importing the same extension module under multiple names that share the same basename, Python 3 will call the extension module's init function multiple times.
With extension modules that do not support re-initialisation, this causes them to trample all over their own state. In the case of numpy, this corrupts CPython internal data structures, like builtin types. Simple reproducer:

% python3.6 -m venv numpy-3.6
% numpy-3.6/bin/python -m pip install numpy
% PYTHONPATH=./numpy-3.6/lib/python3.6/site-packages/numpy/core/ ./numpy-3.6/bin/python -c "import numpy.core.multiarray, multiarray; u'' < 1"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
Segmentation fault

(The corruption happens because PyInit_multiarray initialises subclasses of builtin types, which causes them to share some data (e.g. tp_as_number) with the base class: https://github.com/python/cpython/blob/master/Objects/typeobject.c#L5277. Calling it a second time then copies data from a different class into that shared data, corrupting the base class: https://github.com/python/cpython/blob/master/Objects/typeobject.c#L4950. The Py_TPFLAGS_READY flag is supposed to protect against this, but PyInit_multiarray resets the tp_flags value. I ran into this because we have code that vendors numpy and imports it in two different ways.) The specific case of numpy is somewhat convoluted and exacerbated by dubious design choices in numpy, but it is not hard to show that calling an extension module's PyInit function twice (if the module doesn't support reinitialisation through PEP 3121) is bad: any C globals initialised in the PyInit function will be trampled on. This was not a problem in Python 2 because the extension module cache worked based purely on filename. It was changed in response to issue16421, but the intent there appears to be to call *different* PyInit methods in the same module. However, because PyInit functions are based off of the *basename* of the module, not the full module name, a different module name does not mean a different init function name. I think the right approach is to change the extension module cache to key on filename and init function name, although this is a little tricky: the init function name is calculated much later in the process. Alternatively, key it on filename and module basename, rather than full module name. ---------- messages: 313064 nosy: Arfrever, amaury.forgeotdarc, asvetlov, brett.cannon, eric.snow, eudoxos, ncoghlan, pitrou, r.david.murray, twouters, vstinner priority: normal severity: normal status: open title: Importing the same extension module under multiple names breaks non-reinitialisable extension modules type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 28 20:13:29 2018 From: report at bugs.python.org (Kyle Agronick) Date: Thu, 01 Mar 2018 01:13:29 +0000 Subject: [New-bugs-announce] [issue32974] Add bitwise operations and other missing comparison methods to Python's IP address module Message-ID: <1519866809.51.0.467229070634.issue32974@psf.upfronthosting.co.za> New submission from Kyle Agronick : I've recently had the experience of implementing a system for managing DNS records at a Fortune 10 company with ~10,000 stores. There were a number of things that I felt were missing from the IP address module and could be added without introducing any breaking changes. The easiest changes I saw would be to implement bitwise operations without casting all the addresses to ints and then back to IP addresses.
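As a rough sketch of what I have in mind (hypothetical code, not the actual ipaddress implementation), the operators could simply perform the int round-trip internally:

    def __and__(self, other):
        # Hypothetical sketch: accept another address of the same class
        # (or a plain int) and return a new address of the same class.
        if isinstance(other, type(self)):
            return type(self)(int(self) & int(other))
        if isinstance(other, int):
            return type(self)(int(self) & other)
        return NotImplemented

__or__ and __xor__ would follow the same pattern; whether these live on a shared address base class so that both IPv4Address and IPv6Address pick them up is an implementation detail I have not thought through.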
Let's say you wanted to take the last octet from one address and put it on another. Right now you need to do:

IPv6Address(int(IPv6Address('2001:db8:85a3:0:0:8a2e:370:7334')) & int(IPv6Address('0:0:0:0:0:0:0:ffff')) | int(IPv6Address('1323:cf3:23df:0:0:32:44:0')))
# returns IPv6Address('1323:cf3:23df::32:44:7334')

You should be able to do:

(IPv6Address('2001:db8:85a3:0:0:8a2e:370:7334') & IPv6Address('0:0:0:0:0:0:0:ffff')) | IPv6Address('1323:cf3:23df:0:0:32:44:0')
# returns TypeError: unsupported operand type(s) for &: 'IPv6Address' and 'IPv6Address'

All that would be required is to do the casting to int automatically.

The other thing I would like is for the methods that return generators to instead return iterables that provide more comparison operations. Right now you can check if an IP is in a network with:

IPv4Address('192.168.1.12') in IPv4Network('192.168.1.0/24')
# returns True

You should be able to do this with methods that return multiple networks. ipaddress.summarize_address_range() is a method that returns a generator of IPv(4|6)Networks. To see if an IP address is in one of multiple networks you need to do:

any(map(lambda i: IPv4Address('192.168.1.12') in i, ipaddress.summarize_address_range(IPv4Address('192.168.1.0'), IPv4Address('192.168.2.255'))))
# returns True

You should be able to do:

IPv4Address('192.168.1.12') in ipaddress.summarize_address_range(IPv4Address('192.168.1.0'), IPv4Address('192.168.2.255'))
# currently returns False

This should be the default for IPv(4|6)Addresses. IPv(4|6)Networks should check membership like they currently do.

You should be able to subtract ranges to make ranges that include some IPs but not others:

ipaddress.summarize_address_range(IPv4Address('192.168.1.0'), IPv4Address('192.168.2.255')) - ipaddress.summarize_address_range(IPv4Address('192.168.1.20'), IPv4Address('192.168.1.50'))
# returns TypeError: unsupported operand type(s) for -: 'generator' and 'generator'

This should return an iterable that has networks that include 192.168.1.0 to 192.168.1.20 and 192.168.1.50 to 192.168.2.255. You should be able to do addition as well without casting to a list.

Methods like .hosts() should be able to be performed on the iterator, so instead of doing:

sum([list(i.hosts()) for i in ipaddress.summarize_address_range(IPv4Address('192.168.1.0'), IPv4Address('192.168.2.255'))], [])

you could do:

ipaddress.summarize_address_range(IPv4Address('192.168.1.0'), IPv4Address('192.168.2.255')).hosts()

Another great feature would be to allow division on networks and network generators, giving you every xth host:

(i for i in IPv4Network('192.168.1.0/24').hosts() if int(i) % x == 0)

When x is 32 you get: IPv4Address('192.168.1.32'), IPv4Address('192.168.1.64'), IPv4Address('192.168.1.96'), IPv4Address('192.168.1.128'), IPv4Address('192.168.1.160'), IPv4Address('192.168.1.192'), IPv4Address('192.168.1.224')

I would be happy to code this if the consensus is that it should be included. ---------- components: Library (Lib) messages: 313077 nosy: Kyle Agronick priority: normal severity: normal status: open title: Add bitwise operations and other missing comparison methods to Python's IP address module type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________