From report at bugs.python.org Thu Aug 1 05:25:01 2019 From: report at bugs.python.org (David Lewis) Date: Thu, 01 Aug 2019 09:25:01 +0000 Subject: [New-bugs-announce] [issue37736] asyncio.wait_for is still confusing Message-ID: <1564651501.71.0.316479365632.issue37736@roundup.psfhosted.org>

New submission from David Lewis :

This issue is a follow up to previous discussions about confusing results with asyncio.wait_for. In the current implementation, it seems unintuitive that a coroutine with a timeout argument may easily wait forever. Perhaps wait_for could use an await_cancellation=True kwarg.

Prior issues:
a) "It's a feature, not a bug" - Guido https://github.com/python/asyncio/issues/253#issuecomment-120020018
b) "I don't feel comfortable with asyncio living with this bug till 3.8." - Yury https://bugs.python.org/issue32751#msg318065

Originally, wait_for would cancel the future and raise TimeoutError immediately. In the case of a Task, it could remain active for some time after the timeout, since its cancellation is potentially asynchronous. In (a), this behaviour was defended, since waiting on the cancellation would violate the implicit contract of the timeout argument to wait_for(). While the documentation suggests it's a poor idea, it's not illegal for a task to defer or entirely refuse cancellation. In (b), the task outliving the TimeoutError was considered a bug, and the behaviour changed to its current state. To address the issue raised in (a), the documentation for wait_for now contains the line "The function will wait until the future is actually cancelled, so the total wait time may exceed the timeout."

However, we still have the case where a misbehaving Task can cause wait_for to hang indefinitely. For example, the following program doesn't terminate:

    import asyncio, contextlib

    async def bad():
        while True:
            with contextlib.suppress(asyncio.CancelledError):
                await asyncio.sleep(1)
                print("running...")

    if __name__ == '__main__':
        asyncio.run(asyncio.wait_for(bad(), 1))

More realistically, the task may have cooperative cancellation logic that waits for something else to be tidied up:

    try:
        await wait_for(some_task(service), 10)
    except TimeoutError:
        ...
    finally:
        service.stop()
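For illustration, here is a minimal workaround sketch (an added note, not part of the original report; the helper name wait_for_bounded and its cancel_timeout parameter are hypothetical). It bounds the total wait by shielding the inner task and giving its cancellation a separate, shorter timeout:

    import asyncio

    async def wait_for_bounded(aw, timeout, cancel_timeout=1.0):
        # Like asyncio.wait_for(), but stop waiting for a task that ignores
        # cancellation after an extra cancel_timeout seconds.
        task = asyncio.ensure_future(aw)
        try:
            return await asyncio.wait_for(asyncio.shield(task), timeout)
        except asyncio.TimeoutError:
            task.cancel()
            # Wait briefly for the cancellation to take effect, then give up.
            await asyncio.wait({task}, timeout=cancel_timeout)
            raise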
---------- components: asyncio messages: 348848 nosy: David Lewis, asvetlov, yselivanov priority: normal severity: normal status: open title: asyncio.wait_for is still confusing type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Aug 1 09:29:49 2019 From: report at bugs.python.org (David CARLIER) Date: Thu, 01 Aug 2019 13:29:49 +0000 Subject: [New-bugs-announce] [issue37737] mmap module track anonymous page on macOS Message-ID: <1564666189.5.0.946971238191.issue37737@roundup.psfhosted.org>

Change by David CARLIER : ---------- components: macOS nosy: devnexen, ned.deily, ronaldoussoren priority: normal pull_requests: 14815 severity: normal status: open title: mmap module track anonymous page on macOS versions: Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Aug 1 09:45:15 2019 From: report at bugs.python.org (Marius Gedminas) Date: Thu, 01 Aug 2019 13:45:15 +0000 Subject: [New-bugs-announce] [issue37738] curses.addch('a', curses.color_pair(1)) ignores the color information Message-ID: <1564667115.2.0.875397124654.issue37738@roundup.psfhosted.org>

New submission from Marius Gedminas :

curses.addch() ignores color information if I pass it a string of length one. Color works fine if I pass it a byte string or an int. Here's a reproducer:

    ### start of example ###
    import curses

    def main(stdscr):
        curses.start_color()
        curses.use_default_colors()
        curses.init_pair(1, curses.COLOR_RED, -1)
        curses.init_pair(2, curses.COLOR_GREEN, -1)
        curses.curs_set(0)
        stdscr.addch("a", curses.color_pair(1))
        stdscr.addch("b", curses.color_pair(2) | curses.A_BOLD)
        stdscr.addch(b"c", curses.color_pair(1))
        stdscr.addch(b"d", curses.color_pair(2) | curses.A_BOLD)
        stdscr.addch(ord("e"), curses.color_pair(1))
        stdscr.addch(ord("f"), curses.color_pair(2) | curses.A_BOLD)
        stdscr.refresh()
        stdscr.getch()

    curses.wrapper(main)
    ### end of example ###

On Python 2.7 this prints 'abcdef' in alternating red and green. On Python 3.5 through 3.8 this prints 'ab' in white and the rest in red/green. Note that only color pair information is lost -- the bold attribute is correctly set on the 'b'.

---------- components: Library (Lib) messages: 348855 nosy: mgedmin priority: normal severity: normal status: open title: curses.addch('a', curses.color_pair(1)) ignores the color information versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Aug 1 12:23:02 2019 From: report at bugs.python.org (Su Zhu) Date: Thu, 01 Aug 2019 16:23:02 +0000 Subject: [New-bugs-announce] [issue37739] list(filter) returns [] ??? Message-ID: <1564676582.32.0.410232765618.issue37739@roundup.psfhosted.org>

New submission from Su Zhu :

The filter becomes empty after serving as an argument of list().

    Python 3.6.7 (default, Oct 22 2018, 11:32:17) [GCC 8.2.0] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> a = filter(lambda x:x, [1,2,3])
    >>> list(a)
    [1, 2, 3]
    >>> list(a)
    []
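A short note for context (added here, not part of the original report): in Python 3, filter() returns a lazy, one-shot iterator rather than a list, so the first list(a) call consumes it and the second call finds it empty. The usual fix is to materialize the iterator once and reuse the resulting list:

    a = filter(lambda x: x, [1, 2, 3])   # a lazy iterator, not a list
    items = list(a)                      # consumes the iterator
    print(items)                         # [1, 2, 3]
    print(list(a))                       # [] - the iterator is already exhausted
    print(items)                         # [1, 2, 3] - reuse the materialized list instead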
---------- components: Demos and Tools messages: 348864 nosy: zhusu.china priority: normal severity: normal status: open title: list(filter) returns [] ??? type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Aug 1 13:58:33 2019 From: report at bugs.python.org (mzbuild) Date: Thu, 01 Aug 2019 17:58:33 +0000 Subject: [New-bugs-announce] [issue37740] Python 3.7 sh hangs when using in threads, forking and logging Message-ID: <1564682313.44.0.793879611367.issue37740@roundup.psfhosted.org>

New submission from mzbuild :

Hi, We use the sh module in a threaded context to execute shell commands. When we were migrating to Python 3 from Python 2 we encountered some commands hanging. We created a test script that recreates the issue we are seeing.

    import sh
    import logging
    import concurrent.futures.thread

    def execmd(exe):
        print(sh.ls())

    execution_pool = concurrent.futures.thread.ThreadPoolExecutor(20)

    i = 0
    thread_results = []
    while i < 500:
        i += 1
        thread_results.append(execution_pool.map(execmd, ['ls']))

    execution_pool.shutdown()

When running this in Python 3.7 it hangs, but in Python 3.6 it works fine. We think it is related to this issue: https://bugs.python.org/issue36533. Installing the latest 3.7.4 didn't fix the issue. The sh module uses logging and forking, and the top script uses threading, so we think there is a locking issue with 3.7. Any help would be great.

---------- components: Library (Lib) messages: 348869 nosy: cagney, ebizzinfotech, gregory.p.smith, hugh, lukasz.langa, mzbuild, ned.deily priority: normal severity: normal status: open title: Python 3.7 sh hangs when using in threads, forking and logging type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Aug 1 14:47:05 2019 From: report at bugs.python.org (Brett Cannon) Date: Thu, 01 Aug 2019 18:47:05 +0000 Subject: [New-bugs-announce] [issue37741] importlib.metadata docs not showing up in the module index Message-ID: <1564685225.86.0.948746560612.issue37741@roundup.psfhosted.org>

New submission from Brett Cannon :

If you look at https://docs.python.org/3.9/py-modindex.html#cap-i you will see that importlib.metadata isn't listed (same goes for the 3.8 docs). Or are you leaving it out due to it being provisional?
---------- assignee: docs at python components: Documentation messages: 348872 nosy: barry, brett.cannon, docs at python, jaraco priority: normal severity: normal status: open title: importlib.metadata docs not showing up in the module index versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Aug 1 19:22:33 2019 From: report at bugs.python.org (Damian Yurzola) Date: Thu, 01 Aug 2019 23:22:33 +0000 Subject: [New-bugs-announce] [issue37742] logging.getLogger accepts name='root' leading to confusion Message-ID: <1564701753.98.0.180548657893.issue37742@roundup.psfhosted.org>

New submission from Damian Yurzola :

'root' should be a reserved name to avoid this:

    >>> import logging
    >>> a = logging.getLogger()
    >>> b = logging.getLogger('root')
    >>> a.name
    'root'
    >>> b.name
    'root'
    >>> a is b
    False

---------- components: Library (Lib) messages: 348877 nosy: yurzo priority: normal severity: normal status: open title: logging.getLogger accepts name='root' leading to confusion type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Aug 1 19:33:04 2019 From: report at bugs.python.org (Antony Lee) Date: Thu, 01 Aug 2019 23:33:04 +0000 Subject: [New-bugs-announce] [issue37743] How should contextmanager/ContextDecorator work with generators? Message-ID: <1564702384.75.0.948740630584.issue37743@roundup.psfhosted.org>

New submission from Antony Lee :

The docs for ContextDecorator (of which contextmanager is a case) describe its semantics as: ... for any construct of the following form:

    def f():
        with cm():
            # Do stuff

ContextDecorator lets you instead write:

    @cm()
    def f():
        # Do stuff

However, when decorating a generator, the equivalence is broken:

    from contextlib import contextmanager

    @contextmanager
    def cm():
        print("start")
        yield
        print("stop")

    def gen_using_with():
        with cm():
            yield from map(print, range(2))

    @cm()
    def gen_using_decorator():
        yield from map(print, range(2))

    print("using with")
    list(gen_using_with())
    print("==========")
    print("using decorator")
    list(gen_using_decorator())

results in

    using with
    start
    0
    1
    stop
    ==========
    using decorator
    start
    stop
    0
    1

i.e., when used as a decorator, the entire contextmanager is executed first before iterating over the generator (which is unsurprising given the implementation of ContextDecorator: ContextDecorator returns a function that executes the context manager and returns the generator, which is only iterated over later).

Should this be considered as a bug in ContextDecorator, and should ContextDecorator instead detect when it is used to decorate a generator (e.g. with inspect.isgeneratorfunction), and switch its implementation accordingly in that case?
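As a point of reference (an added sketch, not contextlib's actual implementation; the helper name generator_aware_cm is hypothetical), the generator-aware behaviour asked about above could look roughly like this, dispatching on inspect.isgeneratorfunction:

    import functools
    import inspect

    def generator_aware_cm(cm_factory):
        # Hypothetical decorator factory: if the decorated function is a
        # generator function, keep the context manager open while the
        # generator is iterated, not just while it is created.
        def decorate(func):
            if inspect.isgeneratorfunction(func):
                @functools.wraps(func)
                def wrapper(*args, **kwargs):
                    with cm_factory():
                        yield from func(*args, **kwargs)
                return wrapper
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                with cm_factory():
                    return func(*args, **kwargs)
            return wrapper
        return decorate

    # usage (with the cm defined above): @generator_aware_cm(cm) instead of @cm()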
---------- components: Library (Lib) messages: 348878 nosy: Antony.Lee priority: normal severity: normal status: open title: How should contextmanager/ContextDecorator work with generators? versions: Python 3.8 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Aug 1 20:54:24 2019 From: report at bugs.python.org (Samuel Thibault) Date: Fri, 02 Aug 2019 00:54:24 +0000 Subject: [New-bugs-announce] [issue37744] thread native id support for GNU/Hurd Message-ID: <1564707264.79.0.323532185267.issue37744@roundup.psfhosted.org>

New submission from Samuel Thibault :

Hello, Here is a patch to add thread native ID support for GNU/Hurd. Samuel

---------- components: Interpreter Core files: hurd_thread_native_id.diff keywords: patch messages: 348879 nosy: samuel-thibault priority: normal severity: normal status: open title: thread native id support for GNU/Hurd versions: Python 3.8 Added file: https://bugs.python.org/file48526/hurd_thread_native_id.diff _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Fri Aug 2 13:21:08 2019 From: report at bugs.python.org (Christopher Brousseau) Date: Fri, 02 Aug 2019 17:21:08 +0000 Subject: [New-bugs-announce] [issue37745] 3.8b3 - windows install gui/ inconsistent options Message-ID: <1564766468.14.0.320547199271.issue37745@roundup.psfhosted.org>

New submission from Christopher Brousseau :

When installing 3.8.0b3 64-bit into Win 10 using the gui following the `customize installation` path, there are duplicate and inconsistent options on three different screens for the `install for all users` checkbox.

Observed Behavior:
1. first screen (Install Python) - `install launcher for all users` is marked as checked as default
2. second screen (Optional Features) -
   2.1 `for all users` is also marked as checked if first screen marked. if second screen marked - this is unchecked.
   2.2 layout of this checkbox is above a comment that relates only to the "py launcher" checkbox. would be more clear for user if `for all users` was located below "py launcher", or removed from this screen (per note below)
3. third screen (Advanced Options) - `Install for all users` is UNchecked in all cases, even if first & second screens are checked.

Expected Behavior:
1. if first screen is checked, all follow on screens are also checked
2. feature options only appear on one screen, or at a minimum, checkbox label is exactly the same on all screens

Additional Questions for the team:
1. Should all install customizations be removed from first screen, set as default options and just listed as descriptions under the `Install Now` default?
2. Should `for all users` option be removed from the 2nd screen (Optional Features)? It seems more like an "advanced option" than a feature.
Screenshots (this site appears to only allow one file) first: https://imgur.com/a/0a45CBh second: https://imgur.com/a/6ZV16bV third: https://imgur.com/a/stRTh25 Link to binary used for this: https://www.python.org/ftp/python/3.8.0/python-3.8.0b3-amd64.exe ---------- components: Installation, Windows files: python_3.8b3_screen02_optional_features_2019-08-02_9-56-02.png messages: 348907 nosy: Christopher Brousseau, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: 3.8b3 - windows install gui/ inconsistent options type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file48527/python_3.8b3_screen02_optional_features_2019-08-02_9-56-02.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 2 13:23:17 2019 From: report at bugs.python.org (Steve Dower) Date: Fri, 02 Aug 2019 17:23:17 +0000 Subject: [New-bugs-announce] [issue37746] Provide Windows predefined access type constants Message-ID: <1564766597.45.0.90249654152.issue37746@roundup.psfhosted.org> New submission from Steve Dower : We currently do not provide the standard access type constants anywhere, despite providing some of the specific access type flags (e.g. in `winreg`): #define DELETE (0x00010000L) #define READ_CONTROL (0x00020000L) #define WRITE_DAC (0x00040000L) #define WRITE_OWNER (0x00080000L) #define SYNCHRONIZE (0x00100000L) #define STANDARD_RIGHTS_REQUIRED (0x000F0000L) #define STANDARD_RIGHTS_READ (READ_CONTROL) #define STANDARD_RIGHTS_WRITE (READ_CONTROL) #define STANDARD_RIGHTS_EXECUTE (READ_CONTROL) #define STANDARD_RIGHTS_ALL (0x001F0000L) #define SPECIFIC_RIGHTS_ALL (0x0000FFFFL) I'm not sure where the best place to expose them would be. `os` (`nt`) seems best, but it's 99% a POSIX shim that doesn't actually have anything Windows-specific exposed, and `_winapi` is not public. They're likely already available through pywin32 or similar, but given their use with `winreg` we should probably at least make them available there. 
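For context, a small illustration (an added note; these constant names are not currently exposed by any CPython module) of what making the standard access rights usable from Python could look like, reusing the values quoted above next to the specific rights winreg already provides. Windows-only and purely illustrative:

    import winreg

    # Values copied from the Windows SDK excerpt above.
    DELETE       = 0x00010000
    READ_CONTROL = 0x00020000
    WRITE_DAC    = 0x00040000
    WRITE_OWNER  = 0x00080000
    SYNCHRONIZE  = 0x00100000

    # They combine with the specific rights winreg already exposes, e.g.:
    key = winreg.OpenKey(winreg.HKEY_CURRENT_USER, "Software", 0,
                         winreg.KEY_READ | READ_CONTROL)
    winreg.CloseKey(key)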
---------- components: Windows messages: 348908 nosy: paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal stage: needs patch status: open title: Provide Windows predefined access type constants type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 2 14:38:27 2019 From: report at bugs.python.org (bp256r1) Date: Fri, 02 Aug 2019 18:38:27 +0000 Subject: [New-bugs-announce] [issue37747] _markupbase.py fails with TypeError on invalid keyword in marked section Message-ID: <1564771107.73.0.664658859343.issue37747@roundup.psfhosted.org> New submission from bp256r1 : Hello, I'm not sure if this a bug, but I noticed that a TypeError is raised by the parse_marked_section function of the _markupbase module in Python 3.7.4 when attempting to parse a name token of >> a='>> from bs4 import BeautifulSoup >>> BeautifulSoup(a, 'html.parser') /usr/local/lib/python3.7/site-packages/bs4/builder/_htmlparser.py:78: UserWarning: expected name token at '", line 1, in File "/usr/local/lib/python3.7/site-packages/bs4/__init__.py", line 303, in __init__ self._feed() File "/usr/local/lib/python3.7/site-packages/bs4/__init__.py", line 364, in _feed self.builder.feed(self.markup) File "/usr/local/lib/python3.7/site-packages/bs4/builder/_htmlparser.py", line 250, in feed parser.feed(markup) File "/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/html/parser.py", line 111, in feed self.goahead(0) File "/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/html/parser.py", line 179, in goahead k = self.parse_html_declaration(i) File "/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/html/parser.py", line 264, in parse_html_declaration return self.parse_marked_section(i) File "/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/_markupbase.py", line 149, in parse_marked_section sectName, j = self._scan_name( i+3, i ) TypeError: cannot unpack non-iterable NoneType object If it's not a bug, sorry, not sure. ---------- components: Library (Lib) messages: 348910 nosy: berker.peksag, bp256r1, ezio.melotti, kodial priority: normal severity: normal status: open title: _markupbase.py fails with TypeError on invalid keyword in marked section versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 3 00:48:25 2019 From: report at bugs.python.org (Terry J. Reedy) Date: Sat, 03 Aug 2019 04:48:25 +0000 Subject: [New-bugs-announce] [issue37748] IDLE: Re-order run menu Message-ID: <1564807705.24.0.436490903651.issue37748@roundup.psfhosted.org> New submission from Terry J. Reedy : With the addition of Run Customized, the run menu looks like Python Shell Check Module Run Module Run... Customized This order resulted from Check and Run Module originally being implemented as extensions, which forced them to be below Python Shell. The situation was similar with Format Paragraph being forced below less commonly used options on the Format menu. It is now at the top. Run... Customized was tacked on below, without worrying about the appropriate order for the menu.. 
On issue #37627, Raymond Hettinger suggested "Move the [new] menu option up by one so that the regular F5 "run" is last -- learners we having a hard time finding and mouse targeting the more commonly used regular "run" option. The result would be Python Shell Check Module Run... Customized Run Module I would rather make Run Module, the most commonly used option even more prominent by putting it at the top, perhaps by reversing the order. Run Module Run... Customized Check Module Python Shell With Run Module at the top, remaining items could be ordered by logic, and to me, the above is plausible on that score, or by use, perhaps putting the second-most used at the bottom. I suspect that this will be Run Customized. To me, Check Module is useless, though I can imagine situation when it is not. Python Shell is only needed when there is no shell and one does not want to make one appear by running a module. To switch to an existing Shell, on should use the Windows menu. Anyway, another candidate is Run Module Check Module Python Shell Run... Customized ---------- messages: 348940 nosy: cheryl.sabella, rhettinger, taleinat, terry.reedy priority: normal severity: normal stage: needs patch status: open title: IDLE: Re-order run menu type: enhancement versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 3 01:16:46 2019 From: report at bugs.python.org (Brandon James) Date: Sat, 03 Aug 2019 05:16:46 +0000 Subject: [New-bugs-announce] [issue37749] ipaddress - is_global method all multicast addresses and networks return true Message-ID: <1564809406.61.0.356559045946.issue37749@roundup.psfhosted.org> New submission from Brandon James : When using the ipaddress library, all multicast addresses and networks return True when using the is_global method for their respective classes. I believe their are two possible fixes for this. 1) In practice no multicast addresses are globally routable. If our definition of is_global means the address is globally routable, then I propose adding not is_multicast to each class's is_global logic. 2) RFC 5771 (IPv4) and RFCs 4291 and 7346 (IPv6) both have guidelines for what MAY be routed on the public internet (as mentioned above multicast is not routed globally in practice). Logic following those guidelines should be added. IPv4: 224.0.1.0/24, AD-HOC I, II and III addresses 224.0.2.0 - 224.0.255.255, 224.3.0.0 - 224.4.255.255, and 233.252.0.0 - 233.255.255.255 IPv6: Multicast addresses with 0xE in the SCOPE field The current logic is inaccurate when looking at the relevant RFCs and worse when looking at how routing is actually implemented. Github PR submitted for option 1 above. I've also submitted a thread to NANOG's mailing list (currently pending moderator approval) posing a few questions regarding the RFCs above. I think it's unlikely that multicast will ever be publicly routed on the internet, so really we just need to define global here. My definition would be addresses that are routed on the public internet. 
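To make the reported behaviour concrete, a short demonstration (added for context; the printed values reflect the behaviour described in the report on Python 3.7/3.8):

    import ipaddress

    addr = ipaddress.ip_address("224.0.0.1")    # "all systems" link-local multicast
    print(addr.is_multicast)                    # True
    print(addr.is_global)                       # True, although it is never routed globally

    net = ipaddress.ip_network("239.0.0.0/8")   # administratively scoped multicast
    print(net.is_multicast, net.is_global)      # True True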
---------- components: Library (Lib) messages: 348942 nosy: bjames priority: normal severity: normal status: open title: ipaddress - is_global method all multicast addresses and networks return true _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 3 04:54:37 2019 From: report at bugs.python.org (hai shi) Date: Sat, 03 Aug 2019 08:54:37 +0000 Subject: [New-bugs-announce] [issue37750] PyBuffer_FromContiguous not documented Message-ID: <1564822477.69.0.243179075111.issue37750@roundup.psfhosted.org> Change by hai shi : ---------- assignee: docs at python components: Documentation nosy: docs at python, shihai1991 priority: normal severity: normal status: open title: PyBuffer_FromContiguous not documented _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 3 07:34:13 2019 From: report at bugs.python.org (Jordon.X) Date: Sat, 03 Aug 2019 11:34:13 +0000 Subject: [New-bugs-announce] [issue37751] In codecs, function 'normalizestring' should convert both spaces and hyphens to underscores. Message-ID: <1564832053.37.0.298269063007.issue37751@roundup.psfhosted.org> New submission from Jordon.X <9651234 at qq.com>: In codecs.c, when _PyCodec_Lookup() call normalizestring(), both spaces and hyphens should be convered to underscores. Not convert spaces to hyphens. see:https://github.com/python/peps/blob/master/pep-0100.txt, Codecs (Coder/Decoders) Lookup ---------- components: Unicode messages: 348953 nosy: ezio.melotti, qigangxu, vstinner priority: normal severity: normal status: open title: In codecs, function 'normalizestring' should convert both spaces and hyphens to underscores. type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 3 08:10:17 2019 From: report at bugs.python.org (Jordon.X) Date: Sat, 03 Aug 2019 12:10:17 +0000 Subject: [New-bugs-announce] [issue37752] Redundant Py_CHARMASK called in normalizestring(codecs.c) Message-ID: <1564834217.7.0.242563409504.issue37752@roundup.psfhosted.org> New submission from Jordon.X <9651234 at qq.com>: In normalizestring(), ch = Py_TOLOWER(Py_CHARMASK(ch)); Where Py_TOLOWER is defined as following, #define Py_TOLOWER(c) (_Py_ctype_tolower[Py_CHARMASK(c)]) Redundant Py_CHARMASK called here. ---------- components: Unicode messages: 348955 nosy: ezio.melotti, qigangxu, vstinner priority: normal severity: normal status: open title: Redundant Py_CHARMASK called in normalizestring(codecs.c) type: performance versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 3 10:23:18 2019 From: report at bugs.python.org (Xinmeng Xia) Date: Sat, 03 Aug 2019 14:23:18 +0000 Subject: [New-bugs-announce] [issue37753] 2to3 not handing "<=" Message-ID: <1564842198.64.0.915142421848.issue37753@roundup.psfhosted.org> New submission from Xinmeng Xia : After conversion of 2to3 , run simple-example.py and the following error will happen. 
Traceback (most recent call last):
  File "/home/xxm/Desktop/instrument/datasetpy3/nflgame/simple_example.py", line 15, in <module>
    plays = nflgame.combine_plays(games)
  File "/home/xxm/Desktop/instrument/datasetpy3/nflgame/nflgame/__init__.py", line 396, in combine_plays
    chain = itertools.chain(*[g.drives.plays() for g in games])
  File "/home/xxm/Desktop/instrument/datasetpy3/nflgame/nflgame/__init__.py", line 396, in <listcomp>
    chain = itertools.chain(*[g.drives.plays() for g in games])
  File "/home/xxm/Desktop/instrument/datasetpy3/nflgame/nflgame/game.py", line 407, in __getattr__
    self.__drives = _json_drives(self, self.home, self.data['drives'])
  File "/home/xxm/Desktop/instrument/datasetpy3/nflgame/nflgame/game.py", line 675, in _json_drives
    d = Drive(game, i, home_team, data[str(drive_num)])
  File "/home/xxm/Desktop/instrument/datasetpy3/nflgame/nflgame/game.py", line 516, in __init__
    if self.time_end <= self.time_start \
TypeError: '<=' not supported between instances of 'GameClock' and 'GameClock'

---------- components: 2to3 (2.x to 3.x conversion tool) files: simple_example.py messages: 348961 nosy: xxm priority: normal severity: normal status: open title: 2to3 not handing "<=" type: compile error versions: Python 3.7 Added file: https://bugs.python.org/file48528/simple_example.py _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Sun Aug 4 06:32:38 2019 From: report at bugs.python.org (Vinay Sharma) Date: Sun, 04 Aug 2019 10:32:38 +0000 Subject: [New-bugs-announce] [issue37754] alter size of segment using multiprocessing.shared_memory Message-ID: <1564914758.61.0.506857998168.issue37754@roundup.psfhosted.org>

New submission from Vinay Sharma :

Hi, I am opening this to discuss some possible enhancements in the multiprocessing.shared_memory module. I have recently started using multiprocessing.shared_memory and realised that currently the module provides no functionality to alter the size of the shared memory segment, plus the process has to know beforehand whether to create a segment or open an existing one, unlike shm_open in C, where a segment can be automatically created if it doesn't exist. From an end user perspective I believe that these functionalities would be really helpful, and I would be happy to contribute, if you believe that they are necessary. I would also like to mention that I agree this might be by design, or because of some challenges, in which case it would be very helpful if I can know them.
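For reference, a brief sketch of the Python 3.8 API under discussion (an added note; the segment name is arbitrary). It shows both limitations mentioned above: the caller must decide up front whether to create or attach, and the size is fixed at creation time:

    from multiprocessing import shared_memory

    shm = shared_memory.SharedMemory(name="demo_segment", create=True, size=64)
    shm.buf[:5] = b"hello"

    # A second handle must attach to an existing segment (create=False is the
    # default); there is no "create if missing" mode and no way to resize later.
    other = shared_memory.SharedMemory(name="demo_segment")
    print(bytes(other.buf[:5]))   # b'hello'

    other.close()
    shm.close()
    shm.unlink()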
---------- components: Library (Lib) messages: 348980 nosy: davin, pitrou, vinay0410 priority: normal severity: normal status: open title: alter size of segment using multiprocessing.shared_memory type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Sun Aug 4 08:48:20 2019 From: report at bugs.python.org (=?utf-8?q?Enrico_Tr=C3=B6ger?=) Date: Sun, 04 Aug 2019 12:48:20 +0000 Subject: [New-bugs-announce] [issue37755] pydoc topics, keywords and symbols always use pager instead of output Message-ID: <1564922900.31.0.640915952302.issue37755@roundup.psfhosted.org>

New submission from Enrico Tröger :

I noticed a probably unintended behavior in help() usage: when an output is set on pydoc.Helper(), most of its methods use this output instead of a pager. But 'True', 'False' and 'None' as well as all topics, keywords and symbols always use a pager instead of the configured output.

My use case is to use the pydoc help system to display help contents in Geany (a text editor) in a graphical manner (and so I cannot make any use of a pager). Example code:

    #!/usr/bin/env python3
    # -*- coding: utf-8 -*-

    from io import StringIO

    import pydoc
    import sys

    if __name__ == '__main__':
        help_text = StringIO()
        helper = pydoc.Helper(output=help_text)
        # help contents are written to help_text as expected
        helper.help('pydoc')
        # the following calls each show the help contents in a pager instead
        # of using the configured output
        helper.help('True')
        helper.help('False')
        helper.help('None')
        helper.help('**')          # symbol example
        helper.help('SEQUENCES')   # topic example
        helper.help('await')       # keyword example

Tested against Python 3.7.3.

---------- components: Library (Lib) messages: 348983 nosy: eht16 priority: normal severity: normal status: open title: pydoc topics, keywords and symbols always use pager instead of output type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Sun Aug 4 11:22:58 2019 From: report at bugs.python.org (bfbfbfb bfbfbf) Date: Sun, 04 Aug 2019 15:22:58 +0000 Subject: [New-bugs-announce] [issue37756] Error 0x80070643 when installing Message-ID: <1564932178.65.0.171539834023.issue37756@roundup.psfhosted.org>

New submission from bfbfbfb bfbfbf :

When installing Python 3.7.4 64-bit, error 0x80070643 occurs, on Windows 10.

---------- components: Installation files: Python 3.7.4 (32-bit)_20190804181255.log messages: 348988 nosy: bfbfbfb bfbfbf priority: normal severity: normal status: open title: Error 0x80070643 when installing type: crash versions: Python 3.7 Added file: https://bugs.python.org/file48529/Python 3.7.4 (32-bit)_20190804181255.log _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Sun Aug 4 20:15:02 2019 From: report at bugs.python.org (Nick Coghlan) Date: Mon, 05 Aug 2019 00:15:02 +0000 Subject: [New-bugs-announce] [issue37757] TargetScopeError not raised for comprehension scope conflict Message-ID: <1564964102.91.0.387603674091.issue37757@roundup.psfhosted.org>

New submission from Nick Coghlan :

While implementing PEP 572, Emily noted that the check for conflicts between assignment operators and comprehension iteration variables had not yet been implemented: https://bugs.python.org/issue35224#msg334331 Damien George came across this discrepancy while implementing assignment expressions for MicroPython.
The proposed discussion regarding whether or not the PEP should be changed didn't happen, and the PEP itself misses the genuinely confusing cases where even an assignment expression that *never executes* will still make the iteration variable leak: >>> [i for i in range(5)] [0, 1, 2, 3, 4] >>> [i := 10 for i in range(5)] [10, 10, 10, 10, 10] >>> i 10 >>> [False and (i := 10) for i in range(5)] [False, False, False, False, False] >>> i 4 And that side effect happens even if the assignment expression is nested further down in an inner loop: >>> [(i, j, k) for i in range(2) for j in range(2) for k in range(2)] [(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 0, 1), (1, 1, 0), (1, 1, 1)] >>> i Traceback (most recent call last): File "", line 1, in NameError: name 'i' is not defined >>> [(i, j, k) for i in range(2) for j in range(2) for k in range(2) if True or (i:=10)] [(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 0, 1), (1, 1, 0), (1, 1, 1)] >>> i 1 I'm at the PyCon AU sprints today, and will be working on a PR to make these cases raise TargetScopeError as specified in the PEP. ---------- assignee: ncoghlan messages: 349012 nosy: Damien George, emilyemorehouse, ncoghlan priority: deferred blocker severity: normal stage: needs patch status: open title: TargetScopeError not raised for comprehension scope conflict type: behavior versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 4 21:06:02 2019 From: report at bugs.python.org (Greg Price) Date: Mon, 05 Aug 2019 01:06:02 +0000 Subject: [New-bugs-announce] [issue37758] unicodedata checksum-tests only test 1/17th of Unicode's codepoints Message-ID: <1564967162.48.0.525882022653.issue37758@roundup.psfhosted.org> New submission from Greg Price : The unicodedata module has two test cases which run through the database and make a hash of its visible outputs for all codepoints, comparing the hash against a checksum. These are helpful regression tests for making sure the behavior isn't changed by patches that didn't intend to change it. But Unicode has grown since Python first gained support for it, when Unicode itself was still rather new. These test cases were added in commit 6a20ee7de back in 2000, and they haven't needed to change much since then... but they should be changed to look beyond the Basic Multilingual Plane (`range(0x10000)`) and cover all 17 planes of Unicode's final form. Spotted in discussion on GH-15019 (https://github.com/python/cpython/pull/15019#discussion_r308947884 ). I have a patch for this which I'll send shortly. ---------- components: Tests messages: 349014 nosy: Greg Price priority: normal severity: normal status: open title: unicodedata checksum-tests only test 1/17th of Unicode's codepoints type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 4 22:19:51 2019 From: report at bugs.python.org (Raymond Hettinger) Date: Mon, 05 Aug 2019 02:19:51 +0000 Subject: [New-bugs-announce] [issue37759] Polish whatsnew for 3.8 Message-ID: <1564971591.59.0.825854790144.issue37759@roundup.psfhosted.org> New submission from Raymond Hettinger : Beginning significant edits to whatsnew, adding examples and motivations, improving organization and clarity. Work in progress. 
---------- assignee: rhettinger components: Documentation messages: 349018 nosy: rhettinger priority: high severity: normal status: open title: Polish whatsnew for 3.8 versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 4 23:55:35 2019 From: report at bugs.python.org (Greg Price) Date: Mon, 05 Aug 2019 03:55:35 +0000 Subject: [New-bugs-announce] [issue37760] Refactor makeunicodedata.py: dedupe parsing, use dataclass Message-ID: <1564977335.6.0.0909548880632.issue37760@roundup.psfhosted.org> New submission from Greg Price : I spent some time yesterday on #18236, and I have a patch for it. Most of that work happens in the script Tools/unicode/makeunicode.py , and along the way I made several changes there that I found made it somewhat nicer to work on, and I think will help other people reading that script too. I'd like to try to merge those improvements first. The main changes are: * As the script has grown over the years, it's gained many copies and reimplementations of logic to parse the standard format of the Unicode character database. I factored those out into a single place, which makes the parsing code shorter and the interesting parts stand out more easily. * The main per-character record type in the script's data structures is a length-18 tuple. Using the magic of dataclasses, I converted this so that e.g. the code says `record.numeric_value` instead of `record[8]`. There's no radical restructuring or rewrite here; this script has served us well. I've kept these changes focused where there's a high ratio of value, in future ease of development, to cost, in a reviewer's effort as well as mine. I'll send PRs of my changes shortly. ---------- components: Unicode messages: 349020 nosy: Greg Price, ezio.melotti, vstinner priority: normal severity: normal status: open title: Refactor makeunicodedata.py: dedupe parsing, use dataclass type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 5 03:23:40 2019 From: report at bugs.python.org (Tatsuo Sekine) Date: Mon, 05 Aug 2019 07:23:40 +0000 Subject: [New-bugs-announce] [issue37761] Inaccurate explanation of ArgumentParser.add_argument()'s name-or-flags in JA Message-ID: <1564989820.31.0.707943982318.issue37761@roundup.psfhosted.org> New submission from Tatsuo Sekine : Second sentence of name-or-flags explanation of argparse.ArgumentParser.add_argument() in ja lang. is: > ?????add_argument() ??1???????????????????????????????? #[1] while its original English is: > The first arguments passed to add_argument() must therefore be either a series of flags, or a simple argument name. # [2] If I translate Japanese language one back to English, it'd be something: The first argument passed to add_argument() must therefore be either a list of flags, or a simple argument name. Japanese one's explanation means, name-or-flags could be either - "name" - ["-f", "--flag"] So, Japanese translation could be improved to clarify that name-of-flags (i.e. first argument*s*) could be either name or *series* (not list) of flags. Note that, following sentences in the doc have many supporting explanations, and also, there is argparse HOWTO doc with many examples, so this is not critical issue at all. 
[1] https://docs.python.org/ja/3/library/argparse.html#name-or-flags [2] https://docs.python.org/3/library/argparse.html#name-or-flags ---------- assignee: docs at python components: Documentation messages: 349031 nosy: Tatsuo Sekine, docs at python priority: normal severity: normal status: open title: Inaccurate explanation of ArgumentParser.add_argument()'s name-or-flags in JA type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 5 10:11:00 2019 From: report at bugs.python.org (Bernhard Hiller) Date: Mon, 05 Aug 2019 14:11:00 +0000 Subject: [New-bugs-announce] [issue37762] IDLE very slow due to special characters Message-ID: <1565014260.07.0.714424226111.issue37762@roundup.psfhosted.org> New submission from Bernhard Hiller : After installing tensorflow, I tried to run the demo script found at https://www.tensorflow.org/tutorials? In a common python shell, the "model.fit(x_train, y_train, epochs=5)" step takes a few minutes. In IDLE, no end is in sight after half an hour. While the output looks normal in the common shell, IDLE shows some control characters (see attached screenshot). Windows Task Managers shows a "pythonw.exe" process taking up 25% of CPU (i.e. 1 of 4 cores). ---------- assignee: terry.reedy components: IDLE files: Python374-3.PNG messages: 349050 nosy: Bernie, terry.reedy priority: normal severity: normal status: open title: IDLE very slow due to special characters type: performance versions: Python 3.7 Added file: https://bugs.python.org/file48530/Python374-3.PNG _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 5 11:10:05 2019 From: report at bugs.python.org (Johan Herland) Date: Mon, 05 Aug 2019 15:10:05 +0000 Subject: [New-bugs-announce] [issue37763] Need setup.py to pick up -isystem flags from CPPFLAGS Message-ID: <1565017805.25.0.745177625686.issue37763@roundup.psfhosted.org> New submission from Johan Herland : First time contributor here, still learning the ropes. We're cross-compiling python in an environment where we set up CPPFLAGS, LDFLAGS, etc. to point directly to the locations where we have built Python's dependencies. For example, we will typically build Python in an environment that includes: CPPFLAGS=\ -isystem /path/to/ncurses/build/include \ -isystem /path/to/libffi/build/include \ -isystem /path/to/zlib/build/include \ -isystem /path/to/openssl/build/include \ -isystem /path/to/readline/build/include LDFLAGS=\ -L/path/to/ncurses/build/lib \ -L/path/to/libffi/build/lib \ -L/path/to/zlib/build/lib \ -L/path/to/openssl/build/lib \ -L/path/to/ciscossl-fom/build/lib \ -L/path/to/readline/build/lib setup.py already picks up our -L options from LDFLAGS and propagates them into the build commands, but the -isystem options from CPPFLAGS are currently ignored. I will post a PR that teaches setup.py to handle -isystem options in CPPFLAGS the same way it currently handles -I options. 
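To illustrate the proposal (an added sketch, not the actual setup.py code), -isystem entries in CPPFLAGS would be collected the same way the existing logic collects -I entries:

    import os
    import shlex

    cppflags = shlex.split(os.environ.get("CPPFLAGS", ""))
    include_dirs = []
    i = 0
    while i < len(cppflags):
        flag = cppflags[i]
        if flag.startswith("-I") and len(flag) > 2:                   # -I/path
            include_dirs.append(flag[2:])
        elif flag in ("-I", "-isystem") and i + 1 < len(cppflags):    # -I /path or -isystem /path
            include_dirs.append(cppflags[i + 1])
            i += 1
        i += 1
    print(include_dirs)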
---------- components: Cross-Build messages: 349054 nosy: Alex.Willmer, jherland priority: normal severity: normal status: open title: Need setup.py to pick up -isystem flags from CPPFLAGS type: enhancement versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 5 12:23:23 2019 From: report at bugs.python.org (My Tran) Date: Mon, 05 Aug 2019 16:23:23 +0000 Subject: [New-bugs-announce] [issue37764] email.Message.as_string infinite loop Message-ID: <1565022203.43.0.680976058043.issue37764@roundup.psfhosted.org> New submission from My Tran : The following will hang the system until it runs out of memory. import email import email.policy text = """From: user at host.com To: user at host.com Bad-Header: =?us-ascii?Q?LCSwrV11+IB0rSbSker+M9vWR7wEDSuGqmHD89Gt=ea0nJFSaiz4vX3XMJPT4vrE?= =?us-ascii?Q?xGUZeOnp0o22pLBB7CYLH74Js=wOlK6Tfru2U47qR?= =?us-ascii?Q?72OfyEY2p2=2FrA9xNFyvH+fBTCmazxwzF8nGkK6D?= Hello! """ eml = email.message_from_string(text, policy=email.policy.SMTPUTF8) eml.as_string() ---------- components: email messages: 349055 nosy: barry, mytran, r.david.murray priority: normal severity: normal status: open title: email.Message.as_string infinite loop versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 5 14:03:28 2019 From: report at bugs.python.org (Karthikeyan Singaravelan) Date: Mon, 05 Aug 2019 18:03:28 +0000 Subject: [New-bugs-announce] [issue37765] Include keywords in autocomplete list for IDLE Message-ID: <1565028208.66.0.693944767751.issue37765@roundup.psfhosted.org> New submission from Karthikeyan Singaravelan : Currently, the basic repl for python provides keywords as part of autocompletion but IDLE doesn't provide them. I was trying to build an async repl on top of IDLE to support top level await statements as part of IDLE since "python -m asyncio" doesn't provide a good repl and found during usage keywords like async/await being part of autocomplete to provide a good experience like the basic repl to type faster. I couldn't find any old issues with search around why keywords were excluded so I thought of filing a new one for this suggestion. ---------- assignee: terry.reedy components: IDLE messages: 349061 nosy: terry.reedy, xtreak priority: normal severity: normal status: open title: Include keywords in autocomplete list for IDLE type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 5 16:34:53 2019 From: report at bugs.python.org (Terry J. Reedy) Date: Mon, 05 Aug 2019 20:34:53 +0000 Subject: [New-bugs-announce] [issue37766] IDLE autocomplete: revise fetch_completions, add htest Message-ID: <1565037293.97.0.42173755359.issue37766@roundup.psfhosted.org> New submission from Terry J. Reedy : #36419 did not cover fetch_ completions. Most of the remaining 7% of autocomplete not covered by tests is in that function. I want to rename smalll to small and bigl to big (and in test file); they are awkward to read and write. I may want to revise otherwise to aid testing. The test line referencing #36405 fails when run in autocomplete itself. I need to refresh myself as to why I added it and revise or delete. Some of the test_fetch_completion needs revision, and it should be split before being augmented. 
An htest would make manual testing of intended behavior changes easier. ---------- assignee: terry.reedy components: IDLE messages: 349069 nosy: terry.reedy priority: normal severity: normal stage: needs patch status: open title: IDLE autocomplete: revise fetch_completions, add htest type: enhancement versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 5 16:40:23 2019 From: report at bugs.python.org (Christopher Caputo) Date: Mon, 05 Aug 2019 20:40:23 +0000 Subject: [New-bugs-announce] [issue37767] TTK Treeview alternating row color not working Message-ID: <1565037623.89.0.704768758983.issue37767@roundup.psfhosted.org> New submission from Christopher Caputo : The default installation of Python3.7 on all my Win10 machines has a ttk theme file that disables treeview alternating row colors. The specific file for me is "vistaTheme.tcl" located at "C:\Program Files\Python37\tcl\tk8.6\ttk". In the #Treeview section of the file the "ttk::style map Treeview" line needed to be changed from: ttk::style map Treeview \ -background [list disabled $colors(-frame)\ {!disabled !selected} $colors(-window) \ selected SystemHighlight] \ -foreground [list disabled $colors(-disabledfg) \ {!disabled !selected} black \ selected SystemHighlightText] Changed to: ttk::style map Treeview -background [list selected SystemHighlight] -foreground [list selected SystemHighlightText] Essentially all the "disabled" parts needed to be removed. ---------- components: Tkinter messages: 349071 nosy: Mookiefer priority: normal severity: normal status: open title: TTK Treeview alternating row color not working type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 5 16:56:06 2019 From: report at bugs.python.org (Tal Einat) Date: Mon, 05 Aug 2019 20:56:06 +0000 Subject: [New-bugs-announce] [issue37768] IDLE: Show help(object) output in a text viewer Message-ID: <1565038566.59.0.841955505333.issue37768@roundup.psfhosted.org> New submission from Tal Einat : Currently, IDLE just writes the entire help message into the shell. If auto-squeezing is enabled, then long help messages are automatically squeezed, following which the help text can be viewed in a text viewer or expanded inline. This is still not great UX. I propose to simply have help(object) open a text viewer with the help text. There's actually an old (2007) comment in Lib\idlelib\pyshell.py by KBK that this should be done. See related discussion in issue #35855. 
---------- assignee: taleinat components: IDLE messages: 349073 nosy: taleinat, terry.reedy priority: normal severity: normal stage: needs patch status: open title: IDLE: Show help(object) output in a text viewer type: enhancement versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 5 18:02:48 2019 From: report at bugs.python.org (Jonas Binding) Date: Mon, 05 Aug 2019 22:02:48 +0000 Subject: [New-bugs-announce] [issue37769] Windows Store installer should warn user about MAX_PATH Message-ID: <1565042568.26.0.835015396911.issue37769@roundup.psfhosted.org> New submission from Jonas Binding : The "Windows Store" installer for Python has a seemingly low entry barrier, causing people to install without reading something like https://docs.python.org/3.7/using/windows.html. However, due to the really long path it uses for Python (e.g. C:\Users\USERNAME\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\), the chances of running into issues with paths longer than 260 characters are really high. For example installing pytorch already fails due to this limitation, see https://github.com/pytorch/pytorch/issues/23823 and https://www.reddit.com/r/pytorch/comments/c6cllq/issue_installing_pytorch/ Therefore, the Windows Store installer should offer to change the registry key to enable longer paths, or at least show a message to this effect. ---------- components: Windows messages: 349079 nosy: Jonas Binding, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Windows Store installer should warn user about MAX_PATH type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 5 18:40:03 2019 From: report at bugs.python.org (Jason Curtis) Date: Mon, 05 Aug 2019 22:40:03 +0000 Subject: [New-bugs-announce] [issue37770] reversed class should implement __reversed__ Message-ID: <1565044803.77.0.346292887054.issue37770@roundup.psfhosted.org> New submission from Jason Curtis : I've just been trying to implement some logic which potentially involves reversing things back to their initial orders, and it'd be nice to just be able to call reversed() on something that has already been reversed. >>> reversed(reversed([1,2,3,4])) TypeError: 'list_reverseiterator' object is not reversible Seems like this should be trivial to implement by just returning the initial iterator. Happy to post a pull request if it would be considered. ---------- components: Library (Lib) messages: 349085 nosy: jason.curtis priority: normal severity: normal status: open title: reversed class should implement __reversed__ type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 5 21:20:30 2019 From: report at bugs.python.org (GeeVye) Date: Tue, 06 Aug 2019 01:20:30 +0000 Subject: [New-bugs-announce] [issue37771] No equivalent of `inspect.getcoroutinestate` for PyAsyncGenASend instances Message-ID: <1565054430.97.0.58176402921.issue37771@roundup.psfhosted.org> New submission from GeeVye : In PEP 525, async generators were introduced. They use `.asend()` and `.athrow()` methods that return a "coroutine-like" object - specifically, a PyAsyncGenASend and PyAsyncGenAThrow respectively. 
While these "coroutine-like" objects implement `.send()`, `.throw()`, and `.close()`, they don't provide any attributes like normal coroutine objects do, such as `cr_running` or `cr_await`. When I use `inspect.getcoroutinestate()`, it raises an AttributeError on how there isn't a `cr_running` attribute / flag. There is a workaround I use which is to wrap it with another coroutine as below:

    >>> async def async_generator():
    ...     yield
    ...
    >>> async def wrap_coro(coro):
    ...     return await coro
    ...
    >>> agen = async_generator()
    >>> asend = wrap_coro(agen.asend(None))

This seems like something that should be changed to make it more in line with normal coroutines / awaitables.

---------- components: Demos and Tools messages: 349093 nosy: GeeVye priority: normal severity: normal status: open title: No equivalent of `inspect.getcoroutinestate` for PyAsyncGenASend instances type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Tue Aug 6 02:57:30 2019 From: report at bugs.python.org (=?utf-8?q?J=C3=B6rn_Heissler?=) Date: Tue, 06 Aug 2019 06:57:30 +0000 Subject: [New-bugs-announce] [issue37772] zipfile.Path.iterdir() outputs sub directories many times or not at all Message-ID: <1565074650.65.0.762547380277.issue37772@roundup.psfhosted.org>

New submission from Jörn Heissler :

Hello,

    #!/usr/bin/python3.8
    from zipfile import ZipFile, Path
    import io

    def recurse_print(parent):
        for child in parent.iterdir():
            if child.is_file():
                print(child, child.read_text())
            if child.is_dir():
                recurse_print(child)

    data = io.BytesIO()
    zf = ZipFile(data, "w")
    zf.writestr("a.txt", "content of a")
    zf.writestr("b/c.txt", "content of c")
    zf.writestr("b/d/e.txt", "content of e")
    zf.writestr("b/f.txt", "content of f")
    zf.filename = "abcde.zip"
    root = Path(zf)
    recurse_print(root)

Expected result:

    abcde.zip/a.txt content of a
    abcde.zip/b/c.txt content of c
    abcde.zip/b/f.txt content of f
    abcde.zip/b/d/e.txt content of e

Actual result:

    abcde.zip/a.txt content of a
    abcde.zip/b/c.txt content of c
    abcde.zip/b/f.txt content of f
    abcde.zip/b/d/e.txt content of e
    abcde.zip/b/c.txt content of c
    abcde.zip/b/f.txt content of f
    abcde.zip/b/d/e.txt content of e

Path._add_implied_dirs adds the sub directory "b/" twice: once for each direct child (i.e. "c.txt" and "f.txt").

And similarly:

    data = io.BytesIO()
    zf = ZipFile(data, "w")
    zf.writestr("a.txt", "content of a")
    zf.writestr("b/d/e.txt", "content of e")
    zf.filename = "abcde.zip"
    root = Path(zf)
    recurse_print(root)

Expected result:

    abcde.zip/a.txt content of a
    abcde.zip/b/d/e.txt content of e

Actual result:

    abcde.zip/a.txt content of a

Here, Path._add_implied_dirs doesn't add "b/" at all, because there are no direct children of "b".
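For context, a small sketch (an added note; this is not zipfile's actual code) of deriving the implied parent directories of the entries exactly once, which is the behaviour the report expects from Path._add_implied_dirs:

    import posixpath

    names = ["a.txt", "b/c.txt", "b/d/e.txt", "b/f.txt"]
    implied = set()
    for name in names:
        parent = posixpath.dirname(name)
        while parent:
            implied.add(parent + "/")       # each directory recorded only once
            parent = posixpath.dirname(parent)
    print(sorted(implied))                  # ['b/', 'b/d/']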
---------- components: Library (Lib) messages: 349101 nosy: joernheissler priority: normal severity: normal status: open title: zipfile.Path.iterdir() outputs sub directories many times or not at all type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Tue Aug 6 05:35:50 2019 From: report at bugs.python.org (=?utf-8?q?J=C3=B6rn_Heissler?=) Date: Tue, 06 Aug 2019 09:35:50 +0000 Subject: [New-bugs-announce] [issue37773] ValueError: I/O operation on closed file. in ZipFile destructor Message-ID: <1565084150.95.0.74829896398.issue37773@roundup.psfhosted.org>

New submission from Jörn Heissler :

When running this code:

    from zipfile import ZipFile
    import io

    def foo():
        pass

    data = io.BytesIO()
    zf = ZipFile(data, "w")

I get this message:

    Exception ignored in:
    Traceback (most recent call last):
      File "/home/user/git/oss/cpython/Lib/zipfile.py", line 1800, in __del__
      File "/home/user/git/oss/cpython/Lib/zipfile.py", line 1817, in close
    ValueError: I/O operation on closed file.

Comment out def foo(): pass, and there is no error. It looks like the bug was introduced with commit ada319bb6d0ebcc68d3e0ef2b4279ea061877ac8 (bpo-32388).

---------- messages: 349104 nosy: joernheissler, pitrou priority: normal severity: normal status: open title: ValueError: I/O operation on closed file. in ZipFile destructor type: crash versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Tue Aug 6 10:00:15 2019 From: report at bugs.python.org (Jeroen Demeyer) Date: Tue, 06 Aug 2019 14:00:15 +0000 Subject: [New-bugs-announce] [issue37774] Micro-optimize vectorcall using PY_LIKELY Message-ID: <1565100015.93.0.140823146757.issue37774@roundup.psfhosted.org>

New submission from Jeroen Demeyer :

Take the LIKELY/UNLIKELY macros out of Objects/obmalloc.c (renaming them of course). Use them in a few places to micro-optimize vectorcall.

---------- components: Interpreter Core messages: 349108 nosy: Mark.Shannon, inada.naoki, jdemeyer priority: normal severity: normal status: open title: Micro-optimize vectorcall using PY_LIKELY versions: Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Tue Aug 6 13:38:00 2019 From: report at bugs.python.org (hai shi) Date: Tue, 06 Aug 2019 17:38:00 +0000 Subject: [New-bugs-announce] [issue37775] update doc of compileall Message-ID: <1565113080.48.0.890450145581.issue37775@roundup.psfhosted.org>

New submission from hai shi :

Due to https://github.com/python/cpython/commit/a6b3ec5b6d4f6387820fccc570eea08b9615620d, we need to update invalidation_mode's value from py_compile.PycInvalidationMode.TIMESTAMP to None in the compile_xx functions.

---------- assignee: docs at python components: Documentation messages: 349120 nosy: docs at python, shihai1991 priority: normal severity: normal status: open title: update doc of compileall versions: Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Tue Aug 6 13:43:30 2019 From: report at bugs.python.org (Joannah Nanjekye) Date: Tue, 06 Aug 2019 17:43:30 +0000 Subject: [New-bugs-announce] [issue37776] Test Py_Finalize() from a subinterpreter Message-ID: <1565113410.42.0.547052767114.issue37776@roundup.psfhosted.org>

New submission from Joannah Nanjekye :

Am opening a test request from @ncoghlan from the discussion on issue 36225. There is a need to add a test that exercises what happens when Py_Finalize() is called from a sub-interpreter rather than the main interpreter.
---------- messages: 349123 nosy: eric.snow, nanjekyejoannah, ncoghlan priority: normal severity: normal status: open title: Test Py_Finalize() from a subinterpreter _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 6 15:49:52 2019 From: report at bugs.python.org (Casey) Date: Tue, 06 Aug 2019 19:49:52 +0000 Subject: [New-bugs-announce] [issue37777] imap breaks on OpenSSL 1.1.1 when SNI is enforced Message-ID: <1565120992.49.0.0764506500298.issue37777@roundup.psfhosted.org> New submission from Casey : OpenSSL 1.1.1 is an LTS release that will see long maintenance, and Ubuntu 18.04 LTS has now upgraded from 1.1.0 to 1.1.1. However, with this upgrade, TLS 1.3 allows email clients to require an SNI for the handshake to succeed. Because the 2.7 imap module does not enforce or provide SNI to the handshake, Python 2.7 with OpenSSL 1.1.1 will break if an email client requires the SNI hostname. Relevant 2.7 file: https://github.com/python/cpython/blob/2.7/Lib/imaplib.py Right now, the only email client that enforces an SNI header to connect is GMail, and this is why no SSL or imap tests would currently fail due to this issue. This issue was addressed in Python 3.4 but not backported as far as I've been able to tell: https://github.com/python/cpython/commit/7243b574e5fc6f9ae68dc5ebd8252047b8e78e3b With a few releases still planned for Python 2.7 before EOL according to Pep 373, while this is not directly a security issue it does block the use of the latest OpenSSL package and seems like a useful inclusion to the last few releases. Happy to submit a backport PR (in progress) if that's likely. Reproduce steps here: https://github.com/CaseyFaist/reproduceSNIcase ---------- assignee: christian.heimes components: SSL messages: 349131 nosy: alex, cfactoid, christian.heimes, dstufft, janssen priority: normal severity: normal status: open title: imap breaks on OpenSSL 1.1.1 when SNI is enforced type: crash versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 6 16:07:14 2019 From: report at bugs.python.org (Steve Dower) Date: Tue, 06 Aug 2019 20:07:14 +0000 Subject: [New-bugs-announce] [issue37778] Windows Store package uses wrong icons for file association Message-ID: <1565122034.52.0.620169066443.issue37778@roundup.psfhosted.org> New submission from Steve Dower : No Logo element is specified in the FileTypeAssociation element, and so files get a different icon from the regular installer. 
See https://docs.microsoft.com/en-us/uwp/schemas/appxpackage/appxmanifestschema/element-1-logo ---------- assignee: steve.dower components: Windows messages: 349132 nosy: paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal stage: needs patch status: open title: Windows Store package uses wrong icons for file association type: behavior versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 6 16:22:40 2019 From: report at bugs.python.org (=?utf-8?q?St=C3=A9phane_Blondon?=) Date: Tue, 06 Aug 2019 20:22:40 +0000 Subject: [New-bugs-announce] [issue37779] configparser: add documentation about several read() behaviour Message-ID: <1565122960.66.0.217084239893.issue37779@roundup.psfhosted.org> New submission from St?phane Blondon : The documentation is not explicit about the behaviour if several files are read by the same ConfigParser: the data are not reset between two read(). I suggest to add such information in the documentation. There is a draft: === start === When a `ConfigParser` instance make several calls of `read_file()`, `read_string()` or `read_dict()` functions, the previous data will be overriden by the new ones. Otherwise, the previous data is kept. This behaviour is equivalent to a `read()` call with several files passed to `filenames` parameter`. Example: config = configparser.ConfigParser() s = """ [spam] alpha=1 """ config.read_string(s) # dict(config["spam"]) == {'alpha': '1'} config.read_string("") # dict(config["spam"]) == {'alpha': '1'} === end === What do you think about it? I can do a PR but I wonder where is the best location in the documentation to insert it. At the end of the 'Quick start paragraph' (https://docs.python.org/3/library/configparser.html#quick-start)? Or perhaps a new paragraph after 'Fallback Values'? Other location? ---------- components: Library (Lib) messages: 349133 nosy: sblondon priority: normal severity: normal status: open title: configparser: add documentation about several read() behaviour type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 6 21:43:58 2019 From: report at bugs.python.org (wang xuancong) Date: Wed, 07 Aug 2019 01:43:58 +0000 Subject: [New-bugs-announce] [issue37780] A strange bug in eval() not present in Python 3 Message-ID: <1565142238.6.0.594105921359.issue37780@roundup.psfhosted.org> New submission from wang xuancong : We all know that since: [False, True, False].count(True) gives 1 eval('[False, True, False].count(True)') also gives 1. However, in Python 2, eval('[False, True, False].count(True)', {}, Counter()) gives 3, while eval('[False, True, False].count(True)', {}, {}) gives 1. Take note that a Counter is a special kind of defaultdict, which is again a special kind of dict. Thus, this should not alter the behaviour of eval(). This behaviour is correct in Python 3. 
---------- components: Library (Lib) messages: 349146 nosy: xuancong84 priority: normal severity: normal status: open title: A strange bug in eval() not present in Python 3 type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 6 22:07:14 2019 From: report at bugs.python.org (Inada Naoki) Date: Wed, 07 Aug 2019 02:07:14 +0000 Subject: [New-bugs-announce] [issue37781] Use "z" for PY_FORMAT_SIZE_T Message-ID: <1565143634.89.0.401096300573.issue37781@roundup.psfhosted.org> New submission from Inada Naoki : MSVC 2015 supports "z" for size_t format. I'm not sure about 2013. AIX support it too. Now "z" is portable enough. https://mail.python.org/archives/list/python-dev at python.org/thread/CAXKWESUIWJNJFLLXXWTQDUWTN3F7KOU/#QVSBYYICHYC3IFITF4QNMGIOLOGAQS6I ---------- messages: 349147 nosy: inada.naoki priority: normal severity: normal status: open title: Use "z" for PY_FORMAT_SIZE_T _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 7 05:32:13 2019 From: report at bugs.python.org (PBrudny) Date: Wed, 07 Aug 2019 09:32:13 +0000 Subject: [New-bugs-announce] [issue37782] typing.NamedTuple default parameter type issue Message-ID: <1565170333.28.0.690764361146.issue37782@roundup.psfhosted.org> New submission from PBrudny : There is an issue when NamedTuple parameter with default, which has not explicitly declared type is changed by keyword. Is that an expected behavior (no info https://docs.python.org/3.7/library/collections.html#collections.namedtuple) Used python release: Python 3.7.2 (tags/v3.7.2:9a3ffc0492, Dec 23 2018, 23:09:28) [MSC v.1916 64 bit (AMD64)] on win32 test.py: from typing import NamedTuple class MyTestedTuple(NamedTuple): example_text = "default_text" example_int: int = 3 if __name__ == '__main__': print(MyTestedTuple().example_text) fault_tuple = MyTestedTuple(example_text="text_from_call") print(fault_tuple.example_text) Call: python test.py default_text Traceback (most recent call last): File "test.py", line 11, in fault_tuple = MyTestedTuple(example_text="text_from_call") TypeError: __new__() got an unexpected keyword argument 'example_text' ---------- components: Library (Lib) files: test.py messages: 349157 nosy: PBrudny priority: normal severity: normal status: open title: typing.NamedTuple default parameter type issue type: behavior versions: Python 3.7 Added file: https://bugs.python.org/file48533/test.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 7 06:08:54 2019 From: report at bugs.python.org (=?utf-8?q?Philippe_N=C3=A9grel?=) Date: Wed, 07 Aug 2019 10:08:54 +0000 Subject: [New-bugs-announce] [issue37783] int returns error (SystemError) Message-ID: <1565172534.52.0.656651813313.issue37783@roundup.psfhosted.org> New submission from Philippe N?grel : Whenever I compile the code, I get this error: Exception has occurred: SystemError returned a result with an error set This issue occured at line 32 of the file "SaveTools.py" in this github branch: https://github.com/l3alr0g/Wave-simulator/tree/python-bug-report (sry I couldn't get the 'Github PR' option to work) ---------- components: Windows messages: 349158 nosy: Philippe N?grel, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: int returns error (SystemError) type: compile error versions: 
Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 7 06:46:55 2019 From: report at bugs.python.org (Emmanuel Coirier) Date: Wed, 07 Aug 2019 10:46:55 +0000 Subject: [New-bugs-announce] [issue37784] Compiling Python 3 with sqlite impossible when sqlite installation is in a non standard directory Message-ID: <1565174815.9.0.140806109793.issue37784@roundup.psfhosted.org> New submission from Emmanuel Coirier : When compiling sqlite with a specific prefix, Python compilation process is unable to find sqlite despite CFLAGS and LDFLAGS environment variable correctly set. The problem is in the setup.py, in the detect_modules function or the detect_sqlite function. The sqlite_inc_paths list variable (line 1351) only contains well known places for sqlite, but there is no way to include others pathes (except by modifying the source code). The inc_dirs variable is also checked. But since it is not initialized with CFLAGS env_var, the CFLAGS -I pathes are not included, then not checked for a sqlite header file. This behavior forbids compiling and installing sqlite in a specific directory. Adding the sqlite3.h path in the detect_sqlite function allows sqlite to be included in the final python compilation and install and is a possible workaround. There should be a way to add this path to the detect_sqlite without having to edit the file on the fly in order to have a working sqlite module with a non standard sqlite install directory. On a side note, why installing sqlite in some random directory ? People don't always have right to write in /usr/lib and /usr/local/lib. ---------- components: Build, Extension Modules messages: 349162 nosy: manuco priority: normal severity: normal status: open title: Compiling Python 3 with sqlite impossible when sqlite installation is in a non standard directory type: compile error versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 7 08:49:16 2019 From: report at bugs.python.org (Jakub Kulik) Date: Wed, 07 Aug 2019 12:49:16 +0000 Subject: [New-bugs-announce] [issue37785] argparse uses %s in gettext calls Message-ID: <1565182156.22.0.512922989842.issue37785@roundup.psfhosted.org> New submission from Jakub Kulik : Running xgettext on argparse.py (of any currently supported Python 3.x) return following warning: ./Lib/argparse.py: warning: 'msgid' format string with unnamed arguments cannot be properly localized: The translator cannot reorder the arguments. Please consider using a format string with named arguments, and a mapping instead of a tuple for the arguments. Same problem was already partially fixed here: https://bugs.python.org/issue10528. I guess that this occurrence was either missed or is new since. It would be nice to have this backported to all supported releases but considering incompatibility worries in issue linked above, it may be fixed only in 3.8+ (which is still nice). 
---------- components: Library (Lib) messages: 349166 nosy: kulikjak priority: normal severity: normal status: open title: argparse uses %s in gettext calls versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 7 10:23:59 2019 From: report at bugs.python.org (Paul Francis) Date: Wed, 07 Aug 2019 14:23:59 +0000 Subject: [New-bugs-announce] [issue37786] Doesn't delete PATH from System Variables (uninstallation) Message-ID: <1565187839.39.0.765902255643.issue37786@roundup.psfhosted.org> New submission from Paul Francis : Neither the 32bit nor the 64bit version of Python 3.7.4 will remove the PATH variables from the System Environment Variables of the O/S even though the uninstallation screen explicitly displays a message that infers it is doing so. Windows 10 Pro x64 v1903 ---------- components: Windows messages: 349167 nosy: Paul Francis, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Doesn't delete PATH from System Variables (uninstallation) type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 7 11:32:45 2019 From: report at bugs.python.org (Kevin Braun) Date: Wed, 07 Aug 2019 15:32:45 +0000 Subject: [New-bugs-announce] [issue37787] Minimum denormal or ** bug Message-ID: <1565191965.7.0.701349886261.issue37787@roundup.psfhosted.org> New submission from Kevin Braun : Python 3.7.3 (v3.7.3:ef4ec6ed12, Mar 25 2019, 22:22:05) [MSC v.1916 64 bit (AMD64)] I believe 2**-1074 is the smallest denormalized number for Python on my system (Windows), so I would expect 2**-1075 to yield 0.0, but it does not. Instead: >>> 2**-1074==2**-1075 True >>> (2**-1074).hex() '0x0.0000000000001p-1022' >>> (2**-1075).hex() '0x0.0000000000001p-1022' And, the above is not consistent with the following: >>> (2**-1074)/2 0.0 >>> (2**-1074)/2 == 2**-1075 False >>> 1/2**1075 0.0 >>> 1/2**1075 == 2**-1075 False Given the above observations, I suspect there is a bug in **. ---------- components: Interpreter Core messages: 349168 nosy: Kevin Braun priority: normal severity: normal status: open title: Minimum denormal or ** bug type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 7 12:22:52 2019 From: report at bugs.python.org (Anselm Kruis) Date: Wed, 07 Aug 2019 16:22:52 +0000 Subject: [New-bugs-announce] [issue37788] fix for bpo-36402 (threading._shutdown() race condition) causes reference leak Message-ID: <1565194972.64.0.065323919799.issue37788@roundup.psfhosted.org> New submission from Anselm Kruis : Starting with commit 468e5fec (bpo-36402: Fix threading._shutdown() race condition (GH-13948)) the following trivial test case leaks one reference and one memory block. class MiscTestCase(unittest.TestCase): def test_without_join(self): # Test that a thread without join does not leak references. # Use a debug build and run "python -m test -R: test_threading" threading.Thread().start() Attached is a patch, that adds this test case to Lib/test/test_threading.py. After you apply this patch "python -m test -R: test_threading" leaks one (additional) reference. This leak is also present in Python 3.7.4 and 3.8. 
I'm not sure, if it correct not to join a thread, but it did work flawlessly and didn't leak in previous releases. I didn't analyse the root cause yet. ---------- components: Library (Lib) files: threading-leak-test-case.diff keywords: patch messages: 349173 nosy: anselm.kruis priority: normal severity: normal status: open title: fix for bpo-36402 (threading._shutdown() race condition) causes reference leak type: resource usage versions: Python 3.7, Python 3.8, Python 3.9 Added file: https://bugs.python.org/file48534/threading-leak-test-case.diff _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 7 15:22:07 2019 From: report at bugs.python.org (Joannah Nanjekye) Date: Wed, 07 Aug 2019 19:22:07 +0000 Subject: [New-bugs-announce] [issue37789] Update doc strings for test.bytecode_helper Message-ID: <1565205727.21.0.329229994167.issue37789@roundup.psfhosted.org> New submission from Joannah Nanjekye : I want to believe there is a mistake in the doc strings for these methods: def assertInBytecode(self, x, opname, argval=_UNSPECIFIED): """Returns instr if op is found, otherwise throws AssertionError""" for instr in dis.get_instructions(x): if instr.opname == opname: if argval is _UNSPECIFIED or instr.argval == argval: return instr disassembly = self.get_disassembly_as_string(x) if argval is _UNSPECIFIED: msg = '%s not found in bytecode:\n%s' % (opname, disassembly) else: msg = '(%s,%r) not found in bytecode:\n%s' msg = msg % (opname, argval, disassembly) self.fail(msg) def assertNotInBytecode(self, x, opname, argval=_UNSPECIFIED): """Throws AssertionError if op is found""" for instr in dis.get_instructions(x): if instr.opname == opname: disassembly = self.get_disassembly_as_string(x) if argval is _UNSPECIFIED: msg = '%s occurs in bytecode:\n%s' % (opname, disassembly) elif instr.argval == argval: msg = '(%s,%r) occurs in bytecode:\n%s' msg = msg % (opname, argval, disassembly) self.fail(msg) It is supported to refer to *opname* not *op*. Either the method signatures or the doc strings have to be updated. I stand to be corrected If wrong though. ---------- messages: 349193 nosy: nanjekyejoannah priority: normal severity: normal status: open title: Update doc strings for test.bytecode_helper _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 7 16:36:00 2019 From: report at bugs.python.org (Alexander Pyhalov) Date: Wed, 07 Aug 2019 20:36:00 +0000 Subject: [New-bugs-announce] [issue37790] subprocess.Popen() is extremely slow Message-ID: <1565210160.36.0.538688157223.issue37790@roundup.psfhosted.org> New submission from Alexander Pyhalov : We've moved illumos-gate wsdiff python tool from Python 2 to Python 3. The tool does the following - for each file from old and new proto area compares file attributes to find differences in binary otput (spawning elfdump, dump and other utilities). Performance has degraded in two times when commands.getstatusoutput() was replaced by subprocess.getstatusoutput(). os.popen() now is Popen() wrapper, so it also has poor performance. Even naive popen() implementation using os.system() and os.mkstemp() behaved much more efficiently (about two times faster e.g. 20 minuts vs 40 minutes for single tool pass). 
---------- components: Library (Lib) messages: 349197 nosy: Alexander.Pyhalov priority: normal severity: normal status: open title: subprocess.Popen() is extremely slow versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 8 04:20:45 2019 From: report at bugs.python.org (Marco Sulla) Date: Thu, 08 Aug 2019 08:20:45 +0000 Subject: [New-bugs-announce] [issue37791] Propose to deprecate ignore_errors, Message-ID: <1565252445.58.0.903646921674.issue37791@roundup.psfhosted.org> Change by Marco Sulla : ---------- nosy: Marco Sulla priority: normal severity: normal status: open title: Propose to deprecate ignore_errors, _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 8 05:57:04 2019 From: report at bugs.python.org (Marco Sulla) Date: Thu, 08 Aug 2019 09:57:04 +0000 Subject: [New-bugs-announce] [issue37792] xml.etree.ElementTree.Element.__eq__ does compare only objects identity Message-ID: <1565258224.48.0.98423013878.issue37792@roundup.psfhosted.org> New submission from Marco Sulla : Currectly, even if two `Element`s elem1 and elem2 are different objects but the tree is identical, elem1 == elem2 returns False. The only effective way to compare two `Element`s is ElementTree.tostring(elem1) == ElementTree.tostring(elem2) Furthermore, from 3.8 this could be not true anymore, since the order of insertion of attributes will be preserved. So if I simply wrote a tag with two identical attributes, but with different order, the trick will not work anymore. Is it so much complicated to implement an __eq__ for `Element` that traverse its tree? PS: some random remarks about xml.etree.ElementTree module: 1. why `fromstring` and `fromstringlist` separated functions? `fromstring` could use duck typing for the main argument, and `fromstringlist` deprecated. 2. `SubElement`: why the initial is a capital letter? It seems the constructor of a different class, while it's a factory function. I'll change it to `subElement` and deprecate `SubElement` ---------- components: Library (Lib) messages: 349230 nosy: Marco Sulla priority: normal severity: normal status: open title: xml.etree.ElementTree.Element.__eq__ does compare only objects identity versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 8 09:30:55 2019 From: report at bugs.python.org (Eric V. Smith) Date: Thu, 08 Aug 2019 13:30:55 +0000 Subject: [New-bugs-announce] [issue37793] argparse uses "container object", should be "iterable" Message-ID: <1565271055.06.0.436207643911.issue37793@roundup.psfhosted.org> New submission from Eric V. Smith : https://docs.python.org/3/library/argparse.html#choices says "These can be handled by passing a container object as the choices keyword argument to add_argument()". I think this should be "iterable" instead. Internally, argparse reads the iterable and converts it to a list so that it can read it multiple times (among other reasons, I'm sure). One of the examples uses range(), which is not a container. "container" is also used in https://docs.python.org/3/library/argparse.html#the-add-argument-method Bonus points for fixing the docstring in argparse.py. I didn't check if anywhere else in that file needs to be fixed. 
---------- assignee: docs at python components: Documentation keywords: newcomer friendly messages: 349234 nosy: docs at python, eric.smith priority: normal severity: normal status: open title: argparse uses "container object", should be "iterable" versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 8 13:14:02 2019 From: report at bugs.python.org (Nikita Kniazev) Date: Thu, 08 Aug 2019 17:14:02 +0000 Subject: [New-bugs-announce] [issue37794] Replace /Ox with /O2 Message-ID: <1565284442.44.0.244166862045.issue37794@roundup.psfhosted.org> New submission from Nikita Kniazev : The /O2 is a superset of /Ox with additional /GF and /Gy switches which enables strings and functions deduplication and almost always are favorable optimizations without downsides. https://docs.microsoft.com/en-us/cpp/build/reference/ox-full-optimization ---------- components: Distutils messages: 349243 nosy: Kojoley, dstufft, eric.araujo priority: normal severity: normal status: open title: Replace /Ox with /O2 type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 8 17:57:48 2019 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Thu, 08 Aug 2019 21:57:48 +0000 Subject: [New-bugs-announce] [issue37795] Fix deprecation warnings causing the test suite to fail when running with -Werror Message-ID: <1565301468.37.0.920953443916.issue37795@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : The test suite is failing when running with python -Werror due to some uncatched DeprecationWarnings. ---------- components: Tests messages: 349261 nosy: pablogsal priority: normal severity: normal status: open title: Fix deprecation warnings causing the test suite to fail when running with -Werror versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 8 20:02:55 2019 From: report at bugs.python.org (Michael Kleehammer) Date: Fri, 09 Aug 2019 00:02:55 +0000 Subject: [New-bugs-announce] [issue37796] ModuleFinder does not resolve ".." correctly Message-ID: <1565308975.62.0.708639838016.issue37796@roundup.psfhosted.org> New submission from Michael Kleehammer : The modulefinder module does not handle relative directories properly. The error I found is when one subpackage attempts to import from a sibling subpackage using the form from ..language import ( DirectiveDefinitionNode, ... ) In this example, it would report "language.DirectiveDefinitionNode" is missing. It correctly resolves the names when importing modules, but when an import fails because it is a variable or function, it records the name incorrectly and cannot filter it out later. I've attached a small test case and there is a README describing the test and results. ---------- components: Library (Lib) files: test.tar.gz messages: 349268 nosy: mkleehammer priority: normal severity: normal status: open title: ModuleFinder does not resolve ".." 
correctly type: behavior versions: Python 3.7, Python 3.8 Added file: https://bugs.python.org/file48535/test.tar.gz _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 8 20:30:15 2019 From: report at bugs.python.org (skreft) Date: Fri, 09 Aug 2019 00:30:15 +0000 Subject: [New-bugs-announce] [issue37797] Add name attribute to NameError Message-ID: <1565310615.1.0.816641824281.issue37797@roundup.psfhosted.org> New submission from skreft : PEP 473 was recently rejected (https://mail.python.org/pipermail/python-dev/2019-March/156692.html) because it was too broad and was suggested to be broken down in smaller issues. This issue is requesting the addition of the optional attribute `name` to `NameError`. The idea of having a structured attribute is to allow tools to act upon these exceptions. For example you could imagine a test runner which detect typos and suggests the right name to use. There is a current PR (https://github.com/python/cpython/pull/6271) adding this functionality, but it may need to be rebased as it has been awaiting a reply since April last year. ---------- components: Interpreter Core messages: 349270 nosy: skreft priority: normal pull_requests: 14921 severity: normal status: open title: Add name attribute to NameError type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 9 00:43:26 2019 From: report at bugs.python.org (Raymond Hettinger) Date: Fri, 09 Aug 2019 04:43:26 +0000 Subject: [New-bugs-announce] [issue37798] Add C fastpath for statistics.NormalDist.inv_cdf() Message-ID: <1565325806.95.0.315734354533.issue37798@roundup.psfhosted.org> New submission from Raymond Hettinger : Create new module: Modules/_statisticsmodule.c Add a function: _normal_dist_inv_cdf(p, mu, sigma) |-> x Mostly, it should be a cut-and-paste from the pure Python version, just add argument processing and semi-colons. Expect to measure a manyfold speedup. ---------- components: Extension Modules keywords: easy (C) messages: 349273 nosy: rhettinger, steven.daprano priority: normal severity: normal status: open title: Add C fastpath for statistics.NormalDist.inv_cdf() versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 9 05:38:48 2019 From: report at bugs.python.org (=?utf-8?b?R8Opcnk=?=) Date: Fri, 09 Aug 2019 09:38:48 +0000 Subject: [New-bugs-announce] [issue37799] Renaming Doc/reference/ to Doc/language/ Message-ID: <1565343528.85.0.0920762754632.issue37799@roundup.psfhosted.org> New submission from G?ry : The page https://docs.python.org/3/ lists these two parts: - Library Reference - Language Reference So both parts are "references". However in the cpython GitHub repository, the Doc/ directory contains these two directories: - library/ - reference/ So to be consistent, shouldn't we rename the Doc/reference/ directory to Doc/language/? 
---------- assignee: docs at python components: Documentation messages: 349280 nosy: docs at python, maggyero priority: normal severity: normal status: open title: Renaming Doc/reference/ to Doc/language/ versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 9 06:11:12 2019 From: report at bugs.python.org (=?utf-8?b?R8Opcnk=?=) Date: Fri, 09 Aug 2019 10:11:12 +0000 Subject: [New-bugs-announce] [issue37800] Clean up the documentation on module attributes Message-ID: <1565345472.1.0.903395402211.issue37800@roundup.psfhosted.org> Change by G?ry : ---------- assignee: docs at python components: Documentation nosy: docs at python, eric.snow, maggyero priority: normal pull_requests: 14923 severity: normal status: open title: Clean up the documentation on module attributes type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 9 07:53:21 2019 From: report at bugs.python.org (Vadim Engelson) Date: Fri, 09 Aug 2019 11:53:21 +0000 Subject: [New-bugs-announce] [issue37801] Compilation on MINGW64 fails (CODESET, wcstok, ...) Message-ID: <1565351601.24.0.515340331062.issue37801@roundup.psfhosted.org> New submission from Vadim Engelson : Compilation on MINGW64 fails (CODESET,wcstok,...) I am using the latest MINGW64 (http://repo.msys2.org/distrib/x86_64/msys2-x86_64-20190524.exe) Versions: Python-3.7.2, Python-3.8.0b3 $ gcc -v Using built-in specs. COLLECT_GCC=C:\msys64\mingw64\bin\gcc.exe COLLECT_LTO_WRAPPER=C:/msys64/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/9.1.0/lto-wrapper.exe Target: x86_64-w64-mingw32 gcc version 9.1.0 (Rev3, Built by MSYS2 project) Result of make: Python/initconfig.c: In function 'config_get_locale_encoding': Python/initconfig.c:1427:28: error: implicit declaration of function 'nl_langinfo' [-Werror=implicit-function-declaration] 1427 | const char *encoding = nl_langinfo(CODESET); | ^~~~~~~~~~~ Python/initconfig.c:1427:40: error: 'CODESET' undeclared (first use in this function); did you mean 'ECONNRESET'? 
1427 | const char *encoding = nl_langinfo(CODESET); | ^~~~~~~ | ECONNRESET Python/initconfig.c:1427:40: note: each undeclared identifier is reported only once for each function it appears in Python/initconfig.c: In function 'config_init_env_warnoptions': Python/initconfig.c:1992:18: error: too many arguments to function 'wcstok' 1992 | # define WCSTOK wcstok | ^~~~~~ Python/initconfig.c:2015:20: note: in expansion of macro 'WCSTOK' 2015 | for (warning = WCSTOK(env, L",", &context); | ^~~~~~ In file included from ./Include/Python.h:30, from Python/initconfig.c:1: C:/msys64/mingw64/x86_64-w64-mingw32/include/string.h:147:20: note: declared here 147 | wchar_t *__cdecl wcstok(wchar_t * __restrict__ _Str,const wchar_t * __restrict__ _Delim) __MINGW_ATTRIB_DEPRECATED_SEC_WARN; | ^~~~~~ Python/initconfig.c:1992:18: error: too many arguments to function 'wcstok' 1992 | # define WCSTOK wcstok | ^~~~~~ Python/initconfig.c:2017:20: note: in expansion of macro 'WCSTOK' 2017 | warning = WCSTOK(NULL, L",", &context)) | ^~~~~~ In file included from ./Include/Python.h:30, from Python/initconfig.c:1: C:/msys64/mingw64/x86_64-w64-mingw32/include/string.h:147:20: note: declared here 147 | wchar_t *__cdecl wcstok(wchar_t * __restrict__ _Str,const wchar_t * __restrict__ _Delim) __MINGW_ATTRIB_DEPRECATED_SEC_WARN; | ^~~~~~ cc1.exe: some warnings being treated as errors make: *** [Makefile:1703: Python/initconfig.o] Error 1 ---------- components: Build, Interpreter Core, Windows messages: 349282 nosy: paul.moore, steve.dower, tim.golden, vengelson, zach.ware priority: normal severity: normal status: open title: Compilation on MINGW64 fails (CODESET,wcstok,...) type: compile error versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 9 09:27:40 2019 From: report at bugs.python.org (Sergey Fedoseev) Date: Fri, 09 Aug 2019 13:27:40 +0000 Subject: [New-bugs-announce] [issue37802] micro-optimization of PyLong_FromSize_t() Message-ID: <1565357260.71.0.37857302813.issue37802@roundup.psfhosted.org> New submission from Sergey Fedoseev : Currently PyLong_FromSize_t() uses PyLong_FromLong() for values < PyLong_BASE. It's suboptimal because PyLong_FromLong() needs to handle the sign. Removing PyLong_FromLong() call and handling small ints directly in PyLong_FromSize_t() makes it faster: $ python -m perf timeit -s "from itertools import repeat; _len = repeat(None, 2).__length_hint__" "_len()" --compare-to=../cpython-master/venv/bin/python --duplicate=10000 /home/sergey/tmp/cpython-master/venv/bin/python: ..................... 18.7 ns +- 0.3 ns /home/sergey/tmp/cpython-dev/venv/bin/python: ..................... 16.7 ns +- 0.1 ns Mean +- std dev: [/home/sergey/tmp/cpython-master/venv/bin/python] 18.7 ns +- 0.3 ns -> [/home/sergey/tmp/cpython-dev/venv/bin/python] 16.7 ns +- 0.1 ns: 1.12x faster (-10%) $ python -m perf timeit -s "from itertools import repeat; _len = repeat(None, 2**10).__length_hint__" "_len()" --compare-to=../cpython-master/venv/bin/python --duplicate=10000 /home/sergey/tmp/cpython-master/venv/bin/python: ..................... 26.2 ns +- 0.0 ns /home/sergey/tmp/cpython-dev/venv/bin/python: ..................... 
25.0 ns +- 0.7 ns Mean +- std dev: [/home/sergey/tmp/cpython-master/venv/bin/python] 26.2 ns +- 0.0 ns -> [/home/sergey/tmp/cpython-dev/venv/bin/python] 25.0 ns +- 0.7 ns: 1.05x faster (-5%) $ python -m perf timeit -s "from itertools import repeat; _len = repeat(None, 2**30).__length_hint__" "_len()" --compare-to=../cpython-master/venv/bin/python --duplicate=10000 /home/sergey/tmp/cpython-master/venv/bin/python: ..................... 25.6 ns +- 0.1 ns /home/sergey/tmp/cpython-dev/venv/bin/python: ..................... 25.6 ns +- 0.0 ns Mean +- std dev: [/home/sergey/tmp/cpython-master/venv/bin/python] 25.6 ns +- 0.1 ns -> [/home/sergey/tmp/cpython-dev/venv/bin/python] 25.6 ns +- 0.0 ns: 1.00x faster (-0%) This change makes PyLong_FromSize_t() consistently faster than PyLong_FromSsize_t(). So it might make sense to replace PyLong_FromSsize_t() with PyLong_FromSize_t() in __length_hint__() implementations and other similar cases. For example: $ python -m perf timeit -s "_len = iter(bytes(2)).__length_hint__" "_len()" --compare-to=../cpython-master/venv/bin/python --duplicate=10000 /home/sergey/tmp/cpython-master/venv/bin/python: ..................... 19.4 ns +- 0.3 ns /home/sergey/tmp/cpython-dev/venv/bin/python: ..................... 17.3 ns +- 0.1 ns Mean +- std dev: [/home/sergey/tmp/cpython-master/venv/bin/python] 19.4 ns +- 0.3 ns -> [/home/sergey/tmp/cpython-dev/venv/bin/python] 17.3 ns +- 0.1 ns: 1.12x faster (-11%) $ python -m perf timeit -s "_len = iter(bytes(2**10)).__length_hint__" "_len()" --compare-to=../cpython-master/venv/bin/python --duplicate=10000 /home/sergey/tmp/cpython-master/venv/bin/python: ..................... 26.3 ns +- 0.1 ns /home/sergey/tmp/cpython-dev/venv/bin/python: ..................... 25.3 ns +- 0.2 ns Mean +- std dev: [/home/sergey/tmp/cpython-master/venv/bin/python] 26.3 ns +- 0.1 ns -> [/home/sergey/tmp/cpython-dev/venv/bin/python] 25.3 ns +- 0.2 ns: 1.04x faster (-4%) $ python -m perf timeit -s "_len = iter(bytes(2**30)).__length_hint__" "_len()" --compare-to=../cpython-master/venv/bin/python --duplicate=10000 /home/sergey/tmp/cpython-master/venv/bin/python: ..................... 27.6 ns +- 0.1 ns /home/sergey/tmp/cpython-dev/venv/bin/python: ..................... 
26.0 ns +- 0.1 ns Mean +- std dev: [/home/sergey/tmp/cpython-master/venv/bin/python] 27.6 ns +- 0.1 ns -> [/home/sergey/tmp/cpython-dev/venv/bin/python] 26.0 ns +- 0.1 ns: 1.06x faster (-6%) ---------- components: Interpreter Core messages: 349285 nosy: sir-sigurd priority: normal severity: normal status: open title: micro-optimization of PyLong_FromSize_t() type: performance versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 9 10:52:52 2019 From: report at bugs.python.org (daniel hahler) Date: Fri, 09 Aug 2019 14:52:52 +0000 Subject: [New-bugs-announce] [issue37803] "python -m pdb --help" does not work Message-ID: <1565362372.8.0.519393722004.issue37803@roundup.psfhosted.org> New submission from daniel hahler : The long options passed to `getopt.getopt` should not include the leading dashes: % python -m pdb --help Traceback (most recent call last): File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/usr/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/home/daniel/src/pdbpp/pdb.py", line 1672, in pdb.main() File "/usr/lib/python3.7/pdb.py", line 1662, in main opts, args = getopt.getopt(sys.argv[1:], 'mhc:', ['--help', '--command=']) File "/usr/lib/python3.7/getopt.py", line 93, in getopt opts, args = do_longs(opts, args[0][2:], longopts, args[1:]) File "/usr/lib/python3.7/getopt.py", line 157, in do_longs has_arg, opt = long_has_args(opt, longopts) File "/usr/lib/python3.7/getopt.py", line 174, in long_has_args raise GetoptError(_('option --%s not recognized') % opt, opt) getopt.GetoptError: option --help not recognized (it works in Python 2.7) ---------- components: Library (Lib) messages: 349291 nosy: blueyed priority: normal severity: normal status: open title: "python -m pdb --help" does not work type: behavior versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 9 11:34:12 2019 From: report at bugs.python.org (Dong-hee Na) Date: Fri, 09 Aug 2019 15:34:12 +0000 Subject: [New-bugs-announce] [issue37804] Remove isAlive Message-ID: <1565364852.41.0.891047152339.issue37804@roundup.psfhosted.org> New submission from Dong-hee Na : As we discussed https://bugs.python.org/issue35283. Now is the time to remove Thread.isAlive. If it is okay, I 'd like to work on this issue. 
---------- messages: 349293 nosy: asvetlov, corona10, vstinner priority: normal severity: normal status: open title: Remove isAlive versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 9 17:54:43 2019 From: report at bugs.python.org (Julian Berman) Date: Fri, 09 Aug 2019 21:54:43 +0000 Subject: [New-bugs-announce] [issue37805] json.dump(..., skipkeys=True) has no unit tests Message-ID: <1565387683.62.0.51432990737.issue37805@roundup.psfhosted.org> New submission from Julian Berman : Looks like there possibly are upstream tests that could be pulled in with modification: https://github.com/simplejson/simplejson/blob/00ed20da4c0e5f0396661f73482418651ff4d8c7/simplejson/tests/test_dump.py#L53-L66 (Found via https://bitbucket.org/pypy/pypy/issues/3052/json-skipkeys-true-results-in-invalid-json) ---------- components: Tests messages: 349317 nosy: Julian priority: normal severity: normal status: open title: json.dump(..., skipkeys=True) has no unit tests versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 10 01:26:57 2019 From: report at bugs.python.org (Valerio G) Date: Sat, 10 Aug 2019 05:26:57 +0000 Subject: [New-bugs-announce] [issue37806] Infinite recursion with typing.get_type_hints Message-ID: <1565414817.02.0.248356075738.issue37806@roundup.psfhosted.org> New submission from Valerio G : I encountered one condition where calling get_type_hints causes infinite recursion when dealing with forward declaration and cyclic types. Here's an example: from typing import Union, List, get_type_hints ValueList = List['Value'] Value = Union[str, ValueList] class A: a: List[Value] get_type_hints(A, globals(), locals()) This reaches the recursion limit as of 3.8.0b2. It seems that the combining _GenericAlias with ForwardRef is what triggers this condition: ForwardRef._evaluate sets __forward_value__ on its first call on a given instance _GenericAlias tries to compare its args post evaluation If one of the arguments is a previously evaluated forward reference containing a cycle, then it will infinitely recurse in the hash function when building a frozen set for the comparison. The above is, of course, a very artificial example, but I can imagine this happening a lot in code with trees or similar structures. My initial reproduction case was using _eval_type to resolve forward references returned by get_args (side note: it would be nice to have a public function to do that). ---------- components: Library (Lib) messages: 349330 nosy: vg0377467 priority: normal severity: normal status: open title: Infinite recursion with typing.get_type_hints versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 10 02:24:05 2019 From: report at bugs.python.org (Raymond Hettinger) Date: Sat, 10 Aug 2019 06:24:05 +0000 Subject: [New-bugs-announce] [issue37807] Make hash() return a non-negative number Message-ID: <1565418246.0.0.695379140642.issue37807@roundup.psfhosted.org> New submission from Raymond Hettinger : The existing API for the builtin hash() function is inconvenient. When implementing structure that use a hash value, you never want a negative number. Running abs(hash(x)) works but is awkward, slower, and loses one bit of entropy. 
---------- components: Interpreter Core messages: 349333 nosy: rhettinger, tim.peters priority: normal severity: normal status: open title: Make hash() return a non-negative number type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 10 04:17:59 2019 From: report at bugs.python.org (Steven D'Aprano) Date: Sat, 10 Aug 2019 08:17:59 +0000 Subject: [New-bugs-announce] [issue37808] Deprecate unbound super methods Message-ID: <1565425079.51.0.763493261864.issue37808@roundup.psfhosted.org> New submission from Steven D'Aprano : As per the discussion here, let's deprecate unbound super methods. https://discuss.python.org/t/is-it-time-to-deprecate-unbound-super-methods/1833 ---------- messages: 349338 nosy: steven.daprano priority: normal severity: normal status: open title: Deprecate unbound super methods versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 10 04:35:00 2019 From: report at bugs.python.org (Steven D'Aprano) Date: Sat, 10 Aug 2019 08:35:00 +0000 Subject: [New-bugs-announce] [issue37809] Alias typing.NamedTuple to collections Message-ID: <1565426100.13.0.700794454326.issue37809@roundup.psfhosted.org> New submission from Steven D'Aprano : As discussed in the thread starting here with Guido's message: https://mail.python.org/archives/list/python-ideas at python.org/message/WTBXYJJ7CSGDLLJHHPHSH5ZCCA4C7QEP/ and these two follow-ups: https://mail.python.org/archives/list/python-ideas at python.org/message/SXF3RKYQ6DXFKX2RFMUDUKAWQEGXGHP3/ https://mail.python.org/archives/list/python-ideas at python.org/message/VRCH56CHE4P7IAWO7QRA6TDBISLONGQT/ we should do a better job of advertising typing.NamedTuple by aliasing it in the collections module. (I'm not sure if this can still go into 3.8, since it's not really a new feature, or if its too late.) ---------- messages: 349339 nosy: gvanrossum, levkivskyi, steven.daprano priority: normal severity: normal status: open title: Alias typing.NamedTuple to collections versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 10 14:59:04 2019 From: report at bugs.python.org (Anthony Sottile) Date: Sat, 10 Aug 2019 18:59:04 +0000 Subject: [New-bugs-announce] [issue37810] ndiff reports incorrect location when diff strings contain tabs Message-ID: <1565463544.19.0.692436897534.issue37810@roundup.psfhosted.org> New submission from Anthony Sottile : Here's an example from difflib import ndiff def main(): x = '\tx\t=\ty\n\t \t \t^' y = '\tx\t=\ty\n\t \t \t^\n' print('\n'.join( line.rstrip('\n') for line in ndiff(x.splitlines(True), y.splitlines(True))) ) if __name__ == '__main__': exit(main()) Current output: $ python3.8 t.py x = y - ^ + ^ ? + Expected output: $ ./python t.py x = y - ^ + ^ ? 
+ Found this while implementing a similar thing for flake8 here: https://gitlab.com/pycqa/flake8/merge_requests/339 and while debugging with pytest ---------- messages: 349353 nosy: Anthony Sottile priority: normal severity: normal status: open title: ndiff reports incorrect location when diff strings contain tabs versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 10 16:29:31 2019 From: report at bugs.python.org (Artem Khramov) Date: Sat, 10 Aug 2019 20:29:31 +0000 Subject: [New-bugs-announce] [issue37811] [FreeBSD, OSX] Socket module: incorrect usage of poll(2) Message-ID: <1565468971.21.0.481323607597.issue37811@roundup.psfhosted.org> New submission from Artem Khramov : FreeBSD implementation of poll(2) restricts timeout argument to be either zero, or positive, or equal to INFTIM (-1). Unless otherwise overridden, socket timeout defaults to -1. This value is then converted to milliseconds (-1000) and used as argument to the poll syscall. poll returns EINVAL (22), and the connection fails. I have discovered this bug during the EINTR handling testing, and have naturally found a repro code in https://bugs.python.org/issue23618 (see connect_eintr.py, attached). On GNU/Linux, the example runs as expected. ---------- files: connect_eintr.py messages: 349356 nosy: akhramov priority: normal severity: normal status: open title: [FreeBSD, OSX] Socket module: incorrect usage of poll(2) versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 Added file: https://bugs.python.org/file48537/connect_eintr.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 10 17:07:30 2019 From: report at bugs.python.org (Greg Price) Date: Sat, 10 Aug 2019 21:07:30 +0000 Subject: [New-bugs-announce] [issue37812] Make implicit returns explicit in longobject.c (in CHECK_SMALL_INT) Message-ID: <1565471250.77.0.491506074373.issue37812@roundup.psfhosted.org> New submission from Greg Price : In longobject.c we have the following usage a few times: PyObject * PyLong_FromLong(long ival) { PyLongObject *v; // ... more locals ... CHECK_SMALL_INT(ival); if (ival < 0) { /* negate: can't write this as abs_ival = -ival since that invokes undefined behaviour when ival is LONG_MIN */ abs_ival = 0U-(unsigned long)ival; sign = -1; } else { // ... etc. etc. The CHECK_SMALL_INT macro contains a `return`, so the function can actually return before it reaches any of the other code. #define CHECK_SMALL_INT(ival) \ do if (-NSMALLNEGINTS <= ival && ival < NSMALLPOSINTS) { \ return get_small_int((sdigit)ival); \ } while(0) That's not even an error condition -- in fact it's the fast, hopefully reasonably-common, path. An implicit return like this is pretty surprising for the reader. And it only takes one more line (plus a close-brace) to make it explicit: if (IS_SMALL_INT(ival)) { return get_small_int((sdigit)ival); } so that seems like a much better trade. Patch written, will post shortly. 
---------- components: Interpreter Core messages: 349359 nosy: Greg Price priority: normal severity: normal status: open title: Make implicit returns explicit in longobject.c (in CHECK_SMALL_INT) versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 10 17:46:12 2019 From: report at bugs.python.org (=?utf-8?b?R8Opcnk=?=) Date: Sat, 10 Aug 2019 21:46:12 +0000 Subject: [New-bugs-announce] [issue37813] PEP 7 line-breaking with binary operations contradicts Knuth's rule Message-ID: <1565473572.7.0.81062665759.issue37813@roundup.psfhosted.org> New submission from G?ry : I have just read PEP 7 and noticed that its line-breaking recommandation in presence of binary operations seems to contradict its analogue in PEP 8 which follows Knuth's rule. PEP 7 (https://www.python.org/dev/peps/pep-0007/#code-lay-out): > When you break a long expression at a binary operator, the operator > goes at the end of the previous line, and braces should be formatted > as shown. E.g.: > > if (type->tp_dictoffset != 0 && base->tp_dictoffset == 0 && > type->tp_dictoffset == b_size && > (size_t)t_size == b_size + sizeof(PyObject *)) > { > return 0; /* "Forgive" adding a __dict__ only */ > } PEP 8 (https://www.python.org/dev/peps/pep-0008/#should-a-line-break-before-or-after-a-binary-operator): > To solve this readability problem, mathematicians and their > publishers follow the opposite convention. Donald Knuth explains the > traditional rule in his Computers and Typesetting series: "Although > formulas within a paragraph always break after binary operations and > relations, displayed formulas always break before binary operations" > [3]. > > Following the tradition from mathematics usually results in more > readable code: > > # Yes: easy to match operators with operands > income = (gross_wages > + taxable_interest > + (dividends - qualified_dividends) > - ira_deduction > - student_loan_interest) > In Python code, it is permissible to break before or after a binary > operator, as long as the convention is consistent locally. For new > code Knuth's style is suggested. ---------- assignee: docs at python components: Documentation messages: 349361 nosy: docs at python, maggyero priority: normal severity: normal status: open title: PEP 7 line-breaking with binary operations contradicts Knuth's rule type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 10 20:36:34 2019 From: report at bugs.python.org (Josh Holland) Date: Sun, 11 Aug 2019 00:36:34 +0000 Subject: [New-bugs-announce] [issue37814] typing module: empty tuple syntax is undocumented Message-ID: <1565483794.67.0.80364614628.issue37814@roundup.psfhosted.org> New submission from Josh Holland : The empty tuple syntax in type annotations, `Tuple[()]`, is not obvious from the examples given in the documentation (I naively expected `Tuple[]` to work); it has been documented in PEP 484[1] and in mypy[2], but not in the documentation for the typing module. 
[1]: https://www.python.org/dev/peps/pep-0484/#the-typing-module [2]: https://github.com/python/mypy/pull/4313 ---------- assignee: docs at python components: Documentation messages: 349366 nosy: anowlcalledjosh, docs at python priority: normal severity: normal status: open title: typing module: empty tuple syntax is undocumented type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 10 20:42:33 2019 From: report at bugs.python.org (Missono Dell) Date: Sun, 11 Aug 2019 00:42:33 +0000 Subject: [New-bugs-announce] [issue37815] 'Make Test' error whe trying to install Python 3.7.4 on Linux Mint Message-ID: <1565484153.44.0.516842692215.issue37815@roundup.psfhosted.org> New submission from Missono Dell : Ran 38 tests in 1.058s FAILED (failures=1) test test_pdb failed 1 test failed again: test_pdb == Tests result: FAILURE then FAILURE == 403 tests OK. 1 test failed: test_pdb 12 tests skipped: test_devpoll test_kqueue test_msilib test_ossaudiodev test_startfile test_tix test_tk test_ttk_guionly test_winconsoleio test_winreg test_winsound test_zipfile64 1 re-run test: test_pdb Total duration: 6 min 5 sec Tests result: FAILURE then FAILURE Makefile:1076: recipe for target 'test' failed make: *** [test] Error 2 ---------- components: Installation messages: 349367 nosy: Missono Dell priority: normal severity: normal status: open title: 'Make Test' error whe trying to install Python 3.7.4 on Linux Mint type: compile error versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 10 21:06:41 2019 From: report at bugs.python.org (Glenn Linderman) Date: Sun, 11 Aug 2019 01:06:41 +0000 Subject: [New-bugs-announce] [issue37816] f-string documentation not fully accurate Message-ID: <1565485601.85.0.782571390413.issue37816@roundup.psfhosted.org> New submission from Glenn Linderman : I noticed the following description for f-strings: > Escape sequences are decoded like in ordinary string literals (except when a literal is also marked as a raw string). After decoding, the grammar for the contents of the string is: followed by lots of stuff, followed by > Backslashes are not allowed in format expressions and will raise an error: > f"newline: {ord('\n')}" # raises SyntaxError If f-strings are processed AS DESCRIBED, the \n would never seen by the format expression... but in fact, it does produce an error. PEP 498 has a more accurate description, that the {} parsing actually happens before the escape processing. The string or raw-string escape processing is done only to the literal string pieces. So the string content is first parsed for format expressions enclosed in {}, on the way, converting {{ and }} to { and } in the literal parts of the string, and then the literal parts are treated exactly like independent string literals (with regular or raw-string escaping performed), to be concatenated with the evaluation of the format expressions, and subsequent literal parts. The evaluation of the format expressions presently precludes the use of # and \ for comments and escapes, but otherwise tries to sort of by the same as a python expression possibly followed by a format mini-language part. I am not the expert here, just the messenger. 
---------- assignee: docs at python components: Documentation messages: 349370 nosy: docs at python, v+python priority: normal severity: normal status: open title: f-string documentation not fully accurate versions: Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 10 22:12:15 2019 From: report at bugs.python.org (Paul Martin) Date: Sun, 11 Aug 2019 02:12:15 +0000 Subject: [New-bugs-announce] [issue37817] create_pipe_connection and start_serving_pipe not documented Message-ID: <1565489535.63.0.517706154991.issue37817@roundup.psfhosted.org> New submission from Paul Martin : I found these two methods in the windows_events code for asyncio. Is there a reason why they don't seem to be documented, and are not included in AbstractServer? They provide a good Windows alternative to create_unix_server & create_unix_connection for inter-process communication. ---------- components: asyncio messages: 349371 nosy: asvetlov, primal, yselivanov priority: normal severity: normal status: open title: create_pipe_connection and start_serving_pipe not documented versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 10 22:54:30 2019 From: report at bugs.python.org (John Rogers) Date: Sun, 11 Aug 2019 02:54:30 +0000 Subject: [New-bugs-announce] [issue37818] Behaviors of binary bitwise operators contradicting documentation Message-ID: <1565492070.46.0.473056579761.issue37818@roundup.psfhosted.org> New submission from John Rogers : In Python Language Reference (version 3.7), section 6.9 it states that the arguments of binary bitwise operators must be integers. However, the following expressions work without error: True & False False | True True ^ True Each produces a boolean result (not integer) (False, True, False, respectively). Also I find that mixing booleans and integers does work too, though this time it produces integers. One can easily test it on Python home page's console window. I also tested it on my Linux box running version 3.5.3. So it appears that it has been overlooked for quite some time! As an aside: I do assume that boolean values are *distinct* from integers. If they are not, then my apologies! ---------- assignee: docs at python components: Documentation messages: 349372 nosy: The Blue Wizard, docs at python priority: normal severity: normal status: open title: Behaviors of binary bitwise operators contradicting documentation type: behavior versions: Python 3.5, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 11 03:01:53 2019 From: report at bugs.python.org (Raymond Hettinger) Date: Sun, 11 Aug 2019 07:01:53 +0000 Subject: [New-bugs-announce] [issue37819] as_integer_ratio() missing from fractions.Fraction() Message-ID: <1565506913.18.0.756689678567.issue37819@roundup.psfhosted.org> New submission from Raymond Hettinger : When working on Whatsnew3.8, I noticed that as_integer_ratio() had been added to ints and bools, was already present in floats and Decimals, but was missing from Fractions. IIRC, the goal was to make all of these have as a similar API so that x.as_integer_ratio() would work for all types that could support it (not complex numbers). 
---------- components: Library (Lib) keywords: easy messages: 349378 nosy: lisroach, mark.dickinson, rhettinger priority: high severity: normal status: open title: as_integer_ratio() missing from fractions.Fraction() type: enhancement versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 11 07:16:29 2019 From: report at bugs.python.org (Abdullah) Date: Sun, 11 Aug 2019 11:16:29 +0000 Subject: [New-bugs-announce] [issue37820] Unnecessary URL scheme exists to allow 'URL: reading file in urllib Message-ID: <1565522189.95.0.525978892849.issue37820@roundup.psfhosted.org> New submission from Abdullah : I am not sure if this was reported before, fixed, or even how to report this. However this issue is similar to https://bugs.python.org/issue35907 # Vulnerability PoC import urllib print urllib.urlopen('URL:/etc/passwd').read()[:30] the result is ## # User Database # # Note t I have tested the PoC on my Mac python 2.7. ---------- components: Library (Lib) messages: 349385 nosy: Alyan priority: normal severity: normal status: open title: Unnecessary URL scheme exists to allow 'URL: reading file in urllib type: security versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 11 07:17:08 2019 From: report at bugs.python.org (Tal Einat) Date: Sun, 11 Aug 2019 11:17:08 +0000 Subject: [New-bugs-announce] [issue37821] IDLE shell uses wrong namespace for completions Message-ID: <1565522228.88.0.811193115322.issue37821@roundup.psfhosted.org> New submission from Tal Einat : Currently, when running an IDLE shell with a sub-process, it will allow completing attributes of objects not actually in the shell's namespace. For example, typing "codecs." will bring up completions for the codecs module's attributes, despite that not having being imported. Further, if one uses the completion, this results in a NameError exception, since "codecs" isn't actually in the namespace. AFAICT, the intended behavior is as follows: * If a shell exists, completion should use the namespace used for evaluating code in the shell. (Note that this is slightly different when running a shell without a sub-process.) * If no shell exists (only editor windows are open), completion should use the global namespace + sys.modules. ---------- assignee: terry.reedy components: IDLE messages: 349386 nosy: taleinat, terry.reedy priority: normal severity: normal status: open title: IDLE shell uses wrong namespace for completions type: behavior versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 11 09:55:05 2019 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 11 Aug 2019 13:55:05 +0000 Subject: [New-bugs-announce] [issue37822] Add math.as_integer_ratio() Message-ID: <1565531705.4.0.857062617038.issue37822@roundup.psfhosted.org> New submission from Serhiy Storchaka : There are two interfaces to represent a number as a ratio. The numbers.Rational interface has two properties: numerator and denominator. float and Decimal do not support this interface, but they provide method as_integer_ratio() which return a 2-tuple (numerator, denominator). 
I propose to add math.as_integer_ratio() which unites both interfaces: it uses the as_integer_ratio() method if it is defined, and uses the numerator and denominator properties otherwise. It will help in applications that work with exact numbers (e.g. the fractions and statistics modules).

---------- components: Library (Lib) messages: 349390 nosy: mark.dickinson, rhettinger, serhiy.storchaka, stutzbach priority: normal severity: normal status: open title: Add math.as_integer_ratio() type: enhancement versions: Python 3.9
_______________________________________ Python tracker _______________________________________

From report at bugs.python.org Sun Aug 11 12:13:29 2019
From: report at bugs.python.org (Mariatta)
Date: Sun, 11 Aug 2019 16:13:29 +0000
Subject: [New-bugs-announce] [issue37823] Telnet documentation: fix the link to open()
Message-ID: <1565540009.68.0.907395311638.issue37823@roundup.psfhosted.org>

New submission from Mariatta :

Bug in the Telnet docs: https://docs.python.org/3/library/telnetlib.html

In the section https://docs.python.org/3/library/telnetlib.html#telnetlib.Telnet the 'open()' link should probably point to https://docs.python.org/3/library/telnetlib.html#telnetlib.Telnet.open but instead it points to https://docs.python.org/3/library/functions.html#open

This is newcomer friendly and I would prefer it be done by first-time contributors. Reported in docs mailing list: https://mail.python.org/pipermail/docs/2019-August/041817.html

---------- assignee: docs at python components: Documentation keywords: easy, newcomer friendly messages: 349393 nosy: Mariatta, docs at python priority: normal severity: normal stage: needs patch status: open title: Telnet documentation: fix the link to open() versions: Python 2.7, Python 3.7, Python 3.8, Python 3.9
_______________________________________ Python tracker _______________________________________

From report at bugs.python.org Sun Aug 11 12:32:12 2019
From: report at bugs.python.org (Serhiy Storchaka)
Date: Sun, 11 Aug 2019 16:32:12 +0000
Subject: [New-bugs-announce] [issue37824] IDLE: DeprecationWarning not handled properly
Message-ID: <1565541132.63.0.273797082992.issue37824@roundup.psfhosted.org>

New submission from Serhiy Storchaka :

A DeprecationWarning raised at compile time is output multiple times to stderr (on the console from which IDLE was run), but not to the IDLE Shell window.
For example, when enter '\e': $ ./python -m idlelib Warning (from warnings module): File "", line 1 '\e' DeprecationWarning: invalid escape sequence \e >>> Warning (from warnings module): File "", line 1 '\e' DeprecationWarning: invalid escape sequence \e >>> Warning (from warnings module): File "", line 1 '\e' DeprecationWarning: invalid escape sequence \e And when close IDLE, additional output is printed in 3.8+: >>> Exception ignored in: Traceback (most recent call last): File "/home/serhiy/py/cpython/Lib/idlelib/run.py", line 488, in close File "/home/serhiy/py/cpython/Lib/idlelib/pyshell.py", line 1019, in close File "/home/serhiy/py/cpython/Lib/idlelib/editor.py", line 1058, in close File "/home/serhiy/py/cpython/Lib/idlelib/outwin.py", line 94, in maybesave File "/home/serhiy/py/cpython/Lib/idlelib/editor.py", line 991, in get_saved AttributeError: 'NoneType' object has no attribute 'get_saved' ---------- assignee: terry.reedy components: IDLE messages: 349396 nosy: serhiy.storchaka, terry.reedy priority: normal severity: normal status: open title: IDLE: DeprecationWarning not handled properly type: behavior versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 11 12:54:19 2019 From: report at bugs.python.org (Terry J. Reedy) Date: Sun, 11 Aug 2019 16:54:19 +0000 Subject: [New-bugs-announce] [issue37825] IDLE doc improvements Message-ID: <1565542459.61.0.695168571425.issue37825@roundup.psfhosted.org> New submission from Terry J. Reedy : 1. Headings: Sections should be titlecased, subsections not. This is especially needed now that they are not numbered. 2. Python Shell subsection. Revise 'paste', maybe move. SyntaxWarnings are SyntaxErrors. Explain Restart. 3. Any other pending changes I include. ---------- assignee: terry.reedy components: IDLE messages: 349399 nosy: taleinat, terry.reedy priority: normal severity: normal stage: needs patch status: open title: IDLE doc improvements versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 11 14:36:06 2019 From: report at bugs.python.org (Abhilash Raj) Date: Sun, 11 Aug 2019 18:36:06 +0000 Subject: [New-bugs-announce] [issue37826] Document PEP 3134 in tutorials/errors.rst Message-ID: <1565548566.46.0.213467869715.issue37826@roundup.psfhosted.org> New submission from Abhilash Raj : Looking at the docs, I couldn't find the `raise from` documentation anywhere in the Errors and Exceptions page (https://docs.python.org/3/tutorial/errors.html) page, which seems to be the landing page for Python Exceptions. I do see however that it is documented on the top page of https://docs.python.org/3/library/exceptions.html and raise statement's docs (https://docs.python.org/3/reference/simple_stmts.html#raise), both of which are pretty hard to discover. It would be nice to add the docs for `raise from` in the Errors and Exception page too. 
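For reference, the kind of small example the tutorial page could show (illustrative only, assuming the file does not exist):

try:
    open("database.sqlite")
except OSError as exc:
    raise RuntimeError("unable to handle error") from exc

The resulting traceback then shows both exceptions, joined by the line "The above exception was the direct cause of the following exception:", which is exactly the PEP 3134 chaining behaviour worth pointing out in the tutorial.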
---------- assignee: docs at python components: Documentation messages: 349406 nosy: docs at python, maxking priority: normal severity: normal status: open title: Document PEP 3134 in tutorials/errors.rst
_______________________________________ Python tracker _______________________________________

From report at bugs.python.org Mon Aug 12 02:52:53 2019
From: report at bugs.python.org (Tal Einat)
Date: Mon, 12 Aug 2019 06:52:53 +0000
Subject: [New-bugs-announce] [issue37827] IDLE: Have the shell mimic terminal handling of \r and \b control characters in outputs
Message-ID: <1565592773.8.0.657523892243.issue37827@roundup.psfhosted.org>

New submission from Tal Einat :

IDLE's shell doesn't currently handle \r and \b in any special way; they are written to the Tk Text widget, which displays them in strange, system-dependent ways. These are often used to show continuously updated progress, e.g. in text-based progress bars, without flooding the output, since they allow overwriting previously written output. If we implement handling for \r and \b, progress indicators such as these could finally work properly in IDLE's shell.

To make things worse, Tk's Text widget becomes increasingly slow when it wraps very long lines. IDLE's shell must wrap lines, and is therefore prone to such slowdowns. Attempting to show updating progress info using \r or \b results in such increasingly long lines of output, eventually slowing IDLE's shell down to a crawl. (The recent addition of squeezing of long outputs helps for the case of single, very long outputs, but not with many short strings written on a single line.)

As a recent example, the basic Tensorflow tutorial shows such progress information for several of its stages. Due to the lack of handling for these control characters, it is practically unusable in the IDLE shell. See issue #37762 about this.

Since the shell aims to closely emulate using an interactive terminal session, and since showing progress is so common in interactive work, I propose to add special handling of these two control characters in outputs written to the shell window.

Related issues:
#23220: Documents input/output effects of how IDLE runs user code (originally titled "IDLE does not display \b backspace correctly")
#24572: IDLE Text Output With ASCII Control Codes Not Working (marked as duplicate of #23220)
#37762: IDLE very slow due a super long line output in chunks
StackOverflow: what is the difference between cmd and idle when using tqdm?
(https://stackoverflow.com/questions/35895864/what-is-the-difference-between-cmd-and-idle-when-using-tqdm)

---------- assignee: terry.reedy components: IDLE messages: 349440 nosy: cheryl.sabella, rhettinger, taleinat, terry.reedy priority: normal severity: normal status: open title: IDLE: Have the shell mimic terminal handling of \r and \b control characters in outputs type: enhancement versions: Python 3.9
_______________________________________ Python tracker _______________________________________

From report at bugs.python.org Mon Aug 12 03:12:33 2019
From: report at bugs.python.org (Karthikeyan Singaravelan)
Date: Mon, 12 Aug 2019 07:12:33 +0000
Subject: [New-bugs-announce] [issue37828] Fix default mock_name in unittest.mock.assert_called error message
Message-ID: <1565593953.82.0.916439327482.issue37828@roundup.psfhosted.org>

New submission from Karthikeyan Singaravelan :

In the format string for assert_called, the evaluation order is incorrect, and hence for mocks without a name 'None' is printed whereas it should be 'mock', as in the other messages. The error message is built as ("Expected '%s' to have been called." % self._mock_name or 'mock'). Here self._mock_name, which is None, is first interpolated into the string, and the result is then combined with the string 'mock' using or. The fix would be to correct the evaluation order, as in the other error messages.

Marking this as newcomer-friendly. Please leave this to new contributors as their 1st PR.

./python.exe
Python 3.9.0a0 (heads/master:f03b4c8a48, Aug 12 2019, 10:04:10) [Clang 7.0.2 (clang-700.1.81)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from unittest.mock import Mock
>>> m = Mock()
>>> m.assert_called_once()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/karthikeyansingaravelan/stuff/python/cpython/Lib/unittest/mock.py", line 854, in assert_called_once
    raise AssertionError(msg)
AssertionError: Expected 'mock' to have been called once. Called 0 times.
>>> m.assert_called()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/karthikeyansingaravelan/stuff/python/cpython/Lib/unittest/mock.py", line 844, in assert_called
    raise AssertionError(msg)
AssertionError: Expected 'None' to have been called.
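The fix is presumably just a matter of parenthesising, since % binds more tightly than or, so today the fallback applies to the whole formatted message instead of the name (a sketch against Lib/unittest/mock.py):

# current, problematic form: 'None' ends up in the message
msg = ("Expected '%s' to have been called." % self._mock_name or 'mock')

# intended form: fall back to 'mock' only when the mock has no name
msg = ("Expected '%s' to have been called." % (self._mock_name or 'mock'))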
Thanks

---------- components: Library (Lib) keywords: newcomer friendly messages: 349444 nosy: cjw296, mariocj89, michael.foord, xtreak priority: normal severity: normal status: open title: Fix default mock_name in unittest.mock.assert_called error message type: behavior versions: Python 3.8, Python 3.9
_______________________________________ Python tracker _______________________________________

From report at bugs.python.org Mon Aug 12 04:25:59 2019
From: report at bugs.python.org (pasenor)
Date: Mon, 12 Aug 2019 08:25:59 +0000
Subject: [New-bugs-announce] [issue37829] Documentation of stdlib: add example of mixed arguments to dict()
Message-ID: <1565598359.42.0.652909866188.issue37829@roundup.psfhosted.org>

New submission from pasenor :

The following use of the dict() function with both positional and keyword arguments does follow from the description, but probably needs its own example:

dict({'a': 1}, b=2) == {'a': 1, 'b': 2}

---------- assignee: docs at python components: Documentation messages: 349446 nosy: docs at python, pasenor priority: normal severity: normal status: open title: Documentation of stdlib: add example of mixed arguments to dict() versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9
_______________________________________ Python tracker _______________________________________

From report at bugs.python.org Mon Aug 12 05:33:33 2019
From: report at bugs.python.org (Batuhan)
Date: Mon, 12 Aug 2019 09:33:33 +0000
Subject: [New-bugs-announce] [issue37830] continue in finally with return in try results with segfault
Message-ID: <1565602413.18.0.439989203772.issue37830@roundup.psfhosted.org>

New submission from Batuhan :

If you put a return of the iterated value inside a try/finally and have a continue in the finally, it results in a segfault.

def simple():
    for number in range(2):
        try:
            return number
        finally:
            continue

simple()

SEGV

My first round of debugging shows that TOS is the next value, int(1), when it comes to FOR_ITER, instead of the range instance. So when Python tries to call next() with (*iter->ob_type->tp_iternext)(iter) in FOR_ITER, it gets a non-iterator, int(1). Adding an assert can prove it: python: Python/ceval.c:3198: _PyEval_EvalFrameDefault: Assertion `PyIter_Check(iter)' failed.

To see how the stack changed, I enabled lltrace and formatted it a little bit: >>> STACK_SIZE 0 LOAD_GLOBAL 0 >>> STACK_SIZE 0 >>> push LOAD_CONST 1 >>> STACK_SIZE 1 >>> push 2 CALL_FUNCTION 1 >>> STACK_SIZE 2 >>> ext_pop 2 >>> ext_pop >>> push range(0, 2) GET_ITER None >>> STACK_SIZE 1 FOR_ITER 24 >>> STACK_SIZE 1 >>> push 0 STORE_FAST 0 >>> STACK_SIZE 2 >>> pop 0 SETUP_FINALLY 12 >>> STACK_SIZE 1 LOAD_FAST 0 >>> STACK_SIZE 1 >>> push 0 POP_BLOCK None >>> STACK_SIZE 2 CALL_FINALLY 6 >>> STACK_SIZE 2 >>> push 20 POP_FINALLY 0 >>> STACK_SIZE 3 >>> pop 20 JUMP_ABSOLUTE 8 >>> STACK_SIZE 2 FOR_ITER 24 >>> STACK_SIZE 2 [SEGV]

The oddity is that STACK_SIZE should be 1 before the FOR_ITER but it is 2, so I tried using SECOND() as the iterator, and it worked. It means an instruction is pushing the value of the previous iteration.

There are 3 things we can do:
=> raise a RuntimeError if TOS isn't an iterator (IMHO we should do that)
=> check if the try/finally is created inside a function and a loop over an iterator, then check inside the try whether a return of the iterated value happens, and if so set the preserve_tos value to false
=> don't allow continue in finally

I want to fix this, and I prefer the first one. If you have any suggestions, I am open.
---------- messages: 349450 nosy: BTaskaya priority: normal severity: normal status: open title: continue in finally with return in try results with segfault _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 12 06:48:52 2019 From: report at bugs.python.org (Tomer Vromen) Date: Mon, 12 Aug 2019 10:48:52 +0000 Subject: [New-bugs-announce] [issue37831] bool(~True) == True Message-ID: <1565606932.6.0.360898799273.issue37831@roundup.psfhosted.org> New submission from Tomer Vromen : Bitwise operators have inconsistent behavior when acting on bool values: (Python 3.7.4) # "&" works like "and" >>> True & True True >>> True & False False >>> False & False False # "|" works like "or" >>> True | True True >>> True | False True >>> False | False False # "~" does not work like "not"! >>> ~True -2 >>> ~False -1 The result of this is the a user might start working with "&" and "|" on bool values (for whatever reason) and it will work as expected. But then, when adding "~" to the mix, things start to break. The proposal is to make "~" act like "not" on bool values, i.e. ~True will be False; ~False will be True. I'm not sure if this has any negative impact on existing code. I don't expect any, but you can never know. If there is no objection to this change, I can even try to implement it myself an submit a patch. ---------- components: Interpreter Core messages: 349452 nosy: tomerv priority: normal severity: normal status: open title: bool(~True) == True type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 12 07:00:50 2019 From: report at bugs.python.org (Jeffrey Walton) Date: Mon, 12 Aug 2019 11:00:50 +0000 Subject: [New-bugs-announce] [issue37832] _Py_HashRandomization_Init: failed to get random numbers Message-ID: <1565607650.45.0.0402326097483.issue37832@roundup.psfhosted.org> New submission from Jeffrey Walton : I need to setup a Debian HURD test machine to investigate a problem I was seeing in the Crypto++ library. After setting up the machine and running an apt-get install for some build tools I noticed Python was failing: Fatal Python error: _Py_HashRandomization_Init: failed to get random numbers to initialize Python It seems HURD no longer provides /dev/urandom out of the box. This is new behavior, and it was not present in the past. It is present in the ISO's from June 2019 (https://cdimage.debian.org/cdimage/ports/current-hurd-i386/iso-dvd/). Python may want to employ a fallback strategy since whatever is using Python ignores the error and trucks on. Sorry to dump this on you. ----- jwalton at hurd-x86:~$ ls /dev/*rand* /dev/random jwalton at hurd-x86:~$ python --version Python 2.7.16 jwalton at hurd-x86:~$ uname -a GNU hurd-x86 0.9 GNU-Mach 1.8+git20190109-486/Hurd-0.9 i686-AT386 GNU jwalton at hurd-x86:~$ apt-cache show python Package: python Architecture: hurd-i386 Version: 2.7.16-1 Multi-Arch: allowed Priority: standard Section: python Source: python-defaults Maintainer: Matthias Klose Installed-Size: 68 Provides: python-ctypes, python-email, python-importlib, python-profiler, python-wsgiref Pre-Depends: python-minimal (= 2.7.16-1) Depends: python2.7 (>= 2.7.16-1~), libpython-stdlib (= 2.7.16-1), python2 (= 2.7.16-1) ... 
---------- components: Interpreter Core messages: 349453 nosy: Jeffrey.Walton priority: normal severity: normal status: open title: _Py_HashRandomization_Init: failed to get random numbers type: enhancement versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 12 09:59:38 2019 From: report at bugs.python.org (Ricardo Smits) Date: Mon, 12 Aug 2019 13:59:38 +0000 Subject: [New-bugs-announce] [issue37833] tkinter crashes macOS in the latest macOS update 10.14.6 Message-ID: <1565618378.34.0.81940050082.issue37833@roundup.psfhosted.org> New submission from Ricardo Smits : tkVersion == 8.6 After the new update in macOS (10.14.6): In any python interpreter, when running tk.Tk() it makes macOS crash and logout giving the following error: $ CGSTrackingRegionSetIsEnabled returned CG error 268435459 $ HIToolbox: received notification of WindowServer event port death. ---------- components: macOS messages: 349472 nosy: ned.deily, ronaldoussoren, smits92 priority: normal severity: normal status: open title: tkinter crashes macOS in the latest macOS update 10.14.6 type: crash versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 12 14:49:22 2019 From: report at bugs.python.org (Steve Dower) Date: Mon, 12 Aug 2019 18:49:22 +0000 Subject: [New-bugs-announce] [issue37834] readlink on Windows cannot read app exec links Message-ID: <1565635762.02.0.826831872137.issue37834@roundup.psfhosted.org> New submission from Steve Dower : The IO_REPARSE_TAG_APPEXECLINK was introduced for aliases to installed UWP apps that require activation before they can be executed. Currently these are in an unusual place as far as Python support goes - stat() fails where lstat() succeeds, but the lstat() result doesn't have the right flags to make is_link() return True. It's not clear whether we *should* treat these as regular symlinks, given there are a number of practical differences, but since we can enable it I'm going to post a PR anyway. ---------- components: Windows messages: 349486 nosy: paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: readlink on Windows cannot read app exec links versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 13 05:36:43 2019 From: report at bugs.python.org (Netzeband) Date: Tue, 13 Aug 2019 09:36:43 +0000 Subject: [New-bugs-announce] [issue37835] typing.get_type_hints wrong namespace for forward-declaration of inner class Message-ID: <1565689003.91.0.0312877377737.issue37835@roundup.psfhosted.org> New submission from Netzeband : When evaluating the type-hints of an inner-class (a class defined inside a function scope), the forward declaration does not work correctly. In this case the typing.get_type_hints does not get the right namespace, thus the class-string specified is not found. When using the same syntax for a normal class (defined in global space) it works, or when using another class instead a forward declaration, also everything works. As a workaround one could pass the local namespace (locals()) from the function where the class has been defined in to typing.get_type_hints. 
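A minimal sketch of the behaviour and the workaround (made-up names; the attached typing_check.py has the full reproducer):

import typing

def make():
    class Inner:
        def method(self, other: "Inner") -> None:
            pass
    return Inner

Cls = make()

# Fails with NameError: name 'Inner' is not defined -- the string annotation
# is evaluated against the module namespace, which knows nothing about Inner.
# typing.get_type_hints(Cls.method)

# Workaround: pass a namespace that contains the class explicitly.
typing.get_type_hints(Cls.method, localns={'Inner': Cls})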
However, in normal situations the typing.get_type_hints call is deep in the call hierarchy, doing some runtime type checks, and at that point only the reference to the function object exists and no one is aware that this is just a method defined in an inner class. From the outside perspective one would expect that typing.get_type_hints behaves the same, independent of the kind of class.

---------- components: Library (Lib) files: typing_check.py messages: 349535 nosy: netbnd priority: normal severity: normal status: open title: typing.get_type_hints wrong namespace for forward-declaration of inner class type: behavior versions: Python 3.6, Python 3.7 Added file: https://bugs.python.org/file48539/typing_check.py
_______________________________________ Python tracker _______________________________________

From report at bugs.python.org Tue Aug 13 05:41:19 2019
From: report at bugs.python.org (Jeroen Demeyer)
Date: Tue, 13 Aug 2019 09:41:19 +0000
Subject: [New-bugs-announce] [issue37836] Support .as_integer_ratio() in fractions.Fraction
Message-ID: <1565689279.74.0.17922836333.issue37836@roundup.psfhosted.org>

New submission from Jeroen Demeyer :

Currently, the fractions.Fraction constructor accepts an .as_integer_ratio() method only for the specific types "float" and "Decimal". It would be good to support this for arbitrary classes. This is part of what was proposed in #37822, but without adding the math.operator() function.

---------- components: Library (Lib) messages: 349536 nosy: jdemeyer, mark.dickinson, rhettinger, serhiy.storchaka, stutzbach priority: normal severity: normal status: open title: Support .as_integer_ratio() in fractions.Fraction versions: Python 3.9
_______________________________________ Python tracker _______________________________________

From report at bugs.python.org Tue Aug 13 06:29:55 2019
From: report at bugs.python.org (Sergey Fedoseev)
Date: Tue, 13 Aug 2019 10:29:55 +0000
Subject: [New-bugs-announce] [issue37837] add internal _PyLong_FromUnsignedChar() function
Message-ID: <1565692195.8.0.862568192553.issue37837@roundup.psfhosted.org>

New submission from Sergey Fedoseev :

When compiled with the default NSMALLPOSINTS, _PyLong_FromUnsignedChar() is significantly faster than the other PyLong_From*() functions:

$ python -m perf timeit -s "from collections import deque; consume = deque(maxlen=0).extend; b = bytes(2**20)" "consume(b)" --compare-to=../cpython-master/venv/bin/python
/home/sergey/tmp/cpython-master/venv/bin/python: ..................... 7.10 ms +- 0.02 ms
/home/sergey/tmp/cpython-dev/venv/bin/python: ..................... 4.29 ms +- 0.03 ms
Mean +- std dev: [/home/sergey/tmp/cpython-master/venv/bin/python] 7.10 ms +- 0.02 ms -> [/home/sergey/tmp/cpython-dev/venv/bin/python] 4.29 ms +- 0.03 ms: 1.66x faster (-40%)

It's mostly useful for bytes/bytearray, but it can also be used in several other places.
---------- components: Interpreter Core messages: 349540 nosy: sir-sigurd priority: normal severity: normal status: open title: add internal _PyLong_FromUnsignedChar() function type: performance versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 13 06:42:59 2019 From: report at bugs.python.org (Netzeband) Date: Tue, 13 Aug 2019 10:42:59 +0000 Subject: [New-bugs-announce] [issue37838] typing.get_type_hints not working with forward-declaration and decorated functions Message-ID: <1565692979.16.0.678723196671.issue37838@roundup.psfhosted.org> New submission from Netzeband : When decorating a function and using a forward declaration as type hint, the typing.get_type_hints function does not work anymore, since it cannot find the forward declared name in the namespace. After debugging I think, the typing.get_type_hints function is actually using the namespace of the decorator instead of the decorated function. When using a normal class type (no forward declaration) everything works fine and also when not using any decorator it works like expected. As a workaround, one could pass the local namespace to typing.get_type_hints. However in normal usecases this function is used for runtime typechecking in a deep call hierarchy. So one would normally not have access to the right local namespace, only to the function object itself. However there is another easy workaround. At least when using the functool.wraps method to create a function decorator. The decorated functions has a "__wrapped__" attribute, which references the original function. When using "typing.get_type_hints(function.__wrapped__)" instead of "typing.get_type_hints(function)", it works like expected. So maybe this could be built in into the get_type_hints method. ---------- components: Library (Lib) files: typing_check_wrapped.zip messages: 349542 nosy: netbnd priority: normal severity: normal status: open title: typing.get_type_hints not working with forward-declaration and decorated functions type: behavior versions: Python 3.6, Python 3.7 Added file: https://bugs.python.org/file48540/typing_check_wrapped.zip _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 13 07:00:09 2019 From: report at bugs.python.org (Will Bond) Date: Tue, 13 Aug 2019 11:00:09 +0000 Subject: [New-bugs-announce] [issue37839] makesetup Doesn't Handle Defines with Equal Sign Message-ID: <1565694009.91.0.692452487744.issue37839@roundup.psfhosted.org> New submission from Will Bond : Using 3.8.0b3 on macOS. I'm doing a custom compile with (heavy) modifications to Modules/Setup.local. Whenever I add a define rule to a module line that includes an equal sign, e.g.: _sqlite3 -DMODULE_NAME=_sqlite3 _sqlite/module.c _sqlite/cache.c _sqlite/connection.c _sqlite/cursor.c _sqlite/microprotocols.c _sqlite/prepare_protocol.c _sqlite/row.c _sqlite/statement.c _sqlite/util.c -I../env/include -I\$(srcdir)/Modules/_sqlite ../env/lib/libsqlite3.a makesetup appears to treat this as a Makefile variable definition, which places the rule in the wrong part of the Makefile. In my situation, this causes _sqlite3 to be compiled as a shared library instead of statically. I see this was peripherally reported at https://bugs.python.org/issue35184, but in that case the =1 was just dropped rather than solving the underlying issue. For many situations, dropping the =1 works, but in others it is not. 
Not that this is necessarily helpful, but I do know that this used to work with Python 3.3.

---------- messages: 349543 nosy: wbond priority: normal severity: normal status: open title: makesetup Doesn't Handle Defines with Equal Sign type: compile error versions: Python 3.8
_______________________________________ Python tracker _______________________________________

From report at bugs.python.org Tue Aug 13 07:50:02 2019
From: report at bugs.python.org (Sergey Fedoseev)
Date: Tue, 13 Aug 2019 11:50:02 +0000
Subject: [New-bugs-announce] [issue37840] bytearray_getitem() handles negative index incorrectly
Message-ID: <1565697002.74.0.85786305512.issue37840@roundup.psfhosted.org>

New submission from Sergey Fedoseev :

bytearray_getitem() adjusts a negative index, though that's already done by PySequence_GetItem(). This makes PySequence_GetItem(bytearray(1), -2) return 0 instead of raising IndexError.

---------- components: Interpreter Core messages: 349545 nosy: sir-sigurd priority: normal severity: normal status: open title: bytearray_getitem() handles negative index incorrectly type: behavior
_______________________________________ Python tracker _______________________________________

From report at bugs.python.org Tue Aug 13 11:49:27 2019
From: report at bugs.python.org (Steve Dower)
Date: Tue, 13 Aug 2019 15:49:27 +0000
Subject: [New-bugs-announce] [issue37841] Python store app launcher has dependency on msvcp140.dll
Message-ID: <1565711367.34.0.212877315326.issue37841@roundup.psfhosted.org>

New submission from Steve Dower :

A change made to the python_uwp.vcxproj (or more likely the .cpp file) has introduced a runtime dependency on msvcp140.dll. As we don't distribute this dependency, and it is not always installed by default, we should statically link it instead.

---------- assignee: steve.dower components: Windows messages: 349575 nosy: lukasz.langa, paul.moore, steve.dower, tim.golden, zach.ware priority: release blocker severity: normal stage: needs patch status: open title: Python store app launcher has dependency on msvcp140.dll type: crash versions: Python 3.8, Python 3.9
_______________________________________ Python tracker _______________________________________

From report at bugs.python.org Tue Aug 13 12:37:28 2019
From: report at bugs.python.org (Sergey Fedoseev)
Date: Tue, 13 Aug 2019 16:37:28 +0000
Subject: [New-bugs-announce] [issue37842] Initialize Py_buffer variables more efficiently
Message-ID: <1565714248.3.0.317246195617.issue37842@roundup.psfhosted.org>

New submission from Sergey Fedoseev :

Argument Clinic generates a `{NULL, NULL}` initializer for Py_buffer variables. Such an initializer zeroes all Py_buffer members, but as I understand it, only the `obj` and `buf` members really have to be initialized. Avoiding the unneeded initialization provides a tiny speed-up:

$ python -m perf timeit -s "replace = b''.replace" "replace(b'', b'')" --compare-to=../cpython-master/venv/bin/python --duplicate=1000
/home/sergey/tmp/cpython-master/venv/bin/python: ..................... 43.0 ns +- 0.5 ns
/home/sergey/tmp/cpython-dev/venv/bin/python: .....................
41.8 ns +- 0.4 ns Mean +- std dev: [/home/sergey/tmp/cpython-master/venv/bin/python] 43.0 ns +- 0.5 ns -> [/home/sergey/tmp/cpython-dev/venv/bin/python] 41.8 ns +- 0.4 ns: 1.03x faster (-3%) ---------- components: Argument Clinic messages: 349582 nosy: larry, sir-sigurd priority: normal severity: normal status: open title: Initialize Py_buffer variables more efficiently type: performance versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 13 13:04:27 2019 From: report at bugs.python.org (Richard Jayne) Date: Tue, 13 Aug 2019 17:04:27 +0000 Subject: [New-bugs-announce] [issue37843] CGIHTTPRequestHandler does not take args.directory in constructor Message-ID: <1565715867.88.0.342712762269.issue37843@roundup.psfhosted.org> New submission from Richard Jayne : In Lib/http/server.py if args.cgi: handler_class = CGIHTTPRequestHandler else: handler_class = partial(SimpleHTTPRequestHandler, directory=args.directory) Notice that CGIHTTPRequestHandler does not accept directory=args.directory, and so the option does not work with the --cgi option. ---------- components: Extension Modules messages: 349585 nosy: rjayne priority: normal severity: normal status: open title: CGIHTTPRequestHandler does not take args.directory in constructor type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 13 14:21:40 2019 From: report at bugs.python.org (Christian Biesinger) Date: Tue, 13 Aug 2019 18:21:40 +0000 Subject: [New-bugs-announce] [issue37844] PyRun_SimpleFile should provide a version that does not need a FILE* Message-ID: <1565720500.55.0.0348058431334.issue37844@roundup.psfhosted.org> New submission from Christian Biesinger : Because FILE* requires that the runtime library matches between Python and a program using it, it is very hard to use this correctly on Windows. It would be nice if Python provided either: - A function to open a FILE* given a filename, suitable for passing to the various PyRun_* functions, or - Versions of the functions that take a filename and internally open the file (ref: https://docs.python.org/3.9/c-api/veryhigh.html, which talks about this but provides no useful guidance: "One particular issue which needs to be handled carefully is that the FILE structure for different C libraries can be different and incompatible. Under Windows (at least), it is possible for dynamically linked extensions to actually use different libraries, so care should be taken that FILE* parameters are only passed to these functions if it is certain that they were created by the same library that the Python runtime is using." ) ---------- components: Windows messages: 349598 nosy: cbiesinger, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: PyRun_SimpleFile should provide a version that does not need a FILE* type: crash versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 13 14:33:24 2019 From: report at bugs.python.org (David K.) Date: Tue, 13 Aug 2019 18:33:24 +0000 Subject: [New-bugs-announce] [issue37845] SLCertVerificationError: Unable to handle SAN names (from Certifications) published with white spaces at start Message-ID: <1565721204.27.0.66064229859.issue37845@roundup.psfhosted.org> New submission from David K. 
: Unable to establish SSL connections using company's private certificates where their SANs (Subject Alternative Names) contain at least one DNS Name that starts with white spaces. Attempting to establish SSL connection would result in Exception: SSLCertVerificationError("partial wildcards in leftmost label are not supported: ' *.x.y.com'.") This situation made us co-depended on SecOps in a big company where ultimately all other none-python apps weren't effected by that change they made and thus couldn't or wouldn't fix the problem on their side for us. (We were at their mercy!) I originally encountered this bug @ Python 3.7 and fixed it manually on my own local Python environment. As the bug seems to be still unfixed to date, I publish this issue. A small and simple fix will follow shortly on github. ---------- assignee: christian.heimes components: SSL messages: 349600 nosy: DK26, christian.heimes priority: normal severity: normal status: open title: SLCertVerificationError: Unable to handle SAN names (from Certifications) published with white spaces at start type: security versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 13 22:44:09 2019 From: report at bugs.python.org (Windson Yang) Date: Wed, 14 Aug 2019 02:44:09 +0000 Subject: [New-bugs-announce] [issue37846] declare that Text I/O use buffer inside Message-ID: <1565750649.11.0.831019575323.issue37846@roundup.psfhosted.org> New submission from Windson Yang : At the beginning of https://docs.python.org/3.7/library/io.html#io.RawIOBase, we declared that > Binary I/O (also called buffered I/O) and > Raw I/O (also called unbuffered I/O) But we didn't mention if Text I/O use buffer or not which led to confusion. Even though we talked about it later in https://docs.python.org/3.7/library/io.html#class-hierarchy > The TextIOBase ABC, another subclass of IOBase, deals with streams whose bytes represent text, and handles encoding and decoding to and from strings. TextIOWrapper, which extends it, is a buffered text interface to a buffered raw stream (BufferedIOBase). Finally, StringIO is an in-memory stream for text. IMO, it will be better to declare 'Reads and writes are internally buffered in order to speed things up' at the very beginning in > Text I/O > Text I/O expects and produces str objects... or maybe > class io.TextIOBase > Base class for text streams. This class provides a character and line based interface to stream I/O. It inherits IOBase. There is no public constructor. ---------- assignee: docs at python components: Documentation messages: 349633 nosy: Windson Yang, docs at python priority: normal severity: normal status: open title: declare that Text I/O use buffer inside type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 13 23:37:15 2019 From: report at bugs.python.org (Edwin Pratt) Date: Wed, 14 Aug 2019 03:37:15 +0000 Subject: [New-bugs-announce] [issue37847] The IDLE does not show previous code suggestions if I tap on the up arrow Message-ID: <1565753835.82.0.953640803878.issue37847@roundup.psfhosted.org> New submission from Edwin Pratt : If I am typing some Python code in the IDLE, for example a function: def sayHi(name): print('Hello ', name) and I execute the function: sayHi('Ed') I can not edit the function or execute a previous line of code again if I tap the up arrow on my keyboard. 
It seemed to work in the previous versions of Python. In order to execute a previous line of code I have to either copy it, or type it in again. ---------- assignee: terry.reedy components: IDLE messages: 349638 nosy: Edwin Pratt, terry.reedy priority: normal severity: normal status: open title: The IDLE does not show previous code suggestions if I tap on the up arrow type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 14 01:42:20 2019 From: report at bugs.python.org (Greg Price) Date: Wed, 14 Aug 2019 05:42:20 +0000 Subject: [New-bugs-announce] [issue37848] More fully implement Unicode's case mappings Message-ID: <1565761340.97.0.731805225537.issue37848@roundup.psfhosted.org> New submission from Greg Price : Splitting this out from #32771 for more specific discussion. Benjamin writes there that it would be good to: > implement the locale-specific case mappings of https://www.unicode.org/Public/UCD/latest/ucd/SpecialCasing.txt and ?3.13 of the Unicode 12 standard in str.lower/upper/casefold. and adds that an implementation would require having available in the core the data on canonical combining classes, which is currently only in the unicodedata module. --- First, I'd like to better understand what functionality we have now and what else the standard describes. Reading https://www.unicode.org/Public/12.0.0/ucd/SpecialCasing.txt , I see * a bunch of rules that aren't language-specific * some other rules that are. I also see in makeunicodedata.py that we don't even parse the language-specific rules. Here's, IIUC, a demo of us correctly implementing the language-independent rules. One line in the data file reads: FB00; FB00; 0046 0066; 0046 0046; # LATIN SMALL LIGATURE FF And in fact the `lower`, `title`, and `upper` of `\uFB00` are those strings respectively: $ unicode --brief "$(./python -c \ 's="\ufb00"; print(" ".join((s.lower(), s.title(), s.upper())))')" ? U+FB00 LATIN SMALL LIGATURE FF U+0020 SPACE F U+0046 LATIN CAPITAL LETTER F f U+0066 LATIN SMALL LETTER F U+0020 SPACE F U+0046 LATIN CAPITAL LETTER F F U+0046 LATIN CAPITAL LETTER F OK, great. --- Then here's something we don't implement. Another line in the file reads: 00CD; 0069 0307 0301; 00CD; 00CD; lt; # LATIN CAPITAL LETTER I WITH ACUTE IOW `'\u00CD'` should lowercase to `'\u0069\u0307\u0301'`, i.e.: i U+0069 LATIN SMALL LETTER I ? U+0307 COMBINING DOT ABOVE ? U+0301 COMBINING ACUTE ACCENT ... but only in a Lithuanian (`lt`) locale. One question is: what would the right API for this be? I'm not sure I'd want `str.lower`'s results to depend on the process's current Unix locale... and I definitely wouldn't want to get that without some way of instead telling it what locale to use. (Either to use a single constant locale, or to use a per-user locale in e.g. a web application.) Perhaps `str.lower` and friends would take a keyword argument `locale`? Oh, one more link for reference: the said section of the standard is in this PDF: https://www.unicode.org/versions/Unicode12.0.0/ch03.pdf , near the end. And a related previous issue: #12736. 
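To make the gap concrete: today the unconditional (language-independent) mapping is always used, e.g. (checked on a current build; the Lithuanian result below comes from SpecialCasing.txt, not from CPython):

print(ascii('\u00CD'.lower()))   # '\xed' -- the default mapping
# Under the Lithuanian (lt) tailoring the expected lowercase would instead be
# 'i\u0307\u0301' (i + COMBINING DOT ABOVE + COMBINING ACUTE ACCENT).

If we go the keyword-argument route, usage might look something like str.lower(locale='lt'); that is purely hypothetical API at this point.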
---------- components: Unicode messages: 349646 nosy: Greg Price, benjamin.peterson, ezio.melotti, lemburg, vstinner priority: normal severity: normal status: open title: More fully implement Unicode's case mappings versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 14 02:40:49 2019 From: report at bugs.python.org (Tal Einat) Date: Wed, 14 Aug 2019 06:40:49 +0000 Subject: [New-bugs-announce] [issue37849] IDLE: Completion window misplaced when shown above current line Message-ID: <1565764849.72.0.621293975284.issue37849@roundup.psfhosted.org> New submission from Tal Einat : When the current line is near the bottom of the shell window, the completions list will be shown above it. However, instead of appearing directly above the current line, it appears quite a bit higher, about one line too high. See attached screenshot. Seen on Windows 10 with current master (2a570af12ac5e4ac5575a68f8739b31c24d01367). First brought up by Terry Reedy in a comment on PR GH-15169 [1]. [1] https://github.com/python/cpython/pull/15169#issuecomment-521066906 ---------- assignee: terry.reedy components: IDLE messages: 349652 nosy: taleinat, terry.reedy priority: normal severity: normal status: open title: IDLE: Completion window misplaced when shown above current line versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 14 07:05:36 2019 From: report at bugs.python.org (Matthijs Blom) Date: Wed, 14 Aug 2019 11:05:36 +0000 Subject: [New-bugs-announce] [issue37850] Console: holding right arrow key reproduces entire previous input Message-ID: <1565780736.93.0.23974280241.issue37850@roundup.psfhosted.org> New submission from Matthijs Blom : To reproduce (on Windows): On the console, enter some input: >>> "Ozewiezewoze wiezewalla kristalla" "Ozewiezewoze wiezewalla kristalla" Next, hold the right arrow key, obtaining: >>> "Ozewiezewoze wiezewalla kristalla" One can subsequently delete sections of this line and again reproduce (last parts of) the previous input. I have produced this result on 3.4, 3.8.0b3 and master (077af8c). I did not observe this behaviour on WSL, which is why I think this is a bug. ---------- components: Windows messages: 349679 nosy: Matthijs Blom, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Console: holding right arrow key reproduces entire previous input type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 14 09:44:00 2019 From: report at bugs.python.org (STINNER Victor) Date: Wed, 14 Aug 2019 13:44:00 +0000 Subject: [New-bugs-announce] [issue37851] faulthandler: only allocate the signal handler stack when faulthandler is used Message-ID: <1565790240.15.0.788753710363.issue37851@roundup.psfhosted.org> New submission from STINNER Victor : Currently at startup, Python always call _PyFaulthandler_Init() which allocates a stack of SIGSTKSZ bytes, even if faulthandler is never used. That's a waste of memory: the stack should be allocated the first time faulthandler is used. bpo-21131 requires to enlarge this stack size. 
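For context, the premise of the change is that nothing needs the alternate signal stack until the module is actually turned on, which in the common case only happens via an explicit call (or -X faulthandler / PYTHONFAULTHANDLER):

import faulthandler

faulthandler.enable()     # under the proposal, the first point where the stack would be allocated
faulthandler.disable()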
---------- components: Library (Lib) messages: 349697 nosy: vstinner priority: normal severity: normal status: open title: faulthandler: only allocate the signal handler stack when faulthandler is used versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 14 12:58:36 2019 From: report at bugs.python.org (Josh Rosenberg) Date: Wed, 14 Aug 2019 16:58:36 +0000 Subject: [New-bugs-announce] [issue37852] Pickling doesn't work for name-mangled private methods Message-ID: <1565801916.64.0.686845199304.issue37852@roundup.psfhosted.org> New submission from Josh Rosenberg : Inspired by this Stack Overflow question, where it prevented using multiprocessing.Pool.map with a private method: https://stackoverflow.com/q/57497370/364696 The __name__ of a private method remains the unmangled form, even though only the mangled form exists on the class dictionary for lookup. The __reduce__ for bound methods doesn't handle them private names specially, so it will serialize it such that on the other end, it does getattr(method.__self__, method.__func__.__name__). On deserializing, it tries to perform that lookup, but of course, only the mangled name exists, so it dies with an AttributeError. Minimal repro: import pickle class Spam: def __eggs(self): pass def eggs(self): return pickle.dumps(self.__eggs) spam = Spam() pkl = spam.eggs() # Succeeds via implicit mangling (but pickles unmangled name) pickle.loads(pkl) # Fails (tried to load __eggs Explicitly mangling via pickle.dumps(spam._Spam__eggs) fails too, and in the same way. A similar problem occurs (on the serializing end) when you do: pkl = pickle.dumps(Spam._Spam__eggs) # Pickling function in Spam class, not bound method of Spam instance though that failure occurs at serialization time, because pickle itself tries to look up .Spam.__eggs (which doesn't exist), instead of .Spam._Spam__eggs (which does). 1. It fails at serialization time (so it doesn't silently produce pickles that can never be unpickled) 2. It's an explicit PicklingError, with a message that explains what it tried to do, and why it failed ("Can't pickle : attribute lookup Spam.__eggs on __main__ failed") In the use case on Stack Overflow, it was the implicit case; a public method of a class created a multiprocessing.Pool, and tried to call Pool.map with a private method on the same class as the mapper function. While normally pickling methods seems odd, for multiprocessing, it's pretty standard. I think the correct fix here is to make method_reduce in classobject.c (the __reduce__ implementation for bound methods) perform the mangling itself (meth_reduce in methodobject.c has the same bug, but it's less critical, since only private methods of built-in/extension types would be affected, and most of the time, such private methods aren't exposed to Python at all, they're just static methods for direct calling in C). This would handle all bound methods, but for "unbound methods" (read: functions defined in a class), it might also be good to update save_global/get_deep_attribute in _pickle.c to make it recognize the case where a component of a dotted name begins with two underscores (and doesn't end with them), and the prior component is a class, so that pickling the private unbound method (e.g. plain function which happened to be defined on a class) also works, instead of dying with a lookup error. 
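As a stopgap, and as a sketch of the mangling the fix would perform, a user-level reducer registered via copyreg appears to work; note it naively mangles against the instance's class, whereas a real fix should use the class that actually defines the method:

import copyreg
import pickle
import types

def _reduce_method(m):
    name = m.__func__.__name__
    if name.startswith('__') and not name.endswith('__'):
        # apply the same name mangling the compiler applies inside a class body
        name = '_%s%s' % (type(m.__self__).__name__.lstrip('_'), name)
    return getattr, (m.__self__, name)

copyreg.pickle(types.MethodType, _reduce_method)

class Spam:
    def __eggs(self):
        return 'eggs'

spam = Spam()
print(pickle.loads(pickle.dumps(spam._Spam__eggs))())   # 'eggs'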
The fix is most important, and least costly, for bound methods, but I think doing it for plain functions is still worthwhile, since I could easily see Pool.map operations using an @staticmethod utility function defined privately in the class for encapsulation purposes, and it seems silly to force them to make it more public and/or remove it from the class. ---------- components: Interpreter Core, Library (Lib) messages: 349716 nosy: josh.r priority: normal severity: normal status: open title: Pickling doesn't work for name-mangled private methods versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 14 13:37:25 2019 From: report at bugs.python.org (Fatih Sarhan) Date: Wed, 14 Aug 2019 17:37:25 +0000 Subject: [New-bugs-announce] [issue37853] [urllib.parse.urlparse] It does not correctly parse the URL with basic authentication. Message-ID: <1565804245.33.0.663961239717.issue37853@roundup.psfhosted.org> New submission from Fatih Sarhan : No problem for these: "http://localhost:9100" "http://user:password at localhost:9100" But, these are problematic: "http://use#r:password at localhost:9100" "http://user:pass#word at localhost:9100" ``` from urllib.parse import urlparse url = "http://us#er:123 at localhost:9001/RPC2" u = urlparse(url) print(u) # ParseResult(scheme='http', netloc='us', path='', params='', query='', fragment='er:123 at localhost:9001/RPC2') ``` ---------- components: Library (Lib) messages: 349721 nosy: f9n priority: normal severity: normal status: open title: [urllib.parse.urlparse] It does not correctly parse the URL with basic authentication. type: behavior versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 14 13:49:16 2019 From: report at bugs.python.org (Fatih Sarhan) Date: Wed, 14 Aug 2019 17:49:16 +0000 Subject: [New-bugs-announce] [issue37854] [xmlrpc.client.ServerProxy] It does not correctly parse the URL with basic authentication. Message-ID: <1565804956.68.0.411195172232.issue37854@roundup.psfhosted.org> New submission from Fatih Sarhan : Same problem here. (https://bugs.python.org/issue37853) ---------- components: Library (Lib) messages: 349726 nosy: f9n priority: normal severity: normal status: open title: [xmlrpc.client.ServerProxy] It does not correctly parse the URL with basic authentication. versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 14 14:48:14 2019 From: report at bugs.python.org (Zhiyong Zhang) Date: Wed, 14 Aug 2019 18:48:14 +0000 Subject: [New-bugs-announce] [issue37855] Compiling Python 3.7.4 with Intel compilers 2019 Message-ID: <1565808494.42.0.743401642234.issue37855@roundup.psfhosted.org> New submission from Zhiyong Zhang : Compilation of Python 3.7.4 with Intel icc/2019 failed with the following errors: icpc -c -Wsign-compare -DDYNAMIC_ANNOTATIONS_ENABLED=1 -g -O0 -Wall -O3 -fp-model strict -fp-model source -xHost -ipo -prec-div -prec-sqrt -std=c++11 -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Werror=implicit-function-declaration -fp-model strict -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Werror=implicit-function-declaration -fp-model strict -IObjects -IInclude -IPython -I. 
-I../Include -DPy_BUILD_CORE -o Programs/python.o ../Programs/python.c In file included from ../Include/Python.h(75), from ../Programs/python.c(3): ../Include/pyatomic.h(32): error: identifier "memory_order_relaxed" is undefined _Py_memory_order_relaxed = memory_order_relaxed, ^ In file included from ../Include/Python.h(75), from ../Programs/python.c(3): ../Include/pyatomic.h(33): error: identifier "memory_order_acquire" is undefined _Py_memory_order_acquire = memory_order_acquire, ^ In file included from ../Include/Python.h(75), from ../Programs/python.c(3): ../Include/pyatomic.h(34): error: identifier "memory_order_release" is undefined _Py_memory_order_release = memory_order_release, ^ In file included from ../Include/Python.h(75), from ../Programs/python.c(3): ../Include/pyatomic.h(35): error: identifier "memory_order_acq_rel" is undefined _Py_memory_order_acq_rel = memory_order_acq_rel, ^ In file included from ../Include/Python.h(75), from ../Programs/python.c(3): ../Include/pyatomic.h(36): error: identifier "memory_order_seq_cst" is undefined _Py_memory_order_seq_cst = memory_order_seq_cst ^ In file included from ../Include/Python.h(75), from ../Programs/python.c(3): ../Include/pyatomic.h(40): error: identifier "atomic_uintptr_t" is undefined atomic_uintptr_t _value; ^ In file included from ../Include/Python.h(75), from ../Programs/python.c(3): ../Include/pyatomic.h(44): error: identifier "atomic_int" is undefined atomic_int _value; ^ ---------- components: Installation messages: 349734 nosy: zyzhang2006 priority: normal severity: normal status: open title: Compiling Python 3.7.4 with Intel compilers 2019 type: compile error versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 14 15:39:17 2019 From: report at bugs.python.org (evanTO) Date: Wed, 14 Aug 2019 19:39:17 +0000 Subject: [New-bugs-announce] [issue37856] Adding additional python installations to py launcher Message-ID: <1565811557.54.0.422004539892.issue37856@roundup.psfhosted.org> New submission from evanTO : I have four instances of Python installed on my machine, 2.7x32 & 3.7x32 that I installed manually into their default locations, and 2.7x64 & 3.4x64 that came pre-packaged with a third party piece of software (SPSS). Py Launcher successfully detects the two installations that I installed manually but cannot see the two installations that came with the third party software. Here (https://www.python.org/dev/peps/pep-0397/#configuration-file) there is an allusion to there being commands which allow customization of Py launcher within user space (monkeying around with the registry in a corporate environments is often disallowed). I have created a py.ini file using the [commands] section with respective copies of the following entry: "3.4-64="C:\Program Files\IBM\SPSS\Statistics\24\Python3\python.exe" and saved to "C:\Users\\AppData\Local" but the additional installations do not appear. 
I have added the two additional installation directories (and Scripts folders) to the PATH variable and confirmed that the changes persisted (displayed below): PATH=C:\Program Files (x86)\Python37-32\Scripts\;C:\Program Files (x86)\Python37-32\;C:\Python27\;C:\Python27\Scripts;C:\Program Files\IBM\SPSS\Statistics\24\Python\;C:\Program Files\IBM\SPSS\Statistics\24\Python\Scripts\;C:\Program Files\IBM\SPSS\Statistics\24\Python3\;C:\Program Files\IBM\SPSS\Statistics\24\Python3\Scripts\; Current result of "py -0p" (List the available pythons with paths) C:\>py -0p Installed Pythons found by py Launcher for Windows -3.7-32 "C:\Program Files (x86)\Python37-32\python.exe" * -2.7-32 C:\Python27\python.exe Expected result of "py -0p" C:\>py -0p Installed Pythons found by py Launcher for Windows -3.7-32 "C:\Program Files (x86)\Python37-32\python.exe" * -2.7-32 C:\Python27\python.exe -3.4-64 "C:\Program Files\IBM\SPSS\Statistics\24\Python3\python.exe" -2.7-64 "C:\Program Files\IBM\SPSS\Statistics\24\Python\python.exe" ---------- components: Windows messages: 349740 nosy: evanTO, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Adding additional python installations to py launcher type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 14 15:46:10 2019 From: report at bugs.python.org (Zane Bitter) Date: Wed, 14 Aug 2019 19:46:10 +0000 Subject: [New-bugs-announce] [issue37857] Setting logger.level directly has no effect due to caching in 3.7+ Message-ID: <1565811970.47.0.529067236549.issue37857@roundup.psfhosted.org> New submission from Zane Bitter : This is a related issue to bug 34269, in the sense that it is also due to the caching added by the fix for bug 30962. In this case, the bug is triggered by setting the public 'level' attribute of a logging.Logger object, instead of calling the setLevel() method. Although this was probably never a good idea, prior to Python3.7 it worked as expected. Now it renders the level out of sync with the cache, leading to inconsistent results that are hard to debug. An example in the wild: https://review.opendev.org/676450 ---------- components: Library (Lib) messages: 349741 nosy: zaneb priority: normal severity: normal status: open title: Setting logger.level directly has no effect due to caching in 3.7+ type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 14 16:14:54 2019 From: report at bugs.python.org (Ashley Harvey) Date: Wed, 14 Aug 2019 20:14:54 +0000 Subject: [New-bugs-announce] [issue37858] CookieLib: MozillaCookieJar.py uses case-sensitive regex to validate cookies file Message-ID: <1565813694.91.0.343923131881.issue37858@roundup.psfhosted.org> New submission from Ashley Harvey : I'm on macOS 10.14.6, wget 1.20.3, python 2.7. Command line: $ wget --save-cookies cookies.txt --keep-session-cookies --post-data 'username=myUserName&password=myPassword' --delete-after Line 39 of _MozillaCookieJar.py (cookielib) shows it looking for 'magic_re = "#( Netscape)? HTTP Cookie File"' in order to validate the supplied cookies file. Unlike cURL, wget however, produces a cookies file that begins with "# HTTP cookie file". Note the lower-case c and f. 
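A minimal illustration of the mismatch, using the magic_re pattern quoted above (the two header strings are just examples of the curl-style and wget-style first lines described here):

import re

magic_re = "#( Netscape)? HTTP Cookie File"   # pattern quoted from cookielib's MozillaCookieJar
print(bool(re.search(magic_re, "# Netscape HTTP Cookie File")))   # True  - curl-style header, accepted
print(bool(re.search(magic_re, "# HTTP cookie file")))            # False - wget-style header, load fails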
I reported this as a bug to the wget team who looked for the spec to say that that line must follow a certain format and couldn't find any such mention. (See: https://savannah.gnu.org/bugs/?56755) The lack of upper-case c and f cause cookielib to choke and stop processing the cookies file, and so here I am reporting it as a bug that the regex is case-sensitive. ---------- components: Library (Lib), macOS messages: 349743 nosy: ashleyharvey, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: CookieLib: MozillaCookieJar.py uses case-sensitive regex to validate cookies file type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 14 16:32:41 2019 From: report at bugs.python.org (Red Glyph) Date: Wed, 14 Aug 2019 20:32:41 +0000 Subject: [New-bugs-announce] [issue37859] time.process_time() constant / erratic on Windows Message-ID: <1565814761.83.0.783590608069.issue37859@roundup.psfhosted.org> New submission from Red Glyph : Tested with - Python 3.7.4, 64-bit, Windows version (installer version) - python-3.8.0b3-embed-amd64.zip - python-3.7.2.post1-embed-amd64.zip on Windows 7 x64 Professional time.process_time() always returns the same value. If I check the value of time.process_time_ns(), sometimes it is constant, sometimes I observe a few changes, then it becomes constant too. Here is a log of an example session, I'm waiting at least 1-2 seconds between each command: Python 3.7.4 (tags/v3.7.4:e09359112e, Jul 8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license()" for more information. >>> import time >>> time.process_time() 0.1092007 >>> time.process_time() 0.1092007 >>> time.process_time_ns() 109200700 >>> time.process_time_ns() 124800800 >>> time.process_time_ns() 124800800 >>> time.process_time() 0.1248008 >>> time.process_time() 0.1248008 >>> time.process_time() 0.1248008 >>> time.process_time_ns() 124800800 >>> time.process_time_ns() 124800800 >>> time.process_time_ns() 124800800 >>> time.process_time_ns() 124800800 >>> time.clock() Warning (from warnings module): File "__main__", line 1 DeprecationWarning: time.clock has been deprecated in Python 3.3 and will be removed from Python 3.8: use time.perf_counter or time.process_time instead 77.006126126 >>> time.clock() 79.245575778 >>> time.clock() 80.801103036 >>> time.process_time() 0.1248008 >>> ---------- components: Library (Lib), Windows messages: 349745 nosy: Red Glyph, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: time.process_time() constant / erratic on Windows type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 14 17:14:46 2019 From: report at bugs.python.org (Ashwin Ramaswami) Date: Wed, 14 Aug 2019 21:14:46 +0000 Subject: [New-bugs-announce] [issue37860] Add netlify deploy preview for docs Message-ID: <1565817286.53.0.142673590989.issue37860@roundup.psfhosted.org> New submission from Ashwin Ramaswami : It would be good to preview the cpython documentation on PRs using Netlify. 
See https://github.com/python/core-workflow/issues/348 ---------- assignee: docs at python components: Documentation messages: 349752 nosy: docs at python, epicfaace priority: normal severity: normal status: open title: Add netlify deploy preview for docs versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 14 19:18:03 2019 From: report at bugs.python.org (Clive Bruton) Date: Wed, 14 Aug 2019 23:18:03 +0000 Subject: [New-bugs-announce] [issue37861] Install fails on MacOS X 10.6 with python >= 3.7.1 Message-ID: <1565824683.85.0.593386949201.issue37861@roundup.psfhosted.org> New submission from Clive Bruton : When attempting to install Python >= 3.7.1 on MacOS X 10.6 the installer fails with the message: **** The operation couldn?t be completed. (com.apple.installer.pagecontroller error -1.) Couldn't open "python-3.7.x-macosx10.6.pkg". **** The installer runs without incident with the 3.7.0 installer. Console reports (when "python-3.7.4-macosx10.6.pkg" is run): *** 15/08/2019 00:03:57 Installer[516] @(#)PROGRAM:Install PROJECT:Install-596.1 15/08/2019 00:03:57 Installer[516] @(#)PROGRAM:Installer PROJECT:Installer-430.1 15/08/2019 00:03:57 Installer[516] Hardware: Macmini2,1 @ 1.83 GHz (x 2), 2048 MB RAM 15/08/2019 00:03:57 Installer[516] Running OS Build: Mac OS X 10.6.8 (10K549) 15/08/2019 00:03:57 kernel Installer (map: 0x5a9770c) triggered DYLD shared region unnest for map: 0x5a9770c, region 0x7fff83400000->0x7fff83600000. While not abnormal for debuggers, this increases system memory footprint until the target exits. 15/08/2019 00:03:58 Installer[516] Failed to verify data against certificate. 15/08/2019 00:03:58 Installer[516] Invalid Distribution File/Package **** Similar reports are available here: https://python-forum.io/Thread-Cannot-Install-python-3-7-3-macosx10-6-pkg ---------- components: Installation files: grab1.png messages: 349776 nosy: typonaut priority: normal severity: normal status: open title: Install fails on MacOS X 10.6 with python >= 3.7.1 type: crash versions: Python 3.7 Added file: https://bugs.python.org/file48543/grab1.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 14 21:13:06 2019 From: report at bugs.python.org (Kim Oldfield) Date: Thu, 15 Aug 2019 01:13:06 +0000 Subject: [New-bugs-announce] [issue37862] Search doesn't find built-in functions Message-ID: <1565831586.7.0.196033639625.issue37862@roundup.psfhosted.org> New submission from Kim Oldfield : The python 3 documentation search https://docs.python.org/3/search.html doesn't always find built-in functions. For example, searching for "zip" takes me to https://docs.python.org/3/search.html?q=zip I would expect the first match to be a link to https://docs.python.org/3/library/functions.html#zip but I can't see a link to this page anywhere in the 146 results. 
---------- assignee: docs at python components: Documentation messages: 349781 nosy: docs at python, kim.oldfield priority: normal severity: normal status: open title: Search doesn't find built-in functions type: behavior versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 14 22:30:45 2019 From: report at bugs.python.org (Tim Peters) Date: Thu, 15 Aug 2019 02:30:45 +0000 Subject: [New-bugs-announce] [issue37863] Speed hash(fractions.Fraction) Message-ID: <1565836245.18.0.379877186731.issue37863@roundup.psfhosted.org> New submission from Tim Peters : Recording before I forget. These are easy: 1. As the comments note, cache the hash code. 2. Use the new (in 3.8) pow(denominator, -1, modulus) to get the inverse instead of raising to the modulus-2 power. Should be significantly faster. If not, the new "-1" implementation should be changed ;-) Will require catching ValueError in case the denom is a multiple of the modulus. 3. Instead of multiplying by the absolute value of the numerator, multiply by the hash of the absolute value of the numerator. That changes the multiplication, and the subsequent modulus operation, from unbounded-length operations to short bounded-length ones. Hashing the numerator on its own should be significantly faster, because the int hash doesn't require any multiplies or divides regardless of how large the numerator is. None of those should change any computed results. ---------- messages: 349789 nosy: tim.peters priority: low severity: normal stage: needs patch status: open title: Speed hash(fractions.Fraction) type: performance versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 15 00:40:26 2019 From: report at bugs.python.org (Greg Price) Date: Thu, 15 Aug 2019 04:40:26 +0000 Subject: [New-bugs-announce] [issue37864] Correct and deduplicate docs on "printable" characters Message-ID: <1565844026.38.0.454772996752.issue37864@roundup.psfhosted.org> New submission from Greg Price : While working on #36502 and then #18236 about the definition and docs of str.isspace(), I looked closely also at its neighbor str.isprintable(). It turned out that we have the definition of what makes a character "printable" documented in three places, giving two different definitions. The definition in the comment on `_PyUnicode_IsPrintable` is inverted, so that's an easy small fix. With that correction, the two definitions turn out to be equivalent -- but to confirm that, you have to go look up, or happen to know, that those are the only five "Other" categories and only three "Separator" categories in the Unicode character database. That makes it hard for the reader to tell whether they really are the same, or if there's some subtle difference in the intended semantics. I've taken a crack at writing some improved docs text for a single definition, borrowing ideas from the C comment as well as the existing docs text; and then pointing there from the other places we'd had definitions. PR coming shortly. 
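For reference, a rough Python-level sketch of that single definition (assuming the five "Other" and three "Separator" categories referred to above, with the ASCII space excepted):

import unicodedata

def is_printable(ch):
    # Not printable: General_Category Cc, Cf, Cs, Co, Cn ("Other") and
    # Zl, Zp, Zs ("Separator"), except U+0020 SPACE.
    return ch == " " or unicodedata.category(ch) not in {
        "Cc", "Cf", "Cs", "Co", "Cn", "Zl", "Zp", "Zs"}

# Spot-check against str.isprintable() for the first few hundred code points.
assert all(is_printable(chr(i)) == chr(i).isprintable() for i in range(0x300))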
---------- components: Unicode messages: 349792 nosy: Greg Price, ezio.melotti, vstinner priority: normal severity: normal status: open title: Correct and deduplicate docs on "printable" characters versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 15 03:43:15 2019 From: report at bugs.python.org (Andrei Pashkin) Date: Thu, 15 Aug 2019 07:43:15 +0000 Subject: [New-bugs-announce] [issue37865] tempfile.NamedTemporaryFile() raises exception on close() when file is absent Message-ID: <1565854995.12.0.82543629972.issue37865@roundup.psfhosted.org> New submission from Andrei Pashkin : Here is an example: import tempfile import os with tempfile.NamedTemporaryFile() as temp: os.remove(temp.name) And here is an error it produces: Traceback (most recent call last): File "test.py", line 6, in os.remove(temp.name) File "/usr/lib/python3.7/tempfile.py", line 639, in __exit__ self.close() File "/usr/lib/python3.7/tempfile.py", line 646, in close self._closer.close() File "/usr/lib/python3.7/tempfile.py", line 583, in close unlink(self.name) FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmpzn8gtiz1' ---------- messages: 349794 nosy: pashkin priority: normal severity: normal status: open title: tempfile.NamedTemporaryFile() raises exception on close() when file is absent versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 15 05:18:25 2019 From: report at bugs.python.org (Hua Liu) Date: Thu, 15 Aug 2019 09:18:25 +0000 Subject: [New-bugs-announce] [issue37866] PyModule_GetState Segmentation fault when called Py_Initialize Message-ID: <1565860705.57.0.264365379897.issue37866@roundup.psfhosted.org> Change by Hua Liu : ---------- nosy: Hua Liu priority: normal severity: normal status: open title: PyModule_GetState Segmentation fault when called Py_Initialize type: crash versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 15 06:38:42 2019 From: report at bugs.python.org (simon mackenzie) Date: Thu, 15 Aug 2019 10:38:42 +0000 Subject: [New-bugs-announce] [issue37867] docs says subprocess.run accepts a string but this does not work on linux Message-ID: <1565865522.62.0.177919974549.issue37867@roundup.psfhosted.org> New submission from simon mackenzie : The docs for subprocess.run say "The arguments used to launch the process. This may be a list or a string." This works in windows but in linux it has to be a list. Either needs fixing or the docs need to be changed. ---------- messages: 349800 nosy: simon mackenzie priority: normal severity: normal status: open title: docs says subprocess.run accepts a string but this does not work on linux _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 15 08:02:15 2019 From: report at bugs.python.org (Johan Hidding) Date: Thu, 15 Aug 2019 12:02:15 +0000 Subject: [New-bugs-announce] [issue37868] `is_dataclass` returns `True` if `getattr` always succeeds. Message-ID: <1565870535.54.0.93577990924.issue37868@roundup.psfhosted.org> New submission from Johan Hidding : Given a class `A` that overloads `__getattr__` ``` class A: def __getattr__(self, key): return 0 ``` An instance of this class is always identified as a dataclass. 
```
from dataclasses import is_dataclass

a = A()
print(is_dataclass(a))
```
gives the output `True`. Possible fix: check for the instance type.
```
is_dataclass(type(a))
```
does give the correct answer. ---------- components: Library (Lib) messages: 349802 nosy: Johan Hidding priority: normal severity: normal status: open title: `is_dataclass` returns `True` if `getattr` always succeeds. type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 15 16:30:41 2019 From: report at bugs.python.org (Hansraj Das) Date: Thu, 15 Aug 2019 20:34:41 +0000 Subject: [New-bugs-announce] [issue37869] Compilation warning on GCC version 7.4.0-1ubuntu1~18.04.1 Message-ID: <1565901041.74.0.0939495038375.issue37869@roundup.psfhosted.org> New submission from Hansraj Das : I am seeing the compilation warning below when compiling the Python source: *********************************************************************** gcc -pthread -c -Wno-unused-result -Wsign-compare -g -Og -Wall -std=c99 -Wextra -Wno-unused-result -Wno-unused-parameter -Wno-missing-field-initializers -Werror=implicit-function-declaration -I./Include/internal -I. -I./Include -DPy_BUILD_CORE -o Objects/obmalloc.o Objects/obmalloc.c
Objects/obmalloc.c: In function '_PyObject_Malloc':
Objects/obmalloc.c:1646:16: warning: 'ptr' may be used uninitialized in this function [-Wmaybe-uninitialized]
   return ptr;
          ^~~
*********************************************************************** This is another thread with suggestions from Victor: https://github.com/python/cpython/pull/15293 ---------- components: Build files: warning-2019-08-15 01-15-06.png messages: 349824 nosy: hansrajdas, vstinner priority: normal pull_requests: 15029 severity: normal status: open title: Compilation warning on GCC version 7.4.0-1ubuntu1~18.04.1 type: behavior versions: Python 3.9 Added file: https://bugs.python.org/file48546/warning-2019-08-15 01-15-06.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 15 19:08:33 2019 From: report at bugs.python.org (Matt Christopher) Date: Thu, 15 Aug 2019 23:08:33 +0000 Subject: [New-bugs-announce] [issue37870] os.path.ismount returns false for disconnected CIFS mounts in Linux Message-ID: <1565910513.41.0.618173477068.issue37870@roundup.psfhosted.org> New submission from Matt Christopher : I've got a case where we mount a CIFS filesystem and then later the actual backing filesystem is deleted (but the mount remains on the machine). When running from a shell, this is the behavior which I see after the backing CIFS filesystem has gone away: root at 1b20608623a246f1af69058acdfbfd30000006:/fsmounts# ll ls: cannot access 'cifsmountpoint': Input/output error total 8 drwxrwx--- 3 _user _grp 4096 Aug 15 15:46 ./ drwxrwx--- 8 _user _grp 4096 Aug 15 15:46 ../ d????????? ? ? ? ? ?
cifsmountpoint/ root at 1b20608623a246f1af69058acdfbfd30000006:/fsmounts# stat -c "%d" cifsmountpoint stat: cannot stat 'cifsmountpoint': Input/output error Running mount -l shows this: ///c7e868cd-3047-4881-b05b-a1a1d087dbf5 on /fsmounts/cifsmountpoint type cifs (rw,relatime,vers=3.0,cache=strict,username=,domain=,uid=0,noforceuid,gid=0,noforcegid,addr=52.239.160.104,file_mode=0777,dir_mode=0777,soft,persistenthandles,nounix,serverino,mapposix,rsize=1048576,wsize=1048576,echo_interval=60,actimeo=1) In the Python code that I see posixpath.py has this snippet: try: s1 = os.lstat(path) except (OSError, ValueError): # It doesn't exist -- so not a mount point. :-) return False The problem is that the comment: "# It doesn't exist -- so not a mount point. :-)" assumes a particular kind of OSError - in reality not every OS error means that it doesn't exist. In this case we're getting OSError with errno == 5, which is: OSError: [Errno 5] Input/output error: Now, I'm not entirely sure what (if anything) the ismount function is supposed to be doing here... but returning false seems incorrect. This IS a mount, and you can see so via mount -l. I am aware that there are other libraries (i.e. psutil.disk_partitions) which can help me to detect this situation but I was surprised that ismount was saying false here. It seems like it should possibly just raise, or maybe there's a fancy way to check mounts if lstat fails. This looks kinda related to https://bugs.python.org/issue2466 (although this is already fixed and not exactly the same problem it's a similar class of issue) ---------- messages: 349833 nosy: Matt Christopher priority: normal severity: normal status: open title: os.path.ismount returns false for disconnected CIFS mounts in Linux type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 15 20:36:04 2019 From: report at bugs.python.org (ANdy) Date: Fri, 16 Aug 2019 00:36:04 +0000 Subject: [New-bugs-announce] =?utf-8?q?=5Bissue37871=5D_40_*_473_grid_of_?= =?utf-8?q?=22=C3=A9=22_has_a_single_wrong_character_on_Windows?= Message-ID: <1565915764.5.0.613301280158.issue37871@roundup.psfhosted.org> New submission from ANdy : # To reproduce: # Put this text in a file `a.py` and run `py a.py`. # Or just run: py -c "print(('?' * 40 + '\n') * 473)" # Scroll up for a while. One of the lines will be: # ????????????????????????????????????????? # (You can spot this because it's slightly longer than the other lines.) # The error is consistently on line 237, column 21 (1-indexed). # The error reproduces on Windows but not Linux. Tested in both powershell and CMD. # (Failed to reproduce on either a real Linux machine or on Ubuntu with WSL.) # On Windows, the error reproduces every time consistently. # There is no error if N = 472 or 474. N = 473 # There is no error if W = 39 or 41. # (I tested with console windows of varying sizes, all well over 40 characters.) W = 40 # There is no error if ch = "e" with no accent. # There is still an error for other unicode characters like "?" or "?". ch = "?" # There is no error without newlines. s = (ch * W + "\n") * N # Assert the string itself is correct. assert all(c in (ch, "\n") for c in s) print(s) # There is no error if we use N separate print statements # instead of printing a single string with N newlines. # Similar scripts written in Groovy, JS and Ruby have no error. # Groovy: System.out.println(("?" 
* 40 + "\n") * 473) # JS: console.log(("?".repeat(40) + "\n").repeat(473)) # Ruby: puts(("?" * 40 + "\n") * 473) ---------- components: Windows messages: 349837 nosy: anhans, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: 40 * 473 grid of "?" has a single wrong character on Windows type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 16 09:21:10 2019 From: report at bugs.python.org (Joannah Nanjekye) Date: Fri, 16 Aug 2019 13:21:10 +0000 Subject: [New-bugs-announce] [issue37872] Move statitics in Python/import.c to top of the file Message-ID: <1565961670.68.0.815730473417.issue37872@roundup.psfhosted.org> New submission from Joannah Nanjekye : Following a PR review here: https://github.com/python/cpython/pull/15057 by @ericsnowcurrently i.e : Please move both these statics to the top of the file (next to CACHEDIR) as globals. To keep the diff in the above PR simple, I am tracking this refactor in this issue. ---------- assignee: nanjekyejoannah components: Library (Lib) messages: 349859 nosy: eric.snow, nanjekyejoannah, serhiy.storchaka priority: normal severity: normal stage: needs patch status: open title: Move statitics in Python/import.c to top of the file type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 16 10:15:28 2019 From: report at bugs.python.org (D. A. Pellegrino) Date: Fri, 16 Aug 2019 14:15:28 +0000 Subject: [New-bugs-announce] [issue37873] unittest: execute tests in parallel Message-ID: <1565964928.02.0.804836557192.issue37873@roundup.psfhosted.org> New submission from D. A. Pellegrino : The unittest documentation makes reference to a potential parallelization feature: "Note that shared fixtures do not play well with [potential] features like test parallelization and they break test isolation. They should be used with care." (https://docs.python.org/3/library/unittest.html) However, it seems that executing tests in parallel is not yet a feature of unittest. This enhancement request is to add parallel execution of tests to unittest. A command line option may be a good interface. Ideally, it would be compatible with test discovery. Outside of the Python ecosystem, a common practice is to define test cases in a Makefile and then execute GNU Make with the '-j' flag (https://www.gnu.org/software/make/manual/html_node/Parallel.html#Parallel). Adding such an option to unittest would be a convenience and may save the effort of bringing in additional libraries or tools for parallel unit test execution. 
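Until something like this exists in unittest itself, a rough workaround is to fan independent test modules out to worker processes; a minimal sketch (assuming files named test_*.py in the current directory, each runnable on its own):

import concurrent.futures, glob, subprocess, sys

def run_module(path):
    # Each module gets its own interpreter process, so the modules must not
    # rely on shared in-process state.
    proc = subprocess.run([sys.executable, "-m", "unittest", path])
    return path, proc.returncode

if __name__ == "__main__":
    with concurrent.futures.ThreadPoolExecutor() as pool:
        for path, code in pool.map(run_module, glob.glob("test_*.py")):
            print(path, "ok" if code == 0 else "FAILED")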
---------- components: Library (Lib) messages: 349864 nosy: user93448 priority: normal severity: normal status: open title: unittest: execute tests in parallel type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 16 10:22:41 2019 From: report at bugs.python.org (af) Date: Fri, 16 Aug 2019 14:22:41 +0000 Subject: [New-bugs-announce] [issue37874] json traceback on a float Message-ID: <1565965361.29.0.137485852534.issue37874@roundup.psfhosted.org> New submission from af : json.loads traceback with: [In [16]: json.loads("[1.e-8]") --------------------------------------------------------------------------- JSONDecodeError Traceback (most recent call last) in () ----> 1 json.loads("[1.e-8]") /scr/fonari/2019-4/internal/lib/python3.6/json/__init__.py in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw) 352 parse_int is None and parse_float is None and 353 parse_constant is None and object_pairs_hook is None and not kw): --> 354 return _default_decoder.decode(s) 355 if cls is None: 356 cls = JSONDecoder /scr/fonari/2019-4/internal/lib/python3.6/json/decoder.py in decode(self, s, _w) 337 338 """ --> 339 obj, end = self.raw_decode(s, idx=_w(s, 0).end()) 340 end = _w(s, end).end() 341 if end != len(s): /scr/fonari/2019-4/internal/lib/python3.6/json/decoder.py in raw_decode(self, s, idx) 353 """ 354 try: --> 355 obj, end = self.scan_once(s, idx) 356 except StopIteration as err: 357 raise JSONDecodeError("Expecting value", s, err.value) from None JSONDecodeError: Expecting ',' delimiter: line 1 column 3 (char 2) Works with json.loads("[1.0e-8]") and json.loads("[1e-8]") ---------- components: Library (Lib) messages: 349866 nosy: af priority: normal severity: normal status: open title: json traceback on a float type: crash versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 16 10:52:10 2019 From: report at bugs.python.org (Sivaprabu Ganesan) Date: Fri, 16 Aug 2019 14:52:10 +0000 Subject: [New-bugs-announce] [issue37875] gzip module any difference for compressing png file in version 2.X and 3.X Message-ID: <1565967130.84.0.685308566342.issue37875@roundup.psfhosted.org> New submission from Sivaprabu Ganesan : gzip module any difference for compressing png file in version 2.X and 3.X ---------- components: 2to3 (2.x to 3.x conversion tool) messages: 349868 nosy: Sivaprabu Ganesan, sjoerd priority: normal severity: normal status: open title: gzip module any difference for compressing png file in version 2.X and 3.X type: performance versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 16 12:06:36 2019 From: report at bugs.python.org (Zeth) Date: Fri, 16 Aug 2019 16:06:36 +0000 Subject: [New-bugs-announce] [issue37876] Tests for Rot13 codec Message-ID: <1565971596.92.0.47171524836.issue37876@roundup.psfhosted.org> New submission from Zeth : Julius Caesar might have written the Rot13 codec, but he forgot to write unit tests, dragging down test coverage. 
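A minimal sketch of what such a test might look like (the expected strings are simply hand-rotated examples):

import codecs, unittest

class Rot13Test(unittest.TestCase):
    def test_roundtrip(self):
        # rot13 is a text-to-text codec, so it goes through codecs.encode/decode.
        self.assertEqual(codecs.encode("Et tu, Brute?", "rot13"), "Rg gh, Oehgr?")
        self.assertEqual(codecs.decode("Rg gh, Oehgr?", "rot13"), "Et tu, Brute?")

if __name__ == "__main__":
    unittest.main()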
---------- components: Tests messages: 349874 nosy: zeth priority: normal pull_requests: 15033 severity: normal status: open title: Tests for Rot13 codec type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 16 13:52:04 2019 From: report at bugs.python.org (Michael Hearn) Date: Fri, 16 Aug 2019 17:52:04 +0000 Subject: [New-bugs-announce] [issue37877] MacOS crash appJar 3.7.3 Message-ID: <1565977924.82.0.581116277881.issue37877@roundup.psfhosted.org> Change by Michael Hearn : ---------- components: macOS nosy: Michael Hearn, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: MacOS crash appJar 3.7.3 versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 16 15:53:55 2019 From: report at bugs.python.org (Joannah Nanjekye) Date: Fri, 16 Aug 2019 19:53:55 +0000 Subject: [New-bugs-announce] [issue37878] sub-interpreters : Document PyThreadState_DeleteCurrent() ? Message-ID: <1565985235.6.0.212314052144.issue37878@roundup.psfhosted.org> New submission from Joannah Nanjekye : I just noticed that PyThreadState_DeleteCurrent() in Python/pystate.c is not documented. Relevant documentation should go in Doc/c-api/init.rst, if no one objects to this. ---------- assignee: docs at python components: Documentation keywords: easy messages: 349881 nosy: docs at python, eric.snow, nanjekyejoannah, ncoghlan, vstinner priority: normal severity: normal stage: needs patch status: open title: sub-interpreters : Document PyThreadState_DeleteCurrent() ? versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 17 16:47:31 2019 From: report at bugs.python.org (Eddie Elizondo) Date: Sat, 17 Aug 2019 20:47:31 +0000 Subject: [New-bugs-announce] [issue37879] Segfaults in C heap type subclasses Message-ID: <1566074851.0.0.451710928537.issue37879@roundup.psfhosted.org> New submission from Eddie Elizondo : `subtype_dealloc` is not correctly handling the reference count of C heap type subclasses. It has some built-in assumptions which can result in the type getting its reference count decreased more than it needs to be, leading to its destruction when it should still be alive. Also, this bug is a blocker for the full adoption of PEP 384. The full details of the bug along with a fix and tests are described in the GitHub PR. ---------- messages: 349905 nosy: eelizondo priority: normal severity: normal status: open title: Segfaults in C heap type subclasses _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 17 20:41:15 2019 From: report at bugs.python.org (Joannah Nanjekye) Date: Sun, 18 Aug 2019 00:41:15 +0000 Subject: [New-bugs-announce] [issue37880] For argparse add_argument with action='store_const', const should default to None.
Message-ID: <1566088875.45.0.199116489639.issue37880@roundup.psfhosted.org> New submission from Joannah Nanjekye : Currently, when `parser.add_argument()` is given an argument with `action='store_const'` and no `const` argument, it throws an exception:
>>> from argparse import ArgumentParser
>>> parser = ArgumentParser()
>>> parser.add_argument("--foo", help="foo", action='store_const')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/captain/projects/cpython/Lib/argparse.py", line 1350, in add_argument
    action = action_class(**kwargs)
TypeError: __init__() missing 1 required positional argument: 'const'
>>>
Specifying the `const` argument stops this exception:
>>> parser.add_argument("--foo", help="foo", action='store_const', const=None)
_StoreConstAction(option_strings=['--foo'], dest='foo', nargs=0, const=None, default=None, type=None, choices=None, help='foo', metavar=None)
Originally the docs said that when `action` was set to `'store_const'`, `const` defaulted to `None`, which did not match the implementation at the time. After this commit: https://github.com/python/cpython/commit/b4912b8ed367e540ee060fe912f841cc764fd293, the docs were updated to match the implementation, to fix bpo issues: https://bugs.python.org/issue25299 https://bugs.python.org/issue24754 and https://bugs.python.org/issue25314 I suggest that we make `const` default to `None` if `action='store_const'`, as was intended originally before the edits to the docs. If no one objects, I can open a PR for this. ---------- messages: 349911 nosy: A. Skrobov, nanjekyejoannah, r.david.murray priority: normal severity: normal status: open title: For argparse add_argument with action='store_const', const should default to None. versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 18 09:33:56 2019 From: report at bugs.python.org (Antony Lee) Date: Sun, 18 Aug 2019 13:33:56 +0000 Subject: [New-bugs-announce] [issue37881] __text_signature__ parser doesn't handle globals in extension module Message-ID: <1566135236.59.0.315734223996.issue37881@roundup.psfhosted.org> New submission from Antony Lee : Starting from the custom2 example at https://docs.python.org/3/extending/newtypes_tutorial.html#adding-data-and-methods-to-the-basic-example, change the methods table to static PyMethodDef Custom_methods[] = { {"foo", (PyCFunction) Custom_foo, METH_VARARGS, "foo(x=ONE)\n--\n\nFoos this." }, {NULL} /* Sentinel */ }; and add a global ONE to the module dict: PyModule_AddObject(m, "ONE", PyLong_FromLong(1)); Building and running e.g.
pydoc on this module results in Traceback (most recent call last): File ".../lib/python3.7/inspect.py", line 2003, in wrap_value value = eval(s, module_dict) File "", line 1, in NameError: name 'ONE' is not defined During handling of the above exception, another exception occurred: Traceback (most recent call last): File ".../lib/python3.7/inspect.py", line 2006, in wrap_value value = eval(s, sys_module_dict) File "", line 1, in NameError: name 'ONE' is not defined During handling of the above exception, another exception occurred: Traceback (most recent call last): File ".../lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File ".../lib/python3.7/inspect.py", line 2008, in wrap_value raise RuntimeError() RuntimeError I think the fix is fairly simple; one needs to replace module_name = getattr(obj, '__module__', None) in inspect.py::_signature_fromstr by module_name = (getattr(obj, '__module__', None) or getattr(getattr(obj, '__objclass__'), '__module__', None)) (This is a less general but simpler solution than https://bugs.python.org/issue23967.) ---------- components: Extension Modules messages: 349919 nosy: Antony.Lee priority: normal severity: normal status: open title: __text_signature__ parser doesn't handle globals in extension module _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 18 13:46:51 2019 From: report at bugs.python.org (George Zhang) Date: Sun, 18 Aug 2019 17:46:51 +0000 Subject: [New-bugs-announce] [issue37882] Code folding in IDLE Message-ID: <1566150411.87.0.37054908058.issue37882@roundup.psfhosted.org> New submission from George Zhang : Congrats on adding line numbers to IDLE. With this change, a change to add code folding could be done more easily as the folding can reference the line numbers. Many other IDEs have code folding but IDLE should also have it. Code folding could be done with a +/- to the right of the line numbers and can show/hide the indented suite under the header (compound statements). It should use indentation as tool to figure out which parts can be folded. Blank lines don't count. Single line compound statements cannot be folded. Something like this: 1 - | def spam(ham): 2 - | if ham: 3 | eggs() 4 | 5 + | def frob(): 8 | 9 - | FOO = ( 10 | 1, 2, 3, 4, 11 | 5, 6, 7, 8, 12 | ) 13 | 14 | BAR = ( 17 | ) 18 | 19 | if True: print("True") 20 | ---------- assignee: terry.reedy components: IDLE messages: 349922 nosy: GeeVye, terry.reedy priority: normal severity: normal status: open title: Code folding in IDLE versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 18 14:36:18 2019 From: report at bugs.python.org (=?utf-8?q?R=C3=A9mi_Lapeyre?=) Date: Sun, 18 Aug 2019 18:36:18 +0000 Subject: [New-bugs-announce] [issue37883] threading.Lock.locked is not documented Message-ID: <1566153378.15.0.0440090654943.issue37883@roundup.psfhosted.org> New submission from R?mi Lapeyre : As far as I can tell, it has never been documented. I'm not sure if it is deprecated but it has a docstring so it seems to me, that it just needs documentation in Doc/Library/threading.rst PS: I don't know how to set the beginner friendly flag. 
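For reference, the undocumented method is easy to demonstrate:

import threading

lock = threading.Lock()
print(lock.locked())   # False
lock.acquire()
print(lock.locked())   # True
lock.release()
print(lock.locked())   # False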
---------- assignee: docs at python components: Documentation, Library (Lib) messages: 349923 nosy: docs at python, remi.lapeyre priority: normal severity: normal status: open title: threading.Lock.locked is not documented type: enhancement versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 19 02:17:30 2019 From: report at bugs.python.org (Serhiy Storchaka) Date: Mon, 19 Aug 2019 06:17:30 +0000 Subject: [New-bugs-announce] [issue37884] Optimize Fraction() and statistics.mean() Message-ID: <1566195450.53.0.150552159407.issue37884@roundup.psfhosted.org> New submission from Serhiy Storchaka : The proposed PR optimizes the Fraction constructor and statistics.mean() (and several other statistics functions) by using a private helper implemented in C which abstracts converting a number to an integer ratio. $ ./python -m timeit -s "from fractions import Fraction as F" "F(123)" 500000 loops, best of 5: 655 nsec per loop 500000 loops, best of 5: 749 nsec per loop $ ./python -m timeit -s "from fractions import Fraction as F" "F(1.23)" 200000 loops, best of 5: 1.29 usec per loop 200000 loops, best of 5: 1.03 usec per loop $ ./python -m timeit -s "from fractions import Fraction as F; f = F(22, 7)" "F(f)" 200000 loops, best of 5: 1.17 usec per loop 500000 loops, best of 5: 899 nsec per loop $ ./python -m timeit -s "from fractions import Fraction as F; from decimal import Decimal as D; d = D('1.23')" "F(d)" 200000 loops, best of 5: 1.64 usec per loop 200000 loops, best of 5: 1.29 usec per loop $ ./python -m timeit -s "from statistics import mean; a = [1]*1000" "mean(a)" 500 loops, best of 5: 456 usec per loop 1000 loops, best of 5: 321 usec per loop $ ./python -m timeit -s "from statistics import mean; a = [1.23]*1000" "mean(a)" 500 loops, best of 5: 645 usec per loop 500 loops, best of 5: 659 usec per loop $ ./python -m timeit -s "from statistics import mean; from fractions import Fraction as F; a = [F(22, 7)]*1000" "mean(a)" 500 loops, best of 5: 637 usec per loop 500 loops, best of 5: 490 usec per loop $ ./python -m timeit -s "from statistics import mean; from decimal import Decimal as D; a = [D('1.23')]*1000" "mean(a)" 500 loops, best of 5: 946 usec per loop 500 loops, best of 5: 915 usec per loop There is a 14% regression in creating a Fraction from an integer, but for non-integer numbers it gains 25% speed up. The effect on statistics.mean() varies from insignificant to +40%. 
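A rough Python-level equivalent of the proposed helper (the actual C helper and its name live only in the PR, so this is just a sketch of the idea):

def _as_integer_ratio(value):
    # float, Decimal and (since 3.8) Fraction and int all provide
    # as_integer_ratio(); the isinstance check keeps plain ints on a fast path.
    if isinstance(value, int):
        return value, 1
    return value.as_integer_ratio()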
---------- components: Library (Lib) messages: 349939 nosy: mark.dickinson, rhettinger, serhiy.storchaka, steven.daprano priority: normal severity: normal status: open title: Optimize Fraction() and statistics.mean() type: performance versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 19 04:50:09 2019 From: report at bugs.python.org (Daniel Abrahamsson) Date: Mon, 19 Aug 2019 08:50:09 +0000 Subject: [New-bugs-announce] [issue37885] venv: Don't produce unbound variable warning on deactivate Message-ID: <1566204609.23.0.435399494767.issue37885@roundup.psfhosted.org> New submission from Daniel Abrahamsson : Running deactivate from a bash shell configured to treat undefined variables as errors (`set -u`) produces a warning:
```
$ python3 -m venv test
$ source test/bin/activate
(test) $ deactivate
-bash: $1: unbound variable
```
---------- components: Library (Lib) messages: 349944 nosy: danabr priority: normal severity: normal status: open title: venv: Don't produce unbound variable warning on deactivate type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 19 08:43:49 2019 From: report at bugs.python.org (Jeff Robbins) Date: Mon, 19 Aug 2019 12:43:49 +0000 Subject: [New-bugs-announce] [issue37886] PyStructSequence_UnnamedField not exported Message-ID: <1566218629.27.0.381481574182.issue37886@roundup.psfhosted.org> New submission from Jeff Robbins : Python 3.8.0b3 has the fixed https://docs.python.org/3/c-api/tuple.html#c.PyStructSequence_NewType, but one of the documented features of PyStructSequence is the special https://docs.python.org/3/c-api/tuple.html#c.PyStructSequence_UnnamedField which is meant to be used for unnamed (and presumably also "hidden") fields. However, this variable is not "exported" (via __declspec(dllexport) or the relevant Python C macro) and so my C extension cannot "import" it and use it. My guess is that this passed testing because the only tests using it are internal modules linked into python38.dll, which are happy with the `extern` in the header: Include\structseq.h:extern char* PyStructSequence_UnnamedField; ---------- components: Windows messages: 349956 nosy: jeffr at livedata.com, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: PyStructSequence_UnnamedField not exported type: compile error versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 19 10:55:03 2019 From: report at bugs.python.org (hai shi) Date: Mon, 19 Aug 2019 14:55:03 +0000 Subject: [New-bugs-announce] [issue37887] some leak in the compiler_assert function Message-ID: <1566226503.82.0.418800588376.issue37887@roundup.psfhosted.org> New submission from hai shi : There is a reference leak in the compiler_assert function, caused by not calling Py_DECREF(assertion_error) before returning. I also have a question about the code order in the compiler_assert function.
---------- components: Interpreter Core files: compiler_assert.patch keywords: patch messages: 349959 nosy: pitrou, shihai1991, vstinner priority: normal severity: normal status: open title: some leak in the compiler_assert function type: resource usage versions: Python 3.9 Added file: https://bugs.python.org/file48550/compiler_assert.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 19 15:05:22 2019 From: report at bugs.python.org (Joannah Nanjekye) Date: Mon, 19 Aug 2019 19:05:22 +0000 Subject: [New-bugs-announce] [issue37888] Sub-interpreters : Confusing docs about state after calling Py_NewInterpreter() Message-ID: <1566241522.51.0.41054137805.issue37888@roundup.psfhosted.org> New submission from Joannah Nanjekye : In the documentation for Py_NewInterpreter(): It is said that : The return value points to the first thread state created in the new sub-interpreter. This thread state is made in the current thread state. I think changing : This thread state is made in the current thread state. To: This thread state is made the current thread state. Sounds good. Since a call such as: substate = Py_NewInterpreter() makes *substate* the current state. no? The *in* takes me in a different direction of thought. ---------- messages: 349964 nosy: nanjekyejoannah priority: normal severity: normal status: open title: Sub-interpreters : Confusing docs about state after calling Py_NewInterpreter() _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 19 19:16:33 2019 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Mon, 19 Aug 2019 23:16:33 +0000 Subject: [New-bugs-announce] [issue37889] "Fatal Python error: Py_EndInterpreter: not the last thread" that's bad Message-ID: <1566256593.58.0.888116360653.issue37889@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : The x86-64 High Sierra 3.x buildbot and AMD64 FreeBSD CURRENT Shared 3.x are failing with: Fatal Python error: Py_EndInterpreter: not the last thread https://buildbot.python.org/all/#/builders/145/builds/2233 https://buildbot.python.org/all/#/builders/168/builds/1295 Fatal Python error: Py_EndInterpreter: not the last thread Current thread 0x00007fff8e587380 (most recent call first): File "/Users/buildbot/buildarea/3.x.billenstein-sierra/build/Lib/test/support/__init__.py", line 2911 in run_in_subinterp File "/Users/buildbot/buildarea/3.x.billenstein-sierra/build/Lib/test/test_threading.py", line 1006 in test_threads_join_2 File "/Users/buildbot/buildarea/3.x.billenstein-sierra/build/Lib/unittest/case.py", line 611 in _callTestMethod File "/Users/buildbot/buildarea/3.x.billenstein-sierra/build/Lib/unittest/case.py", line 654 in run File "/Users/buildbot/buildarea/3.x.billenstein-sierra/build/Lib/unittest/case.py", line 714 in __call__ File "/Users/buildbot/buildarea/3.x.billenstein-sierra/build/Lib/unittest/suite.py", line 122 in run File "/Users/buildbot/buildarea/3.x.billenstein-sierra/build/Lib/unittest/suite.py", line 84 in __call__ File "/Users/buildbot/buildarea/3.x.billenstein-sierra/build/Lib/unittest/suite.py", line 122 in run File "/Users/buildbot/buildarea/3.x.billenstein-sierra/build/Lib/unittest/suite.py", line 84 in __call__ File "/Users/buildbot/buildarea/3.x.billenstein-sierra/build/Lib/unittest/suite.py", line 122 in run File "/Users/buildbot/buildarea/3.x.billenstein-sierra/build/Lib/unittest/suite.py", line 84 in __call__ File 
"/Users/buildbot/buildarea/3.x.billenstein-sierra/build/Lib/unittest/runner.py", line 176 in run File "/Users/buildbot/buildarea/3.x.billenstein-sierra/build/Lib/test/support/__init__.py", line 1996 in _run_suite File "/Users/buildbot/buildarea/3.x.billenstein-sierra/build/Lib/test/support/__init__.py", line 2092 in run_unittest File "/Users/buildbot/buildarea/3.x.billenstein-sierra/build/Lib/test/libregrtest/runtest.py", line 209 in _test_module File "/Users/buildbot/buildarea/3.x.billenstein-sierra/build/Lib/test/libregrtest/runtest.py", line 234 in _runtest_inner2 File "/Users/buildbot/buildarea/3.x.billenstein-sierra/build/Lib/test/libregrtest/runtest.py", line 270 in _runtest_inner File "/Users/buildbot/buildarea/3.x.billenstein-sierra/build/Lib/test/libregrtest/runtest.py", line 153 in _runtest File "/Users/buildbot/buildarea/3.x.billenstein-sierra/build/Lib/test/libregrtest/runtest.py", line 193 in runtest File "/Users/buildbot/buildarea/3.x.billenstein-sierra/build/Lib/test/libregrtest/main.py", line 310 in rerun_failed_tests File "/Users/buildbot/buildarea/3.x.billenstein-sierra/build/Lib/test/libregrtest/main.py", line 678 in _main File "/Users/buildbot/buildarea/3.x.billenstein-sierra/build/Lib/test/libregrtest/main.py", line 628 in main File "/Users/buildbot/buildarea/3.x.billenstein-sierra/build/Lib/test/libregrtest/main.py", line 695 in main File "/Users/buildbot/buildarea/3.x.billenstein-sierra/build/Lib/test/__main__.py", line 2 in File "/Users/buildbot/buildarea/3.x.billenstein-sierra/build/Lib/runpy.py", line 85 in _run_code File "/Users/buildbot/buildarea/3.x.billenstein-sierra/build/Lib/runpy.py", line 192 in _run_module_as_main make: *** [buildbottest] Abort trap: 6 program finished with exit code 2 elapsedTime=1799.062811 test_threads_join_2 (test.test_threading.SubinterpThreadingTests) ... https://buildbot.python.org/all/#/builders/145/builds/2233 ---------- messages: 349974 nosy: pablogsal, vstinner priority: normal severity: normal status: open title: "Fatal Python error: Py_EndInterpreter: not the last thread" that's bad versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 19 22:48:16 2019 From: report at bugs.python.org (Kyle Stanley) Date: Tue, 20 Aug 2019 02:48:16 +0000 Subject: [New-bugs-announce] [issue37890] Modernize several tests in test_importlib Message-ID: <1566269296.72.0.828504709815.issue37890@roundup.psfhosted.org> New submission from Kyle Stanley : Last month, several tests were moved into test_importlib (https://bugs.python.org/issue19696): "test_pkg_import.py", "test_threaded_import.py", and "threaded_import_hangers.py". Those tests were created quite a while ago though and do not currently utilize importlib directly. They should be updated accordingly. Brett Cannon: > BTW, if you want to open a new issue and modernize the tests to use importlib directly that would be great! I'm interested in helping with this issue, but I may require some assistance as I'm not overly familiar with the internals of importlib. I'll probably start with "test_pkg_import.py". Anyone else can feel free to work on the other two in the meantime, but they should be worked on together as "threaded_import_hangers.py" is a dependency for "test_threaded_import.py". 
---------- components: Tests messages: 349984 nosy: aeros167, brett.cannon priority: normal severity: normal status: open title: Modernize several tests in test_importlib versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 20 03:21:07 2019 From: report at bugs.python.org (Niels Albers) Date: Tue, 20 Aug 2019 07:21:07 +0000 Subject: [New-bugs-announce] [issue37891] Exceptions tutorial page does not mention raise from Message-ID: <1566285667.91.0.992762773609.issue37891@roundup.psfhosted.org> New submission from Niels Albers : raise from has been in the language since python 3, yet the tutorial page teaching about exceptions does not mention it. (see https://docs.python.org/3.7/tutorial/errors.html#raising-exceptions) It would be especially helpful to language newcomers to touch on the possibility of passing error context when raising a new exception in an exception handler. ---------- assignee: docs at python components: Documentation messages: 349994 nosy: Niels Albers, docs at python priority: normal severity: normal status: open title: Exceptions tutorial page does not mention raise from versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 20 07:38:43 2019 From: report at bugs.python.org (Terry J. Reedy) Date: Tue, 20 Aug 2019 11:38:43 +0000 Subject: [New-bugs-announce] [issue37892] IDLE Shell: isolate user code input Message-ID: <1566301123.94.0.491748175182.issue37892@roundup.psfhosted.org> New submission from Terry J. Reedy : The main operational difference between the standard Python REPL and IDLE's Shell is that the latter operates with complete Python statements rather than physical lines. Shell keeps the '>>>' prompt, but with the expanded meaning of "Enter a complete Python statement" rather than just "Enter the first line of a Python statement". It omits the consequently superfluous continuation prompts, which would mostly be a nuisance if left. Currently, the prompt precedes and indents the first line of code. This causes multiple problems. Internally, first lines have to be treated differently from the rest. This has lead to bugs, mostly but not all fixed. Externally, indentation depends on the prompt and does not look right or work right, compared to the same code in a proper editor. And it lead to the use of Tab for indents. The use of Tab for Shell indents was recognized as a problem by 2005. #1196946 proposed using the same space indents as in the editor, resulting, for instance, in >>> if a: if b: print(a+b) KBK rejected this with "Doesn't really solve the problem." In 2010, #7676 suggested 4 space indents again and added 2 variations: 8 space indents, and 8 followed by 4, etc. OP Cherniavsky Beni noted that Tab indents are "inconsistent with PEP 8; what's worse, it's makes copy-paste code between the shell and editor windows confusing and dangerous!" Raymond Hettinger later added that tabs are a "major PITA" and "a continual source of frustration for students". Starting with msg151418 in 1212, my response was much the same as KBK's. To me, the underlying problem is having the prompt physically indent the first physical line relative to the following lines. I consider this IDLE's single biggest design wart. I proposed then 3 possible solutions to avoid the first line non-significant indent. 
They are, with current comments, ... 1. Prompt on a line by itself (vertical separation). This is easy, and allows for an expanded active prompt, such as >>> Enter a complete Python statement on the lines below. This says exactly what the user should do and should help avoid confusion with a command-line prompt. ("I entered 'cd path' and got 'SyntaxError'".) Once a statement is entered, the instruction is obsolete. Only '>>>' is needed, to identify input from output. I think putting '>>>' above past input works less well than for current input. I will come back to this proposal below, after 3. 2. No input prompt; instead mark output (with #, for instance). Possible, but confronting beginners with no prompt would likely leave them wondering what to do. But this is a possible savefile format, one that could be run or edited. (A savefile with only the code would do the same.) 3. Prompt in a margin, as with line numbers (horizontal separation). In 1214, I realized that the 'margin' should be implemented as a separate sidebar widget, which was initially being developed for editor line numbers. We now have that, and a shell sidebar should be fairly easy. I will open a separate issue with a fairly specific design. Basically, the first lines of input, stderr output, and stdout output would be flagged with, for instance, '>>>', 'err', and 'out'. This should be done before the additional proposal below. IDLE's Shell should isolate user input not only from prompts. Debug on/off messages, warnings, and delayed program output can also interfere. I think that IDLE's Shell should consist of an input and output history area with sidebar, fixed prompt and separator line such as in 1. above, and active input area. The history area, as now, would be read-only except when responding to input() in the running code. The change is that it would receive all messages from IDLE and asynchronous program output. The input area would be a specialized editor area. When input code is run, it would be copied above the separator prompt with '>>>' added to the sidebar, opposite the first line. I believe that the easiest implementation would be to use a label for the fixed prompt line and a specialized editor that runs statements as entered. The editing and smart indents would be the same as in a regular editor. Once this is done, we could discuss refinements such as allowing pasting of multiple statements. 
---------- assignee: terry.reedy components: IDLE messages: 350000 nosy: cheryl.sabella, rhettinger, taleinat, terry.reedy priority: normal severity: normal stage: needs patch status: open title: IDLE Shell: isolate user code input type: behavior versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 20 11:42:42 2019 From: report at bugs.python.org (Tim Peters) Date: Tue, 20 Aug 2019 15:42:42 +0000 Subject: [New-bugs-announce] [issue37893] pow() should disallow inverse when modulus is +-1 Message-ID: <1566315762.28.0.408407507817.issue37893@roundup.psfhosted.org> New submission from Tim Peters : For example, these should all raise ValueError instead: >>> pow(2, -1, 1) 0 >>> pow(1, -1, 1) 0 >>> pow(0, -1, 1) 0 >>> pow(2, -1, -1) 0 >>> pow(1, -1, -1) 0 >>> pow(0, -1, -1) 0 ---------- components: Library (Lib) messages: 350015 nosy: mark.dickinson, tim.peters priority: normal severity: normal stage: needs patch status: open title: pow() should disallow inverse when modulus is +-1 type: behavior versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 20 11:59:37 2019 From: report at bugs.python.org (Wator Sead) Date: Tue, 20 Aug 2019 15:59:37 +0000 Subject: [New-bugs-announce] [issue37894] [win] shutil.which can not find the path if 'cmd' include directory path and not include extension name Message-ID: <1566316777.4.0.882745227499.issue37894@roundup.psfhosted.org> New submission from Wator Sead : The current code is: ... if os.path.dirname(cmd): if _access_check(cmd, mode): return cmd return None ... In Windows, if 'cmd' include directory path and not include extension name, it return 'None'. e.g. a file's path is 'd:\dir\app.exe', call shutil.which with 'cmd=="d:\dir\app"'. How about this patch: ... if os.path.dirname(cmd): path, cmd = os.path.split(cmd) ... ---------- components: Library (Lib) messages: 350019 nosy: seahoh priority: normal severity: normal status: open title: [win] shutil.which can not find the path if 'cmd' include directory path and not include extension name type: behavior versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 20 14:26:19 2019 From: report at bugs.python.org (Gregory P. Smith) Date: Tue, 20 Aug 2019 18:26:19 +0000 Subject: [New-bugs-announce] [issue37895] test_logging hangs on an IPv6-only host Message-ID: <1566325579.16.0.877260560963.issue37895@roundup.psfhosted.org> New submission from Gregory P. Smith : test_logging hangs when run on an IPv6-only host. (127.0.0.1 isn't even available) test_listen_config_10_ok (test.test_logging.ConfigDictTest) ... 
Exception in thread Thread-3: Traceback (most recent call last): File "/home/greg/oss/cpython/Lib/threading.py", line 938, in _bootstrap_inner self.run() File "/home/greg/oss/cpython/Lib/logging/config.py", line 918, in run server = self.rcvr(port=self.port, handler=self.hdlr, File "/home/greg/oss/cpython/Lib/logging/config.py", line 885, in __init__ ThreadingTCPServer.__init__(self, (host, port), handler) File "/home/greg/oss/cpython/Lib/socketserver.py", line 452, in __init__ self.server_bind() File "/home/greg/oss/cpython/Lib/socketserver.py", line 466, in server_bind self.socket.bind(self.server_address) OSError: [Errno 99] Cannot assign requested address possibly more, but it's hung here. I'm preparing an IPv6-only buildbot. ---------- assignee: gregory.p.smith components: Tests messages: 350031 nosy: gregory.p.smith priority: normal severity: normal stage: needs patch status: open title: test_logging hangs on an IPv6-only host type: behavior versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 20 14:30:31 2019 From: report at bugs.python.org (Gregory P. Smith) Date: Tue, 20 Aug 2019 18:30:31 +0000 Subject: [New-bugs-announce] [issue37896] test_multiprocessing_fork hangs on an IPv6-only host Message-ID: <1566325831.71.0.21098480947.issue37896@roundup.psfhosted.org> New submission from Gregory P. Smith : It winds up stuck on a leftover process: test_import (test.test_multiprocessing_fork._TestImportStar) ... ok Warning -- Dangling processes: {} Which is likely related to one of the other numerous failure ERRORs further up in the log. I'm preparing an IPv6-only buildbot ---------- assignee: gregory.p.smith components: Tests messages: 350032 nosy: gregory.p.smith priority: normal severity: normal stage: needs patch status: open title: test_multiprocessing_fork hangs on an IPv6-only host type: behavior versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 20 14:32:15 2019 From: report at bugs.python.org (Gregory P. Smith) Date: Tue, 20 Aug 2019 18:32:15 +0000 Subject: [New-bugs-announce] [issue37897] test_asyncio hangs on an IPv6-only host Message-ID: <1566325935.2.0.450737551586.issue37897@roundup.psfhosted.org> New submission from Gregory P. Smith : test_drain_raises (test.test_asyncio.test_streams.StreamTests) ... Exception in thread Thread-20: Traceback (most recent call last): File "/home/greg/oss/cpython/Lib/threading.py", line 938, in _bootstrap_inner self.run() File "/home/greg/oss/cpython/Lib/threading.py", line 876, in run self._target(*self._args, **self._kwargs) File "/home/greg/oss/cpython/Lib/test/test_asyncio/test_streams.py", line 951, in server with socket.create_server(('localhost', 0)) as sock: File "/home/greg/oss/cpython/Lib/socket.py", line 804, in create_server raise error(err.errno, msg) from None OSError: [Errno 99] Cannot assign requested address (while attempting to bind on address ('localhost', 0)) I'm preparing an IPv6-only buildbot. 
(there is no IPv4 localhost) ---------- assignee: gregory.p.smith components: Tests messages: 350033 nosy: gregory.p.smith priority: normal severity: normal stage: needs patch status: open title: test_asyncio hangs on an IPv6-only host versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 20 14:33:34 2019 From: report at bugs.python.org (Gregory P. Smith) Date: Tue, 20 Aug 2019 18:33:34 +0000 Subject: [New-bugs-announce] [issue37898] test_httpservers hangs on an IPv6-only host Message-ID: <1566326014.38.0.490683386628.issue37898@roundup.psfhosted.org> New submission from Gregory P. Smith : test_err (test.test_httpservers.RequestHandlerLoggingTestCase) ... Exception in thread Thread-1: Traceback (most recent call last): File "/home/greg/oss/cpython/Lib/threading.py", line 938, in _bootstrap_inner self.run() File "/home/greg/oss/cpython/Lib/test/test_httpservers.py", line 50, in run self.server = HTTPServer(('localhost', 0), self.request_handler) File "/home/greg/oss/cpython/Lib/socketserver.py", line 452, in __init__ self.server_bind() File "/home/greg/oss/cpython/Lib/http/server.py", line 137, in server_bind socketserver.TCPServer.server_bind(self) File "/home/greg/oss/cpython/Lib/socketserver.py", line 466, in server_bind self.socket.bind(self.server_address) OSError: [Errno 99] Cannot assign requested address I'm preparing an IPv6-only buildbot. There is no IPv4 localhost. ---------- assignee: gregory.p.smith components: Tests messages: 350034 nosy: gregory.p.smith priority: normal severity: normal stage: needs patch status: open title: test_httpservers hangs on an IPv6-only host type: behavior versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 20 14:34:29 2019 From: report at bugs.python.org (Gregory P. Smith) Date: Tue, 20 Aug 2019 18:34:29 +0000 Subject: [New-bugs-announce] [issue37899] test_xmlrpc hangs on an IPv6-only host Message-ID: <1566326069.96.0.170799041894.issue37899@roundup.psfhosted.org> New submission from Gregory P. Smith : test_404 (test.test_xmlrpc.SimpleServerTestCase) ... Exception in thread Thread-1: Traceback (most recent call last): File "/home/greg/oss/cpython/Lib/threading.py", line 938, in _bootstrap_inner self.run() File "/home/greg/oss/cpython/Lib/threading.py", line 876, in run self._target(*self._args, **self._kwargs) File "/home/greg/oss/cpython/Lib/test/test_xmlrpc.py", line 619, in http_server serv.server_bind() File "/home/greg/oss/cpython/Lib/socketserver.py", line 466, in server_bind self.socket.bind(self.server_address) OSError: [Errno 99] Cannot assign requested address I'm preparing an IPv6-only buildbot. There is no IPv4 localhost. 
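A minimal sketch of the address-family issue (not the actual test code, and not a proposed patch): binding to whatever family 'localhost' actually resolves to, instead of hard-coding an IPv4 assumption, avoids the "Cannot assign requested address" error seen above on a host whose localhost is IPv6-only.
```python
import socket

def make_listener(host='localhost', port=0):
    # Resolve first, then create the socket with the returned family,
    # so the bind works whether localhost is 127.0.0.1 or ::1.
    family, type_, proto, _, addr = socket.getaddrinfo(
        host, port, type=socket.SOCK_STREAM)[0]
    sock = socket.socket(family, type_, proto)
    sock.bind(addr)
    sock.listen()
    return sock

with make_listener() as sock:
    print('listening on', sock.getsockname())
```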
---------- assignee: gregory.p.smith components: Tests messages: 350035 nosy: gregory.p.smith priority: normal severity: normal status: open title: test_xmlrpc hangs on an IPv6-only host versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 20 16:26:53 2019 From: report at bugs.python.org (Kevin Wojniak) Date: Tue, 20 Aug 2019 20:26:53 +0000 Subject: [New-bugs-announce] [issue37900] [urllib] proxy_bypass_registry doesn't handle invalid proxy override values Message-ID: <1566332813.63.0.906100778728.issue37900@roundup.psfhosted.org> New submission from Kevin Wojniak : proxy_bypass_registry() will split the ProxyOverride registry key by semicolon. Then for each value it uses that value as a regular expression pattern with match(). However, if this value is not a valid regular expression, then match() will throw an exception that goes uncaught. This then breaks the loop and prevents the function from working correctly on other valid input. It's easy to reproduce: 1. Set this registry key to 1 HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ProxyEnable 2. Set HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ProxyOverride to this value (create as a string if necessary): []-78; 3. Call urllib.proxy_bypass() My suggestion for a fix would be to catch exceptions from match() in the loop and continue the loop on error. ---------- components: Windows messages: 350038 nosy: kwojniak_box, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: [urllib] proxy_bypass_registry doesn't handle invalid proxy override values type: crash versions: Python 2.7, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 20 18:41:23 2019 From: report at bugs.python.org (Gregory P. Smith) Date: Tue, 20 Aug 2019 22:41:23 +0000 Subject: [New-bugs-announce] [issue37901] 21 tests fail when run on an IPv6-only host Message-ID: <1566340883.47.0.899841913634.issue37901@roundup.psfhosted.org> New submission from Gregory P. Smith : 21 tests failed: test_asynchat test_asyncore test_docxmlrpc test_eintr test_epoll test_ftplib test_httplib test_imaplib test_multiprocessing_forkserver test_multiprocessing_spawn test_nntplib test_os test_poplib test_robotparser test_smtplib test_socket test_ssl test_support test_telnetlib test_urllib2_localnet test_wsgiref This is a rollup tracking issue. I've got an IPv6-only future buildbot host with which to run and diagnose these for fixes. Of note there is no IPv4 localhost. If there are larger problems I may spawn child bugs for specific issues off of this one.
(I already filed separate issues about 5 other tests that hang rather than fail) ---------- assignee: gregory.p.smith components: Tests messages: 350039 nosy: gregory.p.smith priority: normal severity: normal stage: needs patch status: open title: 21 tests fail when run on an IPv6-only host type: behavior versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 20 21:54:40 2019 From: report at bugs.python.org (George Zhang) Date: Wed, 21 Aug 2019 01:54:40 +0000 Subject: [New-bugs-announce] [issue37902] Add scrolling for IDLE browsers Message-ID: <1566352480.43.0.51909437022.issue37902@roundup.psfhosted.org> New submission from George Zhang : I've just started using IDLE's module/path browsers and they offer a lot! Putting aside the issue of them opening in separate windows, they have a small change that could be made to improve them. Both browsers have scrollbars, but (for me at least) I cannot scroll using my mouse. I propose adding support for scrolling similar to the editor/shell windows. ---------- assignee: terry.reedy components: IDLE messages: 350043 nosy: GeeVye, terry.reedy priority: normal severity: normal status: open title: Add scrolling for IDLE browsers type: enhancement versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 21 02:47:13 2019 From: report at bugs.python.org (Terry J. Reedy) Date: Wed, 21 Aug 2019 06:47:13 +0000 Subject: [New-bugs-announce] [issue37903] IDLE Shell sidebar. Message-ID: <1566370033.98.0.615651298891.issue37903@roundup.psfhosted.org> New submission from Terry J. Reedy : This issue proposes to add a shell-specific sidebar to Shell. It follows up #17535, which added a line-number sidebar to module editors, and implements the first phase of #37892, which proposes that the single statement editing area of Shell should be strictly just that. Looking ahead to this issue, #17535 added both a generic Sidebar class and a specific LineNumber subclass. This issue proposes to add a 3 char wide ShellIO subclass that would mark the beginning of the blocks of text added to the Shell history area. Labels, such as '>>>', 'Out', 'Inp', and 'Err', would be used for the first line of user code input, program stdout, user response to input() (there is only one line), and program stderr (which includes tracebacks). I am not quite sure what to do with debug notices and warnings from the warnings module. Maybe use 'Con', maybe use 'Dbg' and 'Wrn'. I expect to test variations. As with LineNumber, the font face and size will be the same as in the text. The labels should use the highlight colors of their text block except that '>>>' should continue getting the 'console' colors. I think the initial implementation should not respond to clicks. After experimentation, we might decide that clicking on a label could select whole blocks, but I would like to get the essential stuff right first. ---------- messages: 350054 nosy: taleinat, terry.reedy priority: normal severity: normal stage: test needed status: open title: IDLE Shell sidebar. type: behavior versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 21 07:28:42 2019 From: report at bugs.python.org (Lawrence R.
Normie) Date: Wed, 21 Aug 2019 11:28:42 +0000 Subject: [New-bugs-announce] [issue37904] Suggested edit to Python Tutorial - Section 4 Message-ID: <1566386922.03.0.919785821856.issue37904@roundup.psfhosted.org> New submission from Lawrence R. Normie : Suggest amending the text of the first sentence in Section 4, from: "Besides the while statement just introduced, Python knows the usual control flow statements known from other languages, with some twists." to: Besides the while statement just introduced, Python follows the usual syntax fir control flow statements known from other languages, with some twists." ---------- assignee: docs at python components: Documentation messages: 350068 nosy: docs at python, lrnormie at gmail.com priority: normal severity: normal status: open title: Suggested edit to Python Tutorial - Section 4 type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 21 08:38:54 2019 From: report at bugs.python.org (Christoph Deil) Date: Wed, 21 Aug 2019 12:38:54 +0000 Subject: [New-bugs-announce] [issue37905] Remove NormalDist.overlap() or improve documentation? Message-ID: <1566391134.6.0.700853961099.issue37905@roundup.psfhosted.org> New submission from Christoph Deil : I saw that Python 3.8 will add a NormalDist class: https://docs.python.org/3.8/library/statistics.html#normaldist-objects Personally I don't see the value of adding this to the Python standard lib. The natural progression would be to extend and extend, but in the end only duplicate what already exists in scientific Python packages. But Ok, I guess this is not up for debate any more? I'd like to make a specific comment on NormalDist.overlap. The rest of NormalDist is very standard, but that method is an oddball. My suggestion is to remove it or to improve the documentation. Current docstring: https://github.com/python/cpython/blob/44f2c096804e8e3adc09400a59ef9c9ae843f339/Lib/statistics.py#L959-L991 And this docs example: https://github.com/python/cpython/commit/318d537daabf2bd5f781255c7e25bfce260cf227#diff-d436928bc44b5d7c40a8047840f55d35R620-R629 > What percentage of men and women will have the same height in `two normally distributed populations with known means and standard deviations `_? 50.3% This statement doesn't make sense to me. No two people have the exact same height, I think the answer to this question should be 0%. Using n = 100_000; sum(m > w for m, w in zip(men.samples(n), women.samples(n))) / n I see that for 82% of random (men, women) matches the man will be larger. That's another measure, but still, stating that 50% of men and women have the same height is confusing. Note that there is a multitude of PDF overlap measures different from this min(pdf1, pdf2) that I think are much more common in statistics and the physical sciences: - https://en.wikipedia.org/wiki/Hellinger_distance - https://arxiv.org/pdf/1407.7172.pdf And note that the references that are given currently are weird (basic statistics textbooks would be appropriate references IMO, or open references like Wikipedia) - slides: http://www.iceaaonline.com/ready/wp-content/uploads/2014/06/MM-9-Presentation-Meet-the-Overlapping-Coefficient-A-Measure-for-Elevator-Speeches.pdf - implementation code comment points to http://dx.doi.org/10.1080/03610928908830127 which is behind a paywall Why add this one overlap measure and expose it under the "overlap" method name? 
My suggestion would be to be conservative and to remove that method again, before releasing it in 3.8. A reference in the docs could be added to other existing third-party codes (e.g. scipy or the uncertainties package) with further functionality, such as being able to handle correlations or multi-dimensional distributions. For this change I'd be happy to send a PR any time. Raymond and others interested in this topic - thoughts? (note: I wrote a MultiNorm class prototype last year at https://github.com/cdeil/multinorm/blob/master/multinorm.py and now wanted to rewrite it and try to find a good API and thus was interested in this NormalDist class and what functionality it offers) ---------- components: Library (Lib) messages: 350076 nosy: Christoph.Deil, rhettinger priority: normal severity: normal status: open title: Remove NormalDist.overlap() or improve documentation? type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 21 08:45:23 2019 From: report at bugs.python.org (STINNER Victor) Date: Wed, 21 Aug 2019 12:45:23 +0000 Subject: [New-bugs-announce] [issue37906] FreeBSD: test_threading: test_recursion_limit() crash with SIGSEGV and create a coredump Message-ID: <1566391523.37.0.328762522638.issue37906@roundup.psfhosted.org> New submission from STINNER Victor : On my FreeBSD 12.0-RELEASE-p10 VM, test_threading.test_recursion_limit() does crash with SIGSEGV and create a coredump. vstinner at freebsd$ ./python -m test -v test_threading -m test_recursion_limit == CPython 3.9.0a0 (heads/master:e0b6117e27, Aug 21 2019, 12:23:28) [Clang 6.0.1 (tags/RELEASE_601/final 335540)] == FreeBSD-12.0-RELEASE-p10-amd64-64bit-ELF little-endian == cwd: /usr/home/vstinner/python/master/build/test_python_3547 == CPU count: 2 == encodings: locale=UTF-8, FS=utf-8 Run tests sequentially 0:00:01 load avg: 4.85 [1/1] test_threading test_recursion_limit (test.test_threading.ThreadingExceptionTests) ... 
FAIL ====================================================================== FAIL: test_recursion_limit (test.test_threading.ThreadingExceptionTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/home/vstinner/python/master/Lib/test/test_threading.py", line 1086, in test_recursion_limit self.assertEqual(p.returncode, 0, "Unexpected error: " + stderr.decode()) AssertionError: -11 != 0 : Unexpected error: ---------------------------------------------------------------------- Ran 1 test in 6.017s FAILED (failures=1) Warning -- files was modified by test_threading Before: [] After: ['python.core'] test test_threading failed test_threading failed == Tests result: FAILURE == 1 test failed: test_threading Total duration: 7 sec 284 ms Tests result: FAILURE ---------- components: Tests messages: 350079 nosy: vstinner priority: normal severity: normal status: open title: FreeBSD: test_threading: test_recursion_limit() crash with SIGSEGV and create a coredump versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 21 12:17:42 2019 From: report at bugs.python.org (Sergey Fedoseev) Date: Wed, 21 Aug 2019 16:17:42 +0000 Subject: [New-bugs-announce] [issue37907] speed-up PyLong_As*() for large longs Message-ID: <1566404262.5.0.0996699211051.issue37907@roundup.psfhosted.org> New submission from Sergey Fedoseev : PyLong_As*() functions computes result for large longs like this: size_t x, prev; x = 0; while (--i >= 0) { prev = x; x = (x << PyLong_SHIFT) | v->ob_digit[i]; if ((x >> PyLong_SHIFT) != prev) { *overflow = sign; goto exit; } } It can be rewritten like this: size_t x = 0; while (--i >= 0) { if (x > (size_t)-1 >> PyLong_SHIFT) { goto overflow; } x = (x << PyLong_SHIFT) | v->ob_digit[i]; } This provides some speed-up: PyLong_AsSsize_t() $ python -m perf timeit -s "from struct import Struct; N = 1000; pack = Struct('n'*N).pack; values = (2**30,)*N" "pack(*values)" --compare-to=../cpython-master/venv/bin/python /home/sergey/tmp/cpython-master/venv/bin/python: ..................... 9.69 us +- 0.02 us /home/sergey/tmp/cpython-dev/venv/bin/python: ..................... 8.61 us +- 0.07 us Mean +- std dev: [/home/sergey/tmp/cpython-master/venv/bin/python] 9.69 us +- 0.02 us -> [/home/sergey/tmp/cpython-dev/venv/bin/python] 8.61 us +- 0.07 us: 1.12x faster (-11%) PyLong_AsSize_t() $ python -m perf timeit -s "from struct import Struct; N = 1000; pack = Struct('N'*N).pack; values = (2**30,)*N" "pack(*values)" --compare-to=../cpython-master/venv/bin/python /home/sergey/tmp/cpython-master/venv/bin/python: ..................... 10.5 us +- 0.1 us /home/sergey/tmp/cpython-dev/venv/bin/python: ..................... 8.19 us +- 0.17 us Mean +- std dev: [/home/sergey/tmp/cpython-master/venv/bin/python] 10.5 us +- 0.1 us -> [/home/sergey/tmp/cpython-dev/venv/bin/python] 8.19 us +- 0.17 us: 1.29x faster (-22%) PyLong_AsLong() $ python -m perf timeit -s "from struct import Struct; N = 1000; pack = Struct('l'*N).pack; values = (2**30,)*N" "pack(*values)" --compare-to=../cpython-master/venv/bin/python /home/sergey/tmp/cpython-master/venv/bin/python: ..................... 9.68 us +- 0.02 us /home/sergey/tmp/cpython-dev/venv/bin/python: ..................... 
8.48 us +- 0.22 us Mean +- std dev: [/home/sergey/tmp/cpython-master/venv/bin/python] 9.68 us +- 0.02 us -> [/home/sergey/tmp/cpython-dev/venv/bin/python] 8.48 us +- 0.22 us: 1.14x faster (-12%) PyLong_AsUnsignedLong() $ python -m perf timeit -s "from struct import Struct; N = 1000; pack = Struct('L'*N).pack; values = (2**30,)*N" "pack(*values)" --compare-to=../cpython-master/venv/bin/python /home/sergey/tmp/cpython-master/venv/bin/python: ..................... 10.5 us +- 0.1 us /home/sergey/tmp/cpython-dev/venv/bin/python: ..................... 8.41 us +- 0.26 us Mean +- std dev: [/home/sergey/tmp/cpython-master/venv/bin/python] 10.5 us +- 0.1 us -> [/home/sergey/tmp/cpython-dev/venv/bin/python] 8.41 us +- 0.26 us: 1.25x faster (-20%) The mentioned pattern is also used in PyLong_AsLongLongAndOverflow(), but I left it untouched since the proposed change doesn't seem to affect its performance. ---------- components: Interpreter Core messages: 350091 nosy: sir-sigurd priority: normal severity: normal status: open title: speed-up PyLong_As*() for large longs type: performance versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 21 13:11:33 2019 From: report at bugs.python.org (hai shi) Date: Wed, 21 Aug 2019 17:11:33 +0000 Subject: [New-bugs-announce] [issue37908] Add some examples of ArgumentParser.exit() Message-ID: <1566407493.49.0.869927821626.issue37908@roundup.psfhosted.org> New submission from hai shi : As Paul said in bpo 9938: The exit and error methods are mentioned in the 3.4 documentation, but there are no examples of modifying them. 16.4.5.9. Exiting methods ArgumentParser.exit(status=0, message=None) ArgumentParser.error(message) I will update the examples this weekend. ---------- assignee: docs at python components: Documentation messages: 350097 nosy: docs at python, shihai1991 priority: normal severity: normal status: open title: Add some examples of ArgumentParser.exit() _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 21 16:55:03 2019 From: report at bugs.python.org (Tianshu Gao) Date: Wed, 21 Aug 2019 20:55:03 +0000 Subject: [New-bugs-announce] [issue37909] Thread pool return ref hold memory Message-ID: <1566420903.63.0.857548145971.issue37909@roundup.psfhosted.org> New submission from Tianshu Gao : This is very similar to issue35715, but this happens for threads. After the function in the thread has finished, the memory is still held and accumulates.
import asyncio import time import concurrent import threading loop = asyncio.get_event_loop() def prepare_a_giant_list(): m = [] for i in range(1000*1000): m.append("There's a fat fox jump over a sheep" + str(i)) th_num = threading.active_count() print("Thread number is {}".format(th_num)) return m @asyncio.coroutine def main(): global loop global counter async_executor = concurrent.futures.ThreadPoolExecutor(max_workers=20) loop.run_in_executor(async_executor, prepare_a_giant_list) time.sleep(15) loop.run_in_executor(async_executor, prepare_a_giant_list) time.sleep(15) loop.run_in_executor(async_executor, prepare_a_giant_list) time.sleep(15) loop.run_in_executor(async_executor, prepare_a_giant_list) time.sleep(15) if __name__ == "__main__": loop.run_until_complete(main()) loop.close() ---------- components: asyncio messages: 350108 nosy: Tianshu Gao, asvetlov, yselivanov priority: normal severity: normal status: open title: Thread pool return ref hold memory type: resource usage versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 21 18:08:19 2019 From: report at bugs.python.org (Sam Franklin) Date: Wed, 21 Aug 2019 22:08:19 +0000 Subject: [New-bugs-announce] [issue37910] argparse wrapping fails with metavar="" (no metavar) Message-ID: <1566425299.96.0.375120843288.issue37910@roundup.psfhosted.org> New submission from Sam Franklin : When argparse wraps the usage text, it can fail its assertion tests with whitespace differences. This can occur when metavar="", needed if a user wishes to avoid having a metavar print. It also could occur if a user specifies any other whitespace. Here's a minimum example (depending on $COLUMNS): import argparse # based on Vajrasky Kok's script in https://bugs.python.org/issue11874 parser = argparse.ArgumentParser(prog='PROG') parser.add_argument('--nil', metavar='', required=True) parser.add_argument('--a', metavar='a' * 165) parser.parse_args() This produces the AssertionError at the bottom of this comment. A solution is to have the two asserts ignore whitespace. I'll submit a pull request very shortly for this. (First time so happy for any comments or critiques!) 
A more extensive example: import argparse # based on Vajrasky Kok's script in https://bugs.python.org/issue11874 parser = argparse.ArgumentParser(prog='PROG') parser.add_argument('--nil', metavar='', required=True) parser.add_argument('--Line-Feed', metavar='\n', required=True) parser.add_argument('--Tab', metavar='\t', required=True) parser.add_argument('--Carriage-Return', metavar='\r', required=True) parser.add_argument('--Carriage-Return-and-Line-Feed', metavar='\r\n', required=True) parser.add_argument('--vLine-Tabulation', metavar='\v', required=True) parser.add_argument('--x0bLine-Tabulation', metavar='\x0b', required=True) parser.add_argument('--fForm-Feed', metavar='\f', required=True) parser.add_argument('--x0cForm-Feed', metavar='\x0c', required=True) parser.add_argument('--File-Separator', metavar='\x1c', required=True) parser.add_argument('--Group-Separator', metavar='\x1d', required=True) parser.add_argument('--Record-Separator', metavar='\x1e', required=True) parser.add_argument('--C1-Control-Code', metavar='\x85', required=True) parser.add_argument('--Line-Separator', metavar='\u2028', required=True) parser.add_argument('--Paragraph-Separator', metavar='\u2029', required=True) parser.add_argument('--a', metavar='a' * 165) parser.parse_args() This is related to https://bugs.python.org/issue17890 and https://bugs.python.org/issue32867. File "/minimum_argparse_bug.py", line 7, in parser.parse_args() File "/path/to/cpython/Lib/argparse.py", line 1758, in parse_args args, argv = self.parse_known_args(args, namespace) File "/path/to/cpython/Lib/argparse.py", line 1790, in parse_known_args namespace, args = self._parse_known_args(args, namespace) File "/path/to/cpython/Lib/argparse.py", line 1996, in _parse_known_args start_index = consume_optional(start_index) File "/path/to/cpython/Lib/argparse.py", line 1936, in consume_optional take_action(action, args, option_string) File "/path/to/cpython/Lib/argparse.py", line 1864, in take_action action(self, namespace, argument_values, option_string) File "/path/to/cpython/Lib/argparse.py", line 1037, in __call__ parser.print_help() File "/path/to/cpython/Lib/argparse.py", line 2483, in print_help self._print_message(self.format_help(), file) File "/path/to/cpython/Lib/argparse.py", line 2467, in format_help return formatter.format_help() File "/path/to/cpython/Lib/argparse.py", line 281, in format_help help = self._root_section.format_help() File "/path/to/cpython/Lib/argparse.py", line 212, in format_help item_help = join([func(*args) for func, args in self.items]) File "/path/to/cpython/Lib/argparse.py", line 212, in item_help = join([func(*args) for func, args in self.items]) File "/path/to/cpython/Lib/argparse.py", line 336, in _format_usage assert ' '.join(opt_parts) == opt_usage AssertionError ---------- components: Library (Lib) messages: 350117 nosy: sjfranklin priority: normal severity: normal status: open title: argparse wrapping fails with metavar="" (no metavar) type: crash versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 21 19:01:15 2019 From: report at bugs.python.org (Semyon) Date: Wed, 21 Aug 2019 23:01:15 +0000 Subject: [New-bugs-announce] [issue37911] Minor error in PEP567 code example Message-ID: <1566428475.33.0.0889947555242.issue37911@roundup.psfhosted.org> New submission from Semyon : In PEP-567 there is a code example in `contextvars.Context` section 
(https://www.python.org/dev/peps/pep-0567/#contextvars-context): ``` # Print all context variables and their values in 'ctx': print(ctx.items()) ``` But `ctx.items()` doesn't return a list of tuples as probably expected by this code; instead it returns an `items` object which, unlike `dict_items`, does not contain any sensible `repr` or `str`. So this print statement will output something like `<items object at 0x...>`. I think this code example should be changed to something like `print(list(ctx.items()))`. ---------- assignee: docs at python components: Documentation messages: 350129 nosy: MarSoft, docs at python priority: normal severity: normal status: open title: Minor error in PEP567 code example versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 21 21:02:04 2019 From: report at bugs.python.org (Seaky Lone) Date: Thu, 22 Aug 2019 01:02:04 +0000 Subject: [New-bugs-announce] [issue37912] fstring with quotation marks conflict Message-ID: <1566435724.71.0.382771248594.issue37912@roundup.psfhosted.org> New submission from Seaky Lone : For example, I have the following code: something = 'some thing' string = f'some text {something.split(' ')}' The second expression is thought to be invalid because it is regarded as two strings ['some text {something.split(', ')}']. I have to change either pair of '' to "" in order to make things work. I think this should not be an invalid expression because the ' ' or any other expressions inside the {} should be independent of the outer ''. ---------- components: Interpreter Core messages: 350147 nosy: Seaky Lone priority: normal severity: normal status: open title: fstring with quotation marks conflict versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 22 06:17:07 2019 From: report at bugs.python.org (Jeroen Demeyer) Date: Thu, 22 Aug 2019 10:17:07 +0000 Subject: [New-bugs-announce] [issue37913] Document that __length_hint__ may return NotImplemented Message-ID: <1566469027.85.0.30597516641.issue37913@roundup.psfhosted.org> New submission from Jeroen Demeyer : The special method __length_hint__ can return NotImplemented. In this case, the result is as if the __length_hint__ method didn't exist at all. This behaviour is implemented and tested but not documented. ---------- assignee: docs at python components: Documentation messages: 350180 nosy: docs at python, jdemeyer priority: normal severity: normal status: open title: Document that __length_hint__ may return NotImplemented type: enhancement versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 22 06:48:13 2019 From: report at bugs.python.org (Elinaldo Monteiro) Date: Thu, 22 Aug 2019 10:48:13 +0000 Subject: [New-bugs-announce] [issue37914] class timedelta, support the method hours and minutes in field accessors Message-ID: <1566470893.59.0.211009594568.issue37914@roundup.psfhosted.org> New submission from Elinaldo Monteiro : Hello, Does it make sense for these 2 methods to exist in class timedelta?
@property def hours(self): return self._seconds // 3600 @property def minutes(self): return self._seconds // 60 ---------- messages: 350182 nosy: elinaldosoft priority: normal severity: normal status: open title: class timedelta, support the method hours and minutes in field accessors type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 22 07:18:39 2019 From: report at bugs.python.org (Tom Augspurger) Date: Thu, 22 Aug 2019 11:18:39 +0000 Subject: [New-bugs-announce] [issue37915] Segfault in comparison between datetime.timezone.utc and pytz.utc Message-ID: <1566472719.5.0.285730751371.issue37915@roundup.psfhosted.org> New submission from Tom Augspurger : The following crashes with Python 3.8b3 ``` import sys import pytz import datetime print(sys.version_info) print(pytz.__version__) print(datetime.timezone.utc == pytz.utc) ``` When run with `-X faulthandler`, I see ``` sys.version_info(major=3, minor=8, micro=0, releaselevel='beta', serial=3) 2019.2 Fatal Python error: Segmentation fault Current thread 0x00000001138dc5c0 (most recent call first): File "foo.py", line 8 in <module> Segmentation fault: 11 ``` ---------- messages: 350184 nosy: tomaugspurger priority: normal severity: normal status: open title: Segfault in comparison between datetime.timezone.utc and pytz.utc type: crash versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 22 07:37:17 2019 From: report at bugs.python.org (=?utf-8?q?Jakub_Piotr_C=C5=82apa?=) Date: Thu, 22 Aug 2019 11:37:17 +0000 Subject: [New-bugs-announce] [issue37916] distutils: allow overriding of the RANLIB command on macOS (darwin) Message-ID: <1566473837.96.0.294748627211.issue37916@roundup.psfhosted.org> New submission from Jakub Piotr Cłapa : On macOS hosts the system ranlib does not understand ELF files, so using the "ranlib" command causes errors during cross-compilations. The simplest way to fix it is to pass the RANLIB parameter provided to setup.py through to the distutils compiler machinery. This is analogous to the way the C/C++ cross-compiler is configured. This change (in a GitHub PR) was required to proceed with cross-compiling numpy. It should help with other packages too (if they use distutils and need ranlib). ---------- components: Cross-Build messages: 350185 nosy: Alex.Willmer, Jakub Piotr Cłapa priority: normal severity: normal status: open title: distutils: allow overriding of the RANLIB command on macOS (darwin) type: compile error _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 22 11:50:18 2019 From: report at bugs.python.org (Michael Anckaert) Date: Thu, 22 Aug 2019 15:50:18 +0000 Subject: [New-bugs-announce] [issue37917] Warning regarding collections ABCs still present in 3.9 Message-ID: <1566489018.68.0.646926687827.issue37917@roundup.psfhosted.org> New submission from Michael Anckaert : When importing an ABC from the collections module in Python 3.9 there is a warning that this is deprecated since Python 3.3 and will stop working in Python 3.9. Should this warning be removed and the import lead to an ImportError? Python 3.9.0a0 (heads/master:a38e9d1399, Aug 22 2019, 17:48:16) [GCC 8.3.1 20190223 (Red Hat 8.3.1-2)] on linux Type "help", "copyright", "credits" or "license" for more information.
>>> from collections import Sequence <stdin>:1: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working ---------- components: Library (Lib) messages: 350202 nosy: michaelanckaert priority: normal severity: normal status: open title: Warning regarding collections ABCs still present in 3.9 type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 22 11:54:09 2019 From: report at bugs.python.org (Marco Sulla) Date: Thu, 22 Aug 2019 15:54:09 +0000 Subject: [New-bugs-announce] [issue37918] What about an enum for open() modes? Message-ID: <1566489249.72.0.0117548525062.issue37918@roundup.psfhosted.org> New submission from Marco Sulla : As title. I just created it: https://pastebin.com/pNYezw2V I think it could be useful to have a more descriptive way to declare a file open mode. Many languages have an enum for this. Maybe open(), os.fdopen(), os.popen() and pathlib.Path.open() could also just accept an OpenMode enum as the mode parameter, without the need to write OpenMode.append.value, for example, but just OpenMode.append. As an alternative, OpenMode could be a namedtuple. I don't know in which module it should be put. In `builtins`, `os` or `pathlib`? ---------- components: Library (Lib) messages: 350203 nosy: Marco Sulla priority: normal severity: normal status: open title: What about an enum for open() modes? versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 22 13:22:32 2019 From: report at bugs.python.org (Mark Sapiro) Date: Thu, 22 Aug 2019 17:22:32 +0000 Subject: [New-bugs-announce] [issue37919] nntplib throws spurious NNTPProtocolError Message-ID: <1566494552.99.0.566197119137.issue37919@roundup.psfhosted.org> New submission from Mark Sapiro : This is really due to an NNTP server bug, but here's the scenario. A connection is opened to the server. An article is posted via the connection's post() method. The server responds to the article data with 240 Article posted but due to the server bug, if the message-id is long, this response comes on two lines as 240 Article posted The post() method reads only the first line and returns it. Then the connection's quit() method (or some other method) is called, and it sees the second line of the prior response as the server's response rather than the actual response, and raises NNTPProtocolError. Arguably, NNTPProtocolError is appropriate in this scenario, but if so, it should be raised by the post() method and not by a subsequent method.
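A hedged client-side illustration (the host name and article contents are placeholders, and this is a workaround sketch, not a fix for nntplib itself): since the stray second line of the wrapped 240 reply only surfaces on the next command, the error can be tolerated when closing the connection.
```python
import nntplib

def post_and_close(host, article_lines):
    conn = nntplib.NNTP(host)
    try:
        # post() returns the first line of the server's response; a reply
        # wrapped onto two lines leaves its second line in the buffer.
        print(conn.post(article_lines))
    finally:
        try:
            conn.quit()
        except nntplib.NNTPProtocolError:
            # Most likely the leftover second line of the previous reply.
            pass
```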
---------- components: Library (Lib) messages: 350214 nosy: msapiro priority: normal severity: normal status: open title: nntplib throws spurious NNTPProtocolError versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 22 14:20:41 2019 From: report at bugs.python.org (Cameron Trando) Date: Thu, 22 Aug 2019 18:20:41 +0000 Subject: [New-bugs-announce] [issue37920] Support subscripting os.PathLike and make it valid at runtime Message-ID: <1566498041.58.0.919656436902.issue37920@roundup.psfhosted.org> New submission from Cameron Trando : Currently os.PathLike[str] causes a runtime error; however, typeshed sees it as valid and mypy does not throw any errors on it. mypy treats it as os.PathLike[AnyStr]. I already filed a bug on typeshed, see https://github.com/python/typeshed/issues/3202 And since mypy accepts it as valid, CPython should try to support it as well. ---------- messages: 350221 nosy: Cameron Trando priority: normal severity: normal status: open title: Support subscripting os.PathLike and make it valid at runtime type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 22 17:14:36 2019 From: report at bugs.python.org (Pierre-Jean Grenier) Date: Thu, 22 Aug 2019 21:14:36 +0000 Subject: [New-bugs-announce] [issue37921] Improve zipfile: add support for symlinks Message-ID: <1566508476.42.0.582332769766.issue37921@roundup.psfhosted.org> New submission from Pierre-Jean Grenier : The module tarfile contains some methods for knowing whether an archive member is a regular file/a directory/a symlink. Apart from an "is_dir()" method, there was nothing similar in the zipfile module. For an on-going project, I needed to know whether an archive member was a symlink or not, to prevent zip symlink attacks. I thought this could be of use to other people, given that I struggled a little to find a way of telling whether an archive member is a symlink or not. This is why I think adding support for symlinks in the zipfile module could be a good idea. ---------- components: Library (Lib) messages: 350231 nosy: zaphodef priority: normal severity: normal status: open title: Improve zipfile: add support for symlinks type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 22 18:46:07 2019 From: report at bugs.python.org (Leonard Truong) Date: Thu, 22 Aug 2019 22:46:07 +0000 Subject: [New-bugs-announce] [issue37922] inspect.getsource returns wrong class definition when multiple class definitions share the same name (but are defined in different scopes) Message-ID: <1566513967.26.0.232695535045.issue37922@roundup.psfhosted.org> New submission from Leonard Truong : Here's a case where `inspect.getsource` returns the wrong class definition when a file contains multiple class definitions with the same name. This pattern is valid runtime behavior when the class definitions are inside different scopes (e.g. a factory pattern where classes are defined and returned inside a function). ``` import inspect def foo0(): class Foo: x = 4 return Foo def foo1(): class Foo: x = 5 return Foo print(inspect.getsource(foo1())) print(foo1().x) print(foo0().x) ``` Running this file produces ``` ?
python inspect-getsource-issue.py class Foo: x = 4 5 4 ``` ---------- components: Library (Lib) files: inspect-getsource-issue.py messages: 350235 nosy: lennyt priority: normal severity: normal status: open title: inspect.getsource returns wrong class definition when multiple class definitions share the same name (but are defined in different scopes) versions: Python 3.7 Added file: https://bugs.python.org/file48557/inspect-getsource-issue.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 23 01:44:01 2019 From: report at bugs.python.org (dmontague) Date: Fri, 23 Aug 2019 05:44:01 +0000 Subject: [New-bugs-announce] [issue37923] Combining typing.get_type_hints and inspect.signature Message-ID: <1566539041.61.0.905400107157.issue37923@roundup.psfhosted.org> New submission from dmontague : I am trying to obtain the output of `inspect.signature`, except with string-valued annotations converted to resolved type hints, similarly to `typing.get_type_hints`. Is there a good way to do this currently? If not, might this be a good fit for the standard library? --------------- The straightforward way I see to attempt this would be to call both `typing.get_type_hints` and `inspect.signature` on the callable, and then "merge" the results. However, as far as I can tell, that only works if the provided callable is a function or method (i.e., not a type or a callable class instance). As an example, if I have an instance of a class with a `__call__` method, and the class was defined in a module with `from __future__ import annotations`, then calling `inspect.signature` on the instances will only return an `inspect.Signature` with type-unaware strings as the annotations for each `inspect.Parameter`. On the other hand, calling `get_type_hints` on the instance will return type hints for the class, rather than for its `__call__` method. I wouldn't mind manually handling an edge case or two, but the logic used by `inspect.signature` to determine which function will be called seems relatively involved, and so I would ideally like to leverage this same logic while obtaining the type-resolved version signature for the callable. 
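For the plain-function case, something along these lines seems to work (a sketch, not a stdlib API; `typed_signature` is a made-up name, and the callable-instance/__call__ unwrapping discussed above is not handled):
```python
import inspect
import typing

def typed_signature(func):
    sig = inspect.signature(func)
    hints = typing.get_type_hints(func)  # resolves string annotations
    params = [p.replace(annotation=hints.get(name, p.annotation))
              for name, p in sig.parameters.items()]
    return sig.replace(
        parameters=params,
        return_annotation=hints.get('return', sig.return_annotation))

def f(x: "int") -> "str":
    return str(x)

print(typed_signature(f))  # (x: int) -> str
```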
---------- messages: 350248 nosy: dmontague, levkivskyi priority: normal severity: normal status: open title: Combining typing.get_type_hints and inspect.signature type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 23 01:50:28 2019 From: report at bugs.python.org (=?utf-8?q?Miro_Hron=C4=8Dok?=) Date: Fri, 23 Aug 2019 05:50:28 +0000 Subject: [New-bugs-announce] [issue37924] Embedding Python in Another Application: Compiling under Unix misses the --embed flag Message-ID: <1566539428.8.0.420349967118.issue37924@roundup.psfhosted.org> New submission from Miro Hrončok : Based on changes in https://docs.python.org/3.8/whatsnew/3.8.html#debug-build-uses-the-same-abi-as-release-build I believe we should document the python3.8-config --embed flag in https://docs.python.org/3.8/extending/embedding.html#compiling-and-linking-under-unix-like-systems ---------- assignee: docs at python components: Documentation messages: 350249 nosy: docs at python, hroncok, vstinner priority: normal severity: normal status: open title: Embedding Python in Another Application: Compiling under Unix misses the --embed flag versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 23 01:52:58 2019 From: report at bugs.python.org (=?utf-8?q?Miro_Hron=C4=8Dok?=) Date: Fri, 23 Aug 2019 05:52:58 +0000 Subject: [New-bugs-announce] [issue37925] --embed not included in python3.8-config usage/--help Message-ID: <1566539578.53.0.511767507414.issue37925@roundup.psfhosted.org> New submission from Miro Hrončok : Based on changes in https://docs.python.org/3.8/whatsnew/3.8.html#debug-build-uses-the-same-abi-as-release-build I think that the usage string of python3.8-config should also contain --embed. Actual output: $ python3.8-config Usage: /usr/bin/python3.8-x86_64-config --prefix|--exec-prefix|--includes|--libs|--cflags|--ldflags|--extension-suffix|--help|--abiflags|--configdir Expected output: $ python3.8-config Usage: /usr/bin/python3.8-x86_64-config --prefix|--exec-prefix|--includes|--libs|--cflags|--ldflags|--embed|--extension-suffix|--help|--abiflags|--configdir ---------- components: Extension Modules messages: 350250 nosy: hroncok, vstinner priority: normal severity: normal status: open title: --embed not included in python3.8-config usage/--help versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 23 03:39:09 2019 From: report at bugs.python.org (=?utf-8?q?Miro_Hron=C4=8Dok?=) Date: Fri, 23 Aug 2019 07:39:09 +0000 Subject: [New-bugs-announce] [issue37926] regression: PySys_SetArgvEx(0, NULL, 0): SystemError: Python-3.8.0b3/Objects/unicodeobject.c:2089: bad argument to internal function Message-ID: <1566545949.45.0.201062291719.issue37926@roundup.psfhosted.org> New submission from Miro Hrončok : There is a regression between Python 3.7 and 3.8 when using PySys_SetArgvEx(0, NULL, 0).
Consider this example: #include <Python.h> int main() { Py_Initialize(); PySys_SetArgvEx(0, NULL, 0); /* HERE */ PyRun_SimpleString("from time import time,ctime\n" "print('Today is', ctime(time()))\n"); Py_FinalizeEx(); return 0; } This works in 3.7 but no longer does in 3.8: $ gcc $(python3.7-config --cflags --ldflags) example.c $ ./a.out Today is Fri Aug 23 07:59:52 2019 $ gcc $(python3.8-config --cflags --ldflags --embed) example.c $ ./a.out Fatal Python error: no mem for sys.argv SystemError: /builddir/build/BUILD/Python-3.8.0b3/Objects/unicodeobject.c:2089: bad argument to internal function Current thread 0x00007f12c78b9740 (most recent call first): Aborted (core dumped) The documentation https://docs.python.org/3/c-api/init.html#c.PySys_SetArgvEx explicitly mentions passing 0 to PySys_SetArgvEx: "if argc is 0..." So I guess this is not something you shouldn't do. ---------- components: Extension Modules messages: 350262 nosy: hroncok, vstinner priority: normal severity: normal status: open title: regression: PySys_SetArgvEx(0, NULL, 0): SystemError: Python-3.8.0b3/Objects/unicodeobject.c:2089: bad argument to internal function versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 23 03:51:55 2019 From: report at bugs.python.org (Talha Ahmed) Date: Fri, 23 Aug 2019 07:51:55 +0000 Subject: [New-bugs-announce] [issue37927] No Instantiation Restrictions for AbstractBaseClasses derived from builtin types Message-ID: <1566546715.82.0.869555987333.issue37927@roundup.psfhosted.org> New submission from Talha Ahmed : I have been experimenting a little with the `abc` module in Python. A la >>> import abc In the normal case you expect your ABC class to not be instantiated if it contains an unimplemented `abstractmethod`. You know, like as follows: >>> class MyClass(metaclass=abc.ABCMeta): ... @abc.abstractmethod ... def mymethod(self): ... return -1 ... >>> MyClass() Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: Can't instantiate abstract class MyClass with abstract methods mymethod OR for any derived class. It all seems to work fine until you inherit from something ... say `dict` or `list` as in the following: >>> class YourClass(list, metaclass=abc.ABCMeta): ... @abc.abstractmethod ... def yourmethod(self): ... return -1 ... >>> YourClass() [] This is inconsistent because `type` is supposed to be the primary factory. >>> type(abc.ABCMeta) <class 'type'> >>> type(list) <class 'type'> Why is the check for `__abstractmethods__` only implemented for `object_new` and not for other built-in types?
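For what it's worth, a user-level workaround seems possible (a sketch only, not a proposed CPython change; the class name is made up): give the class an explicit __new__ that repeats the __abstractmethods__ check that object.__new__ would otherwise perform.
```python
import abc

class AbstractList(list, metaclass=abc.ABCMeta):
    def __new__(cls, *args, **kwargs):
        # Mirror the check done by object.__new__, which list.__new__ skips.
        if getattr(cls, '__abstractmethods__', None):
            raise TypeError(
                f"Can't instantiate abstract class {cls.__name__} with "
                f"abstract methods {', '.join(sorted(cls.__abstractmethods__))}")
        return super().__new__(cls)

    @abc.abstractmethod
    def yourmethod(self):
        return -1

try:
    AbstractList()
except TypeError as exc:
    print(exc)  # Can't instantiate abstract class AbstractList ...
```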
---------- components: Interpreter Core messages: 350264 nosy: Talha Ahmed priority: normal severity: normal status: open title: No Instantiation Restrictions for AbstractBaseClasses derived from builtin types type: behavior versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 23 06:41:28 2019 From: report at bugs.python.org (STINNER Victor) Date: Fri, 23 Aug 2019 10:41:28 +0000 Subject: [New-bugs-announce] [issue37928] test_glob.test_selflink() fails randomly on AMD64 Fedora Rawhide Clang Installed 3.x Message-ID: <1566556888.02.0.20425235487.issue37928@roundup.psfhosted.org> New submission from STINNER Victor : The test failed and then passed when re-run in verbose mode: https://buildbot.python.org/all/#/builders/188/builds/858 FAIL: test_selflink (test.test_glob.SymlinkLoopGlobTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/buildarea/3.x.cstratak-fedora.installed/build/target/lib/python3.9/test/test_glob.py", line 311, in test_selflink self.assertIn(path, results) AssertionError: 'dir/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/' not found in {'dir/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/', 'dir/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/', 'dir/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/', 'dir/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/', 'dir/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/', 'dir/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/', 'dir/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/', 'dir/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/', 'dir/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/', 'dir/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/', 'dir/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/', 'dir/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/', 
'dir/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/', 'dir/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/', 'dir/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/link/'} ---------- components: Tests messages: 350273 nosy: vstinner priority: normal severity: normal status: open title: test_glob.test_selflink() fails randomly on AMD64 Fedora Rawhide Clang Installed 3.x versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 23 09:27:44 2019 From: report at bugs.python.org (Terry J. Reedy) Date: Fri, 23 Aug 2019 13:27:44 +0000 Subject: [New-bugs-announce] [issue37929] IDLE: OK sometimes fails to close configdialog Message-ID: <1566566864.85.0.308464800085.issue37929@roundup.psfhosted.org> New submission from Terry J. Reedy : If one opens configdialog when there is no Shell and then hits OK, the dialog fails to close. The following Squeezer-related traceback appears in associated console when there is one. Exception in Tkinter callback Traceback (most recent call last): File "F:\dev\3x\lib\tkinter\__init__.py", line 1885, in __call__ return self.func(*args) File "F:\dev\3x\lib\idlelib\configdialog.py", line 172, in ok self.apply() File "F:\dev\3x\lib\idlelib\configdialog.py", line 186, in apply self.activate_config_changes() File "F:\dev\3x\lib\idlelib\configdialog.py", line 240, in activate_config_changes klass.reload() File "F:\dev\3x\lib\idlelib\squeezer.py", line 222, in reload instance.load_font() File "F:\dev\3x\lib\idlelib\squeezer.py", line 318, in load_font Font(text, font=text.cget('font')).measure('0') File "F:\dev\3x\lib\idlelib\delegator.py", line 10, in __getattr__ attr = getattr(self.delegate, name) # May raise AttributeError AttributeError: 'NoneType' object has no attribute 'cget' Either the squeezer instance should be destroyed and removed along with Shell or, if keeping it is intentional, it should be removed from the update list and reinstated or if not doing that is intentional, it must either check 'if text is not None' before the access or catch the attribute error after. Tal, which? ---------- assignee: terry.reedy components: IDLE messages: 350281 nosy: taleinat, terry.reedy priority: normal severity: normal stage: needs patch status: open title: IDLE: OK sometimes fails to close configdialog type: behavior versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 23 09:49:47 2019 From: report at bugs.python.org (Oguz_eren) Date: Fri, 23 Aug 2019 13:49:47 +0000 Subject: [New-bugs-announce] [issue37930] make fails when compiling Python 2.6 from source (posixmodule.c) Message-ID: <1566568187.35.0.524066802113.issue37930@roundup.psfhosted.org> New submission from Oguz_eren : I'm trying to compile Python 2.6.8 on Kubuntu 19.04. Current system version is Python 2.7.16. See below the error message I receive, during make. oguz at dikanka:~$ tar -xvzf Python-2.6.8.tgz oguz at dikanka:~$ cd Python-2.6.8/ oguz at dikanka:~/Python-2.6.8$ ./configure ? ? oguz at dikanka:~/Python-2.6.8$ make ? ? 
gcc -pthread -Xlinker -export-dynamic -o python \ Modules/python.o \ libpython2.6.a -lpthread -ldl -lutil -lm /usr/bin/ld: libpython2.6.a(posixmodule.o): in function `posix_tmpnam': /home/oguz/Python-2.6.8/./Modules/posixmodule.c:7261: warning: the use of `tmpnam_r' is dangerous, better use `mkstemp' /usr/bin/ld: libpython2.6.a(posixmodule.o): in function `posix_tempnam': /home/oguz/Python-2.6.8/./Modules/posixmodule.c:7216: warning: the use of `tempnam' is dangerous, better use `mkstemp' Segmentation fault (core dumped) make: *** [Makefile:414: sharedmods] Error 139 I found this Python bug to be very similar to my case, but it's a rather old bug: https://bugs.python.org/issue535545 Here it says posixmodule fails because resource.h is not detected properly by configure. Similarly, in my case the pyconfig.h file's HAVE_SYS_RESOURCE_H line is commented out. I also realized that my C library headers are not under /usr/include/sys and /usr/include/bits. Instead, I can find them under the following locations: oguz at dikanka:/usr/include/linux$ find /usr/include -name resource.h /usr/include/asm-generic/resource.h /usr/include/linux/resource.h /usr/include/x86_64-linux-gnu/asm/resource.h /usr/include/x86_64-linux-gnu/bits/resource.h /usr/include/x86_64-linux-gnu/sys/resource.h I tried symlinks to x86_64-linux-gnu/sys and x86_64-linux-gnu/bits, but the issue persists. Any ideas? Thanks in advance! ---------- components: Installation messages: 350283 nosy: Oguz_eren priority: normal severity: normal status: open title: make fails when compiling Python 2.6 from source (posixmodule.c) type: compile error _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 23 11:39:47 2019 From: report at bugs.python.org (Benoit Hudson) Date: Fri, 23 Aug 2019 15:39:47 +0000 Subject: [New-bugs-announce] [issue37931] crash reimporting posix after Py_Finalize on mac Message-ID: <1566574787.75.0.458129578699.issue37931@roundup.psfhosted.org> New submission from Benoit Hudson : On OSX, with --enable-shared, if you shut down python and reinitialize, you can get a crash. Repro steps: see attached crashy.c gcc -I. -I ../Include/ -L. crashy.c -lpython3.9 ./a.out On OSX, with --enable-shared, you get a crash! On other platforms, including OSX static builds, there is no crash. Not a regression: it's present in 2.7 (where we actually hit it). PR incoming, the fix is simple. It has to do with caching (on OSX shared builds only) a stale value for the environment when building os.environ the first time; if you change the environment, the second initialization reads garbage when building os.environ. ---------- components: Interpreter Core, macOS files: crashy.c messages: 350309 nosy: Benoit Hudson, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: crash reimporting posix after Py_Finalize on mac type: crash versions: Python 3.9 Added file: https://bugs.python.org/file48559/crashy.c _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 23 16:58:17 2019 From: report at bugs.python.org (Sean Robertson) Date: Fri, 23 Aug 2019 20:58:17 +0000 Subject: [New-bugs-announce] [issue37932] ConfigParser.items(section) with allow_no_value returns empty strings Message-ID: <1566593897.26.0.561388114417.issue37932@roundup.psfhosted.org> New submission from Sean Robertson : Hello and thanks for reading.
I've also experienced this in Python 3.7, and I believe it has been around since 3.2. The allow_no_value option of ConfigParser allows None values when (de)serializing configuration files. Yet calling the "items(section)" method of a ConfigParser returns empty strings for values instead of None. See below for an MRE. Thanks, Sean Python 3.6.8 (default, Jan 14 2019, 11:02:34) [GCC 8.0.1 20180414 (experimental) [trunk revision 259383]] on linux Type "help", "copyright", "credits" or "license" for more information. >>> from configparser import ConfigParser >>> a = ConfigParser(allow_no_value=True) >>> a.add_section('foo') >>> a.set('foo', 'bar') >>> a.items('foo') [('bar', '')] >>> a.items('foo', raw=True) [('bar', None)] >>> a.get('foo', 'bar') >>> list(a['foo'].items()) [('bar', None)] ---------- components: Library (Lib) messages: 350331 nosy: Sean Robertson priority: normal severity: normal status: open title: ConfigParser.items(section) with allow_no_value returns empty strings type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 23 17:23:01 2019 From: report at bugs.python.org (Thomas Caswell) Date: Fri, 23 Aug 2019 21:23:01 +0000 Subject: [New-bugs-announce] [issue37933] faulthandler causes segfaults Message-ID: <1566595381.91.0.0185169812386.issue37933@roundup.psfhosted.org> New submission from Thomas Caswell : Changing faulthandler to only allocate its stack when used causes python to segfault with: import faulthandler; faulthandler.cancel_dump_traceback_later() https://bugs.python.org/issue37851 https://github.com/python/cpython/pull/15358 ---------- components: Library (Lib) messages: 350332 nosy: tcaswell, vstinner priority: normal severity: normal status: open title: faulthandler causes segfaults type: crash versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 23 17:49:06 2019 From: report at bugs.python.org (Kyle Stanley) Date: Fri, 23 Aug 2019 21:49:06 +0000 Subject: [New-bugs-announce] [issue37934] Docs: Clarify NotImplemented use cases Message-ID: <1566596946.3.0.57144717386.issue37934@roundup.psfhosted.org> New submission from Kyle Stanley : In the documentation for the NotImplemented constant (https://docs.python.org/3/library/constants.html#NotImplemented), the only use case mentioned is for binary special methods (such as object.__eq__(other)). However, based on a conversation in a recent PR (https://github.com/python/cpython/pull/15327#discussion_r316561140), there seem to be several other use cases as well. It's quite useful when you want to express that a particular operation is not supported, without raising an error. Expanding upon the use cases will give readers an idea of other cases in which the constant could be useful. Also, the current wording could potentially come across as implying that the _only_ use case for the constant is for binary special methods, and as far as I'm aware, that is not the case.
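For illustration, one pattern along these lines is a handler chain where each handler can return NotImplemented to say "I don't support this input, try the next one" instead of raising. This is only a sketch of the idea described above (the handler names are made up for the example), not code from the linked PR:

def parse_int(value):
    # Decline input we don't support, without raising.
    if not isinstance(value, str) or not value.isdigit():
        return NotImplemented
    return int(value)

def parse_bool(value):
    if value not in ("true", "false"):
        return NotImplemented
    return value == "true"

def parse(value, handlers=(parse_int, parse_bool)):
    # Ask each handler in turn; NotImplemented means "not my job", not "error".
    for handler in handlers:
        result = handler(value)
        if result is not NotImplemented:
            return result
    raise ValueError(f"no handler accepted {value!r}")

print(parse("42"))    # 42
print(parse("true"))  # True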
---------- assignee: docs at python components: Documentation keywords: easy messages: 350333 nosy: aeros167, docs at python, eric.araujo, ezio.melotti, mdk, willingc priority: normal severity: normal stage: needs patch status: open title: Docs: Clarify NotImplemented use cases type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 24 02:31:41 2019 From: report at bugs.python.org (Shai) Date: Sat, 24 Aug 2019 06:31:41 +0000 Subject: [New-bugs-announce] [issue37935] Improve performance of pathlib.scandir() Message-ID: <1566628301.14.0.386871182456.issue37935@roundup.psfhosted.org> New submission from Shai : I recently took a look at the source code of the pathlib module, and I saw something weird there: when the module used the scandir function, it first converted its iterator into a list and then used it in a for loop. The list wasn't used anywhere else, so I think the conversion to a list is just a waste of performance. In addition, I noticed that the scandir iterator is never closed (it's not used in a with statement and its close method isn't called). I know that the iterator is closed automatically when it's garbage collected, but according to the docs, it's advisable to close it explicitly. I've created a pull request that fixes these issues: PR 15331 In the PR, I changed the code so the scandir iterator is used directly instead of being converted into a list, and I wrapped its usage in a with statement to close resources properly. ---------- components: Library (Lib) messages: 350354 nosy: Shai priority: normal pull_requests: 15142 severity: normal status: open title: Improve performance of pathlib.scandir() type: performance versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 24 03:15:12 2019 From: report at bugs.python.org (Greg Price) Date: Sat, 24 Aug 2019 07:15:12 +0000 Subject: [New-bugs-announce] [issue37936] gitignore file is too broad Message-ID: <1566630912.92.0.00349300289224.issue37936@roundup.psfhosted.org> New submission from Greg Price : There are a number of files that we track in the repo, but are nevertheless covered by `.gitignore`. This *mostly* doesn't change anything, because Git itself only cares what `.gitignore` has to say about files that aren't already tracked. But: * It affects any new files someone might add that are covered by the same unintentionally-broad patterns. In that case it'd be likely to cause some confused debugging into why Git wasn't seeing the file; or possibly loss of work, if the person didn't notice that the file had never been committed to Git. * More immediately, other tools that aren't Git but consult the Git ignore rules don't necessarily implement this wrinkle. In particular this is unfortunately a WONTFIX bug in ripgrep / `rg`: https://github.com/BurntSushi/ripgrep/issues/1127 . I learned of the `rg` bug (and, for that matter, refreshed myself on just how Git itself handles this case) after some confusion today where I was looking with `rg` for references to a given macro, thought I'd looked at all of them... and then later noticed through `git log -p -S` a reference in `PC/pyconfig.h` with no subsequent change deleting it. Turned out it was indeed there and I needed to take account of it.
Here's the list of affected files: $ git ls-files -i --exclude-standard .gitignore Doc/Makefile Lib/test/data/README Modules/Setup PC/pyconfig.h Tools/freeze/test/Makefile Tools/msi/core/core.wixproj Tools/msi/core/core.wxs Tools/msi/core/core_d.wixproj Tools/msi/core/core_d.wxs Tools/msi/core/core_en-US.wxl Tools/msi/core/core_files.wxs Tools/msi/core/core_pdb.wixproj Tools/msi/core/core_pdb.wxs Tools/unicode/Makefile Fortunately this is not hard to fix. The semantics of `.gitignore` have a couple of gotchas, but once you know them it's not really any more complicated to get the behavior exactly right. And I've previously spent the hour or two to read up on it... and when I forget, I just consult my own short notes :), at the top of this file: https://github.com/zulip/zulip/blob/master/.gitignore I have a minimal fix which takes care of all the files above. I'll post that shortly, and I may also write up a more thorough fix that tries to make it easy not to fall into the same Git pitfall again. ---------- components: Build messages: 350355 nosy: Greg Price priority: normal severity: normal status: open title: gitignore file is too broad versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 24 06:00:35 2019 From: report at bugs.python.org (Ram Rachum) Date: Sat, 24 Aug 2019 10:00:35 +0000 Subject: [New-bugs-announce] [issue37937] Mention ``frame.f_trace`` in :func:`sys.settrace` docs. Message-ID: <1566640835.5.0.307491692572.issue37937@roundup.psfhosted.org> Change by Ram Rachum : ---------- assignee: docs at python components: Documentation nosy: cool-RR, docs at python priority: normal pull_requests: 15149 severity: normal status: open title: Mention ``frame.f_trace`` in :func:`sys.settrace` docs. type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 24 07:28:50 2019 From: report at bugs.python.org (Sergey Fedoseev) Date: Sat, 24 Aug 2019 11:28:50 +0000 Subject: [New-bugs-announce] [issue37938] refactor PyLong_As*() functions Message-ID: <1566646130.69.0.631400652637.issue37938@roundup.psfhosted.org> New submission from Sergey Fedoseev : PyLong_As*() functions have a lot of duplicated code; I'm creating a draft PR with an attempt to deduplicate them. ---------- components: Interpreter Core messages: 350367 nosy: sir-sigurd priority: normal severity: normal status: open title: refactor PyLong_As*() functions _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 24 10:54:52 2019 From: report at bugs.python.org (Yugi) Date: Sat, 24 Aug 2019 14:54:52 +0000 Subject: [New-bugs-announce] [issue37939] os.path.normpath change some characters of a path into kinda 'hex number' Message-ID: <1566658492.51.0.598061763854.issue37939@roundup.psfhosted.org> New submission from Yugi : I was trying to handle paths so they work with both '/' and '\' but when I tried to run the code like they said on: https://stackoverflow.com/questions/3167154/how-to-split-a-dos-path-into-its-components-in-python I have no idea why the terminal on my PC doesn't have the same output that everybody was discussing at the time the questions and answers were posted. OS: ubuntu 16.04 LTS, Intel Core i5-7500, 16GB/1TB, Intel HD Graphics 630 python version: 3.5.2 I borrowed a mac pro 2015 to check if it had the same output as my PC, but it did not.
my friend has python 3.7.1 installed and the output is: ['d:\\stuff\\morestuff\\furtherdown\\THEFILE.txt'] (on my PC, it is: ['d:\\stuff\\morestuff\x0curtherdown\\THEFILE.txt']). I'm totally new to Python and I'm very sorry if this issue is already reported. Thank you! ---------- files: Screenshot from 2019-08-24 21-41-52.png messages: 350371 nosy: Yugi priority: normal severity: normal status: open title: os.path.normpath change some characters of a path into kinda 'hex number' type: behavior versions: Python 3.5 Added file: https://bugs.python.org/file48560/Screenshot from 2019-08-24 21-41-52.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 24 10:55:33 2019 From: report at bugs.python.org (Karthikeyan Singaravelan) Date: Sat, 24 Aug 2019 14:55:33 +0000 Subject: [New-bugs-announce] [issue37940] Add xml.tool to pretty print XML like json.tool Message-ID: <1566658533.41.0.722111821042.issue37940@roundup.psfhosted.org> New submission from Karthikeyan Singaravelan : Now that XML has pretty print option with issue14465 would it be handy to add a command line tool pretty printer similar to json.tool? This can be written as one-liner similar to json pretty printing but I think it's a good option and having a command line tool also helps in piping the output to other commands like filtering particular tags. I tried searching mailing list and couldn't find any discussions along these lines. There were some concerns around using external tools and in https://bugs.python.org/issue14465#msg324098 . I thought to open this to gather feedback. Branch : https://github.com/tirkarthi/cpython/tree/bpo14465-xml-tool python -m xml.tool /tmp/person.xml Idly Dosa # Get all breakfast tags python -m xml.tool /tmp/person.xml | grep breakfast Idly Dosa ---------- components: Library (Lib) messages: 350372 nosy: eli.bendersky, rhettinger, scoder, serhiy.storchaka, xtreak priority: normal severity: normal status: open title: Add xml.tool to pretty print XML like json.tool type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 24 13:59:15 2019 From: report at bugs.python.org (Julian Berman) Date: Sat, 24 Aug 2019 17:59:15 +0000 Subject: [New-bugs-announce] [issue37941] python -m and runpy.run_module set different __name__ by default Message-ID: <1566669555.03.0.62423685183.issue37941@roundup.psfhosted.org> New submission from Julian Berman : This seems brutally simple, to the point where I'm concerned I'm missing something (or have seen this issue filed elsewhere but can't find it), but `python -m` and `runpy.run_module` don't set the same __name__ -- specifically `runpy.run_module`, when given a non-package, defaults to setting __name__ to `mod_name`. So, given package/foo.py, with the "common" `if __name__ == "__main__":` check at the bottom, `python -m package.foo` successfully executes, but `runpy.run_module("package.foo")` exits silently, unless explicitly passed `runpy.run_module("package.foo", run_name="__main__"). [n.b. pep517.{build,check} is a specific example of such a module that advertises itself as wanting to be executed via `python -m`] issue16737 seems related but not exactly the same from what I can tell. 
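A small sketch of the difference described above, reusing the hypothetical package/foo.py from the report (the module is assumed to end with the usual `if __name__ == "__main__":` block):

import runpy

# Equivalent to `python -m package.foo`: the module runs with __name__ set
# to "__main__", so its __main__ block executes.
runpy.run_module("package.foo", run_name="__main__")

# Default behaviour: __name__ is set to "package.foo", so the __main__ block
# never runs and the call appears to do nothing.
runpy.run_module("package.foo")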
---------- messages: 350387 nosy: Julian priority: normal severity: normal status: open title: python -m and runpy.run_module set different __name__ by default _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 24 15:09:51 2019 From: report at bugs.python.org (Raymond Hettinger) Date: Sat, 24 Aug 2019 19:09:51 +0000 Subject: [New-bugs-announce] [issue37942] Generate correct error check for PyFloat_AsDouble Message-ID: <1566673791.31.0.298012277207.issue37942@roundup.psfhosted.org> New submission from Raymond Hettinger : The API for PyFloat_AsDouble() returns -1.0 to indicate an error. PyErr_Occurred() should only be called if there is a -1.0 return code. This is the normal practice for those calls and it is a bit faster because it avoids an unnecessary external call. ---------- components: Argument Clinic messages: 350395 nosy: larry, rhettinger priority: normal severity: normal status: open title: Generate correct error check for PyFloat_AsDouble type: behavior versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 24 19:06:17 2019 From: report at bugs.python.org (Jens Troeger) Date: Sat, 24 Aug 2019 23:06:17 +0000 Subject: [New-bugs-announce] [issue37943] mimetypes.guess_extension() doesn't get JPG right Message-ID: <1566687977.17.0.676940180845.issue37943@roundup.psfhosted.org> New submission from Jens Troeger : I think this one's quite easy to reproduce: Python 3.7.4 (default, Jul 11 2019, 01:08:00) [Clang 10.0.1 (clang-1001.0.46.4)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import mimetypes >>> mimetypes.guess_extension("image/jpg") # Expected ".jpg" >>> mimetypes.guess_extension("image/jpeg") # Expected ".jpg" '.jpe' According to MDN https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/MIME_types/Complete_list_of_MIME_types only "image/jpeg" is a valid MIME type; however, I've seen quite a bit of "image/jpg" out in the wild and I think that ought to be accounted for too. Before I look into submitting a PR I wanted to confirm that this is an issue that ought to be fixed. I think it is.
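Until the default mapping changes, a hedged workaround is to register the non-standard type yourself before guessing; add_type() is an existing mimetypes API, and the mapping added here is an application-level choice rather than current stdlib behaviour:

import mimetypes

# "image/jpg" is not in the default map, so guess_extension() returns None for it.
print(mimetypes.guess_extension("image/jpg"))   # None

# Teach the module about the non-standard but widely seen "image/jpg".
mimetypes.add_type("image/jpg", ".jpg")
print(mimetypes.guess_extension("image/jpg"))   # '.jpg'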
---------- components: Library (Lib) messages: 350408 nosy: _savage priority: normal severity: normal status: open title: mimetypes.guess_extension() doesn't get JPG right type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 25 09:41:13 2019 From: report at bugs.python.org (Origami Tobiichi) Date: Sun, 25 Aug 2019 13:41:13 +0000 Subject: [New-bugs-announce] [issue37944] About json.load(s Message-ID: <1566740473.44.0.553990225097.issue37944@roundup.psfhosted.org> Change by Origami Tobiichi : ---------- components: Extension Modules nosy: Origami Tobiichi priority: normal severity: normal status: open title: About json.load(s type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 25 13:33:21 2019 From: report at bugs.python.org (Tim Golden) Date: Sun, 25 Aug 2019 17:33:21 +0000 Subject: [New-bugs-announce] [issue37945] test_locale failing Message-ID: <1566754401.97.0.888593384986.issue37945@roundup.psfhosted.org> New submission from Tim Golden : On a Win10 machine I'm consistently seeing test_locale (and test__locale) fail. I'll attach pythoninfo. ====================================================================== ERROR: test_getsetlocale_issue1813 (test.test_locale.TestMiscellaneous) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Users\tim\work-in-progress\cpython\lib\test\test_locale.py", line 531, in test_getsetlocale_issue1813 locale.setlocale(locale.LC_CTYPE, loc) File "C:\Users\tim\work-in-progress\cpython\lib\locale.py", line 604, in setlocale return _setlocale(category, locale) locale.Error: unsupported locale setting ---------- assignee: tim.golden components: Library (Lib) files: pythoninfo.txt messages: 350466 nosy: tim.golden priority: normal severity: normal status: open title: test_locale failing type: behavior versions: Python 3.9 Added file: https://bugs.python.org/file48561/pythoninfo.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 25 22:29:39 2019 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Mon, 26 Aug 2019 02:29:39 +0000 Subject: [New-bugs-announce] [issue37946] Add the Bessel functions of the first and second kind to the math module Message-ID: <1566786579.57.0.437396852549.issue37946@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : After repeatedly having to add 3rd party libraries only for these functions, or having to implement them myself quick and dirty based on numerical integration, I suggest adding the Bessel functions of the first and second kind to the math module. These functions tend to appear a lot (though not exclusively) when evaluating systems defined on cylindrical geometries, or under certain restrictions or approximations (like the approximate solution to Kepler's equation as a truncated Fourier sine series), and many other special functions can be described as series involving them. Based on the fact that many libc implementations include them, I think the cost-benefit of exposing them when available is acceptable.
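As a rough illustration of the "quick and dirty" numerical-integration approach mentioned above, here is a minimal sketch based on the standard integral representation of J_n for integer n (a real math-module implementation would of course use a proper algorithm, or the jn()/yn() functions many libcs already provide):

import math

def bessel_j(n, x, steps=10_000):
    # J_n(x) = (1/pi) * integral from 0 to pi of cos(n*t - x*sin(t)) dt,
    # approximated with a simple midpoint rule; accuracy depends on `steps`.
    h = math.pi / steps
    total = 0.0
    for k in range(steps):
        t = (k + 0.5) * h
        total += math.cos(n * t - x * math.sin(t))
    return total / steps

print(bessel_j(0, 1.0))  # ~0.7652, compare J0(1)
print(bessel_j(1, 1.0))  # ~0.4401, compare J1(1)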
---------- components: Extension Modules messages: 350477 nosy: mark.dickinson, pablogsal, rhettinger priority: normal severity: normal status: open title: Add the Bessel functions of the first and second kind to the math module type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Aug 25 23:09:10 2019 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Mon, 26 Aug 2019 03:09:10 +0000 Subject: [New-bugs-announce] [issue37947] symtable_handle_namedexpr does not adjust correctly the recursion level Message-ID: <1566788950.92.0.897432675692.issue37947@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : The symtable_handle_namedexpr function does not correctly adjust the recursion level when exiting (also, it does not actually return anything even though it is defined as a non-void function, but the return value is not used, so this is not technically undefined behavior). ---------- components: Interpreter Core messages: 350478 nosy: pablogsal priority: normal severity: normal status: open title: symtable_handle_namedexpr does not adjust correctly the recursion level versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 26 02:29:12 2019 From: report at bugs.python.org (Arne Recknagel) Date: Mon, 26 Aug 2019 06:29:12 +0000 Subject: [New-bugs-announce] [issue37948] get_type_hints fails if there are un-annotated fields in a dataclass Message-ID: <1566800952.19.0.982220665774.issue37948@roundup.psfhosted.org> New submission from Arne Recknagel : When declaring a dataclass with make_dataclass, it is valid to omit type information for fields.
__annotations__ understands it and just adds typing.Any, but typing.get_type_hints fails with a cryptic error message: >>> import dataclasses >>> import typing >>> A = dataclasses.make_dataclass('A', ['a_var']) >>> A.__annotations__ {'a_var': 'typing.Any'} >>> typing.get_type_hints(A) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/user/venvs/python_3.7/lib/python3.7/typing.py", line 973, in get_type_hints value = _eval_type(value, base_globals, localns) File "/user/venvs/python_3.7/lib/python3.7/typing.py", line 260, in _eval_type return t._evaluate(globalns, localns) File "/user/venvs/python_3.7/lib/python3.7/typing.py", line 464, in _evaluate eval(self.__forward_code__, globalns, localns), File "<string>", line 1, in <module> NameError: name 'typing' is not defined Adding typing.Any explicitly is an obvious workaround: >>> B = dataclasses.make_dataclass('B', [('a_var', typing.Any)]) >>> typing.get_type_hints(B) {'a_var': typing.Any} There is already a bug filed regarding dataclasses and get_type_hints which might be related: https://bugs.python.org/issue34776 ---------- messages: 350488 nosy: arne, eric.smith priority: normal severity: normal status: open title: get_type_hints fails if there are un-annotated fields in a dataclass type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 26 02:39:48 2019 From: report at bugs.python.org (Raymond Hettinger) Date: Mon, 26 Aug 2019 06:39:48 +0000 Subject: [New-bugs-announce] [issue37949] Create empty __annotations__ dictionaries lazily Message-ID: <1566801588.69.0.745320126697.issue37949@roundup.psfhosted.org> New submission from Raymond Hettinger : Right now, every heap function is given a new annotations dictionary even if annotations aren't used. How about we create these lazily in order to speed-up function creation time, improve start-up time, and save space. >>> def f(): pass >>> def g(): return 1 >>> f.__annotations__ {} >>> g.__annotations__ {} >>> hex(id(f.__annotations__)) '0x109207e40' >>> hex(id(g.__annotations__)) '0x1092296c0' ---------- components: Interpreter Core messages: 350490 nosy: rhettinger priority: low severity: normal status: open title: Create empty __annotations__ dictionaries lazily type: performance versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 26 03:35:54 2019 From: report at bugs.python.org (Serhiy Storchaka) Date: Mon, 26 Aug 2019 07:35:54 +0000 Subject: [New-bugs-announce] [issue37950] ast.dump() with incomplete node Message-ID: <1566804954.58.0.654310469231.issue37950@roundup.psfhosted.org> New submission from Serhiy Storchaka : There are several issues in ast.dump() with incompletely initialized nodes. Some fields and attributes of AST nodes are optional, but creating an AST node without them leads ast.dump() to fail or to produce an incorrect result. 1. With annotate_fields=False ast.dump() outputs subnodes as positional arguments of the node constructor.
>>> ast.dump(ast.Raise(exc=ast.Name(id='a', ctx=ast.Load()), cause=ast.Name(id='b', ctx=ast.Load())), annotate_fields=False) "Raise(Name('a', Load()), Name('b', Load()))" But if miss the optional exc field it outputs incorrect output: >>> ast.dump(ast.Raise(cause=ast.Name(id='a', ctx=ast.Load())), annotate_fields=False) "Raise(Name('a', Load()))" which is not distinguished from the case when you pass only the exc field and miss the cause field (both are optional): >>> ast.dump(ast.Raise(exc=ast.Name(id='a', ctx=ast.Load())), annotate_fields=False) "Raise(Name('a', Load()))" 2. The documentation of ast.dump() says that its result with annotate_fields=True is impossible to evaluate, but this is not true, because keyword arguments are supported by AST node constructors. 3. Attributes end_lineno and end_col_offset are optional, but if you miss them ast.dump() with include_attributes=True will fail: >>> ast.dump(ast.Raise(lineno=3, col_offset=4), include_attributes=True) Traceback (most recent call last): File "", line 1, in File "/home/serhiy/py/cpython3.8/Lib/ast.py", line 126, in dump return _format(node) File "/home/serhiy/py/cpython3.8/Lib/ast.py", line 118, in _format rv += ', '.join('%s=%s' % (a, _format(getattr(node, a))) File "/home/serhiy/py/cpython3.8/Lib/ast.py", line 118, in rv += ', '.join('%s=%s' % (a, _format(getattr(node, a))) AttributeError: 'Raise' object has no attribute 'end_lineno' 4. Even if you specify all attributes, the output looks weird if you do not specify any field (note a space after "("): >>> ast.dump(ast.Raise(lineno=3, col_offset=4, end_lineno=3, end_col_offset=24), include_attributes=True) 'Raise( lineno=3, col_offset=4, end_lineno=3, end_col_offset=24)' ---------- assignee: serhiy.storchaka components: Library (Lib) messages: 350506 nosy: serhiy.storchaka priority: normal severity: normal status: open title: ast.dump() with incomplete node type: behavior versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 26 04:42:32 2019 From: report at bugs.python.org (Christian Heimes) Date: Mon, 26 Aug 2019 08:42:32 +0000 Subject: [New-bugs-announce] [issue37951] Disallow fork in a subinterpreter broke subprocesses in mod_wsgi daemon mode Message-ID: <1566808952.7.0.710040972846.issue37951@roundup.psfhosted.org> New submission from Christian Heimes : BPO https://bugs.python.org/issue34651 disabled fork in subinterpreters. The patch also disabled fork() in _posixsubprocess.fork_exec(). This broke the ability to spawn subprocesses in mod_wsgi daemons, which use subinterpreters. Any attempt to spawn (fork + exec) a subprocess fails with "RuntimeError: fork not supported for subinterpreters": ... 
File "/usr/lib64/python3.8/subprocess.py", line 829, in __init__ self._execute_child(args, executable, preexec_fn, close_fds, File "/usr/lib64/python3.8/subprocess.py", line 1608, in _execute_child self.pid = _posixsubprocess.fork_exec( RuntimeError: fork not supported for subinterpreters Also see https://bugzilla.redhat.com/show_bug.cgi?id=1745450 ---------- components: Extension Modules, Interpreter Core keywords: 3.8regression messages: 350511 nosy: christian.heimes, eric.snow, vstinner priority: critical severity: normal status: open title: Disallow fork in a subinterpreter broke subprocesses in mod_wsgi daemon mode versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 26 04:49:04 2019 From: report at bugs.python.org (Christer Weinigel) Date: Mon, 26 Aug 2019 08:49:04 +0000 Subject: [New-bugs-announce] [issue37952] Add support for export_keying_material to SSL library Message-ID: <1566809344.46.0.817861457106.issue37952@roundup.psfhosted.org> New submission from Christer Weinigel : Add support for the export_keying_material function to the SSL library. Tested with Python 3.7.4 and Python master branch: https://github.com/wingel/cpython/tree/export_keying_material-3.7.4 https://github.com/wingel/cpython/tree/export_keying_material-master Is this the correct format for a patch? Should I include the automatically generated clinic changes in my patch or not? What about the "versionadded::" string in the documentation? Should I include a line like that or does it only generate unneccessary conflicts? Anything else I need to do? ---------- assignee: christian.heimes components: SSL messages: 350512 nosy: christian.heimes, wingel71 priority: normal severity: normal status: open title: Add support for export_keying_material to SSL library type: enhancement versions: Python 3.7, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 26 10:06:04 2019 From: report at bugs.python.org (Dominic Littlewood) Date: Mon, 26 Aug 2019 14:06:04 +0000 Subject: [New-bugs-announce] [issue37953] Fix ForwardRef equality checks Message-ID: <1566828364.46.0.753751170409.issue37953@roundup.psfhosted.org> New submission from Dominic Littlewood <11dlittlewood at gmail.com>: Apologies for issuing a pull request without an associated issue. I'm kind of new to this. Nevermind, I'm making one now. The typing module currently contains a bug where ForwardRefs change their hash and equality once they are evaluated. Consider the following code: import typing ref = typing.ForwardRef('MyClass') ref_ = typing.ForwardRef('MyClass') class MyClass: def __add__(self, other: ref): ... # We evaluate one forward reference, but not the other. typing.get_type_hints(MyClass.__add__) # Equality is violated print(ref == ref_) # False # This can cause duplication in Unions. 
# The following prints: # typing.Union[ForwardRef('MyClass'), ForwardRef('MyClass')] # when it should be: typing.Union[ForwardRef('MyClass')] wrong = typing.Union[ref, ref_] print(wrong) # The union also does not compare equality properly should_be_equal = typing.Union[ref] print(should_be_equal == wrong) # False # In fact this applies to any generic print(typing.Callable[[ref],None] == typing.Callable[[ref_],None]) # False ---------- components: Library (Lib) messages: 350531 nosy: plokmijnuhby priority: normal severity: normal status: open title: Fix ForwardRef equality checks type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 26 10:30:13 2019 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Mon, 26 Aug 2019 14:30:13 +0000 Subject: [New-bugs-announce] [issue37954] Multiple tests are leaking references in AMD64 Windows8.1 Refleaks 3.x and x86 Gentoo Refleaks 3.x buildbots Message-ID: <1566829813.13.0.292904797891.issue37954@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : BUILDBOT FAILURE REPORT ======================= Builder name: AMD64 Windows8.1 Refleaks 3.x Builder url: https://buildbot.python.org/all/#/builders/80/ Build url: https://buildbot.python.org/all/#/builders/80/builds/683 Failed tests ------------ Test leaking resources ---------------------- - test_tools is leaking references - test_exceptions is leaking references - test_compile is leaking references - test_trace is leaking references - test_dis is leaking references - test_builtin is leaking references - test_syntax is leaking references - test_named_expressions is leaking memory blocks - test_syntax is leaking memory blocks - test_symtable is leaking references - test_typing is leaking references - test_future is leaking references - test_global is leaking references - test_opcodes is leaking references - test_grammar is leaking references - test_scope is leaking references - test_named_expressions is leaking references - test_ast is leaking references - test_ast is leaking memory blocks Build summary ------------- == Tests result: FAILURE then FAILURE == 373 tests OK. 
10 slowest tests: - test_multiprocessing_spawn: 25 min 17 sec - test_asyncio: 19 min 45 sec - test_mailbox: 15 min 38 sec - test_distutils: 14 min 8 sec - test_concurrent_futures: 13 min 58 sec - test_zipfile: 9 min 36 sec - test_venv: 8 min 8 sec - test_compileall: 6 min 37 sec - test_lib2to3: 6 min 11 sec - test_regrtest: 5 min 43 sec 16 tests failed: test_ast test_builtin test_compile test_dis test_exceptions test_future test_global test_grammar test_named_expressions test_opcodes test_scope test_symtable test_syntax test_tools test_trace test_typing 30 tests skipped: test_curses test_dbm_gnu test_dbm_ndbm test_devpoll test_epoll test_fcntl test_fork1 test_gdb test_grp test_ioctl test_kqueue test_multiprocessing_fork test_multiprocessing_forkserver test_nis test_openpty test_ossaudiodev test_pipes test_poll test_posix test_pty test_pwd test_readline test_resource test_spwd test_syslog test_threadsignals test_wait3 test_wait4 test_xxtestfuzz test_zipfile64 20 re-run tests: test_ast test_asyncgen test_builtin test_compile test_compileall test_dis test_exceptions test_future test_global test_grammar test_named_expressions test_opcodes test_scope test_symtable test_syntax test_tarfile test_threading test_tools test_trace test_typing Total duration: 1 hour 21 min Tracebacks ---------- ```traceback Traceback (most recent call last): File "D:\buildarea\3.x.ware-win81-release.refleak\build\lib\asyncio\proactor_events.py", line 765, in _loop_self_reading f.result() # may raise File "D:\buildarea\3.x.ware-win81-release.refleak\build\lib\asyncio\windows_events.py", line 808, in _poll value = callback(transferred, key, ov) File "D:\buildarea\3.x.ware-win81-release.refleak\build\lib\asyncio\windows_events.py", line 457, in finish_recv raise ConnectionResetError(*exc.args) ConnectionResetError: [WinError 995] The I/O operation has been aborted because of either a thread exit or an application request .Cancelling an overlapped future failed future: <_OverlappedFuture pending overlapped= cb=[BaseProactorEventLoop._loop_self_reading()]> Traceback (most recent call last): File "D:\buildarea\3.x.ware-win81-release.refleak\build\lib\asyncio\windows_events.py", line 66, in _cancel_overlapped self._ov.cancel() OSError: [WinError 6] The handle is invalid Cancelling an overlapped future failed future: <_OverlappedFuture pending overlapped= cb=[BaseProactorEventLoop._loop_self_reading()]> Traceback (most recent call last): File "D:\buildarea\3.x.ware-win81-release.refleak\build\lib\asyncio\windows_events.py", line 66, in _cancel_overlapped self._ov.cancel() OSError: [WinError 6] The handle is invalid Error on reading from the event loop self pipe loop: Traceback (most recent call last): File "D:\buildarea\3.x.ware-win81-release.refleak\build\lib\asyncio\windows_events.py", line 453, in finish_recv return ov.getresult() OSError: [WinError 995] The I/O operation has been aborted because of either a thread exit or an application request Traceback (most recent call last): File "D:\buildarea\3.x.ware-win81-release.refleak\build\lib\asyncio\proactor_events.py", line 765, in _loop_self_reading f.result() # may raise File "D:\buildarea\3.x.ware-win81-release.refleak\build\lib\asyncio\windows_events.py", line 808, in _poll value = callback(transferred, key, ov) File "D:\buildarea\3.x.ware-win81-release.refleak\build\lib\asyncio\windows_events.py", line 457, in finish_recv raise ConnectionResetError(*exc.args) ConnectionResetError: [WinError 995] The I/O operation has been aborted because of either a thread exit or an 
application request .Cancelling an overlapped future failed future: <_OverlappedFuture pending overlapped= cb=[BaseProactorEventLoop._loop_self_reading()]> Traceback (most recent call last): File "D:\buildarea\3.x.ware-win81-release.refleak\build\lib\asyncio\windows_events.py", line 66, in _cancel_overlapped self._ov.cancel() OSError: [WinError 6] The handle is invalid Cancelling an overlapped future failed future: <_OverlappedFuture pending overlapped= cb=[BaseProactorEventLoop._loop_self_reading()]> Traceback (most recent call last): File "D:\buildarea\3.x.ware-win81-release.refleak\build\lib\asyncio\windows_events.py", line 66, in _cancel_overlapped self._ov.cancel() OSError: [WinError 6] The handle is invalid Error on reading from the event loop self pipe loop: Traceback (most recent call last): File "D:\buildarea\3.x.ware-win81-release.refleak\build\lib\asyncio\windows_events.py", line 453, in finish_recv return ov.getresult() OSError: [WinError 995] The I/O operation has been aborted because of either a thread exit or an application request Traceback (most recent call last): File "D:\buildarea\3.x.ware-win81-release.refleak\build\lib\asyncio\proactor_events.py", line 765, in _loop_self_reading f.result() # may raise File "D:\buildarea\3.x.ware-win81-release.refleak\build\lib\asyncio\windows_events.py", line 808, in _poll value = callback(transferred, key, ov) File "D:\buildarea\3.x.ware-win81-release.refleak\build\lib\asyncio\windows_events.py", line 457, in finish_recv raise ConnectionResetError(*exc.args) ConnectionResetError: [WinError 995] The I/O operation has been aborted because of either a thread exit or an application request . Traceback (most recent call last): File "D:\buildarea\3.x.ware-win81-release.refleak\build\lib\asyncio\proactor_events.py", line 765, in _loop_self_reading f.result() # may raise File "D:\buildarea\3.x.ware-win81-release.refleak\build\lib\asyncio\windows_events.py", line 808, in _poll value = callback(transferred, key, ov) File "D:\buildarea\3.x.ware-win81-release.refleak\build\lib\asyncio\windows_events.py", line 457, in finish_recv raise ConnectionResetError(*exc.args) ConnectionResetError: [WinError 995] The I/O operation has been aborted because of either a thread exit or an application request .Cancelling an overlapped future failed future: <_OverlappedFuture pending overlapped= cb=[BaseProactorEventLoop._loop_self_reading()]> Traceback (most recent call last): File "D:\buildarea\3.x.ware-win81-release.refleak\build\lib\asyncio\windows_events.py", line 66, in _cancel_overlapped self._ov.cancel() OSError: [WinError 6] The handle is invalid Cancelling an overlapped future failed future: <_OverlappedFuture pending overlapped= cb=[BaseProactorEventLoop._loop_self_reading()]> Traceback (most recent call last): File "D:\buildarea\3.x.ware-win81-release.refleak\build\lib\asyncio\windows_events.py", line 66, in _cancel_overlapped self._ov.cancel() OSError: [WinError 6] The handle is invalid Error on reading from the event loop self pipe loop: Traceback (most recent call last): File "D:\buildarea\3.x.ware-win81-release.refleak\build\lib\asyncio\windows_events.py", line 453, in finish_recv return ov.getresult() OSError: [WinError 995] The I/O operation has been aborted because of either a thread exit or an application request Traceback (most recent call last): File "D:\buildarea\3.x.ware-win81-release.refleak\build\lib\asyncio\proactor_events.py", line 765, in _loop_self_reading f.result() # may raise File 
"D:\buildarea\3.x.ware-win81-release.refleak\build\lib\asyncio\windows_events.py", line 808, in _poll value = callback(transferred, key, ov) File "D:\buildarea\3.x.ware-win81-release.refleak\build\lib\asyncio\windows_events.py", line 457, in finish_recv raise ConnectionResetError(*exc.args) ConnectionResetError: [WinError 995] The I/O operation has been aborted because of either a thread exit or an application request .Cancelling an overlapped future failed future: <_OverlappedFuture pending overlapped= cb=[BaseProactorEventLoop._loop_self_reading()]> Traceback (most recent call last): File "D:\buildarea\3.x.ware-win81-release.refleak\build\lib\asyncio\windows_events.py", line 66, in _cancel_overlapped self._ov.cancel() OSError: [WinError 6] The handle is invalid Cancelling an overlapped future failed future: <_OverlappedFuture pending overlapped= cb=[BaseProactorEventLoop._loop_self_reading()]> Traceback (most recent call last): File "D:\buildarea\3.x.ware-win81-release.refleak\build\lib\asyncio\windows_events.py", line 66, in _cancel_overlapped self._ov.cancel() OSError: [WinError 6] The handle is invalid Error on reading from the event loop self pipe loop: Traceback (most recent call last): File "D:\buildarea\3.x.ware-win81-release.refleak\build\lib\asyncio\windows_events.py", line 453, in finish_recv return ov.getresult() OSError: [WinError 995] The I/O operation has been aborted because of either a thread exit or an application request Traceback (most recent call last): File "D:\buildarea\3.x.ware-win81-release.refleak\build\lib\asyncio\windows_events.py", line 66, in _cancel_overlapped self._ov.cancel() OSError: [WinError 6] The handle is invalid Cancelling an overlapped future failed future: <_OverlappedFuture pending overlapped= cb=[BaseProactorEventLoop._loop_self_reading()]> Traceback (most recent call last): File "D:\buildarea\3.x.ware-win81-release.refleak\build\lib\asyncio\windows_events.py", line 66, in _cancel_overlapped self._ov.cancel() OSError: [WinError 6] The handle is invalid Error on reading from the event loop self pipe loop: Traceback (most recent call last): File "D:\buildarea\3.x.ware-win81-release.refleak\build\lib\asyncio\windows_events.py", line 453, in finish_recv return ov.getresult() OSError: [WinError 995] The I/O operation has been aborted because of either a thread exit or an application request Traceback (most recent call last): File "D:\buildarea\3.x.ware-win81-release.refleak\build\lib\asyncio\proactor_events.py", line 765, in _loop_self_reading f.result() # may raise File "D:\buildarea\3.x.ware-win81-release.refleak\build\lib\asyncio\windows_events.py", line 808, in _poll value = callback(transferred, key, ov) File "D:\buildarea\3.x.ware-win81-release.refleak\build\lib\asyncio\windows_events.py", line 457, in finish_recv raise ConnectionResetError(*exc.args) ConnectionResetError: [WinError 995] The I/O operation has been aborted because of either a thread exit or an application request .Cancelling an overlapped future failed future: <_OverlappedFuture pending overlapped= cb=[BaseProactorEventLoop._loop_self_reading()]> Traceback (most recent call last): File "D:\buildarea\3.x.ware-win81-release.refleak\build\lib\asyncio\windows_events.py", line 66, in _cancel_overlapped self._ov.cancel() OSError: [WinError 6] The handle is invalid Cancelling an overlapped future failed future: <_OverlappedFuture pending overlapped= cb=[BaseProactorEventLoop._loop_self_reading()]> Traceback (most recent call last): File 
"D:\buildarea\3.x.ware-win81-release.refleak\build\lib\asyncio\windows_events.py", line 66, in _cancel_overlapped self._ov.cancel() OSError: [WinError 6] The handle is invalid Error on reading from the event loop self pipe loop: Traceback (most recent call last): File "D:\buildarea\3.x.ware-win81-release.refleak\build\lib\asyncio\windows_events.py", line 453, in finish_recv return ov.getresult() OSError: [WinError 995] The I/O operation has been aborted because of either a thread exit or an application request ``` Current builder status ---------------------- The builder is failing currently Commits ------- Other builds with similar failures ---------------------------------- - https://buildbot.python.org/all/#/builders/1/builds/694 - https://buildbot.python.org/all/#/builders/224/builds/70 - https://buildbot.python.org/all/#/builders/223/builds/84 ---------- messages: 350533 nosy: lukasz.langa, pablogsal priority: release blocker severity: normal status: open title: Multiple tests are leaking references in AMD64 Windows8.1 Refleaks 3.x and x86 Gentoo Refleaks 3.x buildbots versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 26 11:11:48 2019 From: report at bugs.python.org (Paulo Henrique Silva) Date: Mon, 26 Aug 2019 15:11:48 +0000 Subject: [New-bugs-announce] [issue37955] mock.patch incorrect reference to Mock Message-ID: <1566832308.08.0.460509709565.issue37955@roundup.psfhosted.org> New submission from Paulo Henrique Silva : When explaining the usage of keyword arguments on mock.patch: ``` patch() takes arbitrary keyword arguments. These will be passed to the Mock (or new_callable) on construction. ``` default new_callable is MagicMock and it should be mentioned here instead of Mock (even tough MagickMock inherits from it). ---------- assignee: docs at python components: Documentation messages: 350537 nosy: docs at python, phsilva priority: normal severity: normal status: open title: mock.patch incorrect reference to Mock versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 26 12:25:17 2019 From: report at bugs.python.org (mael arnaud) Date: Mon, 26 Aug 2019 16:25:17 +0000 Subject: [New-bugs-announce] [issue37956] UUID authorize version 6+ with variant RFC 4122 Message-ID: <1566836717.73.0.582050404986.issue37956@roundup.psfhosted.org> New submission from mael arnaud : The docs stipulates: UUID.version The UUID version number (1 through 5, meaningful only when the variant is RFC_4122). But you can absolutely do: >>> uuid.UUID("25cdb2ef-3764-bb01-9b17-433defc74464") which yields: >>> uuid.UUID("25cdb2ef-3764-bb01-9b17-433defc74464").variant 'specified in RFC 4122' >>> uuid.UUID("25cdb2ef-3764-bb01-9b17-433defc74464").version 11 Since versions above 5 are not defined in the RFC 4122, is it relevant to allow such UUIDs to exist? Every other tool on the internet seem to reject them (but I guess they are regex based). At the very least, maybe the docs should mention something about it, since I'm not sure anyone expects a UUID to be valid and have a version above 5? 
---------- components: Library (Lib) messages: 350546 nosy: Erawpalassalg priority: normal severity: normal status: open title: UUID authorize version 6+ with variant RFC 4122 type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Aug 26 20:03:54 2019 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Tue, 27 Aug 2019 00:03:54 +0000 Subject: [New-bugs-announce] [issue37957] Allow regrtest to receive a file with test (and subtests) to ignore Message-ID: <1566864234.54.0.166621143796.issue37957@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : When building Python on some uncommon platforms (I am looking at you, Solaris and AIX) there are some known tests that will fail. Right now, regrtest has the ability to ignore entire tests using the -x option and to receive a filter file using the --matchfile option. The problem with the --matchfile option is that it receives a file with patterns to accept, and when you want to ignore a couple of tests and subtests it is too cumbersome to list ALL the tests that are not the ones you want to accept. The problem with -x is that it is not easy to ignore just the subtests that fail; the whole test needs to be ignored. So I suggest adding a new command line option similar to --matchfile but the other way around. Another possibility is allowing to reverse the meaning of the matchfile argument, but I find that a bit more confusing.
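To make the proposed semantics concrete, a minimal sketch of how an "ignore file" could be applied to a list of test names (a hypothetical helper for illustration only, not the actual regrtest implementation, and the pattern in the comment is made up):

import fnmatch

def apply_ignore_file(test_names, ignore_path):
    # Drop every test whose dotted name matches a pattern from the ignore file.
    with open(ignore_path) as f:
        patterns = [line.strip() for line in f
                    if line.strip() and not line.startswith("#")]
    return [name for name in test_names
            if not any(fnmatch.fnmatch(name, pattern) for pattern in patterns)]

# An ignore file containing a line like "test_os.*.test_copy_file_range" would
# keep the rest of test_os runnable while skipping just that subtest.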
---------- components: Extension Modules messages: 350591 nosy: Daniel Olshansky, pablogsal priority: normal pull_requests: 15216 severity: normal status: open title: Adding get_profile_dict to pstats type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 27 02:59:15 2019 From: report at bugs.python.org (STINNER Victor) Date: Tue, 27 Aug 2019 06:59:15 +0000 Subject: [New-bugs-announce] [issue37959] test_os.test_copy_file_range() killed by SIGSYS (12) on FreeBSD CURRENT buildbot Message-ID: <1566889155.63.0.559202876423.issue37959@roundup.psfhosted.org> New submission from STINNER Victor : https://buildbot.python.org/all/#/builders/168/builds/1356 test_nop (test.test_os.FSEncodingTests) ... ok test_access (test.test_os.FileTests) ... ok test_closerange (test.test_os.FileTests) ... ok test_copy_file_range (test.test_os.FileTests) ... *** Signal 12 Stop. make: stopped in /usr/home/buildbot/python/3.x.koobs-freebsd-current/build program finished with exit code 1 elapsedTime=1562.592656 man signal says: "12 SIGSYS create core image non-existent system call invoked" I bet that the "non-existent system call" is: copy_file_range. The configure script says: "checking for copy_file_range... yes" See also bpo-37711: "regrtest: re-run failed tests in subprocesses". ---------- components: Tests messages: 350609 nosy: pablogsal, vstinner priority: normal severity: normal status: open title: test_os.test_copy_file_range() killed by SIGSYS (12) on FreeBSD CURRENT buildbot versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 27 03:49:40 2019 From: report at bugs.python.org (Serhiy Storchaka) Date: Tue, 27 Aug 2019 07:49:40 +0000 Subject: [New-bugs-announce] [issue37960] repr() of buffered and text streams silences too many exceptions Message-ID: <1566892180.39.0.647932070898.issue37960@roundup.psfhosted.org> New submission from Serhiy Storchaka : __repr__() implementations of buffered and text streams try to include the value of the "name" and "mode" attributes in the result. But they silence too wide a range of exceptions (all subclasses of Exception) when trying to get these values. This includes such exceptions as MemoryError or RecursionError, which can occur in virtually any code. The proposed PR narrows the range of silenced exceptions to the necessary minimum: the expected AttributeError and ValueError. The latter is raised if the underlying stream was detached. ---------- components: IO, Library (Lib) messages: 350614 nosy: serhiy.storchaka priority: normal severity: normal status: open title: repr() of buffered and text streams silences too many exceptions type: behavior versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 27 04:23:25 2019 From: report at bugs.python.org (Julien Danjou) Date: Tue, 27 Aug 2019 08:23:25 +0000 Subject: [New-bugs-announce] [issue37961] Tracemalloc traces do not include original stack trace length Message-ID: <1566894205.32.0.427554502238.issue37961@roundup.psfhosted.org> New submission from Julien Danjou : When using the tracemalloc module, the maximum number of frames that are captured is specified at startup via a value to the `start` method.
However, if the number of frames is truncated, there's no way to know the original length of the stack traces. ---------- components: Interpreter Core messages: 350616 nosy: jd priority: normal severity: normal status: open title: Tracemalloc traces do not include original stack trace length versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 27 08:07:38 2019 From: report at bugs.python.org (Zeth) Date: Tue, 27 Aug 2019 12:07:38 +0000 Subject: [New-bugs-announce] [issue37962] Improve ISO 8601 timezone support in the datetime.fromisoformat() method Message-ID: <1566907658.52.0.492553753506.issue37962@roundup.psfhosted.org> New submission from Zeth : The datetime.datetime.fromisoformat() method unnecessarily rejects datetime strings that are valid under ISO 8601 if timezone uses the UTC designator or it only has hours. In ISO 8601, section 4.2.5.1: "When it is required to indicate the difference between local time and UTC of day, the representation of the difference can be expressed in hours and minutes, or hours only." And Section 4.2.4, UTC shall be expressed "by the UTC designator [Z]". A key use case of the latter is being able to parse JavaScript Date objects (e.g. dates that have come from a web frontend or a JSON document). This considerably improves the usefulness of the datetime.fromisoformat method. ---------- messages: 350630 nosy: zeth priority: normal pull_requests: 15224 severity: normal status: open title: Improve ISO 8601 timezone support in the datetime.fromisoformat() method _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 27 10:51:02 2019 From: report at bugs.python.org (=?utf-8?q?Thomas_G=C3=BCttler?=) Date: Tue, 27 Aug 2019 14:51:02 +0000 Subject: [New-bugs-announce] [issue37963] No URL for docs of pth files Message-ID: <1566917462.27.0.631733525423.issue37963@roundup.psfhosted.org> New submission from Thomas G?ttler : if you google for "python pth" you get to the "sites" docs. It would be very nice if you could create a direct URL to the docs of pth files. This makes it easier to point new comers into the right direction if you answer questions at stackoverflow (or other places). See: https://www.google.com/search?q=python+pth ---------- messages: 350634 nosy: Thomas G?ttler priority: normal severity: normal status: open title: No URL for docs of pth files _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 27 10:53:19 2019 From: report at bugs.python.org (Vinay Sharma) Date: Tue, 27 Aug 2019 14:53:19 +0000 Subject: [New-bugs-announce] [issue37964] F_GETPATH is not available in fcntl.fcntl Message-ID: <1566917599.68.0.248827592945.issue37964@roundup.psfhosted.org> New submission from Vinay Sharma : F_GETPATH cmd/operator is not present in fcntl module. This is specific to macos and is only available in macos. 
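Until such support lands, a common workaround is to normalize the JavaScript-style suffix before parsing; this is only a sketch, and it assumes the rest of the string is already in the subset that fromisoformat() accepts:

from datetime import datetime

def parse_js_datetime(s):
    # Accept the 'Z' UTC designator and hour-only offsets like '+05'.
    if s.endswith("Z"):
        return datetime.fromisoformat(s[:-1] + "+00:00")
    if "T" in s and s[-3] in "+-":   # hour-only offset, e.g. '...T12:07:38+05'
        return datetime.fromisoformat(s + ":00")
    return datetime.fromisoformat(s)

print(parse_js_datetime("2019-08-27T12:07:38Z"))    # 2019-08-27 12:07:38+00:00
print(parse_js_datetime("2019-08-27T12:07:38+05"))  # 2019-08-27 12:07:38+05:00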
https://developer.apple.com/library/archive/documentation/System/Conceptual/ManPages_iPhoneOS/man2/fcntl.2.html This can be also be verified using `man fcntl` ---------- components: Library (Lib) messages: 350635 nosy: twouters, vinay0410 priority: normal severity: normal status: open title: F_GETPATH is not available in fcntl.fcntl type: enhancement versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 27 16:47:14 2019 From: report at bugs.python.org (Maarten) Date: Tue, 27 Aug 2019 20:47:14 +0000 Subject: [New-bugs-announce] [issue37965] CCompiler has_function displays warning Message-ID: <1566938834.78.0.244631078662.issue37965@roundup.psfhosted.org> New submission from Maarten : When using the `has_function` method of a CCompiler object, the compiler will emit a warning because the main function has no return type specified. https://github.com/python/cpython/blob/8c9e9b0cd5b24dfbf1424d1f253d02de80e8f5ef/Lib/distutils/ccompiler.py#L784-L786 This warning is emitted: /tmp/clockq2_azlzj.c:2:1: warning: return type defaults to ?int? [-Wimplicit-int] 2 | main (int argc, char **argv) { | ^~~~ This happens under Linux, gcc 9 ---------- components: Distutils files: 0001-Fix-compiler-warning-of-distutils-CCompiler.test_fun.patch keywords: patch messages: 350645 nosy: dstufft, eric.araujo, maarten priority: normal severity: normal status: open title: CCompiler has_function displays warning type: behavior versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 Added file: https://bugs.python.org/file48563/0001-Fix-compiler-warning-of-distutils-CCompiler.test_fun.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Aug 27 23:52:56 2019 From: report at bugs.python.org (Greg Price) Date: Wed, 28 Aug 2019 03:52:56 +0000 Subject: [New-bugs-announce] [issue37966] is_normalized is much slower than the standard's algorithm Message-ID: <1566964376.46.0.968905414274.issue37966@roundup.psfhosted.org> New submission from Greg Price : In 3.8 we add a new function `unicodedata.is_normalized`. The result is equivalent to `str == unicodedata.normalize(form, str)`, but the implementation uses a version of the "quick check" algorithm from UAX #15 as an optimization to try to avoid having to copy the whole string. This was added in issue #32285, commit 2810dd7be. However, it turns out the code doesn't actually implement the same algorithm as UAX #15, and as a result we often miss the optimization and end up having to compute the whole normalized string after all. Here's a quick demo on my desktop. We pass a long string made entirely out of a character for which the quick-check algorithm always says `NO`, it's not normalized: $ build.base/python -m timeit -s 'import unicodedata; s = "\uf900"*500000' -- \ 'unicodedata.is_normalized("NFD", s)' 50 loops, best of 5: 4.39 msec per loop $ build.base/python -m timeit -s 'import unicodedata; s = "\uf900"*500000' -- \ 's == unicodedata.normalize("NFD", s)' 50 loops, best of 5: 4.41 msec per loop That's the same 4.4 ms (for a 1 MB string) with or without the attempted optimization. 
Here it is after a patch that makes the algorithm run as in the standard: $ build.dev/python -m timeit -s 'import unicodedata; s = "\uf900"*500000' -- \ 'unicodedata.is_normalized("NFD", s)' 5000000 loops, best of 5: 58.2 nsec per loop Nearly 5 orders of magnitude faster -- the difference between O(N) and O(1). The root cause of the issue is that our `is_normalized` static helper, which the new function relies on, was never written as a full implementation of the quick-check algorithm. The full algorithm can return YES, MAYBE, or NO; but originally this helper's only caller was the implementation of `unicodedata.normalize`, which only cares about YES vs. MAYBE-or-NO. So the helper often returns MAYBE when the standard algorithm would say NO. (More precisely, perhaps: it's fine that this helper was never a full implementation... but it didn't say that anywhere, even while referring to the standard algorithm, and as a result set us up for future confusion.) That's exactly what's happening in the example above: the standard quick-check algorithm would say NO, but our helper says MAYBE. Which for `unicodedata.is_normalized` means it has to go compute the whole normalized string. ---------- messages: 350651 nosy: Greg Price priority: normal severity: normal status: open title: is_normalized is much slower than the standard's algorithm versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 28 03:38:37 2019 From: report at bugs.python.org (mattip) Date: Wed, 28 Aug 2019 07:38:37 +0000 Subject: [New-bugs-announce] [issue37967] release candidate is not gpg signed (and missing release workflow)? Message-ID: <1566977917.52.0.520116504637.issue37967@roundup.psfhosted.org> New submission from mattip : Over at [multibuild](https://github.com/pypa/manylinux/pull/333#issuecomment-519802858), they ran into an issue trying to build c-extensions with the 3.8rc3 since it seems it is not gpg signed. I could not find a HOWTO_RELEASE document to check that the release workflow includes signing the package. One exists in Tools/msi/README.txt. ---------- components: Installation messages: 350660 nosy: mattip priority: normal severity: normal status: open title: release candidate is not gpg signed (and missing release workflow)? type: security versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 28 08:58:20 2019 From: report at bugs.python.org (Yehuda Katz) Date: Wed, 28 Aug 2019 12:58:20 +0000 Subject: [New-bugs-announce] [issue37968] The turtle Message-ID: <1566997100.11.0.991948132497.issue37968@roundup.psfhosted.org> New submission from Yehuda Katz : 1 - turtle bug: If I don't put parenthesis at the end of a line, I don't get error message. Try this: ==================== from turtle import * fd(50); rt done() ==================== 2 - request: Highly desirable a function that draws a circle with radius n AROUND THE TURTLE. Thank you and have a wonderful day. 
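Regarding request 2, something close can already be achieved with the existing primitives; a rough sketch of a workaround (not a proposed API):

```python
from turtle import *

def circle_around(radius):
    """Draw a circle of the given radius centred on the turtle's current position."""
    penup()
    right(90); forward(radius); left(90)   # step sideways onto the circle's edge
    pendown()
    circle(radius)      # circle() places the centre `radius` units to the turtle's left
    penup()
    left(90); forward(radius); right(90)   # step back to the original position
    pendown()

circle_around(50)
done()
```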
---------- files: untitled.py messages: 350662 nosy: Yehuda priority: normal severity: normal status: open title: The turtle versions: Python 3.7 Added file: https://bugs.python.org/file48564/untitled.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 28 10:54:51 2019 From: report at bugs.python.org (=?utf-8?b?R8Opcnk=?=) Date: Wed, 28 Aug 2019 14:54:51 +0000 Subject: [New-bugs-announce] [issue37969] urllib.parse functions reporting false equivalent URIs Message-ID: <1567004091.58.0.0475239472757.issue37969@roundup.psfhosted.org> New submission from G?ry : The Python library documentation of the `urllib.parse.urlunparse `_ and `urllib.parse.urlunsplit `_ functions states: This may result in a slightly different, but equivalent URL, if the URL that was parsed originally had unnecessary delimiters (for example, a ? with an empty query; the RFC states that these are equivalent). So with the URI:: >>> import urllib.parse >>> urllib.parse.urlunparse(urllib.parse.urlparse("http://example.com/?")) 'http://example.com/' >>> urllib.parse.urlunsplit(urllib.parse.urlsplit("http://example.com/?")) 'http://example.com/' But `RFC 3986 `_ states the exact opposite: Normalization should not remove delimiters when their associated component is empty unless licensed to do so by the scheme specification. For example, the URI "http://example.com/?" cannot be assumed to be equivalent to any of the examples above. Likewise, the presence or absence of delimiters within a userinfo subcomponent is usually significant to its interpretation. The fragment component is not subject to any scheme-based normalization; thus, two URIs that differ only by the suffix "#" are considered different regardless of the scheme. So maybe `urllib.parse.urlunparse` ? `urllib.parse.urlparse` and `urllib.parse.urlunsplit` ? `urllib.parse.urlsplit` are not supposed to be used for `syntax-based normalization `_ of URIs. But still, both `urllib.parse.urlparse` or `urllib.parse.urlsplit` lose the "delimiter + empty component" information of the URI string, so they report false equivalent URIs:: >>> import urllib.parse >>> urllib.parse.urlparse("http://example.com/?") == urllib.parse.urlparse("http://example.com/") True >>> urllib.parse.urlsplit("http://example.com/?") == urllib.parse.urlsplit("http://example.com/") True P.-S. ? Is there a syntax-based normalization function of URIs in the Python library? ---------- components: Library (Lib) messages: 350663 nosy: Jeremy.Hylton, maggyero, orsenthil priority: normal severity: normal status: open title: urllib.parse functions reporting false equivalent URIs type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 28 12:26:39 2019 From: report at bugs.python.org (Zachary Ware) Date: Wed, 28 Aug 2019 16:26:39 +0000 Subject: [New-bugs-announce] [issue37970] urllib.parse docstrings incomplete Message-ID: <1567009599.74.0.471882594509.issue37970@roundup.psfhosted.org> New submission from Zachary Ware : For example, urlsplit: >>> from urllib.parse import urlsplit >>> help(urlsplit) Help on function urlsplit in module urllib.parse: urlsplit(url, scheme='', allow_fragments=True) Parse a URL into 5 components: :///?# Return a 5-tuple: (scheme, netloc, path, query, fragment). Note that we don't break the components up in smaller bits (e.g. netloc is a single string) and we don't expand % escapes. 
The current docstring does not describe the `scheme` or `allow_fragments` arguments. Also, the note about not splitting netloc is misleading; the components of netloc (username, password, hostname, and port) are available as extra attributes of the returned SplitResult. urlparse has similar issues; other functions could stand to be checked. ---------- assignee: docs at python components: Documentation, Library (Lib) keywords: newcomer friendly messages: 350668 nosy: docs at python, zach.ware priority: normal severity: normal stage: needs patch status: open title: urllib.parse docstrings incomplete versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 28 14:32:52 2019 From: report at bugs.python.org (Joran van Apeldoorn) Date: Wed, 28 Aug 2019 18:32:52 +0000 Subject: [New-bugs-announce] [issue37971] Wrong trace with multiple decorators (linenumber wrong in frame) Message-ID: <1567017172.25.0.486267767179.issue37971@roundup.psfhosted.org> New submission from Joran van Apeldoorn : When applying multiple decorators to a function, a traceback from the top generator shows the bottom generator instead. For example ---------------- def printingdec(f): raise Exception() return f def dummydec(f): return f @printingdec @dummydec def foo(): pass ---------------- gives Traceback (most recent call last): File "bug.py", line 9, in @dummydec File "bug.py", line 2, in printingdec raise Exception() Exception instead of Traceback (most recent call last): File "bug.py", line 8, in @printingdec File "bug.py", line 2, in printingdec raise Exception() Exception Digging around with sys._getframe() it seems that the frame's linenumber is set wrong internally, leading to the wrong line being displayed. The ast does display the correct linenumber. 
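A rough sketch of that kind of probing, assuming the affected versions behave as in the report (the module-level frame is expected to point at the bottom decorator rather than the top one):

```python
import sys

def whichline(f):
    # report the line number currently recorded in the calling (module-level) frame
    print("decorator applied from line", sys._getframe(1).f_lineno)
    return f

def dummydec(f):
    return f

@whichline   # the traceback "should" point at this line...
@dummydec    # ...but per the report the frame records this one instead
def foo():
    pass
```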
---------- messages: 350686 nosy: control-k priority: normal severity: normal status: open title: Wrong trace with multiple decorators (linenumber wrong in frame) type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 28 16:33:01 2019 From: report at bugs.python.org (blhsing) Date: Wed, 28 Aug 2019 20:33:01 +0000 Subject: [New-bugs-announce] [issue37972] unittest.mock.call does not chain __getitem__ to another _Call object Message-ID: <1567024381.55.0.529292089661.issue37972@roundup.psfhosted.org> New submission from blhsing : As reported on StackOverflow: https://stackoverflow.com/questions/57636747/how-to-perform-assert-has-calls-for-a-getitem-call The following code would output: [call(), call().foo(), call().foo().__getitem__('bar')] from unittest.mock import MagicMock, call mm = MagicMock() mm().foo()['bar'] print(mm.mock_calls) but trying to use that list with mm.assert_has_calls([call(), call().foo(), call().foo().__getitem__('bar')]) would result in: TypeError: tuple indices must be integers or slices, not str ---------- components: Library (Lib) messages: 350688 nosy: Ben Hsing priority: normal severity: normal status: open title: unittest.mock.call does not chain __getitem__ to another _Call object type: behavior versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 28 23:16:57 2019 From: report at bugs.python.org (Sergey Fedoseev) Date: Thu, 29 Aug 2019 03:16:57 +0000 Subject: [New-bugs-announce] [issue37973] improve docstrings of sys.float_info Message-ID: <1567048617.81.0.748558704563.issue37973@roundup.psfhosted.org> New submission from Sergey Fedoseev : In [8]: help(sys.float_info) ... | | dig | DBL_DIG -- digits | This is not very helpful, https://docs.python.org/3/library/sys.html#sys.float_info is more verbose, so probably docstrings should be updated from where. ---------- assignee: docs at python components: Documentation messages: 350703 nosy: docs at python, sir-sigurd priority: normal severity: normal status: open title: improve docstrings of sys.float_info type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Aug 28 23:30:45 2019 From: report at bugs.python.org (Sergey Fedoseev) Date: Thu, 29 Aug 2019 03:30:45 +0000 Subject: [New-bugs-announce] [issue37974] zip() docstring should say 'iterator' instead of 'object with __next__()' Message-ID: <1567049445.71.0.232910822835.issue37974@roundup.psfhosted.org> New submission from Sergey Fedoseev : In [3]: help(zip) class zip(object) | zip(*iterables) --> zip object | | Return a zip object whose .__next__() method returns a tuple where | the i-th element comes from the i-th iterable argument. The .__next__() | method continues until the shortest iterable in the argument sequence | is exhausted and then it raises StopIteration. This description is awkward and should use term 'iterator' as https://docs.python.org/3/library/functions.html#zip does. The same applies to chain(), count() and zip_longest() from itertools. 
---------- assignee: docs at python components: Documentation messages: 350704 nosy: docs at python, sir-sigurd priority: normal severity: normal status: open title: zip() docstring should say 'iterator' instead of 'object with __next__()' _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 29 05:01:20 2019 From: report at bugs.python.org (Aleksey) Date: Thu, 29 Aug 2019 09:01:20 +0000 Subject: [New-bugs-announce] =?utf-8?q?=5Bissue37975=5D_Typo_in_the_docum?= =?utf-8?q?entation_by_C-API_DateTime_Objects=C2=B6?= Message-ID: <1567069280.84.0.885119110407.issue37975@roundup.psfhosted.org> New submission from Aleksey : In the documentation by Python 3.5 C-API DateTime Objects (https://docs.python.org/3.5/c-api/datetime.html) method PyDateTime_DELTA_GET_MICROSECOND has not "S" in the end of method name. But in the header file "datetime.h", this method has "S" and so named PyDateTime_DELTA_GET_MICROSECONDS. ---------- assignee: docs at python components: Documentation messages: 350758 nosy: Nukopol, docs at python priority: normal severity: normal status: open title: Typo in the documentation by C-API DateTime Objects? type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 29 06:30:10 2019 From: report at bugs.python.org (Sergey Fedoseev) Date: Thu, 29 Aug 2019 10:30:10 +0000 Subject: [New-bugs-announce] [issue37976] zip() shadows TypeError raised in __iter__() of source iterable Message-ID: <1567074610.14.0.32757366853.issue37976@roundup.psfhosted.org> New submission from Sergey Fedoseev : zip() shadows TypeError raised in __iter__() of source iterable: In [21]: class Iterable: ...: def __init__(self, n): ...: self.n = n ...: def __iter__(self): ...: return iter(range(self.n)) ...: In [22]: zip(Iterable('one')) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) in () ----> 1 zip(Iterable('one')) TypeError: zip argument #1 must support iteration ---------- messages: 350763 nosy: sir-sigurd priority: normal severity: normal status: open title: zip() shadows TypeError raised in __iter__() of source iterable _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 29 09:28:33 2019 From: report at bugs.python.org (Daniel Pope) Date: Thu, 29 Aug 2019 13:28:33 +0000 Subject: [New-bugs-announce] [issue37977] Big red pickle security warning should stress the point even more Message-ID: <1567085313.43.0.156950423126.issue37977@roundup.psfhosted.org> New submission from Daniel Pope : CVEs related to unpickling untrusted data continue to come up a few times a year: https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=pickle This is certainly the tip of the iceberg. In a previous role I noted several internal services that could be compromised with maliciously crafted pickles. In my current role I can already see two internal services that look vulnerable. And in both organisations, little attention was paid to pickle data exchanged with other users over network filesystems, which may allow privilege escalation. 
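(One of the suggestions below is to point readers at the hmac module; a minimal sketch of what signing pickled payloads can look like, assuming both endpoints share a secret key:)

```python
import hashlib
import hmac
import pickle

SECRET = b"shared-secret-key"   # assumption: distributed to both endpoints out of band

def dumps_signed(obj):
    payload = pickle.dumps(obj)
    return hmac.new(SECRET, payload, hashlib.sha256).digest() + payload

def loads_signed(blob):
    sig, payload = blob[:32], blob[32:]   # SHA-256 digests are 32 bytes
    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("payload failed authentication; refusing to unpickle")
    return pickle.loads(payload)
```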
Chatting to Alex Willmer after his Europython talk in 2018 (https://github.com/moreati/pickle-fuzz/blob/master/Rehabilitating%20Pickle.pdf) we discussed that the red warning in the docs is still not prominent enough, even after moving it to the top of the page in https://bugs.python.org/issue9105. The warning currently says: "Warning: The pickle module is not secure against erroneous or maliciously constructed data. Never unpickle data received from an untrusted or unauthenticated source." I would suggest several improvements: * Simpler, more direct English. * Explain the severity of vulnerability that doing this will cause. * Link to the hmac module which can be used to prevent tampering. * Link to the json module which is safer if less powerful. * Simply making the red box bigger (adding more text) will increase the prominence of the warning. ---------- assignee: docs at python components: Documentation messages: 350777 nosy: docs at python, lordmauve priority: normal severity: normal status: open title: Big red pickle security warning should stress the point even more type: security versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 29 09:48:49 2019 From: report at bugs.python.org (JemyTan) Date: Thu, 29 Aug 2019 13:48:49 +0000 Subject: [New-bugs-announce] [issue37978] Importing an unused package causes the function of another package to malfunction Message-ID: <1567086529.72.0.126651693669.issue37978@roundup.psfhosted.org> New submission from JemyTan : After commenting out the first line of code, the result of the program is different. I stepped in to debug(the program runs to line 30), and found that when it hit the code"from numpy.dual import inv as func" It seems to use different inv(). ------------------------------------ Environment: python: 3.7.3 (v3.7.3:ef4ec6ed12, Mar 25 2019, 22:22:05) [MSC v.1916 64 bit (AMD64)] sklearn: 0.21.3 numpy: 1.17.0 ---------- components: Windows files: np.py messages: 350780 nosy: JemyTan, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Importing an unused package causes the function of another package to malfunction type: crash versions: Python 3.7 Added file: https://bugs.python.org/file48566/np.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 29 10:39:34 2019 From: report at bugs.python.org (Paul Ganssle) Date: Thu, 29 Aug 2019 14:39:34 +0000 Subject: [New-bugs-announce] [issue37979] Document an alternative to ISO 8601 parsing Message-ID: <1567089574.63.0.345364917917.issue37979@roundup.psfhosted.org> New submission from Paul Ganssle : Per Antoine's comment in the discourse thread ( https://discuss.python.org/t/parse-z-timezone-suffix-in-datetime/2220/6 ): > ... the doc isn?t helpful, as it doesn?t give any alternative. I think we can link to dateutil.parser.isoparse as an alternative. I'm happy to field other options for ISO 8601 parsers instead, though considering that fromisoformat is adapted from dateutil.parser.isoparse, it seems reasonable to link them. 
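For illustration, the contrast such a link would point at (dateutil is a third-party package; the fromisoformat behaviour shown is as of 3.7/3.8):

```python
from datetime import datetime
from dateutil import parser   # third-party: pip install python-dateutil

s = "2019-08-29T14:30:00Z"
print(parser.isoparse(s))     # 2019-08-29 14:30:00+00:00
datetime.fromisoformat(s)     # ValueError: the trailing "Z" is not accepted
```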
---------- assignee: p-ganssle components: Documentation messages: 350784 nosy: p-ganssle priority: low severity: normal stage: needs patch status: open title: Document an alternative to ISO 8601 parsing type: enhancement versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 29 12:39:02 2019 From: report at bugs.python.org (Thomas Caswell) Date: Thu, 29 Aug 2019 16:39:02 +0000 Subject: [New-bugs-announce] [issue37980] regression when passing numpy bools to sorted(..., reverse=r) Message-ID: <1567096742.32.0.970581980709.issue37980@roundup.psfhosted.org> New submission from Thomas Caswell : In python37, numpy1.17 the following runs without warning. import numpy as np sorted([1, 2], reverse=np.bool_(True)) with python38 this emits a DeprecationWarning: In future, it will be an error for 'np.bool_' scalars to be interpreted as an index I bisected this back to https://github.com/python/cpython/pull/11952 which looks very intentional, however it is tripping deprecation warnings in numpy to prevent things like `[1, 2, 3][np.bool_(False), np.bool_(True)]` from doing surprising things. I have previously gotten the sense that core would like to know about these sorts of regressions rather than just handling them down-stream. I will be opening an issue with numpy as well. ---------- files: numpy_warning_bisect.log messages: 350800 nosy: tcaswell priority: normal severity: normal status: open title: regression when passing numpy bools to sorted(..., reverse=r) versions: Python 3.8 Added file: https://bugs.python.org/file48568/numpy_warning_bisect.log _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 29 12:48:04 2019 From: report at bugs.python.org (=?utf-8?q?Alexander_Podg=C3=B3rski?=) Date: Thu, 29 Aug 2019 16:48:04 +0000 Subject: [New-bugs-announce] [issue37981] Can't install Python 3.7.4 x64 on Win 8.1 Message-ID: <1567097284.11.0.163984980296.issue37981@roundup.psfhosted.org> New submission from Alexander Podg?rski : Have double "No Python 3.7 installation detected" error ---------- components: Installation files: python_installation_log.txt messages: 350802 nosy: Alexander Podg?rski priority: normal severity: normal status: open title: Can't install Python 3.7.4 x64 on Win 8.1 versions: Python 3.7 Added file: https://bugs.python.org/file48571/python_installation_log.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 29 15:17:07 2019 From: report at bugs.python.org (Brad Solomon) Date: Thu, 29 Aug 2019 19:17:07 +0000 Subject: [New-bugs-announce] [issue37982] Add a --minify argument to json.tool Message-ID: <1567106227.18.0.64278078149.issue37982@roundup.psfhosted.org> New submission from Brad Solomon : I propose adding a command line `--minify` flag to the json/tool.py module. This flag, if specified, uses `indent=None` and `separators=(',', ':')` to eliminate indent and separator whitespace in the output. Minifying JSON (as is also done frequently with JS, CSS, and sometimes HTML) is common practice, and would be useful to have as a command-line tool rather than a user needing to use an online resource to do so. 
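Roughly what the flag would boil down to internally; a sketch only, not the actual patch:

```python
import json
import sys

# read JSON on stdin, write it back with all optional whitespace stripped
data = json.load(sys.stdin)
json.dump(data, sys.stdout, indent=None, separators=(',', ':'))
sys.stdout.write('\n')
```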
---------- components: IO messages: 350817 nosy: bsolomon1124 priority: normal severity: normal status: open title: Add a --minify argument to json.tool type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 29 15:53:40 2019 From: report at bugs.python.org (Keith F. Kelly) Date: Thu, 29 Aug 2019 19:53:40 +0000 Subject: [New-bugs-announce] [issue37983] macOS: os.lchmod() incorrectly removed by 2.7.16 Message-ID: <1567108420.94.0.550275276253.issue37983@roundup.psfhosted.org> New submission from Keith F. Kelly : Apparently the fix for https://bugs.python.org/issue34652 was incorrect, or got incorrectly backported to, the 2.7 tree, because as of 2.7.16, the os.lchmod() built-in API is unexpectedly missing on MacOS, which is breaking our existing code. ---------- components: macOS messages: 350822 nosy: keithfkelly, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: macOS: os.lchmod() incorrectly removed by 2.7.16 type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 29 18:17:04 2019 From: report at bugs.python.org (Yhojann Aguilera) Date: Thu, 29 Aug 2019 22:17:04 +0000 Subject: [New-bugs-announce] [issue37984] Unable parse csv on latin iso or binary mode Message-ID: <1567117024.15.0.215995088673.issue37984@roundup.psfhosted.org> New submission from Yhojann Aguilera : Unable parse a csv with latin iso charset. with open('./exported.csv', newline='') as csvFileHandler: csvHandler = csv.reader(csvFileHandler, delimiter=';', quotechar='"') for line in csvHandler: UnicodeDecodeError: 'utf-8' codec can't decode byte 0xd1 in position 1032: invalid continuation byte I try using a binary mode on open() but says: binary mode doesn't take a newline argument. Ok, replace newline to binary char: newline=b'', but says: open() argument 6 must be str or None, not bytes. Ok, remove newline argument: _csv.Error: iterator should return strings, not bytes (did you open the file in text mode?). Ok, csv module no support binary read mode. Try use latin iso: with open('./exported.csv', mode='r', encoding='ISO-8859', newline='') as csvFileHandler: UnicodeDecodeError: 'charmap' codec can't decode byte 0xd1 in position 1032: character maps to But the charset is latin iso: $ file exported.csv exported.csv: ISO-8859 text, with very long lines, with CRLF line terminators Ok, change to ISO-8859-8: UnicodeDecodeError: 'charmap' codec can't decode byte 0xd1 in position 1032: character maps to Unable load the file. Why not give the option to work binary? the delimiters can be represented with binary values. 
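For what it's worth, the file can usually be read in text mode by naming a single-byte codec explicitly; a sketch, assuming the data really is Latin-1 (byte 0xd1 is 'Ñ' there, and latin-1 can decode any byte):

```python
import csv

with open('./exported.csv', newline='', encoding='latin-1') as csvFileHandler:
    csvHandler = csv.reader(csvFileHandler, delimiter=';', quotechar='"')
    for line in csvHandler:
        print(line)
```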
---------- components: Unicode messages: 350836 nosy: Yhojann Aguilera, ezio.melotti, vstinner priority: normal severity: normal status: open title: Unable parse csv on latin iso or binary mode type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Aug 29 20:04:37 2019 From: report at bugs.python.org (Davis Herring) Date: Fri, 30 Aug 2019 00:04:37 +0000 Subject: [New-bugs-announce] [issue37985] WFERR_UNMARSHALLABLE breaks recursion limit Message-ID: <1567123477.95.0.275651611674.issue37985@roundup.psfhosted.org> New submission from Davis Herring : Most of the "p->depth--;" lines associated with "p->error = WFERR_UNMARSHALLABLE;" are spurious, and can crash the interpreter if enough of them prevent reaching MAX_MARSHAL_STACK_DEPTH. (The only exceptions are in 2.7, where some of them are followed by a return that skips another "p->depth--;".) ---------- components: Interpreter Core messages: 350841 nosy: herring priority: normal severity: normal status: open title: WFERR_UNMARSHALLABLE breaks recursion limit type: crash versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 30 05:00:41 2019 From: report at bugs.python.org (Sergey Fedoseev) Date: Fri, 30 Aug 2019 09:00:41 +0000 Subject: [New-bugs-announce] [issue37986] Improve perfomance of PyLong_FromDouble() Message-ID: <1567155641.35.0.135066549616.issue37986@roundup.psfhosted.org> New submission from Sergey Fedoseev : This patch simplifies fast path for floats that fit into C long and moves it from float.__trunc__ to PyLong_FromDouble(). +---------------------+---------------------+------------------------------+ | Benchmark | long-from-float-ref | long-from-float | +=====================+=====================+==============================+ | int(1.) | 39.5 ns | 37.3 ns: 1.06x faster (-6%) | +---------------------+---------------------+------------------------------+ | int(2.**20) | 46.4 ns | 45.6 ns: 1.02x faster (-2%) | +---------------------+---------------------+------------------------------+ | int(2.**30) | 52.5 ns | 49.0 ns: 1.07x faster (-7%) | +---------------------+---------------------+------------------------------+ | int(2.**60) | 50.0 ns | 49.2 ns: 1.02x faster (-2%) | +---------------------+---------------------+------------------------------+ | int(-2.**63) | 76.6 ns | 48.6 ns: 1.58x faster (-37%) | +---------------------+---------------------+------------------------------+ | int(2.**80) | 77.1 ns | 72.5 ns: 1.06x faster (-6%) | +---------------------+---------------------+------------------------------+ | int(2.**120) | 91.5 ns | 87.7 ns: 1.04x faster (-4%) | +---------------------+---------------------+------------------------------+ | math.ceil(1.) 
| 57.4 ns | 32.9 ns: 1.74x faster (-43%) | +---------------------+---------------------+------------------------------+ | math.ceil(2.**20) | 60.5 ns | 41.3 ns: 1.47x faster (-32%) | +---------------------+---------------------+------------------------------+ | math.ceil(2.**30) | 64.2 ns | 43.9 ns: 1.46x faster (-32%) | +---------------------+---------------------+------------------------------+ | math.ceil(2.**60) | 66.3 ns | 42.3 ns: 1.57x faster (-36%) | +---------------------+---------------------+------------------------------+ | math.ceil(-2.**63) | 67.7 ns | 43.1 ns: 1.57x faster (-36%) | +---------------------+---------------------+------------------------------+ | math.ceil(2.**80) | 66.6 ns | 65.6 ns: 1.01x faster (-1%) | +---------------------+---------------------+------------------------------+ | math.ceil(2.**120) | 79.9 ns | 80.5 ns: 1.01x slower (+1%) | +---------------------+---------------------+------------------------------+ | math.floor(1.) | 58.4 ns | 31.2 ns: 1.87x faster (-47%) | +---------------------+---------------------+------------------------------+ | math.floor(2.**20) | 61.0 ns | 39.6 ns: 1.54x faster (-35%) | +---------------------+---------------------+------------------------------+ | math.floor(2.**30) | 64.2 ns | 43.9 ns: 1.46x faster (-32%) | +---------------------+---------------------+------------------------------+ | math.floor(2.**60) | 62.1 ns | 40.1 ns: 1.55x faster (-35%) | +---------------------+---------------------+------------------------------+ | math.floor(-2.**63) | 64.1 ns | 39.9 ns: 1.61x faster (-38%) | +---------------------+---------------------+------------------------------+ | math.floor(2.**80) | 62.2 ns | 62.7 ns: 1.01x slower (+1%) | +---------------------+---------------------+------------------------------+ | math.floor(2.**120) | 77.0 ns | 77.8 ns: 1.01x slower (+1%) | +---------------------+---------------------+------------------------------+ I'm going to speed-up conversion of larger floats in a follow-up PR. ---------- components: Interpreter Core files: bench-long-from-float.py messages: 350861 nosy: sir-sigurd priority: normal pull_requests: 15285 severity: normal status: open title: Improve perfomance of PyLong_FromDouble() type: performance versions: Python 3.9 Added file: https://bugs.python.org/file48573/bench-long-from-float.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 30 06:58:46 2019 From: report at bugs.python.org (=?utf-8?b?0JjQstCw0L0g0JrQvtGB0LzQsNGC0YvRhQ==?=) Date: Fri, 30 Aug 2019 10:58:46 +0000 Subject: [New-bugs-announce] [issue37987] retrun collection item in for cycle with finally continue Message-ID: <1567162726.95.0.517092408058.issue37987@roundup.psfhosted.org> New submission from ???? ???????? : https://bugs.python.org/issue32489 This closed issue allow continue in finally clasue but now it can lead to crash in below case. def crash(): for i in [1, 2, 3]: try: return i finally: continue crash() I try use Python 3.8.0b3 (default, Aug 1 2019, 21:20:41) on ubuntu Linux 4.15.0-55-generic #60~16.04.2-Ubuntu SMP Thu Jul 4 09:03:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux ---------- components: Interpreter Core messages: 350867 nosy: serhiy.storchaka, ???? ???????? 
priority: normal severity: normal status: open title: retrun collection item in for cycle with finally continue type: crash versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 30 07:02:37 2019 From: report at bugs.python.org (SYAM PARAKASH,AJAY KUMAR) Date: Fri, 30 Aug 2019 11:02:37 +0000 Subject: [New-bugs-announce] [issue37988] Issue found during language name processing in a list Message-ID: <1567162957.47.0.206860100059.issue37988@roundup.psfhosted.org> New submission from SYAM PARAKASH,AJAY KUMAR : I found an error in processing language name like Malayalam,English.. in a list created using Python 3.6.1 ---------- files: IMG_20190830_162306__1567162571_88362.jpg messages: 350869 nosy: AjaySyam priority: normal severity: normal status: open title: Issue found during language name processing in a list type: behavior Added file: https://bugs.python.org/file48576/IMG_20190830_162306__1567162571_88362.jpg _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 30 17:22:40 2019 From: report at bugs.python.org (Hubert) Date: Fri, 30 Aug 2019 21:22:40 +0000 Subject: [New-bugs-announce] [issue37990] gc.collect prints debug stats incorrectly Message-ID: <1567200160.56.0.846460221053.issue37990@roundup.psfhosted.org> New submission from Hubert : Example: Python 3.9.0a0 (heads/master:39d87b5471, Aug 30 2019, 23:19:13) [GCC 9.1.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import gc >>> gc.set_debug(gc.DEBUG_STATS) >>> gc.collect() gc: collecting generation 2... gc: objects in each generation: 589 4120 0 gc: objects in permanent generation: 0 gc: done, 0 unreachable, 0 uncollectable, %.4fs elapsed 0 ---------- components: Interpreter Core messages: 350890 nosy: chivay priority: normal severity: normal status: open title: gc.collect prints debug stats incorrectly versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 30 18:50:11 2019 From: report at bugs.python.org (Yusif) Date: Fri, 30 Aug 2019 22:50:11 +0000 Subject: [New-bugs-announce] [issue37991] What is this? What is problem? Message-ID: <1567205411.64.0.979790178456.issue37991@roundup.psfhosted.org> New submission from Yusif : Windows 10 32-bit operating system,x64-based processor ---------- components: Windows files: Python 3.5.3 (64-bit)_20190831024317.log messages: 350896 nosy: paul.moore, steve.dower, tim.golden, yusif, zach.ware priority: normal severity: normal status: open title: What is this? What is problem? 
type: compile error versions: Python 3.5 Added file: https://bugs.python.org/file48577/Python 3.5.3 (64-bit)_20190831024317.log _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Aug 30 21:20:47 2019 From: report at bugs.python.org (Sam Wainwright) Date: Sat, 31 Aug 2019 01:20:47 +0000 Subject: [New-bugs-announce] [issue37992] Change datetime.MINYEAR to allow for negative years Message-ID: <1567214447.35.0.437361974873.issue37992@roundup.psfhosted.org> Change by Sam Wainwright : ---------- components: Library (Lib) nosy: Sam Wainwright priority: normal severity: normal status: open title: Change datetime.MINYEAR to allow for negative years type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 31 04:04:35 2019 From: report at bugs.python.org (Christoph Gohlke) Date: Sat, 31 Aug 2019 08:04:35 +0000 Subject: [New-bugs-announce] [issue37993] os.path.realpath on Windows resolves mapped network drives Message-ID: <1567238675.05.0.269696747191.issue37993@roundup.psfhosted.org> New submission from Christoph Gohlke : Re https://bugs.python.org/issue9949: Is it intended that Python-3.8.0b4 now also resolves mapped network drives and drives created with `subst`? I would not expect this from the documentation at https://docs.python.org/3.8/library/os.path.html#os.path.realpath. The documentation refers to symbolic links and junctions, which are different from mapped network and subst drives (AFAIU). For example, mapping `\\SERVER\Programs` as `X:` drive: ``` Python 3.8.0b4 (tags/v3.8.0b4:d93605d, Aug 29 2019, 23:21:28) [MSC v.1916 64 bit (AMD64)] on win32 >>> import sys, os >>> sys.executable 'X:\\Python38\\python.exe' >>> os.path.realpath(sys.executable) '\\\\SERVER\\Programs\\Python38\\python.exe' ``` ``` Python 3.7.4 (tags/v3.7.4:e09359112e, Jul 8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)] on win32 >>> import sys, os >>> sys.executable 'X:\\Python37\\python.exe' >>> os.path.realpath(sys.executable) 'X:\\Python37\\python.exe' ``` It seems this change causes an error in pytest-5.1.2 during numpy-1.17.1 tests: ``` X:\Python38>python.exe -c"import numpy;numpy.test()" NumPy version 1.17.1 NumPy relaxed strides checking option: True ============================================= ERRORS ============================================== __________________________________ ERROR collecting test session __________________________________ lib\site-packages\_pytest\config\__init__.py:440: in _importconftest return self._conftestpath2mod[conftestpath] E KeyError: local('\\\\SERVER\\programs\\python38\\lib\\site-packages\\numpy\\conftest.py') During handling of the above exception, another exception occurred: lib\site-packages\_pytest\config\__init__.py:446: in _importconftest mod = conftestpath.pyimport() lib\site-packages\py\_path\local.py:721: in pyimport raise self.ImportMismatchError(modname, modfile, self) E py._path.local.LocalPath.ImportMismatchError: ('numpy.conftest', 'X:\\Python38\\lib\\site-packages\\numpy\\conftest.py', local('\\\\SERVER\\programs\\python38\\lib\\site-packages\\numpy\\conftest.py')) During handling of the above exception, another exception occurred: lib\site-packages\_pytest\runner.py:220: in from_call result = func() lib\site-packages\_pytest\runner.py:247: in call = CallInfo.from_call(lambda: list(collector.collect()), "collect") lib\site-packages\_pytest\main.py:485: in collect yield from 
self._collect(arg) lib\site-packages\_pytest\main.py:512: in _collect col = self._collectfile(pkginit, handle_dupes=False) lib\site-packages\_pytest\main.py:581: in _collectfile ihook = self.gethookproxy(path) lib\site-packages\_pytest\main.py:424: in gethookproxy my_conftestmodules = pm._getconftestmodules(fspath) lib\site-packages\_pytest\config\__init__.py:420: in _getconftestmodules mod = self._importconftest(conftestpath) lib\site-packages\_pytest\config\__init__.py:454: in _importconftest raise ConftestImportFailure(conftestpath, sys.exc_info()) E _pytest.config.ConftestImportFailure: (local('\\\\SERVER\\programs\\python38\\lib\\site-packages\\numpy\\conftest.py'), (, ImportMismatchError('numpy.conftest', 'X:\\Python38\\lib\\site-packages\\numpy\\conftest.py', local('\\\\SERVER\\programs\\python38\\lib\\site-packages\\numpy\\conftest.py')), )) !!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 errors during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!! 1 error in 16.39s ``` ---------- components: Library (Lib), Windows messages: 350910 nosy: cgohlke, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: os.path.realpath on Windows resolves mapped network drives type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 31 04:10:29 2019 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 31 Aug 2019 08:10:29 +0000 Subject: [New-bugs-announce] [issue37994] Fix silencing all errors if an attribute lookup fails Message-ID: <1567239029.01.0.244778819227.issue37994@roundup.psfhosted.org> New submission from Serhiy Storchaka : There are still sites in the CPython code where all errors of failed attribute lookup are silenced or overridden by other exception. This can hide such exceptions like MemoryError, RecursionError or KeyboardInterrupt and lead to incorrect result (as the attribute was just absent instead of looking it up was interrapted by side causes). Only AttributeError is expected to signal an absence of an attribute, and only it can be silenced. The proposed PR fixes most of such cases. There are still few sites where *all* errors are ignored (for example when report an error to the stderr or like), they should be considered separately. ---------- components: Extension Modules, Interpreter Core messages: 350911 nosy: serhiy.storchaka priority: normal severity: normal status: open title: Fix silencing all errors if an attribute lookup fails type: behavior versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 31 05:29:28 2019 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 31 Aug 2019 09:29:28 +0000 Subject: [New-bugs-announce] [issue37995] Multiline ast.dump() Message-ID: <1567243768.69.0.362429469264.issue37995@roundup.psfhosted.org> New submission from Serhiy Storchaka : ast.dump() is mainly useful for debugging purposes. Unfortunately the output is too long and complex even for simple examples. It contains too much nested calls and lists. 
>>> import ast >>> node = ast.parse('spam(eggs, "and cheese")') >>> print(ast.dump(node)) Module(body=[Expr(value=Call(func=Name(id='spam', ctx=Load()), args=[Name(id='eggs', ctx=Load()), Constant(value='and cheese', kind=None)], keywords=[]))], type_ignores=[]) It is worse if include more information: >>> print(ast.dump(node, include_attributes=True)) Module(body=[Expr(value=Call(func=Name(id='spam', ctx=Load(), lineno=1, col_offset=0, end_lineno=1, end_col_offset=4), args=[Name(id='eggs', ctx=Load(), lineno=1, col_offset=5, end_lineno=1, end_col_offset=9), Constant(value='and cheese', kind=None, lineno=1, col_offset=11, end_lineno=1, end_col_offset=23)], keywords=[], lineno=1, col_offset=0, end_lineno=1, end_col_offset=24), lineno=1, col_offset=0, end_lineno=1, end_col_offset=24)], type_ignores=[]) And for larger examples it is almost unusable. I propose to make ast.dump() producing a multiline indented output. Add the optional "indent" parameter. If it is a non-negative number or a string, the output if formatted with the specified indentation. If it is None (by default), the output is a single string. >>> print(ast.dump(node, indent=3)) Module( body=[ Expr( value=Call( func=Name( id='spam', ctx=Load()), args=[ Name( id='eggs', ctx=Load()), Constant( value='and cheese', kind=None)], keywords=[]))], type_ignores=[]) Looks better, no? I am not sure about closing parenthesis. Should they be attached to the last item (as above) or split on a separate line (as below)? Or use some heuristic to make the output more readable and compact? >>> print(ast.dump(node, indent=3)) Module( body=[ Expr( value=Call( func=Name( id='spam', ctx=Load() ), args=[ Name( id='eggs', ctx=Load() ), Constant( value='and cheese', kind=None ) ], keywords=[] ) ) ], type_ignores=[] ) ---------- components: Library (Lib) messages: 350913 nosy: benjamin.peterson, brett.cannon, rhettinger, serhiy.storchaka, yselivanov priority: normal severity: normal status: open title: Multiline ast.dump() type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 31 12:15:28 2019 From: report at bugs.python.org (Bob Kline) Date: Sat, 31 Aug 2019 16:15:28 +0000 Subject: [New-bugs-announce] [issue37996] 2to3 introduces unwanted extra backslashes for unicode characters in regular expressions Message-ID: <1567268128.24.0.557534391734.issue37996@roundup.psfhosted.org> New submission from Bob Kline : - UNWANTED = re.compile("""['".,?!:;()[\]{}<>\u201C\u201D\u00A1\u00BF]+""") + UNWANTED = re.compile("""['".,?!:;()[\]{}<>\\u201C\\u201D\\u00A1\\u00BF]+""") The non-ASCII characters in the original string are perfectly legitimate str characters, using valid standard escapes recognized and handled by the Python parser. It is unnecessary to lengthen the string argument passed to re.compile() and defer the conversion of the doubled escapes for the regular expression engine to handle. 
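A quick check of the equivalence claimed above (both patterns match the same characters; the 2to3 output is simply longer):

```python
import re

before = re.compile("""['".,?!:;()[\]{}<>\u201C\u201D\u00A1\u00BF]+""")
after = re.compile("""['".,?!:;()[\]{}<>\\u201C\\u201D\\u00A1\\u00BF]+""")

sample = 'test \u201Cquoted\u201D text!'
print(before.findall(sample) == after.findall(sample))   # True
print(len(before.pattern), len(after.pattern))           # the rewritten pattern is longer
```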
---------- components: 2to3 (2.x to 3.x conversion tool) messages: 350922 nosy: bkline priority: normal severity: normal status: open title: 2to3 introduces unwanted extra backslashes for unicode characters in regular expressions type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Aug 31 19:39:17 2019 From: report at bugs.python.org (Gabriel C) Date: Sat, 31 Aug 2019 23:39:17 +0000 Subject: [New-bugs-announce] [issue37997] Segfault when using pickle with exceptions and dynamic class inheritance Message-ID: <1567294757.43.0.590587702854.issue37997@roundup.psfhosted.org> New submission from Gabriel C : The following results in a segfault on 3.7.4 (running on macOS high sierra) and 3.5 (running on ubuntu 16.04). It does not happen in python 2.7. The problem seems to involve two effects. The first is the creation of a class with a dynamic type that inherits from a MemoryError (but any exception will work). The second is pickling an instance of that MemoryError (note that the dynamic type is never pickled). If a basic (non-exception) class is used instead of the MemoryError (see uncomment case 1), then the segfault does not occur in my testing. If the exception being pickled is a different type to the one used as the base class for the dynamic type, the segfault does not occur. Note that unpickling the MemoryError actually raises another exception, this raised exception (see uncomment case 2) is raised by the dynamic type class __init__ method. It seems that in trying to unpickle the MemoryError, pickle attempts to create an instance of the unrelated dynamic type instead. Examining the stack trace (uncomment case 3), shows the raised exception is indeed originating from attempting to unpickle the MemoryError. The segfault does not happen immediately, but instead after several attempts. It can happen after as few as 5 attempts or as many as 100. ``` import pickle def CreateDynamicClass(basetype): class DynamicClassImpl(basetype): def __init__(self): super(DynamicClassImpl, self).__init__() return DynamicClassImpl() class TestClass(object): pass N_attemps = 1000 pickle_list = [] def load_list(): for i in range(N_attemps): test_exception = MemoryError("Test" + str(i)) #test_exception = TestClass() # Uncomment case 1 pickle_list.append(pickle.dumps(test_exception)) load_list() def unload_list(): for i in range(N_attemps): try: test_object_instance = pickle.loads(pickle_list.pop()) test_dynamic_object = CreateDynamicClass(MemoryError) #test_dynamic_object = CreateDynamicClass(TestClass) # Uncomment case 1 except Exception as e: print("Exception at iteration {}: {}".format(i, e)) # Uncomment case 2 #raise # Uncomment case 3 pass unload_list() ``` ---------- components: Library (Lib) messages: 350932 nosy: gabrielc priority: normal severity: normal status: open title: Segfault when using pickle with exceptions and dynamic class inheritance type: crash versions: Python 3.5, Python 3.7 _______________________________________ Python tracker _______________________________________