From report at bugs.python.org Fri Apr 1 07:16:28 2016 From: report at bugs.python.org (Martin Panter) Date: Fri, 01 Apr 2016 11:16:28 +0000 Subject: [New-bugs-announce] [issue26685] Raise errors from socket.close() Message-ID: <1459509388.35.0.801044830009.issue26685@psf.upfronthosting.co.za> New submission from Martin Panter: While looking at a recent failure of test_fileno() in test_urllibnet, I discovered that socket.close() ignores the return value of the close() system call. It looks like it has always been this way: . On the other hand, both FileIO.close() and os.close() raise an exception if the underlying close() call fails. So I propose to make socket.close() also raise an exception for any failure indicated by the underlying close() call. The benefit is that a programming error causing EBADF would be more easily noticed. ---------- components: Extension Modules files: socket.close.patch keywords: patch messages: 262735 nosy: martin.panter priority: normal severity: normal stage: patch review status: open title: Raise errors from socket.close() type: enhancement versions: Python 3.6 Added file: http://bugs.python.org/file42342/socket.close.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 1 13:46:30 2016 From: report at bugs.python.org (Mark Sapiro) Date: Fri, 01 Apr 2016 17:46:30 +0000 Subject: [New-bugs-announce] [issue26686] email.parser stops parsing headers too soon. Message-ID: <1459532790.12.0.415003614976.issue26686@psf.upfronthosting.co.za> New submission from Mark Sapiro: Given an admittedly defective (the folded Content-Type: isn't indented) message part with the following headers/body ------------------------------- Content-Disposition: inline; filename="04EBD_xxxx.xxxx_A546BB.zip" Content-Type: application/x-rar-compressed; x-unix-mode=0600; name="04EBD_xxxx.xxxx_A546BB.zip" Content-Transfer-Encoding: base64 UmFyIRoHAM+QcwAADQAAAAAAAABKRXQgkC4ApAMAAEAHAAACJLrQXYFUfkgdMwkAIAAAAGEw ZjEwZi5qcwDwrrI/DB2NDI0TzcGb3Gpb8HzsS0UlpwELvdyWnVaBQt7Sl2zbJpx1qqFCGGk6 ... ------------------------------- email.parser parses the headers as ------------------------------- Content-Disposition: inline; filename="04EBD_xxxx.xxxx_A546BB.zip" Content-Type: application/x-rar-compressed; x-unix-mode=0600; ------------------------------- and the body as ------------------------------- name="04EBD_xxxx.xxxx_A546BB.zip" Content-Transfer-Encoding: base64 UmFyIRoHAM+QcwAADQAAAAAAAABKRXQgkC4ApAMAAEAHAAACJLrQXYFUfkgdMwkAIAAAAGEw ZjEwZi5qcwDwrrI/DB2NDI0TzcGb3Gpb8HzsS0UlpwELvdyWnVaBQt7Sl2zbJpx1qqFCGGk6 ... ------------------------------- and shows no defects. This is wrong. RFC5322 section 2.1 is clear that everything up to the first empty line is headers. Even the docstring in the email/parser.py module says "The header block is terminated either by the end of the string or by a blank line." Since the message is defective, it isn't clear what the correct result should be, but I think Headers: Content-Disposition: inline; filename="04EBD_xxxx.xxxx_A546BB.zip" Content-Type: application/x-rar-compressed; x-unix-mode=0600; Content-Transfer-Encoding: base64 Body: UmFyIRoHAM+QcwAADQAAAAAAAABKRXQgkC4ApAMAAEAHAAACJLrQXYFUfkgdMwkAIAAAAGEw ZjEwZi5qcwDwrrI/DB2NDI0TzcGb3Gpb8HzsS0UlpwELvdyWnVaBQt7Sl2zbJpx1qqFCGGk6 ... Defects: name="04EBD_xxxx.xxxx_A546BB.zip" would be more appropriate. 
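A quick way to poke at the behaviour described here is to feed the defective header block to email.message_from_string() and compare the result with the parse shown above (a rough standalone approximation; in the report the headers belong to a nested MIME part, so the exact defect handling may differ):

    import email

    raw = (
        'Content-Disposition: inline; filename="04EBD_xxxx.xxxx_A546BB.zip"\n'
        'Content-Type: application/x-rar-compressed; x-unix-mode=0600;\n'
        'name="04EBD_xxxx.xxxx_A546BB.zip"\n'      # the unindented continuation line
        'Content-Transfer-Encoding: base64\n'
        '\n'
        'UmFyIRoHAM+QcwAADQAAAAAAAABKRXQgkC4ApAMAAEAHAAACJLrQXYFUfkgdMwkAIAAAAGEw\n'
    )
    msg = email.message_from_string(raw)
    print(msg.keys())                     # which headers were actually parsed?
    print(repr(msg.get_payload()[:40]))   # does the body start with the stray name= line?
    print(msg.defects)                    # is the problem at least reported as a defect?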
The problem is that the Content-Transfer-Encoding: base64 header does not end up in the headers, so get_payload(decode=True) doesn't decode the base64-encoded body, making malware recognition difficult. ---------- components: Library (Lib) messages: 262750 nosy: msapiro priority: normal severity: normal status: open title: email.parser stops parsing headers too soon. type: behavior versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 1 14:37:10 2016 From: report at bugs.python.org (Berker Peksag) Date: Fri, 01 Apr 2016 18:37:10 +0000 Subject: [New-bugs-announce] [issue26687] Use Py_RETURN_NONE in sqlite3 module Message-ID: <1459535830.63.0.539238949684.issue26687@psf.upfronthosting.co.za> New submission from Berker Peksag: The attached patch replaces all "Py_INCREF(Py_None); return Py_None;" lines with the Py_RETURN_NONE macro in the sqlite3 module. ---------- components: Extension Modules messages: 262754 nosy: berker.peksag priority: normal severity: normal stage: patch review status: open title: Use Py_RETURN_NONE in sqlite3 module type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 1 16:39:29 2016 From: report at bugs.python.org (Ashley Anderson) Date: Fri, 01 Apr 2016 20:39:29 +0000 Subject: [New-bugs-announce] [issue26688] unittest2 referenced in unittest.mock documentation Message-ID: <1459543169.0.0.250942982333.issue26688@psf.upfronthosting.co.za> New submission from Ashley Anderson: I noticed a few references to `unittest2` in the documentation in the `unittest.mock` "getting started" section: https://docs.python.org/3.6/library/unittest.mock-examples.html#patch-decorators I am attaching a patch that just changes these occurrences from `unittest2` to `unittest`. ---------- assignee: docs at python components: Documentation files: unittest2.patch keywords: patch messages: 262767 nosy: aganders3, docs at python priority: normal severity: normal status: open title: unittest2 referenced in unittest.mock documentation versions: Python 3.3, Python 3.4, Python 3.5, Python 3.6 Added file: http://bugs.python.org/file42346/unittest2.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 2 20:31:21 2016 From: report at bugs.python.org (Sylvain Corlay) Date: Sun, 03 Apr 2016 00:31:21 +0000 Subject: [New-bugs-announce] [issue26689] Add `has_flag` method to `distutils.CCompiler` Message-ID: <1459643481.74.0.720699620632.issue26689@psf.upfronthosting.co.za> New submission from Sylvain Corlay: It would be very useful to have a `has_flag` method in `distutils.CCompiler`, similar to `has_function`, allowing one to check whether the compiler supports certain flags. CMake has a `CHECK_CXX_COMPILER_FLAG` macro for that purpose, which checks whether a simple C++ file compiles with the given flag.
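A rough Python 3 sketch of what such a helper could look like, built only on the existing public CCompiler.compile() API (the has_flag name and overall shape are illustrative, not an existing distutils API):

    import os
    import tempfile
    from distutils.ccompiler import new_compiler
    from distutils.errors import CompileError
    from distutils.sysconfig import customize_compiler

    def has_flag(compiler, flag):
        # Try to compile a trivial translation unit with the extra flag;
        # treat a CompileError as "flag not supported".
        with tempfile.TemporaryDirectory() as tmpdir:
            src = os.path.join(tmpdir, 'flagcheck.cpp')
            with open(src, 'w') as f:
                f.write('int main(void) { return 0; }\n')
            try:
                compiler.compile([src], output_dir=tmpdir, extra_postargs=[flag])
            except CompileError:
                return False
        return True

    compiler = new_compiler()
    customize_compiler(compiler)          # pick up CC/CXX/CFLAGS from sysconfig on Unix
    print(has_flag(compiler, '-std=c++11'))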
---------- components: Distutils messages: 262805 nosy: dstufft, eric.araujo, sylvain.corlay priority: normal severity: normal status: open title: Add `has_flag` method to `distutils.CCompiler` type: enhancement versions: Python 2.7, Python 3.2, Python 3.3, Python 3.4, Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 4 15:25:58 2016 From: report at bugs.python.org (mike bayer) Date: Mon, 04 Apr 2016 19:25:58 +0000 Subject: [New-bugs-announce] [issue26690] PyUnicode_Decode breaks when Python / sqlite3 is built with sqlite 3.12.0 Message-ID: <1459797958.48.0.726816942134.issue26690@psf.upfronthosting.co.za> New submission from mike bayer: So I really don't know *where* the issue is in this one, because I don't know enough about the different bits. Step 1: Save this C program to demo.c: #include static PyObject * unicode_thing(PyObject *self, PyObject *value) { char *str; Py_ssize_t len; if (value == Py_None) Py_RETURN_NONE; if (PyString_AsStringAndSize(value, &str, &len)) return NULL; return PyUnicode_Decode(str, len, "utf-8", "ignore"); } static PyMethodDef UnicodeMethods[] = { {"unicode_thing", unicode_thing, METH_O, "do a unicode thing."}, {NULL, NULL, 0, NULL} /* Sentinel */ }; PyMODINIT_FUNC initdemo(void) { (void) Py_InitModule("demo", UnicodeMethods); } Step 2: Build with a setup.py: from distutils.core import setup, Extension module1 = Extension('demo', sources = ['demo.c']) setup (name = 'PackageName', version = '1.0', description = 'This is a demo package', ext_modules = [module1]) $ python setup.py build_ext 3. Run with a normal Python that is *not* built against SQLite 3.12.0: $ python Python 2.7.10 (default, Sep 8 2015, 17:20:17) [GCC 5.1.1 20150618 (Red Hat 5.1.1-4)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import demo >>> demo.unicode_thing('sql') u'sql' 4. Now build Python 2.7.11 *with* SQLite 3.12.0 in the -I / -L paths. Run the same program *With* that Python: $ /opt/Python-2.7.11-sqlite-3.12.0/bin/python Python 2.7.11 (default, Apr 4 2016, 14:20:47) [GCC 5.3.1 20151207 (Red Hat 5.3.1-2)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import demo >>> demo.unicode_thing('sql') u's\x00q' Somehow the presence of sqlite-3.12.0 in the build is breaking the Python interpreter. I think. I really don't know. The bad news is, this is the code for SQLAlchemy's unicode processor, and as SQlite 3.12.0 starts getting put in distros, the world is going to break. So this is kind of really hard to test, I don't understand it, and it's totally urgent. Any insights would be appreciated! ---------- components: Extension Modules messages: 262864 nosy: zzzeek priority: normal severity: normal status: open title: PyUnicode_Decode breaks when Python / sqlite3 is built with sqlite 3.12.0 versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 4 19:28:08 2016 From: report at bugs.python.org (Brett Cannon) Date: Mon, 04 Apr 2016 23:28:08 +0000 Subject: [New-bugs-announce] [issue26691] Update the typing module to match what's in github.com/python/typing Message-ID: <1459812488.48.0.790121011173.issue26691@psf.upfronthosting.co.za> New submission from Brett Cannon: The code in Lib/typing.py is outdated compared to what's at github.com/python/typing. 
---------- assignee: gvanrossum messages: 262879 nosy: brett.cannon, gvanrossum priority: normal severity: normal status: open title: Update the typing module to match what's in github.com/python/typing _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 4 20:46:11 2016 From: report at bugs.python.org (Satrajit Ghosh) Date: Tue, 05 Apr 2016 00:46:11 +0000 Subject: [New-bugs-announce] [issue26692] cgroups support in multiprocessing Message-ID: <1459817171.82.0.158211239936.issue26692@psf.upfronthosting.co.za> New submission from Satrajit Ghosh: multiprocessing.cpu_count() returns the number of CPUs on the system as reported by /proc/cpuinfo. This is true even on machines where Linux kernel cgroups are being used to restrict CPU usage for a given process. This results in significant thread switching on systems with many cores. Some ideas for handling cgroups have been implemented in the following repos: https://github.com/peo3/cgroup-utils http://cpachecker.googlecode.com/svn-history/r12889/trunk/scripts/benchmark/runexecutor.py It would be nice if multiprocessing were a little more intelligent and queried the process characteristics. ---------- components: Library (Lib) messages: 262881 nosy: Satrajit Ghosh priority: normal severity: normal status: open title: cgroups support in multiprocessing type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 4 21:51:45 2016 From: report at bugs.python.org (skydoom) Date: Tue, 05 Apr 2016 01:51:45 +0000 Subject: [New-bugs-announce] [issue26693] Exception exceptions.AttributeError '_shutdown' in Message-ID: <1459821105.82.0.884891417565.issue26693@psf.upfronthosting.co.za> New submission from skydoom: I did a search and found issue 1947; however, it is set to "not a bug", and its last note suggests a 'packaging/environment issue'. But I can reliably reproduce the issue with the "official" Python package installed by the system (that is, I am not building Python from source). Also, the same issue does not occur on Python 2.6.2, but it does on 3.4.3 and 3.5.1. Even though the "AssertionError" message seems harmless, it is still a bit annoying. I am wondering if you can take a look at my issue? Please compile the attached source code to reproduce it. Note that it only occurs when we (explicitly or implicitly) import the 'threading' module. If we comment out that line, it works fine. ---------- components: Library (Lib) files: test2.C messages: 262884 nosy: skydoom priority: normal severity: normal status: open title: Exception exceptions.AttributeError '_shutdown' in type: behavior versions: Python 3.4, Python 3.5 Added file: http://bugs.python.org/file42368/test2.C _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 5 05:51:17 2016 From: report at bugs.python.org (=?utf-8?q?Szymon_Kuli=C5=84ski?=) Date: Tue, 05 Apr 2016 09:51:17 +0000 Subject: [New-bugs-announce] [issue26694] Disassembler fails with KeyError while disassembling obfuscated code. Message-ID: <1459849877.74.0.679437404849.issue26694@psf.upfronthosting.co.za> New submission from Szymon Kuliński: Many obfuscators use a simple technique to break disassembly: they add broken instructions (for example, unknown opcodes) and use flow control (SETUP_EXCEPT or JUMP_FORWARD) to skip over the broken instructions.
The interpreter behaves correctly, skipping the broken instructions or catching the error and jumping to the except block, but the disassembler iterates over all the instructions and everywhere assumes that the code is correct, doing something like: elif op in hasname: print '(' + co.co_names[oparg] + ')', which fails because the variable oparg is not in the co_names table, or refers to a name or const that does not exist. Why doesn't the dis library assume that the code can be broken and try to disassemble it as well as it can anyway? 15 JUMP_IF_TRUE 3 (to 19) 18 (33333333) 19 LOAD_NAME 1 (b) Alternatively, if we rely on the assumption that code which disassembles without problems is good code, we can add a flag that allows disassembling unsound code, or even add another method such as dis_unsafe or something like that. Included: obfuscated and unobfuscated pyc files for testing. Proposed change: cherry-pick the dis module code from Python 3.5, with some changes required for it to work normally. Working example included. ---------- components: Library (Lib) files: example.zip messages: 262895 nosy: Szymon.Kuliński priority: normal severity: normal status: open title: Disassembler fails with KeyError while disassembling obfuscated code. type: behavior versions: Python 2.7 Added file: http://bugs.python.org/file42371/example.zip _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 5 11:01:46 2016 From: report at bugs.python.org (Josh Rosenberg) Date: Tue, 05 Apr 2016 15:01:46 +0000 Subject: [New-bugs-announce] [issue26695] pickle and _pickle accelerator have different behavior when unpickling an object with falsy __getstate__ return Message-ID: <1459868506.3.0.857595967516.issue26695@psf.upfronthosting.co.za> New submission from Josh Rosenberg: According to a note on the pickle docs ( https://docs.python.org/3/library/pickle.html#object.__getstate__ ): "If __getstate__() returns a false value, the __setstate__() method will not be called upon unpickling." The phrasing is a little odd (since according to the __setstate__ docs, there is a behavior for classes without __setstate__ where it just assigns the contents of the pickled state dict to the __dict__ of the object), but to me, this means that any falsy value should prevent any __setstate__-like behavior. But this is not how it works. Both the C accelerator and Python code treat None specially (they don't pickle state at all if it's None), which prevents __setstate__ or the __setstate__-like fallback from being executed. But if it's any other falsy value, the behaviors differ, and diverge from the docs. Specifically, on load of a pickle with a non-None falsy state (say, False itself, or 0, or () or []): Without __setstate__: Pure Python pickle: Does not execute fallback code, behaves as expected (it just stored state it will never use), matching spirit of docs C accelerated _pickle: Fails on anything but the empty dict with an UnpicklingError: state is not a dictionary, violating spirit of docs With __setstate__: Both versions call __setstate__ even though the documentation explicitly says they will not. Seems like if nothing else, the docs should agree with the code, and the C and Python modules should agree on behavior. I would not be at all surprised if outside code depends on being able to pickle falsy state and have its __setstate__ receive the falsy state (if nothing else, when the state is a container or number, being empty or 0 would be reasonable; failing to call __setstate__ in that case would be surprising).
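A minimal sketch of the divergence just described; the class is hypothetical and the commented outcomes are the ones reported in this message:

    import io
    import pickle

    class NoSetState:                      # hypothetical: no __setstate__ defined
        def __getstate__(self):
            return ()                      # falsy, but not None

    data = pickle.dumps(NoSetState())

    try:
        pickle.loads(data)                 # C accelerator (_pickle)
    except pickle.UnpicklingError as exc:
        print('_pickle:', exc)             # reported: "state is not a dictionary"

    obj = pickle._Unpickler(io.BytesIO(data)).load()   # pure-Python implementation
    print('pickle.py:', type(obj).__name__)            # reported: loads, state silently unused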
So it's probably not a good idea to make the implementation match the docs. My proposal would be that at pickle time, if the class lacks __setstate__, treat any falsy return value as None. This means: 1. pickles are smaller (no storing junk that the default __setstate__-like behavior can't use) 2. pickles are valid (no UnpicklingError from the default __setstate__-like behavior) The docs would also have to change, to indicate that, if defined, __setstate__ will be called even if __getstate__ returned a falsy (but not None) value. Downside is the description of what happens is a little complex, since the behavior for non-None falsy values differs depending on the presence of a real __setstate__. Upside is that any code depending on the current behavior of falsy state being passed to __setstate__ keeps working, CPython and other interpreters will match behavior, and classes without __setstate__ will have smaller pickles. ---------- assignee: docs at python components: Documentation messages: 262908 nosy: docs at python, josh.r priority: normal severity: normal status: open title: pickle and _pickle accelerator have different behavior when unpickling an object with falsy __getstate__ return versions: Python 2.7, Python 3.2, Python 3.3, Python 3.4, Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 5 14:39:04 2016 From: report at bugs.python.org (Brett Cannon) Date: Tue, 05 Apr 2016 18:39:04 +0000 Subject: [New-bugs-announce] [issue26696] Document collections.abc.ByteString Message-ID: <1459881544.79.0.834895685887.issue26696@psf.upfronthosting.co.za> New submission from Brett Cannon: [typing.ByteString](https://docs.python.org/3.5/library/typing.html#typing.ByteString) references collections.abc.ByteString, but no such type is documented. ---------- components: Library (Lib) messages: 262918 nosy: brett.cannon, gvanrossum priority: normal severity: normal stage: needs patch status: open title: Document collections.abc.ByteString versions: Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 5 17:33:16 2016 From: report at bugs.python.org (Eric Johnson) Date: Tue, 05 Apr 2016 21:33:16 +0000 Subject: [New-bugs-announce] [issue26697] tkFileDialog crash on askopenfilename Python 2.7 64-bit Win7 Message-ID: <1459891996.57.0.225918514546.issue26697@psf.upfronthosting.co.za> New submission from Eric Johnson: Attempting to run tkFileDialog.askopenfilename() using Python 64-bit on Windows 7 crashes. Running SysWOW64\cmd.exe: Microsoft Windows [Version 6.1.7601] Copyright (c) 2009 Microsoft Corporation. All rights reserved. C:\Windows\SysWOW64>python Python 2.7.11rc1 (v2.7.11rc1:82dd9545bd93, Nov 21 2015, 23:25:27) [MSC v.1500 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import Tkinter >>> Tkinter.Tcl().eval('info patchlevel') '8.5.15' >>> import tkFileDialog >>> filename = tkFileDialog.askopenfilename() C:\Windows\SysWOW64> The application abruptly stops. Running the same application with Python 32-bit does not crash. 
---------- components: Tkinter files: fileopendialog.py messages: 262923 nosy: Eric Johnson priority: normal severity: normal status: open title: tkFileDialog crash on askopenfilename Python 2.7 64-bit Win7 type: crash versions: Python 2.7 Added file: http://bugs.python.org/file42374/fileopendialog.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 5 22:53:48 2016 From: report at bugs.python.org (=?utf-8?q?Westley_Mart=C3=ADnez?=) Date: Wed, 06 Apr 2016 02:53:48 +0000 Subject: [New-bugs-announce] [issue26698] IDLE DPI Awareness Message-ID: <1459911228.34.0.00983454800259.issue26698@psf.upfronthosting.co.za> New submission from Westley Mart?nez: IDLE is blurry on High DPI Windows, because IDLE is not DPI aware. IDLE should be made to be DPI aware so that the text is more readable. ---------- components: IDLE, Library (Lib), Tkinter messages: 262930 nosy: westley.martinez priority: normal severity: normal status: open title: IDLE DPI Awareness type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 6 03:29:49 2016 From: report at bugs.python.org (Mark Dickinson) Date: Wed, 06 Apr 2016 07:29:49 +0000 Subject: [New-bugs-announce] [issue26699] locale.str docstring is incorrect: "Convert float to integer" Message-ID: <1459927789.02.0.138982751213.issue26699@psf.upfronthosting.co.za> New submission from Mark Dickinson: [Observed by one of my colleagues] The locale.str docstring currently looks like this (and apparently has been this way since the dawn of time): def str(val): """Convert float to integer, taking the locale into account.""" The output of str doesn't *look* like an integer on my machine. :-) Python 2.7.10 |Master 2.1.0.dev1829 (64-bit)| (default, Oct 21 2015, 09:09:19) [GCC 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2335.6)] on darwin Type "help", "copyright", "credits" or "license" for more information. 
>>> import locale >>> locale.setlocale(locale.LC_NUMERIC, 'fr_FR') 'fr_FR' >>> locale.str(34.56) '34,56' ---------- assignee: docs at python components: Documentation keywords: easy messages: 262936 nosy: docs at python, mark.dickinson priority: normal severity: normal stage: needs patch status: open title: locale.str docstring is incorrect: "Convert float to integer" type: behavior versions: Python 2.7, Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 6 03:59:10 2016 From: report at bugs.python.org (Raymond Hettinger) Date: Wed, 06 Apr 2016 07:59:10 +0000 Subject: [New-bugs-announce] [issue26700] Make digest_size a class variable Message-ID: <1459929550.45.0.656140899063.issue26700@psf.upfronthosting.co.za> New submission from Raymond Hettinger: It would be nicer if this worked: >>> hashlib.md5.digest_size 64 ---------- assignee: gregory.p.smith components: Extension Modules messages: 262940 nosy: gregory.p.smith, rhettinger priority: normal severity: normal status: open title: Make digest_size a class variable type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 6 05:01:52 2016 From: report at bugs.python.org (Robert Smallshire) Date: Wed, 06 Apr 2016 09:01:52 +0000 Subject: [New-bugs-announce] [issue26701] Documentation for int constructor mentions __int__ but not __trunc__ Message-ID: <1459933312.1.0.479151701513.issue26701@psf.upfronthosting.co.za> New submission from Robert Smallshire: The documentation for the int(x) constructor explains that if possible, it delegates to x.__int__(). The documentation does not explain that there is a fallback to x.__trunc__() if x.__int__() is not available. The only mention of __trunc__ in the Python documentation is in the entry for math.trunc; the documentation for the numbers module does not describe the underlying special methods. Given that all Real numbers are required to implement __trunc__ but only Integral subclasses are required to implement __int__ this could be important to implementers of other Real types, although in practice I imagine that most Real types will implement __int__ as float does. ---------- assignee: docs at python components: Documentation messages: 262941 nosy: Robert Smallshire2, docs at python priority: normal severity: normal status: open title: Documentation for int constructor mentions __int__ but not __trunc__ type: enhancement versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 6 10:43:37 2016 From: report at bugs.python.org (Barry A. Warsaw) Date: Wed, 06 Apr 2016 14:43:37 +0000 Subject: [New-bugs-announce] [issue26702] A better assert statement Message-ID: <1459953817.06.0.108745447257.issue26702@psf.upfronthosting.co.za> New submission from Barry A. Warsaw: Too many times I hit failing assert statements, and have no idea what value is causing the assertion to fail. Sure, you can provide a value to print (instead of just the failing code) but it seems to be fairly rarely used. And it can also lead to code duplication. 
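For reference, the existing workaround is the rarely used two-argument form, which means hoisting the expression into a temporary so the value can be repeated in the message (names below are made up):

    k = 'spam-and.eggs'                        # hypothetical input
    cleaned = k.replace('.', '').replace('-', '')
    assert cleaned.isalnum(), cleaned          # the offending value at least shows up in the AssertionError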
As an example, I saw this today in some installed code: assert k.replace('.', '').replace('-', '').replace('_', '').isalum() So now I have to sudo edit the installed system file, duplicate everything up to but not including the .isalum() as the second argument to assert, then try to reproduce the problem. IWBNI assert could make this better. One idea would be to split the value and the conditional being asserted on that value, but we can't use two-argument asserts for that. Crazy syntax thought: reuse the 'with' keyword: assert k.replace('.', '').replace('-', '').replace('_', '') with isalum where the part before the 'with' is 'value' and the part after the 'with' is conditional, and the two parts together imply the expression. This would be equivalent to: if __debug__: if not value.conditional(): raise AssertionError(expression, value, conditional) I suppose you then want to support arguments: assert foo with can_bar, 1, 2, x=3 but maybe that's YAGNI and we can just say to use a better 2-value assert in those more complicated cases. ---------- messages: 262946 nosy: barry priority: normal severity: normal status: open title: A better assert statement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 6 14:07:05 2016 From: report at bugs.python.org (JoshN) Date: Wed, 06 Apr 2016 18:07:05 +0000 Subject: [New-bugs-announce] [issue26703] Socket state corrupts when original assignment goes out of scope Message-ID: <1459966025.84.0.875779742702.issue26703@psf.upfronthosting.co.za> New submission from JoshN: Creating a socket in one thread and sharing it with another will cause the socket to corrupt as soon as the thread it was created in exits. Example code: import socket, threading, time, os def start(): a = socket.socket(socket.AF_INET, socket.SOCK_STREAM) a.bind(("", 8080)) a.set_inheritable(True) thread = threading.Thread(target=abc, args=(a.fileno(),)) thread.start() time.sleep(2) print("Main thread exiting, socket is still valid: " + str(a) + "\n") def abc(b): sock = socket.socket(fileno=b) for _ in range(3): print("Passed as an argument:" + str(sock) + "\n=====================") time.sleep(1.1) start() Note that, as soon as the main thread exits, the socket isn't closed, nor is the fd=-1, etc. Doing anything with this corrupted object throws WinError 10038 ('operation performed on something that is not a socket'). I should note that the main thread exiting doesn't seem to be the cause, it is the original object containing the socket going out of scope that causes the socket to become corrupted. 
-JoshN ---------- components: IO messages: 262951 nosy: JoshN priority: normal severity: normal status: open title: Socket state corrupts when original assignment goes out of scope type: crash versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 6 16:01:26 2016 From: report at bugs.python.org (Anthony Sottile) Date: Wed, 06 Apr 2016 20:01:26 +0000 Subject: [New-bugs-announce] [issue26704] unittest.mock.patch: Double patching instance method: AttributeError: Mock object has no attribute '__name__' Message-ID: <1459972886.32.0.234738617564.issue26704@psf.upfronthosting.co.za> New submission from Anthony Sottile: Originally from https://github.com/testing-cabal/mock/issues/350 ## Example ```python from unittest import mock class C(object): def f(self): pass c = C() with mock.patch.object(c, 'f', autospec=True): with mock.patch.object(c, 'f', autospec=True): pass ``` ## Python3.3 ``` $ test.py $ ``` ## Python3.4 / 3.5 / 3.6 (From gitbhub.com/python/cpython at fa3fc6d7) ``` Traceback (most recent call last): File "test.py", line 10, in with mock.patch.object(c, 'f', autospec=True): File "/home/asottile/workspace/cpython/Lib/unittest/mock.py", line 1320, in __enter__ _name=self.attribute, **kwargs) File "/home/asottile/workspace/cpython/Lib/unittest/mock.py", line 2220, in create_autospec _check_signature(original, new, skipfirst=skipfirst) File "/home/asottile/workspace/cpython/Lib/unittest/mock.py", line 112, in _check_signature _copy_func_details(func, checksig) File "/home/asottile/workspace/cpython/Lib/unittest/mock.py", line 117, in _copy_func_details funcopy.__name__ = func.__name__ File "/home/asottile/workspace/cpython/Lib/unittest/mock.py", line 578, in __getattr__ raise AttributeError("Mock object has no attribute %r" % name) AttributeError: Mock object has no attribute '__name__' ``` ---------- components: Library (Lib) messages: 262960 nosy: asottile priority: normal severity: normal status: open title: unittest.mock.patch: Double patching instance method: AttributeError: Mock object has no attribute '__name__' type: crash versions: Python 3.4, Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 6 16:03:37 2016 From: report at bugs.python.org (Aviv Palivoda) Date: Wed, 06 Apr 2016 20:03:37 +0000 Subject: [New-bugs-announce] [issue26705] logging.Handler.handleError should be called from logging.Handler.handle Message-ID: <1459973017.82.0.820573021891.issue26705@psf.upfronthosting.co.za> New submission from Aviv Palivoda: Currently all the stdlib logging handlers (except BufferingHandler) emit method have the following structure: def emit(self, record): try: // do the emit except Exception: self.handleError(record) I suggest changing this so that the handle method will do the exception handling of the emit: def handle(self, record): rv = self.filter(record) if rv: self.acquire() try: self.emit(record) except Exception: self.handleError(record) finally: self.release() return rv Now the emit() method can be override without the need to handle it's own exceptions. I think this is more clear with the current documentation as well. For example in the handleError function it says that "This method should be called from handlers when an exception is encountered during an emit() call". 
In addition in the only example that implement the emit() function https://docs.python.org/3/howto/logging-cookbook.html#speaking-logging-messages there is no error handling at all. ---------- components: Library (Lib) files: logging-handle-error.patch keywords: patch messages: 262962 nosy: palaviv, vinay.sajip priority: normal severity: normal status: open title: logging.Handler.handleError should be called from logging.Handler.handle type: behavior versions: Python 3.6 Added file: http://bugs.python.org/file42381/logging-handle-error.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 6 16:36:27 2016 From: report at bugs.python.org (Shaun Walbridge) Date: Wed, 06 Apr 2016 20:36:27 +0000 Subject: [New-bugs-announce] [issue26706] Update OpenSSL version in readme Message-ID: <1459974987.74.0.0926215654006.issue26706@psf.upfronthosting.co.za> New submission from Shaun Walbridge: Sync documentation with the OpenSSL version update (1.0.2g vs 1.0.2f). Mismatch present in both head and 3.5 branch. ---------- assignee: docs at python components: Documentation files: readme-openssl.diff keywords: patch messages: 262964 nosy: docs at python, scw priority: normal severity: normal status: open title: Update OpenSSL version in readme type: enhancement versions: Python 3.5, Python 3.6 Added file: http://bugs.python.org/file42383/readme-openssl.diff _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 6 21:34:01 2016 From: report at bugs.python.org (John Lehr) Date: Thu, 07 Apr 2016 01:34:01 +0000 Subject: [New-bugs-announce] [issue26707] plistlib fails to parse bplist with 0x80 UID values Message-ID: <1459992841.57.0.391610307965.issue26707@psf.upfronthosting.co.za> New submission from John Lehr: libplist raises an invalid file exception on loading properly formed binary plists containing UID (0x80) values. The binary files were tested for form with plutil. Comments at line 706 state the value is defined but not in use in plists, and the object is not handled. However, I find them frequently in bplists, e.g., iOS Snapchat application files. I have attached a proposed patch that I have tested on these files and can now successfully parse them with the _read_object method in the _BinaryPlistParser class. My proposed patch is pasted below for others consideration while waiting for the issue to be resolved. 706,707c706,708 < # tokenH == 0x80 is documented as 'UID' and appears to be used for < # keyed-archiving, not in plists. --- > elif tokenH == 0x80: #UID > s = self._get_size(tokenL) > return self._fp.read(s).decode('ascii') Thanks for your consideration. 
---------- components: Library (Lib) files: plistlib_uid.diff keywords: patch messages: 262974 nosy: slo.sleuth priority: normal severity: normal status: open title: plistlib fails to parse bplist with 0x80 UID values type: crash versions: Python 3.5 Added file: http://bugs.python.org/file42388/plistlib_uid.diff _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 7 02:18:17 2016 From: report at bugs.python.org (Serhiy Storchaka) Date: Thu, 07 Apr 2016 06:18:17 +0000 Subject: [New-bugs-announce] [issue26708] Constify C string pointers in the posix module Message-ID: <1460009897.32.0.801268253531.issue26708@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Proposed patch adds the "const" qualifier to char and wchar_t pointers in the posix module to prevents possible bugs. These pointers point to internal data of PyBytes or PyUnicode objects or to C string literals, and unintentional changing the content is a hard bug. I expect that the patch can also eliminate some compiler warnings. Since large part of the code is Windows specific, the patch needs to be tested on Windows. ---------- components: Extension Modules files: posixmodule_constify.patch keywords: patch messages: 262978 nosy: larry, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Constify C string pointers in the posix module type: enhancement versions: Python 3.6 Added file: http://bugs.python.org/file42389/posixmodule_constify.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 7 05:46:22 2016 From: report at bugs.python.org (Serhiy Storchaka) Date: Thu, 07 Apr 2016 09:46:22 +0000 Subject: [New-bugs-announce] [issue26709] Year 2038 problem in plistlib Message-ID: <1460022382.4.0.161874143104.issue26709@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Plistlib fails to load dates before year 1901 and after year 2038 in binary format on platforms with 32-bit time_t. 
>>> data = plistlib.dumps(datetime.datetime(1901, 1, 1), fmt=plistlib.FMT_BINARY) >>> plistlib.loads(data) Traceback (most recent call last): File "", line 1, in File "/home/serhiy/py/cpython/Lib/plistlib.py", line 1006, in loads fp, fmt=fmt, use_builtin_types=use_builtin_types, dict_type=dict_type) File "/home/serhiy/py/cpython/Lib/plistlib.py", line 997, in load return p.parse(fp) File "/home/serhiy/py/cpython/Lib/plistlib.py", line 623, in parse return self._read_object(self._object_offsets[top_object]) File "/home/serhiy/py/cpython/Lib/plistlib.py", line 688, in _read_object return datetime.datetime.utcfromtimestamp(f + (31 * 365 + 8) * 86400) OverflowError: timestamp out of range for platform time_t >>> data = plistlib.dumps(datetime.datetime(2039, 1, 1), fmt=plistlib.FMT_BINARY) >>> plistlib.loads(data) Traceback (most recent call last): File "", line 1, in File "/home/serhiy/py/cpython/Lib/plistlib.py", line 1006, in loads fp, fmt=fmt, use_builtin_types=use_builtin_types, dict_type=dict_type) File "/home/serhiy/py/cpython/Lib/plistlib.py", line 997, in load return p.parse(fp) File "/home/serhiy/py/cpython/Lib/plistlib.py", line 623, in parse return self._read_object(self._object_offsets[top_object]) File "/home/serhiy/py/cpython/Lib/plistlib.py", line 688, in _read_object return datetime.datetime.utcfromtimestamp(f + (31 * 365 + 8) * 86400) OverflowError: timestamp out of range for platform time_t Proposed patch fixes this issue. ---------- components: Library (Lib) files: plistlib_large_timestamp.patch keywords: patch messages: 262986 nosy: belopolsky, ronaldoussoren, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Year 2038 problem in plistlib type: behavior versions: Python 3.5, Python 3.6 Added file: http://bugs.python.org/file42391/plistlib_large_timestamp.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 7 11:10:26 2016 From: report at bugs.python.org (Marc Abramowitz) Date: Thu, 07 Apr 2016 15:10:26 +0000 Subject: [New-bugs-announce] [issue26710] ConfigParser: Values in DEFAULT section override defaults passed to constructor Message-ID: <1460041826.4.0.857287965169.issue26710@psf.upfronthosting.co.za> New submission from Marc Abramowitz: My expectation was that any defaults I passed to ConfigParser when creating one would override values in the DEFAULT section of the config file. This is because I'd like the DEFAULT section to have the default values, but then I want to be able to override those with settings from environment variables. However, this is not the way it works. The defaults in the file take precedence over the defaults passed to the constructor. I didn't see a mention of this in the docs, but I might've missed it. Take this short program (`configparsertest.py`): ``` import configparser cp = configparser.ConfigParser({'foo': 'dog'}) print(cp.defaults()) cp.read('app.ini') print(cp.defaults()) ``` and this config file (`app.ini`): ``` [DEFAULT] foo = bar ``` I was expecting that I would see foo equal to dog twice, but what I get is: ``` $ python configparsertest.py OrderedDict([('foo', 'dog')]) OrderedDict([('foo', 'bar')]) ``` The reason that I want the programmatic default values to override the default values in the file is that I want the file to have low-precedence defaults that are used as a last resort, and I want to be able to override the defaults with the values from environment variables. 
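For illustration, one way to sidestep the 32-bit time_t limit is to do the epoch arithmetic with timedelta instead of utcfromtimestamp(); this is a sketch of the idea, not necessarily what the attached patch does:

    import datetime

    # Binary plists store dates as seconds relative to Apple's 2001-01-01 epoch.
    PLIST_EPOCH = datetime.datetime(2001, 1, 1)

    def read_date(seconds):
        # timedelta arithmetic is not limited by the platform's 32-bit time_t
        return PLIST_EPOCH + datetime.timedelta(seconds=seconds)

    print(read_date(-3155760000.0))   # 1901-01-01 00:00:00
    print(read_date(1199145600.0))    # 2039-01-01 00:00:00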
As a concrete example, imagine that I have a config file for the stdlib `logging` module that looks something like this: ``` [DEFAULT] logging_logger_root_level = WARN ... [logger_root] level = %(logging_logger_root_level)s handlers = console ``` The desired behavior is that normally the app would use the WARN level for logging, but I'd like to be able to do something like: ``` $ LOGGING_LOGGER_ROOT_LEVEL=DEBUG python my_app.py ``` to get DEBUG logging. Maybe there is some other mechanism to accomplish this? ---------- components: Library (Lib) messages: 262989 nosy: Marc.Abramowitz priority: normal severity: normal status: open title: ConfigParser: Values in DEFAULT section override defaults passed to constructor type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 7 16:06:42 2016 From: report at bugs.python.org (Serhiy Storchaka) Date: Thu, 07 Apr 2016 20:06:42 +0000 Subject: [New-bugs-announce] [issue26711] Fix comparison of plistlib.Data Message-ID: <1460059602.28.0.307914137302.issue26711@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Proposed patch fixes several bugs in plistlib.Data.__eq__(). * isinstance(other, str) was used instead of isinstance(other, bytes). Data always wraps bytes and should be comparable with bytes. str was correct type in Python 2. * id(self) == id(other) is always false, because if other is self, the first condition (isinstance(other, self.__class__)) should be true. NotImplemented should be returned as fallback. This allows comparing with Data subclasses and correct work of __ne__(). * The __eq__() method should be used instead of the equality operator. This is needed for correct work in case if value is bytes subclass with overloaded __eq__(). ---------- components: Library (Lib) files: plistlib_data_eq.patch keywords: patch messages: 263001 nosy: ronaldoussoren, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Fix comparison of plistlib.Data type: behavior versions: Python 3.5, Python 3.6 Added file: http://bugs.python.org/file42394/plistlib_data_eq.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 8 05:02:55 2016 From: report at bugs.python.org (Martin Panter) Date: Fri, 08 Apr 2016 09:02:55 +0000 Subject: [New-bugs-announce] [issue26712] Unify (r)split(), (l/r)strip() method tests Message-ID: <1460106175.71.0.116518416175.issue26712@psf.upfronthosting.co.za> New submission from Martin Panter: This follows on from Issue 26257, where I moved tests for these string/bytes methods into a more common class. So this patch merges and removes some existing tests from test_bytes.py that cover the same ground. I copied a couple tests to the common class that tend to test negative and degenerate cases, which the original common tests were weak on. 
---------- components: Tests files: split-strip.patch keywords: patch messages: 263011 nosy: martin.panter, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Unify (r)split(), (l/r)strip() method tests type: enhancement versions: Python 2.7, Python 3.5, Python 3.6 Added file: http://bugs.python.org/file42397/split-strip.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 8 10:53:50 2016 From: report at bugs.python.org (flying sheep) Date: Fri, 08 Apr 2016 14:53:50 +0000 Subject: [New-bugs-announce] =?utf-8?q?=5Bissue26713=5D_Change_f-literal_g?= =?utf-8?q?rammar_so_that_escaping_isn=E2=80=99t_possible_or_necessary?= Message-ID: <1460127230.9.0.116557671628.issue26713@psf.upfronthosting.co.za> New submission from flying sheep: Code inside the braces of an f-literal should have exactly the same lexing rules as outside, *except* for an otherwise unparsable !, :, or } signifying the end of the replacement field. 1. Every other language with template literals has it that way. 2. It makes sense that the content of the f-literal is a "hole" into which normal code goes until a !, : or } signifies its end. 3. Escaping code that will be evaluated reeks of "eval", even though it isn't. As it is now, it's very confusing, as the contents are neither code nor string content. That might be one reason why many people get it wrong and think it can be stored unevaluated and thus poses a security risk (which is obviously wrong). The whole section after "A consequence of sharing the same syntax as regular string literals is ..." has to be removed and made unnecessary by allowing everything that is otherwise legal inside. E.g. f'spam{(lambda: 1)():<4}' would be legal and be exactly the same as '{:<4}'.format((lambda: 1)()) ---------- messages: 263026 nosy: flying sheep priority: normal severity: normal status: open title: Change f-literal grammar so that escaping isn't possible or necessary _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 8 13:44:51 2016 From: report at bugs.python.org (Gregory P. Smith) Date: Fri, 08 Apr 2016 17:44:51 +0000 Subject: [New-bugs-announce] [issue26714] telnetlib.Telnet should act as a context manager Message-ID: <1460137491.63.0.283583119898.issue26714@psf.upfronthosting.co.za> New submission from Gregory P. Smith: Telnet instances should support the context manager protocol so they can be used in with statements. >>> import telnetlib >>> with telnetlib.Telnet('192.168.86.7') as tn: ... pass ... Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: __exit__ ---------- components: Library (Lib) keywords: easy messages: 263031 nosy: gregory.p.smith priority: normal severity: normal status: open title: telnetlib.Telnet should act as a context manager type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 8 15:01:10 2016 From: report at bugs.python.org (Giga Image) Date: Fri, 08 Apr 2016 19:01:10 +0000 Subject: [New-bugs-announce] [issue26715] can not deactivate venv (deactivate.bat) if the venv was activated by activate.ps1.
Message-ID: <1460142070.89.0.361834041489.issue26715@psf.upfronthosting.co.za> New submission from Giga Image: Win10/Python 3.5.1 If virtual environment was activated using powershell script, it can not deactivate the environment using only provided deactivate.bat. Pre-condition : Virtual environment already in place. 1. Open elevated Powershell (Administrator access). 2. Activate virtual environment using activate.ps1 (must). 3 Deactivate the environment in powershell using deactivate.bat (since there is no deactivate.ps1). Observation : Virtual environment never exit. Expected: deactivate script should be working as expected (as the script name suggests). NOTE: See attached screenshot. ---------- components: Windows files: one.JPG messages: 263036 nosy: Giga Image, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: can not deactivate venv (deactivate.bat) if the venv was activated by activate.ps1. type: behavior versions: Python 3.5 Added file: http://bugs.python.org/file42401/one.JPG _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 8 15:50:33 2016 From: report at bugs.python.org (Jack Zhou) Date: Fri, 08 Apr 2016 19:50:33 +0000 Subject: [New-bugs-announce] [issue26716] EINTR handling in fcntl Message-ID: <1460145033.33.0.165294517839.issue26716@psf.upfronthosting.co.za> New submission from Jack Zhou: According to PEP 475, standard library modules should handle EINTR, but this appears to not be the case for the fcntl module. Test script: import fcntl import signal import os def handle_alarm(signum, frame): print("Received alarm in process {}!".format(os.getpid())) child = os.fork() if child: signal.signal(signal.SIGALRM, handle_alarm) signal.alarm(1) with open("foo", "w") as f: print("Locking in process {}...".format(os.getpid())) fcntl.flock(f, fcntl.LOCK_EX) print("Locked in process {}.".format(os.getpid())) os.waitpid(child, 0) else: signal.signal(signal.SIGALRM, handle_alarm) signal.alarm(1) with open("foo", "w") as f: print("Locking in process {}...".format(os.getpid())) fcntl.flock(f, fcntl.LOCK_EX) print("Locked in process {}.".format(os.getpid())) ---------- components: IO messages: 263042 nosy: Jack Zhou priority: normal severity: normal status: open title: EINTR handling in fcntl type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 8 16:48:05 2016 From: report at bugs.python.org (Anthony Sottile) Date: Fri, 08 Apr 2016 20:48:05 +0000 Subject: [New-bugs-announce] [issue26717] wsgiref.simple_server: mojibake with cp1252 bytes in PATH_INFO Message-ID: <1460148485.6.0.00224363970956.issue26717@psf.upfronthosting.co.za> New submission from Anthony Sottile: Patch attached with test. 
In summary: A request to the url b'/\x80' appears to the application as a request to b'\xc2\x80' -- The issue being the latin1 decoded PATH_INFO is re-encoded as UTF-8 and then decoded as latin1 (on the wire) b'\x80' -(decode latin1)-> u'\x80' -(encode utf-8)-> b'\xc2\x80' -(decode latin1)-> b'\xc2\x80' My patch cuts out the encode(utf-8)->decode(latin1) ---------- components: Library (Lib) files: patch messages: 263043 nosy: Anthony Sottile priority: normal severity: normal status: open title: wsgiref.simple_server: mojibake with cp1252 bytes in PATH_INFO versions: Python 3.4, Python 3.5, Python 3.6 Added file: http://bugs.python.org/file42402/patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 8 21:02:03 2016 From: report at bugs.python.org (Kevin Modzelewski) Date: Sat, 09 Apr 2016 01:02:03 +0000 Subject: [New-bugs-announce] [issue26718] super.__init__ leaks memory if called multiple times Message-ID: <1460163723.35.0.208266777223.issue26718@psf.upfronthosting.co.za> New submission from Kevin Modzelewski: The super() __init__ function fills in the fields of a super object without checking if they were already set. If someone happens to call __init__ again, the previously-set references will end up getting forgotten and leak memory. For example: import sys print(sys.gettotalrefcount()) sp = super(int, 1) for i in range(100000): super.__init__(sp, float, 1.0) print(sys.gettotalrefcount()) ---------- components: Interpreter Core messages: 263053 nosy: Kevin Modzelewski priority: normal severity: normal status: open title: super.__init__ leaks memory if called multiple times versions: Python 2.7, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 9 06:06:03 2016 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 09 Apr 2016 10:06:03 +0000 Subject: [New-bugs-announce] [issue26719] More efficient formatting of ints and floats in json Message-ID: <1460196363.48.0.347398118701.issue26719@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Proposed patch provide more efficient solution of issue18264. Instead of creating new int or float object and then converting it to string, the patch directly uses functions that does int and float conversion to string. ---------- components: Extension Modules, Library (Lib) files: json_int_float_formatting.patch keywords: patch messages: 263077 nosy: amaury.forgeotdarc, barry, cvrebert, eli.bendersky, eric.snow, ethan.furman, ezio.melotti, giampaolo.rodola, gvanrossum, ncoghlan, pitrou, python-dev, rhettinger, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: More efficient formatting of ints and floats in json type: enhancement versions: Python 3.6 Added file: http://bugs.python.org/file42410/json_int_float_formatting.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 9 08:08:46 2016 From: report at bugs.python.org (Martin Panter) Date: Sat, 09 Apr 2016 12:08:46 +0000 Subject: [New-bugs-announce] [issue26720] memoryview from BufferedWriter becomes garbage Message-ID: <1460203726.07.0.470306343445.issue26720@psf.upfronthosting.co.za> New submission from Martin Panter: >>> class Raw(RawIOBase): ... def writable(self): return True ... def write(self, b): ... global written ... written = b ... return len(b) ... 
>>> writer = BufferedWriter(Raw()) >>> writer.write(b"blaua") 5 >>> raw = writer.detach() >>> written <memory at 0x...> >>> written.tobytes() b'blaua' >>> del writer >>> written.tobytes() # Garbage b'\x80f\xab\x00\x00' Assuming this is pointing into unallocated memory, maybe it could trigger a segfault, though I haven't seen that. I haven't looked at the implementation. But I am guessing that BufferedWriter is passing a view of its internal buffer to write(). For Python 2, perhaps the fix is to check if that memoryview is still referenced, and allocate a new buffer if so. 3.5 should probably inherit this fix. Another option for 3.6 might be to call release() when write() returns. This should be documented (along with the fact that memoryview is possible in the first place; see Issue 20699). ---------- components: IO messages: 263083 nosy: martin.panter priority: normal severity: normal status: open title: memoryview from BufferedWriter becomes garbage type: behavior versions: Python 2.7, Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 9 08:19:55 2016 From: report at bugs.python.org (Martin Panter) Date: Sat, 09 Apr 2016 12:19:55 +0000 Subject: [New-bugs-announce] [issue26721] Avoid socketserver.StreamRequestHandler.wfile doing partial writes Message-ID: <1460204395.72.0.250418211057.issue26721@psf.upfronthosting.co.za> New submission from Martin Panter: This is a follow-on from Issue 24291. Currently, for stream servers (as opposed to datagram servers), the wfile attribute is a raw SocketIO object. This means that wfile.write() is a simple wrapper around socket.send(), and can do partial writes. There is a comment inherited from Python 2 that reads: # . . . we make # wfile unbuffered because (a) often after a write() we want to # read and we need to flush the line; (b) big writes to unbuffered # files are typically optimized by stdio even when big reads # aren't. Python 2 only has one kind of "file" object, and it seems partial writes are impossible. But in Python 3, unbuffered mode means that the lower-level RawIOBase API is involved rather than the higher-level BufferedIOBase API. I propose to change the "wfile" attribute to be a BufferedIOBase object, yet still be "unbuffered".
This could be implemented with a class that looks something like class _SocketWriter(BufferedIOBase): """Simple writable BufferedIOBase implementation for a socket Does not hold data in a buffer, avoiding any need to call flush().""" def write(self, b): self._sock.sendall(b) return len(b) ---------- components: Library (Lib) messages: 263084 nosy: martin.panter priority: normal severity: normal status: open title: Avoid socketserver.StreamRequestHandler.wfile doing partial writes type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 9 08:52:24 2016 From: report at bugs.python.org (Alexander Marshalov) Date: Sat, 09 Apr 2016 12:52:24 +0000 Subject: [New-bugs-announce] [issue26722] Fold compare operators on constants (peephole) Message-ID: <1460206344.63.0.870327472601.issue26722@psf.upfronthosting.co.za> New submission from Alexander Marshalov: Missed peephole optimization: 1 > 2 -> False 3 < 4 -> True 5 == 6 -> False 6 != 7 -> True 7 >= 8 -> False 8 <= 9 -> True 10 is 11 -> False 12 is not 13 -> True 14 in (15, 16, 17) -> False 18 not in (19, 20, 21) -> True ---------- components: Interpreter Core files: peephole_compareops.patch keywords: patch messages: 263089 nosy: amper priority: normal severity: normal status: open title: Fold compare operators on constants (peephole) type: enhancement versions: Python 3.6 Added file: http://bugs.python.org/file42414/peephole_compareops.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 9 14:40:00 2016 From: report at bugs.python.org (Chi Hsuan Yen) Date: Sat, 09 Apr 2016 18:40:00 +0000 Subject: [New-bugs-announce] [issue26723] Add an option to skip _decimal module Message-ID: <1460227200.51.0.75268683171.issue26723@psf.upfronthosting.co.za> New submission from Chi Hsuan Yen: As said by Stefan Krah in http://bugs.python.org/issue23496#msg236886, Android ports can use the pure-Python decimal module. Of course I can, but _decimal is always built and its error messages are spammy. This change allows disabling _decimal from ./configure. For the complete build process, have a look at the scripts and patches at https://github.com/yan12125/python3-android. ---------- components: Build, Cross-Build, Extension Modules files: allow-disable-libmpdec.patch keywords: patch messages: 263105 nosy: Alex.Willmer, Chi Hsuan Yen priority: normal severity: normal status: open title: Add an option to skip _decimal module type: enhancement versions: Python 3.6 Added file: http://bugs.python.org/file42415/allow-disable-libmpdec.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 9 16:03:30 2016 From: report at bugs.python.org (anton-ryzhov) Date: Sat, 09 Apr 2016 20:03:30 +0000 Subject: [New-bugs-announce] =?utf-8?q?=5Bissue26724=5D_Serialize_dict_wit?= =?utf-8?q?h_non-string_keys_to_JSON_=E2=80=94_unexpected_result?= Message-ID: <1460232210.14.0.690770793503.issue26724@psf.upfronthosting.co.za> New submission from anton-ryzhov: JSON doesn't allow non-string keys in objects, so json.dumps converts them to strings. But if several keys have the same string representation, we get a damaged result as follows: >>> import json >>> json.dumps({1: 2, "1": "2"}) '{"1": 2, "1": "2"}' I think it should raise ValueError in this case. I've tested this case on 2.7, 3.4 and on the trunk version 3.6.
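A rough sketch of the kind of pre-check the proposed ValueError implies, written on top of the public json API (the helper name and the exact coercion rules are approximations of what dumps() does, not how the json module is implemented):

    import json

    def check_json_keys(d):
        # Mimic json.dumps() key coercion closely enough to spot collisions up front.
        def coerce(key):
            if isinstance(key, str):
                return key
            if key is True:
                return 'true'
            if key is False:
                return 'false'
            if key is None:
                return 'null'
            return json.dumps(key)          # int and float keys use their JSON text
        seen = set()
        for key in d:
            text = coerce(key)
            if text in seen:
                raise ValueError('keys collide after coercion: %r' % text)
            seen.add(text)
        return d

    json.dumps(check_json_keys({1: 2, '1': '2'}))   # raises ValueError instead of '{"1": 2, "1": "2"}'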
---------- components: Library (Lib) messages: 263108 nosy: anton-ryzhov priority: normal severity: normal status: open title: Serialize dict with non-string keys to JSON - unexpected result type: behavior versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 9 16:22:26 2016 From: report at bugs.python.org (Steven Reed) Date: Sat, 09 Apr 2016 20:22:26 +0000 Subject: [New-bugs-announce] [issue26725] list() destroys map object data Message-ID: <1460233346.79.0.373711296036.issue26725@psf.upfronthosting.co.za> New submission from Steven Reed: Example repro:

Python 3.5.1 (v3.5.1:37a07cee5969, Dec 6 2015, 01:54:25) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> x=map(bool,[1,0,0,1,1,0])
>>> x
>>> list(x)
[True, False, False, True, True, False]
>>> list(x)
[]
>>> x

---------- components: Library (Lib) messages: 263111 nosy: Steven Reed priority: normal severity: normal status: open title: list() destroys map object data type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 9 18:53:33 2016 From: report at bugs.python.org (Grady Martin) Date: Sat, 09 Apr 2016 22:53:33 +0000 Subject: [New-bugs-announce] [issue26726] Incomplete Internationalization in Argparse Module Message-ID: <1460242413.52.0.89369314936.issue26726@psf.upfronthosting.co.za> New submission from Grady Martin: The attached, teensy-weensy patch passes to gettext() a string which had previously been passed as-is. Relevant mailing list message: http://article.gmane.org/gmane.comp.python.devel/157035 ---------- files: argparse_i18n.patch keywords: patch messages: 263116 nosy: IronGrid priority: normal severity: normal status: open title: Incomplete Internationalization in Argparse Module type: behavior versions: Python 3.6 Added file: http://bugs.python.org/file42416/argparse_i18n.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 10 06:16:26 2016 From: report at bugs.python.org (Henri Starmans) Date: Sun, 10 Apr 2016 10:16:26 +0000 Subject: [New-bugs-announce] [issue26727] ctypes.util.find_msvcrt() does not work in python 3.5.1 Message-ID: <1460283386.16.0.225166865628.issue26727@psf.upfronthosting.co.za> New submission from Henri Starmans: Function find_msvcrt() returns None in Python 3.5.1, I expected 'msvcr100.dll'.
test code: from ctypes.util import find_msvcrt print(find_msvcrt()) ---------- components: Windows messages: 263126 nosy: Henri Starmans, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: ctypes.util.find_msvcrt() does not work in python 3.5.1 type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 10 08:22:09 2016 From: report at bugs.python.org (irdb) Date: Sun, 10 Apr 2016 12:22:09 +0000 Subject: [New-bugs-announce] [issue26728] make pdb.set_trace() accept debugger commands as arguments and run them after entering the debugger Message-ID: <1460290929.06.0.199912788484.issue26728@psf.upfronthosting.co.za> New submission from irdb: I usually insert the following line in the middle of code to start the debugger from there: import pdb; pdb.set_trace() More often than not, I need to run some commands immediately after entering the debug mode, e.g. watch (display) some variables, create some additional break points, etc. AFAIK currently you have to enter those commands manually on each run and there is no simple way to pass those commands from the source code. Of-course one can invoke pdb as a script ("python3 -m pdb -c ...") and pass the desired commands to the script. But still using pdb.set_trace() is a popular method and I think it would be very useful to have set_trace() accept a list of strings as arguments and execute them right after entering the debugger. ---------- components: Interpreter Core messages: 263135 nosy: irdb priority: normal severity: normal status: open title: make pdb.set_trace() accept debugger commands as arguments and run them after entering the debugger type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 10 13:54:40 2016 From: report at bugs.python.org (Erik Welch) Date: Sun, 10 Apr 2016 17:54:40 +0000 Subject: [New-bugs-announce] [issue26729] Incorrect __text_signature__ for sorted Message-ID: <1460310880.55.0.462687598027.issue26729@psf.upfronthosting.co.za> New submission from Erik Welch: The first argument to sorted is positional-only, so the text signature should be: sorted($module, iterable, /, key=None, reverse=False) instead of sorted($module, iterable, key=None, reverse=False) To reproduce the issue, attempt to use "iterable" as a keyword argument: >>> import inspect >>> sig = inspect.signature(sorted) >>> sig.bind(iterable=[]) # should raise, but doesn't >>> sorted(iterable=[]) # raises TypeError ---------- components: Extension Modules, Library (Lib) files: sorted_1.diff keywords: patch messages: 263145 nosy: eriknw priority: normal severity: normal status: open title: Incorrect __text_signature__ for sorted type: behavior versions: Python 3.5 Added file: http://bugs.python.org/file42422/sorted_1.diff _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 10 15:11:43 2016 From: report at bugs.python.org (James Hennessy) Date: Sun, 10 Apr 2016 19:11:43 +0000 Subject: [New-bugs-announce] [issue26730] SpooledTemporaryFile doesn't correctly preserve data for text (non-binary) SpooledTemporaryFile objects when Unicode characters are written Message-ID: <1460315503.46.0.361192457442.issue26730@psf.upfronthosting.co.za> New submission from James Hennessy: The tempfile.SpooledTemporaryFile class doesn't 
correctly preserve data for text (non-binary) SpooledTemporaryFile objects when Unicode characters are written. The attached program demonstrates the failure. It creates a SpooledTemporaryFile object, writes 20 string characters to it, and then tries to read them back. If the SpooledTemporaryFile has rolled over to disk, as it does in the demonstration program, then the data is not read back correctly. Instead, an exception is recognized due to the data in the SpooledTemporaryFile being corrupted. The problem is this statement in tempfile.py, in the rollover() method: newfile.seek(file.tell(), 0) The "file" variable references a StringIO object, whose tell() and seek() methods count in characters, not bytes, yet this value is applied to a TemporaryFile object, whose tell() and seek() methods deal in bytes, not characters. The demonstration program writes 10 characters to the SpooledTemporaryFile. Since 10 exceeds the rollover size of 5, the implementation writes the 10 characters to the TemporaryFile and then seeks to position 10 in the TemporaryFile, which it thinks is the end of the stream. But those 10 characters got encoded to 30 bytes, and seek position 10 is in the middle of the UTF-8 sequence for the fourth character. The next write to the SpooledTemporaryFile starts overlaying bytes from there. The attempt to read back the data fails because the byte stream no longer represents a valid UTF-8 stream of data. The related problem is the inconsistency of the behavior of tell() and seek() for text (non-binary) SpooledTemporaryFile objects. If the data hasn't yet rolled over to a TemporaryFile, they count in string characters. If the data has rolled over, they count in bytes. A quick fix for this is to remove the seek() in the rollover() method. I presume it's there to preserve the stream position if an explicit call to rollover() is made, since for an implicit call, the position would be at the end of the stream already. This quick fix, therefore, would introduce an external incompatibility in the behavior of rollover(). Another possibility is to never use a StringIO object, but to always buffer data in a BytesIO object, as is done for binary SpooledTemporaryFile objects. This has the advantage of "fixing" the tell() and seek() inconsistency, making them count bytes all the time. The downside, of course, is that data that doesn't end up being rolled over to a TemporaryFile gets encoded and decoded, a round trip that could otherwise be avoided. 
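For reference, a minimal script along these lines should reproduce the failure described above (this is my own reconstruction, not the attached showbug.py; the rollover size and the character used are arbitrary):

import tempfile

# max_size=5 forces rollover on the first 10-character write; each euro sign
# encodes to 3 bytes in UTF-8, so the character-based tell() value (10) lands
# in the middle of an encoded character when applied to the rolled-over file.
with tempfile.SpooledTemporaryFile(max_size=5, mode='w+', encoding='utf-8') as spool:
    spool.write('\u20ac' * 10)   # triggers rollover()
    spool.write('\u20ac' * 10)   # overwrites bytes starting mid-character
    spool.seek(0)
    spool.read()                 # should fail with UnicodeDecodeError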
This problem can be circumvented by a user of SpooledTemporaryFile by explicitly seeking to the end of the stream after every write to the SpooledTemporaryFile object:

spool.seek(0, io.SEEK_END)

---------- components: Library (Lib) files: showbug.py messages: 263147 nosy: James Hennessy priority: normal severity: normal status: open title: SpooledTemporaryFile doesn't correctly preserve data for text (non-binary) SpooledTemporaryFile objects when Unicode characters are written type: behavior versions: Python 3.4 Added file: http://bugs.python.org/file42423/showbug.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 10 16:21:53 2016 From: report at bugs.python.org (Matt Peters) Date: Sun, 10 Apr 2016 20:21:53 +0000 Subject: [New-bugs-announce] [issue26731] subprocess on windows leaks stdout/stderr handle to child process when stdout/stderr overridden Message-ID: <1460319713.01.0.037399080694.issue26731@psf.upfronthosting.co.za> New submission from Matt Peters: Tested on Windows 8.1 with python 2.7.5. I have a parent process that creates a child process and calls communicate to get stdout/stderr from the child. The child process calls a persistent process, with stdout/stderr/stdin set to os.devnull, and then exits without waiting on the child process. Sample code is below. The child process exits successfully, but communicate on the parent process does not return until the persistent process is terminated. Expected behavior is that the child process closes its stdout/stderr pipes on exit, and those pipes are not open anywhere else, so the parent process returns from communicate once the child process exits. One fix that stops the bug from manifesting is to edit subprocess.py:954 and pass in FALSE for inherit_handles in the call to _subprocess.CreateProcess rather than passing in int(not close_fds). With the current code there is no way for the user of subprocess to trigger this behavior, because close_fds is necessarily False when redirecting stdout/stderr/stdin due to an exception raised in the Popen constructor. I believe the proper fix is to set close_fds to True, and pass in the handles through startupinfo, if any one of the pipes has been redirected. This will require some changes to _get_handles and some significant testing. A workaround fix that is easier to implement is to remove the assertion in the Popen constructor and allow the caller to specify close_fds=True even when redirecting one of the inputs. Test case: Three programs: parent.py, child.py, and persistent.py. Launch parent.py.
Behavior:
child.py returns immediately
resident.py exits after 10 seconds
parent.py prints its output and exits immediately after resident.py exits

Expected Behavior:
child.py returns immediately
parent.py prints its output and exits immediately after child.py exits
resident.py exits after 10 seconds

############### parent.py ##########################
import subprocess
proc = subprocess.Popen("python child.py", stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
(output, error) = proc.communicate()
print 'parent complete'

############### child.py ###########################
import os
import subprocess
with open(os.devnull, 'w') as devnull:
    proc = subprocess.Popen('python resident.py', stdout=devnull, stderr=devnull, stdin=devnull)

############### resident.py ########################
import time
time.sleep(10)

---------- components: Windows messages: 263149 nosy: paul.moore, saifujinaro, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: subprocess on windows leaks stdout/stderr handle to child process when stdout/stderr overridden versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 10 17:58:21 2016 From: report at bugs.python.org (Kevin Quick) Date: Sun, 10 Apr 2016 21:58:21 +0000 Subject: [New-bugs-announce] [issue26732] multiprocessing sentinel resource leak Message-ID: <1460325501.58.0.225914662536.issue26732@psf.upfronthosting.co.za> New submission from Kevin Quick: The sentinel creates a named pipe, but the parent's end of the pipe is inherited by subsequently created children.

import multiprocessing,signal,sys

def sproc(x):
    signal.pause()

for each in range(int(sys.argv[1])):
    multiprocessing.Process(target=sproc, args=(each,)).start()

signal.pause()

Running the above on Linux with varying numbers of child processes (expressed as the argument to the above) and using techniques like "$ sudo ls /proc/NNNN/fd" it is possible to see an ever growing number of pipe connections for subsequent children. ---------- components: Library (Lib) messages: 263153 nosy: quick-b priority: normal severity: normal status: open title: multiprocessing sentinel resource leak type: resource usage versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 11 06:19:34 2016 From: report at bugs.python.org (Xiang Zhang) Date: Mon, 11 Apr 2016 10:19:34 +0000 Subject: [New-bugs-announce] [issue26733] staticmethod and classmethod are ignored when disassemble class Message-ID: <1460369974.2.0.716803365246.issue26733@psf.upfronthosting.co.za> New submission from Xiang Zhang: The documentation says that when disassembling a class, dis.dis disassembles all methods, but staticmethods and classmethods are ignored. I don't know whether this is intended. I wrote a patch to add staticmethod and classmethod. But unfortunately when I write tests, one unrelated test fails and I cannot figure out why.
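For illustration, a small self-contained example of the behaviour being reported (my own sketch, not taken from the attached patch; dis.dis walks the class __dict__ and only recurses into objects it recognises as carrying code, which the staticmethod and classmethod wrappers currently are not):

import dis

class C:
    def meth(self):
        return 1
    @staticmethod
    def smeth():
        return 2
    @classmethod
    def cmeth(cls):
        return 3

dis.dis(C)  # currently only meth is disassembled; smeth and cmeth are skipped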
---------- files: add_staticmethod_and_classmethod_when_dis.dis_a_class.patch keywords: patch messages: 263176 nosy: xiang.zhang priority: normal severity: normal status: open title: staticmethod and classmethod are ignored when disassemble class Added file: http://bugs.python.org/file42429/add_staticmethod_and_classmethod_when_dis.dis_a_class.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 11 12:50:02 2016 From: report at bugs.python.org (Bar Harel) Date: Mon, 11 Apr 2016 16:50:02 +0000 Subject: [New-bugs-announce] [issue26734] Repeated mmap\munmap calls during temporary allocation Message-ID: <1460393402.14.0.550689084671.issue26734@psf.upfronthosting.co.za> New submission from Bar Harel: After asking a question regarding performance in StackOverflow, I received an answer which seemed like a design problem in object allocation. This is the question: http://stackoverflow.com/q/36548518/1658617 Seems like it ignores the garbage allocation settings (as timeit is supposed to disable it as far as I know) and I might not be proficient in low-level programming but there should be a way to implement it that doesn't cause endless allocations. ---------- components: Benchmarks, Interpreter Core, Tests messages: 263189 nosy: bar.harel, brett.cannon, pitrou priority: normal severity: normal status: open title: Repeated mmap\munmap calls during temporary allocation type: performance versions: Python 2.7, Python 3.2, Python 3.3, Python 3.4, Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 11 13:35:08 2016 From: report at bugs.python.org (Matthew Ryan) Date: Mon, 11 Apr 2016 17:35:08 +0000 Subject: [New-bugs-announce] [issue26735] os.urandom(2500) fails on Solaris 11.3 Message-ID: <1460396108.06.0.541477402311.issue26735@psf.upfronthosting.co.za> New submission from Matthew Ryan: On Solaris 11.3 (intel tested, but I assume issue is on SPARC as well), I found the following fails: import os os.urandom(2500) The above throws OSError: [Errno 22] Invalid argument. It turns out that the Solaris version of getrandom() is limited to returning no more than 1024 bytes, per the manpage: The getrandom() and getentropy() functions fail if: EINVAL The flags are not set to GRND_RANDOM, GRND_NONBLOCK or both, or bufsz is <= 0 or > 1024. I've attached a possible patch for this issue, against the 3.5.1 source tree. ---------- files: python3-getrandom.patch keywords: patch messages: 263191 nosy: mryan1539 priority: normal severity: normal status: open title: os.urandom(2500) fails on Solaris 11.3 type: behavior versions: Python 3.5 Added file: http://bugs.python.org/file42433/python3-getrandom.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 11 14:28:20 2016 From: report at bugs.python.org (Serhiy Storchaka) Date: Mon, 11 Apr 2016 18:28:20 +0000 Subject: [New-bugs-announce] [issue26736] Use HTTPS protocol in links Message-ID: <1460399300.6.0.985736947814.issue26736@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Proposed patch makes links in the docs to use the HTTPS protocol if possible. All changed links are tested manually. 
---------- assignee: docs at python components: Documentation files: links_https.patch keywords: patch messages: 263197 nosy: alex, christian.heimes, docs at python, dstufft, georg.brandl, giampaolo.rodola, janssen, pitrou, serhiy.storchaka, tim.golden priority: normal severity: normal stage: patch review status: open title: Use HTTPS protocol in links type: security versions: Python 2.7, Python 3.5, Python 3.6 Added file: http://bugs.python.org/file42435/links_https.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 11 14:38:37 2016 From: report at bugs.python.org (Bayo Opadeyi) Date: Mon, 11 Apr 2016 18:38:37 +0000 Subject: [New-bugs-announce] [issue26737] csv.DictReader throws generic error when fieldnames is accessed on non-text file Message-ID: <1460399917.95.0.594939363664.issue26737@psf.upfronthosting.co.za> New submission from Bayo Opadeyi: If you use the csv.DictReader to open a non-text file and try to access fieldnames on it, it crashes with a generic error instead of something specific. ---------- messages: 263199 nosy: boyombo priority: normal severity: normal status: open title: csv.DictReader throws generic error when fieldnames is accessed on non-text file versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 11 15:57:32 2016 From: report at bugs.python.org (dileep k) Date: Mon, 11 Apr 2016 19:57:32 +0000 Subject: [New-bugs-announce] [issue26738] listname.strip does not work right if the name ends with an 'o' Message-ID: <1460404652.51.0.927437963251.issue26738@psf.upfronthosting.co.za> New submission from dileep k: 12:54:38 | ~ | #1 $ python -V Python 2.7.6 12:54:41 | ~ | #2 $ python Python 2.7.6 (default, Jun 22 2015, 17:58:13) [GCC 4.8.2] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> 'media.log'.strip('.log') 'media' >>> 'video.log'.strip('.log') 'vide' >>> The output should have been 'video' instead of 'vide' ! ---------- components: Library (Lib) messages: 263203 nosy: dileep k priority: normal severity: normal status: open title: listname.strip does not work right if the name ends with an 'o' type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 11 17:24:19 2016 From: report at bugs.python.org (MICHAEL JACOBSON) Date: Mon, 11 Apr 2016 21:24:19 +0000 Subject: [New-bugs-announce] [issue26739] Errno 10035 a non-blocking socket operation could not be completed immediately Message-ID: <1460409859.89.0.789703447955.issue26739@psf.upfronthosting.co.za> New submission from MICHAEL JACOBSON: So far I've got past the "bug in program" stage of debugging, but this came up: IDLE internal error in runcode() Traceback (most recent call last): File "C:\Python27\lib\idlelib\rpc.py", line 235, in asyncqueue self.putmessage((seq, request)) File "C:\Python27\lib\idlelib\rpc.py", line 332, in putmessage n = self.sock.send(s[:BUFSIZE]) error: [Errno 10035] A non-blocking socket operation could not be completed immediately I have no idea what a "socket" is so if you know please tell me! 
---------- components: Windows files: Skier messages: 263208 nosy: MICHAEL JACOBSON, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Errno 10035 a non-blocking socket operation could not be completed immediately type: resource usage versions: Python 2.7 Added file: http://bugs.python.org/file42436/Skier _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 12 04:32:18 2016 From: report at bugs.python.org (Tomas Tomecek) Date: Tue, 12 Apr 2016 08:32:18 +0000 Subject: [New-bugs-announce] [issue26740] tarfile: accessing (listing and extracting) tarball fails with UnicodeDecodeError Message-ID: <1460449938.56.0.948695426963.issue26740@psf.upfronthosting.co.za> New submission from Tomas Tomecek: I have a tarball (generated by docker-1.10 via `docker export`) and am trying to extract it with python 2.7 tarfile: ``` with tarfile.open(name=tarball_path) as tar_fd: tar_fd.extractall(path=path) ``` Output from a pytest run: ``` /usr/lib64/python2.7/tarfile.py:2072: in extractall for tarinfo in members: /usr/lib64/python2.7/tarfile.py:2507: in next tarinfo = self.tarfile.next() /usr/lib64/python2.7/tarfile.py:2355: in next tarinfo = self.tarinfo.fromtarfile(self) /usr/lib64/python2.7/tarfile.py:1254: in fromtarfile return obj._proc_member(tarfile) /usr/lib64/python2.7/tarfile.py:1276: in _proc_member return self._proc_pax(tarfile) /usr/lib64/python2.7/tarfile.py:1406: in _proc_pax value = value.decode("utf8") _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ input = '\x01\x00\x00\x02\xc0\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', errors = 'strict' def decode(input, errors='strict'): > return codecs.utf_8_decode(input, errors, True) E UnicodeDecodeError: 'utf8' codec can't decode byte 0xc0 in position 4: invalid start byte /usr/lib64/python2.7/encodings/utf_8.py:16: UnicodeDecodeError ``` Since I know nothing about tars, I have no idea if this is a bug or there is a proper solution/workaround. When using GNU tar, I'm able to to list and extract the tarball. ---------- components: Unicode messages: 263237 nosy: Tomas Tomecek, ezio.melotti, haypo priority: normal severity: normal status: open title: tarfile: accessing (listing and extracting) tarball fails with UnicodeDecodeError versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 12 18:49:07 2016 From: report at bugs.python.org (STINNER Victor) Date: Tue, 12 Apr 2016 22:49:07 +0000 Subject: [New-bugs-announce] [issue26741] subprocess.Popen should emit a ResourceWarning in destructor if the process is still running Message-ID: <1460501347.41.0.70782898945.issue26741@psf.upfronthosting.co.za> New submission from STINNER Victor: A subprocess.Popen object contains many resources: pipes, a child process (its pid, and an handle on Windows), etc. IMHO it's not safe to rely on the destructor to release all resources. I would prefer to release resources explicitly. For example, use proc.wait() or "with proc:". Attached patch emits a ResourceWarning in Popen destructor if the status of the child process was not read yet. The patch changes also _execute_child() to set the returncode on error, if the child process raised a Python exception. It avoids to emit a ResourceWarning on this case. 
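(Not part of the patch, just for illustration: the explicit-release style recommended above looks like this.)

import subprocess, sys

# "with proc:" closes the pipes and waits for the child, so nothing is left
# for the destructor to release and no ResourceWarning needs to be emitted.
with subprocess.Popen([sys.executable, '-c', 'print("hello")'],
                      stdout=subprocess.PIPE) as proc:
    output = proc.stdout.read()
    proc.wait()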
The patch fixes also unit tests to release explicitly resources. self.addCleanup(p.stdout.close) is not enough: use "with proc:" instead. TODO: fix also the Windows implementation of _execute_child(). ---------- components: Library (Lib) messages: 263281 nosy: haypo, martin.panter, pitrou, serhiy.storchaka priority: normal severity: normal status: open title: subprocess.Popen should emit a ResourceWarning in destructor if the process is still running type: resource usage versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 12 20:51:35 2016 From: report at bugs.python.org (STINNER Victor) Date: Wed, 13 Apr 2016 00:51:35 +0000 Subject: [New-bugs-announce] [issue26742] imports in test_warnings changes warnings.filters Message-ID: <1460508695.22.0.184623410488.issue26742@psf.upfronthosting.co.za> New submission from STINNER Victor: --- $ ./python -Wd -m test -j0 test_warnings Run tests in parallel using 6 child processes 0:00:01 [1/1] test_warnings (...) Warning -- warnings.filters was modified by test_warnings 1 test altered the execution environment: test_warnings Total duration: 0:00:02 --- The problem are these two lines in test_warnings/__init__.py: --- py_warnings = support.import_fresh_module('warnings', blocked=['_warnings']) c_warnings = support.import_fresh_module('warnings', fresh=['_warnings']) --- Each fresh "import warnings" calls _processoptions(sys.warnoptions) which can change warning filters. Attached patch saves/restores warnings.filter to fix the resource warning from the test suite. Note: the warning is not emited if tests are run sequentially (without the -jN option). ---------- files: test_warnings.patch keywords: patch messages: 263291 nosy: haypo priority: normal severity: normal status: open title: imports in test_warnings changes warnings.filters versions: Python 3.5, Python 3.6 Added file: http://bugs.python.org/file42449/test_warnings.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 12 20:56:46 2016 From: report at bugs.python.org (Raghu) Date: Wed, 13 Apr 2016 00:56:46 +0000 Subject: [New-bugs-announce] [issue26743] Unable to import random with python2.7 on power pc based machine Message-ID: <1460509006.04.0.361461231746.issue26743@psf.upfronthosting.co.za> New submission from Raghu: Hi, I am trying to import random on a power pc based machine and I see this exception. Could you please help me? root at host# python Python 2.7.3 (default, Apr 3 2016, 22:31:30) [GCC 4.8.1] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
>>> import random
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.7/random.py", line 58, in <module>
    NV_MAGICCONST = 4 * _exp(-0.5)/_sqrt(2.0)
ValueError: math domain error
>>>

---------- components: Library (Lib) messages: 263292 nosy: ragreddy priority: normal severity: normal status: open title: Unable to import random with python2.7 on power pc based machine type: crash versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 12 21:57:49 2016 From: report at bugs.python.org (Ma Lin) Date: Wed, 13 Apr 2016 01:57:49 +0000 Subject: [New-bugs-announce] [issue26744] print() function hangs on MS-Windows 10 Message-ID: <1460512669.99.0.0354506250162.issue26744@psf.upfronthosting.co.za> New submission from Ma Lin: My OS is MS-Windows 10 X86-64 (Home edition), with the latest update (now it's 10586.164). I have two programs that occasionally hang forever. After a few months of observation, I can provide this information:

1. The print() function causes the infinite hang.
2. If it hangs, simply pressing the ENTER key makes it go on without any problem.
3. Both a pure-console program and a console inside a Tkinter program have this issue.
4. I tried official Python 3.4.4 64bit and 3.5.1 64bit; they all have this issue. I didn't try other versions because my programs need Python 3.4+.
5. IIRC, the problem has existed since Windows 10 Threshold 2 (12/Nov/2015), but I'm not very sure about this.

---------- components: Windows messages: 263295 nosy: Ma Lin, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: print() function hangs on MS-Windows 10 versions: Python 3.4, Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 12 22:37:14 2016 From: report at bugs.python.org (Xiang Zhang) Date: Wed, 13 Apr 2016 02:37:14 +0000 Subject: [New-bugs-announce] [issue26745] Redundant code in _PyObject_GenericSetAttrWithDict Message-ID: <1460515034.42.0.500786155919.issue26745@psf.upfronthosting.co.za> New submission from Xiang Zhang: It seems some code in _PyObject_GenericSetAttrWithDict is not necessary. There is no need to check for a data descriptor again using PyDescr_IsData, and the second if (f != NULL) {} will never run. ---------- components: Interpreter Core files: _PyObject_GenericSetAttrWithDict.patch keywords: patch messages: 263297 nosy: xiang.zhang priority: normal severity: normal status: open title: Redundant code in _PyObject_GenericSetAttrWithDict versions: Python 3.6 Added file: http://bugs.python.org/file42450/_PyObject_GenericSetAttrWithDict.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 13 04:38:49 2016 From: report at bugs.python.org (Stefan Krah) Date: Wed, 13 Apr 2016 08:38:49 +0000 Subject: [New-bugs-announce] [issue26746] struct.pack(): trailing padding bytes on x64 Message-ID: <1460536729.2.0.723853097724.issue26746@psf.upfronthosting.co.za> New submission from Stefan Krah: On the x64 architecture gcc adds trailing padding bytes after the last struct member.
NumPy does the same: >>> import numpy as np >>> >>> t = np.dtype([('x', 'u1'), ('y', 'u8'), ('z', 'u1')], align=True) >>> x = np.array([(1, 2, 3)], dtype=t) >>> x.tostring() b'\x01\xf7\xba\xab\x03\x7f\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00' The struct module in native mode does not: >>> struct.pack("BQB", 1, 2, 3) b'\x01\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x03' I'm not sure if this is intended -- or if full compatibility to native compilers is even achievable in the general case. ---------- components: Extension Modules messages: 263315 nosy: mark.dickinson, skrah priority: normal severity: normal status: open title: struct.pack(): trailing padding bytes on x64 type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 13 11:29:00 2016 From: report at bugs.python.org (Nan Wu) Date: Wed, 13 Apr 2016 15:29:00 +0000 Subject: [New-bugs-announce] [issue26747] types.InstanceType only for old style class only in 2.7 Message-ID: <1460561340.28.0.563389556375.issue26747@psf.upfronthosting.co.za> New submission from Nan Wu: >>> import types >>> a = 1 >>> isinstance(a, types.InstanceType) False >>> class A: ... pass ... >>> a = A() >>> isinstance(a, types.InstanceType) True >>> class A(object): ... pass ... >>> a = A() >>> isinstance(a, types.InstanceType) False Looks like isinstance(instance, types.InstanceType) only return True for user-defined old-style class instance. If it's the case, I feel doc should clarify that like what types.ClassType did. If no, someone please close this request. Thanks. ---------- files: doc_InstanceType_is_for_old_style_cls.patch keywords: patch messages: 263338 nosy: Nan Wu priority: normal severity: normal status: open title: types.InstanceType only for old style class only in 2.7 type: enhancement versions: Python 2.7 Added file: http://bugs.python.org/file42456/doc_InstanceType_is_for_old_style_cls.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 13 13:22:01 2016 From: report at bugs.python.org (Antoine Pitrou) Date: Wed, 13 Apr 2016 17:22:01 +0000 Subject: [New-bugs-announce] [issue26748] enum.Enum is False-y Message-ID: <1460568121.51.0.847598758945.issue26748@psf.upfronthosting.co.za> New submission from Antoine Pitrou: >>> import enum >>> bool(enum.Enum) False >>> bool(enum.IntEnum) False This behaviour is relatively unexpected for classes, and can lead to subtle bugs such as the following: https://bitbucket.org/ambv/singledispatch/issues/8/inconsistent-hierarchy-with-enum ---------- components: Library (Lib) messages: 263342 nosy: barry, eli.bendersky, ethan.furman, gvanrossum, pitrou priority: normal severity: normal status: open title: enum.Enum is False-y type: behavior versions: Python 2.7, Python 3.4, Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 13 14:49:03 2016 From: report at bugs.python.org (Luiz Poleto) Date: Wed, 13 Apr 2016 18:49:03 +0000 Subject: [New-bugs-announce] [issue26749] Update devguide to include Fedora's DNF Message-ID: <1460573343.29.0.391562448207.issue26749@psf.upfronthosting.co.za> New submission from Luiz Poleto: Starting with Fedora 22, yum is no longer the default packaging tool, being replaced by the new DNF (Dandified Yum). 
Section 1.1.3.1 of the devguide, Build dependencies, has instructions to install system headers on popular Linux distributions, including Fedora; however, it only covers using yum to do it. This section should be updated to include the usage of the new DNF packaging tool to perform that task. ---------- assignee: docs at python components: Documentation messages: 263350 nosy: docs at python, poleto priority: normal severity: normal status: open title: Update devguide to include Fedora's DNF type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 13 18:13:06 2016 From: report at bugs.python.org (Amaury Forgeot d'Arc) Date: Wed, 13 Apr 2016 22:13:06 +0000 Subject: [New-bugs-announce] [issue26750] Mock autospec does not work with subclasses of property() Message-ID: <1460585586.67.0.646853664072.issue26750@psf.upfronthosting.co.za> New submission from Amaury Forgeot d'Arc: When patching a class, mock.create_autospec() correctly detects properties and __slot__ attributes, but not subclasses of property() or other kinds of data descriptors. The attached patch detects all data descriptors and patches them the way they should be. ---------- components: Tests files: mock-descriptor.patch keywords: patch messages: 263361 nosy: amaury.forgeotdarc, michael.foord priority: normal severity: normal status: open title: Mock autospec does not work with subclasses of property() type: enhancement Added file: http://bugs.python.org/file42458/mock-descriptor.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 13 21:14:51 2016 From: report at bugs.python.org (David Manowitz) Date: Thu, 14 Apr 2016 01:14:51 +0000 Subject: [New-bugs-announce] [issue26751] Possible bug in sorting algorithm Message-ID: <1460596491.57.0.479961799171.issue26751@psf.upfronthosting.co.za> New submission from David Manowitz: I'm trying to sort a list of tuples. Most of the tuples are pairs of US state names. However, some of the tuples have None instead of the 2nd name. I want the items sorted first by the 1st element, and then by the 2nd element, BUT I want the None to count as LARGER than any name. Thus, I want to see [('Alabama', 'Iowa'), ('Alabama', None)] rather than [('Alabama', None), ('Alabama', 'Iowa')]. I defined the following comparator:

def cmp_keys (k1, k2):
    retval = cmp(k1[0], k2[0])
    if retval == 0:
        if k2[1] is None:
            retval = -1
        if k1[1] is None:
            retval = 1
        else:
            retval = cmp(k1[1], k2[1])
    return retval

However, once I sort using this, some of the elements are out of order.
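A side note, not part of the report: sorted()/list.sort() require a consistent comparator (cmp(a, b) and cmp(b, a) must agree), and as transcribed above the second "if" is not an "elif", so its result can override the first branch; for example, two tuples whose second elements are both None appear to compare as "greater" in both directions. A key function avoids the problem entirely; a sketch of the intended "None sorts last" ordering:

data = [('Alabama', None), ('Alabama', 'Iowa'), ('Georgia', 'Ohio')]
# False sorts before True, so tuples with a real second name come before
# tuples with None when the first elements are equal.
print(sorted(data, key=lambda t: (t[0], t[1] is None, t[1])))
# [('Alabama', 'Iowa'), ('Alabama', None), ('Georgia', 'Ohio')]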
---------- components: Interpreter Core messages: 263367 nosy: David.Manowitz priority: normal severity: normal status: open title: Possible bug in sorting algorithm type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 14 03:43:24 2016 From: report at bugs.python.org (James) Date: Thu, 14 Apr 2016 07:43:24 +0000 Subject: [New-bugs-announce] [issue26752] Mock(2.0.0).assert_has_calls() raise AssertionError in two same calls Message-ID: <1460619804.34.0.222553592589.issue26752@psf.upfronthosting.co.za> New submission from James: >>> import mock >>> print mock.__version__ 2.0.0 >>> ================= test.py from mock import Mock,call class BB(object): def __init__(self):pass def print_b(self):pass def print_bb(self,tsk_id):pass bMock = Mock(return_value=Mock(spec=BB)) bMock().print_bb(20) bMock().assert_has_calls([call.print_bb(20)]) =================== Traceback (most recent call last): File "test.py", line 11, in bMock().assert_has_calls([call.print_bb(20)]) File "/usr/lib/python2.7/site-packages/mock/mock.py", line 969, in assert_has_calls ), cause) File "/usr/lib/python2.7/site-packages/six.py", line 718, in raise_from raise value AssertionError: Calls not found. Expected: [call.print_bb(20)] Actual: [call.print_bb(20)] ======= print expected in mock.py assert_has_calls() result is: [TypeError('too many positional arguments',)] ---------- files: test.py messages: 263375 nosy: jekin000, rbcollins priority: normal severity: normal status: open title: Mock(2.0.0).assert_has_calls() raise AssertionError in two same calls type: behavior versions: Python 2.7 Added file: http://bugs.python.org/file42460/test.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 14 03:55:46 2016 From: report at bugs.python.org (Larry Hastings) Date: Thu, 14 Apr 2016 07:55:46 +0000 Subject: [New-bugs-announce] [issue26753] Obmalloc lock LOCK_INIT and LOCK_FINI are never used Message-ID: <1460620546.75.0.247903713074.issue26753@psf.upfronthosting.co.za> New submission from Larry Hastings: Obmalloc now has theoretical support for locking. I say theoretical because I'm not convinced it's ever been used. The interface is defined through five macros: SIMPLELOCK_DECL SIMPLELOCK_INIT SIMPLELOCK_FINI SIMPLELOCK_LOCK SIMPLELOCK_UNLOCK Internally these are used to define an actual lock to be used in the module. The lock, "_malloc_lock", is declared, then four defines are made building on top of the SIMPLELOCK macros, named: LOCK UNLOCK LOCK_INIT LOCK_FINI LOCK_INIT and LOCK_FINI are never called. So unless your lock doesn't happen to require initialization or shutdown, this API is misimplemented. Victor: this was your work, right? If not, sorry, please unassign/de-nosy yourself. 
---------- assignee: haypo components: Interpreter Core messages: 263377 nosy: haypo, larry priority: low severity: normal stage: needs patch status: open title: Obmalloc lock LOCK_INIT and LOCK_FINI are never used type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 14 04:06:10 2016 From: report at bugs.python.org (Serhiy Storchaka) Date: Thu, 14 Apr 2016 08:06:10 +0000 Subject: [New-bugs-announce] [issue26754] PyUnicode_FSDecoder() accepts arbitrary iterable Message-ID: <1460621170.51.0.766625206789.issue26754@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: PyUnicode_FSDecoder() accepts not only str and bytes or bytes-like object, but arbitrary iterable, e.g. list. Example:

>>> compile('', [116, 101, 115, 116], 'exec')
<code object <module> at 0xb6fb1340, file "test", line 1>

I think accepting arbitrary iterables is unintentional and weird behavior. ---------- components: Interpreter Core messages: 263378 nosy: serhiy.storchaka priority: normal severity: normal status: open title: PyUnicode_FSDecoder() accepts arbitrary iterable type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 14 05:38:44 2016 From: report at bugs.python.org (Berker Peksag) Date: Thu, 14 Apr 2016 09:38:44 +0000 Subject: [New-bugs-announce] [issue26755] Update version{added, changed} docs in devguide Message-ID: <1460626724.97.0.188358038821.issue26755@psf.upfronthosting.co.za> New submission from Berker Peksag: This is a follow-up from issue 26366: "the original intention was to use "versionadded" where the API item is completely new. So "The parameter x was added" in a function is using "versionchanged" because only an aspect of the function's signature was changed." See msg260314 and msg260509 for details. ---------- components: Devguide files: versionchanged.diff keywords: patch messages: 263393 nosy: berker.peksag, ezio.melotti, georg.brandl, willingc priority: normal severity: normal stage: patch review status: open title: Update version{added,changed} docs in devguide type: behavior Added file: http://bugs.python.org/file42461/versionchanged.diff _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 14 10:32:27 2016 From: report at bugs.python.org (Joel Barry) Date: Thu, 14 Apr 2016 14:32:27 +0000 Subject: [New-bugs-announce] [issue26756] fileinput handling of unicode errors from standard input Message-ID: <1460644347.16.0.552330506095.issue26756@psf.upfronthosting.co.za> New submission from Joel Barry: The openhook for fileinput currently will not be called when the input is from sys.stdin. However, if the input contains invalid UTF-8 sequences, a program with a hook that specifies errors='replace' will not behave as expected:

$ cat x.py
import fileinput
import sys

def hook(filename, mode):
    print('hook called')
    return open(filename, mode, errors='replace')

for line in fileinput.input(openhook=hook):
    sys.stdout.write(line)

$ echo -e "foo\x80bar" >in.txt
$ python3 x.py in.txt
hook called
foo�bar

Good. Hook is called, and replacement character is observed.
$ python3 x.py for line in fileinput.input(openhook=hook): File "/usr/local/Cellar/python3/3.4.3/Frameworks/Python.framework/Versions/3.4/lib/python3.4/fileinput.py", line 263, in __next__ line = self.readline() File "/usr/local/Cellar/python3/3.4.3/Frameworks/Python.framework/Versions/3.4/lib/python3.4/fileinput.py", line 363, in readline self._buffer = self._file.readlines(self._bufsize) File "/usr/local/Cellar/python3/3.4.3/Frameworks/Python.framework/Versions/3.4/lib/python3.4/codecs.py", line 319, in decode (result, consumed) = self._buffer_decode(data, self.errors, final) UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 3: invalid start byte Hook was not called, and so we get the UnicodeDecodeError. Should fileinput attempt to apply the hook code to stdin? ---------- messages: 263409 nosy: jmb236 priority: normal severity: normal status: open title: fileinput handling of unicode errors from standard input type: behavior versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 14 10:53:12 2016 From: report at bugs.python.org (STINNER Victor) Date: Thu, 14 Apr 2016 14:53:12 +0000 Subject: [New-bugs-announce] [issue26757] test_urllib2net.test_http_basic() timeout after 15 min on Message-ID: <1460645592.51.0.660228574975.issue26757@psf.upfronthosting.co.za> New submission from STINNER Victor: Timeout seen on "x86-64 Ubuntu 15.10 Skylake CPU 3.5" buildbot: http://buildbot.python.org/all/builders/x86-64%20Ubuntu%2015.10%20Skylake%20CPU%203.5/builds/357/steps/test/logs/stdio [215/398] test_urllib2net Timeout (0:15:00)! Thread 0x00007f71354be700 (most recent call first): File "/home/buildbot/buildarea/3.5.intel-ubuntu-skylake/build/Lib/socket.py", line 575 in readinto File "/home/buildbot/buildarea/3.5.intel-ubuntu-skylake/build/Lib/http/client.py", line 258 in _read_status File "/home/buildbot/buildarea/3.5.intel-ubuntu-skylake/build/Lib/http/client.py", line 297 in begin File "/home/buildbot/buildarea/3.5.intel-ubuntu-skylake/build/Lib/http/client.py", line 1197 in getresponse File "/home/buildbot/buildarea/3.5.intel-ubuntu-skylake/build/Lib/urllib/request.py", line 1246 in do_open File "/home/buildbot/buildarea/3.5.intel-ubuntu-skylake/build/Lib/urllib/request.py", line 1271 in http_open File "/home/buildbot/buildarea/3.5.intel-ubuntu-skylake/build/Lib/urllib/request.py", line 443 in _call_chain File "/home/buildbot/buildarea/3.5.intel-ubuntu-skylake/build/Lib/urllib/request.py", line 483 in _open File "/home/buildbot/buildarea/3.5.intel-ubuntu-skylake/build/Lib/urllib/request.py", line 465 in open File "/home/buildbot/buildarea/3.5.intel-ubuntu-skylake/build/Lib/urllib/request.py", line 162 in urlopen File "/home/buildbot/buildarea/3.5.intel-ubuntu-skylake/build/Lib/test/test_urllib2net.py", line 19 in _retry_thrice File "/home/buildbot/buildarea/3.5.intel-ubuntu-skylake/build/Lib/test/test_urllib2net.py", line 27 in wrapped File "/home/buildbot/buildarea/3.5.intel-ubuntu-skylake/build/Lib/test/test_urllib2net.py", line 255 in test_http_basic File "/home/buildbot/buildarea/3.5.intel-ubuntu-skylake/build/Lib/unittest/case.py", line 600 in run File "/home/buildbot/buildarea/3.5.intel-ubuntu-skylake/build/Lib/unittest/case.py", line 648 in __call__ File "/home/buildbot/buildarea/3.5.intel-ubuntu-skylake/build/Lib/unittest/suite.py", line 122 in run File "/home/buildbot/buildarea/3.5.intel-ubuntu-skylake/build/Lib/unittest/suite.py", line 84 in __call__ File 
"/home/buildbot/buildarea/3.5.intel-ubuntu-skylake/build/Lib/unittest/suite.py", line 122 in run File "/home/buildbot/buildarea/3.5.intel-ubuntu-skylake/build/Lib/unittest/suite.py", line 84 in __call__ File "/home/buildbot/buildarea/3.5.intel-ubuntu-skylake/build/Lib/unittest/suite.py", line 122 in run File "/home/buildbot/buildarea/3.5.intel-ubuntu-skylake/build/Lib/unittest/suite.py", line 84 in __call__ File "/home/buildbot/buildarea/3.5.intel-ubuntu-skylake/build/Lib/unittest/runner.py", line 176 in run File "/home/buildbot/buildarea/3.5.intel-ubuntu-skylake/build/Lib/test/support/__init__.py", line 1800 in _run_suite File "/home/buildbot/buildarea/3.5.intel-ubuntu-skylake/build/Lib/test/support/__init__.py", line 1834 in run_unittest File "/home/buildbot/buildarea/3.5.intel-ubuntu-skylake/build/Lib/test/regrtest.py", line 1305 in test_runner File "/home/buildbot/buildarea/3.5.intel-ubuntu-skylake/build/Lib/test/regrtest.py", line 1306 in runtest_inner File "/home/buildbot/buildarea/3.5.intel-ubuntu-skylake/build/Lib/test/regrtest.py", line 991 in runtest File "/home/buildbot/buildarea/3.5.intel-ubuntu-skylake/build/Lib/test/regrtest.py", line 784 in main File "/home/buildbot/buildarea/3.5.intel-ubuntu-skylake/build/Lib/test/regrtest.py", line 1592 in main_in_temp_cwd File "/home/buildbot/buildarea/3.5.intel-ubuntu-skylake/build/Lib/test/__main__.py", line 3 in File "/home/buildbot/buildarea/3.5.intel-ubuntu-skylake/build/Lib/runpy.py", line 85 in _run_code File "/home/buildbot/buildarea/3.5.intel-ubuntu-skylake/build/Lib/runpy.py", line 184 in _run_module_as_main ---------- messages: 263411 nosy: haypo, martin.panter priority: normal severity: normal status: open title: test_urllib2net.test_http_basic() timeout after 15 min on _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 14 12:03:16 2016 From: report at bugs.python.org (Josh Rosenberg) Date: Thu, 14 Apr 2016 16:03:16 +0000 Subject: [New-bugs-announce] [issue26758] Unnecessary format string handling for no argument slot wrappers in typeobject.c Message-ID: <1460649796.07.0.143785441525.issue26758@psf.upfronthosting.co.za> New submission from Josh Rosenberg: Right now, in typeobject.c, the call_method and call_maybe utility functions have a fast path for no argument methods, where a NULL or "" format string just calls PyTuple_New(0) directly instead of wasting time parsing Py_VaBuildValue. Problem is, nothing uses it. Every no arg user (the slot wrappers for __len__, __index__ and __next__ directly, and indirectly through the SLOT0 macro for __neg__, __pos__, __abs__, __invert__, __int__ and __float__) is passing along "()" as the format string, which fails the test for NULL/"", so it calls Py_VaBuildValue that goes to an awful lot of trouble to scan the string a few times and eventually spit out the empty tuple anyway. Changing the three direct calls to call_method where it passes "()" as the format string, as well as the definition of SLOT0, to replace "()" with NULL as the format string argument should remove a non-trivial number of C varargs function calls and string processing, replacing it with a single, cheap PyTuple_New(0) call (which Py_VaBuildValue was already eventually performing anyway). If I understand the purpose of these slot wrapper functions, that should give a free speed up to all types implemented at the Python level, particularly numeric types (e.g. 
fractions.Fraction) and container/iterator types (speeding up __len__ and __next__ respectively). I identified this while on a work machine which I can't use to check out the Python repository; I'll submit a patch later today if no one else gets to it, once I'm home and can use my own computer to make/test the fix. ---------- components: Interpreter Core messages: 263419 nosy: josh.r priority: normal severity: normal status: open title: Unnecessary format string handling for no argument slot wrappers in typeobject.c versions: Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 14 14:13:43 2016 From: report at bugs.python.org (Serhiy Storchaka) Date: Thu, 14 Apr 2016 18:13:43 +0000 Subject: [New-bugs-announce] [issue26759] PyBytes_FromObject accepts arbitrary iterable Message-ID: <1460657623.47.0.29764231335.issue26759@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: PyBytes_FromObject creates a bytes object from an object that implements the buffer or the iterable protocol. But only using the buffer protocol is documented. We should either document the current behavior (the documentation of int.from_bytes() can be used as a sample), or change the behavior to match the documentation. For now PyBytes_FromObject() is used in the stdlib only for converting FS paths to str (besides using internally in bytes). When called from PyUnicode_FSDecoder(), this leads to accepting arbitrary iterables as filenames, that looks at leas strange (issue26754). In the posix module it is called only for objects that support the buffer protocol. Thus the support of the iterable protocol is not used or misused in the stdlib. I don't know if it is used correctly in third party code, I suspect that this is rather misused. Note that there is alternative API function PyObject_Bytes(), that accepts same arguments as the bytes() constructor, except an integer, and supports the buffer protocol, the iterable protocol, and in additional supports the __bytes__() special method. ---------- messages: 263423 nosy: haypo, martin.panter, pitrou, serhiy.storchaka priority: normal severity: normal status: open title: PyBytes_FromObject accepts arbitrary iterable type: behavior versions: Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 14 14:26:28 2016 From: report at bugs.python.org (Brett Cannon) Date: Thu, 14 Apr 2016 18:26:28 +0000 Subject: [New-bugs-announce] [issue26760] Document PyFrameObject Message-ID: <1460658388.85.0.590056164506.issue26760@psf.upfronthosting.co.za> New submission from Brett Cannon: Can be as simple as https://docs.python.org/3/c-api/code.html#c.PyCodeObject . Key point is to have it in the index so people don't wonder what the deal is with the type when noticing it as a parameter to a function. 
---------- assignee: brett.cannon components: Documentation messages: 263424 nosy: brett.cannon priority: normal severity: normal status: open title: Document PyFrameObject versions: Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 14 23:29:55 2016 From: report at bugs.python.org (Ganning Liu) Date: Fri, 15 Apr 2016 03:29:55 +0000 Subject: [New-bugs-announce] [issue26761] winsound module very unstable in Windows 10 Message-ID: <1460690995.69.0.897655850655.issue26761@psf.upfronthosting.co.za> New submission from Ganning Liu: I cannot use winsound.Beep in a .py file (run as a module); I get error information like:

Traceback (most recent call last):
  File "C:\Users\liuga\Desktop\sound.py", line 2, in <module>
    winsound.Beep(230,200)
AttributeError: module 'winsound' has no attribute 'Beep'

It also fails in the interactive shell occasionally. ---------- components: Extension Modules messages: 263438 nosy: Ganning Liu priority: normal severity: normal status: open title: winsound module very unstable in Windows 10 type: compile error versions: Python 2.7, Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 15 00:04:25 2016 From: report at bugs.python.org (Martin Panter) Date: Fri, 15 Apr 2016 04:04:25 +0000 Subject: [New-bugs-announce] [issue26762] test_multiprocessing_spawn leaves processes running in background Message-ID: <1460693065.24.0.946570360611.issue26762@psf.upfronthosting.co.za> New submission from Martin Panter: I noticed that this test leaves processes running in the background for a moment after the parent Python process exits. They disappear pretty quickly, but even so, it seems like a bad design. However I am not familiar with the multiprocessing module, so maybe this is unavoidable.

$ ps
PID TTY TIME CMD
597 pts/2 00:00:01 bash
13423 pts/2 00:00:00 ps
$ python3.5 -m test test_multiprocessing_spawn; ps
[1/1] test_multiprocessing_spawn
1 test OK.
PID TTY TIME CMD
597 pts/2 00:00:01 bash
13429 pts/2 00:00:00 python3.5
13475 pts/2 00:00:00 python3.5
15066 pts/2 00:00:00 ps

---------- components: Tests messages: 263442 nosy: martin.panter priority: normal severity: normal status: open title: test_multiprocessing_spawn leaves processes running in background type: behavior versions: Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 15 03:06:00 2016 From: report at bugs.python.org (Ian Lee) Date: Fri, 15 Apr 2016 07:06:00 +0000 Subject: [New-bugs-announce] [issue26763] Update PEP-8 regarding binary operators Message-ID: <1460703960.6.0.758787970043.issue26763@psf.upfronthosting.co.za> New submission from Ian Lee: Following up from discussion on python-ideas [1] about updating PEP-8 regarding wrapping lines before rather than after binary operators.
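For readers following along, the difference under discussion looks like this (illustrative snippet; the variable names are made up, not taken from the patch):

first_term, second_term, third_term = 1, 2, 3

# current PEP 8 example style: break after the binary operator
total = (first_term +
         second_term -
         third_term)

# proposed style: break before the operator, keeping it next to its operand
total = (first_term
         + second_term
         - third_term)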
---------- files: wrap-before-binary-operator.patch keywords: patch messages: 263453 nosy: IanLee1521, gvanrossum priority: normal severity: normal status: open title: Update PEP-8 regarding binary operators type: enhancement Added file: http://bugs.python.org/file42463/wrap-before-binary-operator.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 15 03:28:02 2016 From: report at bugs.python.org (Serhiy Storchaka) Date: Fri, 15 Apr 2016 07:28:02 +0000 Subject: [New-bugs-announce] [issue26764] SystemError in bytes.__rmod__ Message-ID: <1460705282.56.0.272877366519.issue26764@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: >>> [] % b'' Traceback (most recent call last): File "", line 1, in SystemError: Objects/bytesobject.c:2975: bad argument to internal function Proposed patch fixes bytes.__rmod__ and tests for bytes formatting. ---------- components: Interpreter Core files: bytes_rmod.patch keywords: patch messages: 263457 nosy: ethan.furman, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: SystemError in bytes.__rmod__ type: behavior versions: Python 3.5, Python 3.6 Added file: http://bugs.python.org/file42464/bytes_rmod.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 15 04:05:59 2016 From: report at bugs.python.org (Serhiy Storchaka) Date: Fri, 15 Apr 2016 08:05:59 +0000 Subject: [New-bugs-announce] [issue26765] Factor out common bytes and bytearray implementation Message-ID: <1460707559.13.0.709090380773.issue26765@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Proposed patch factors out the implementation of bytes and bytearray by moving common code in separate files. This is not new approach, the part of the code is already shared. The patch decreases the size of the source code by 862 lines. ---------- components: Interpreter Core files: bytes_methods.patch keywords: patch messages: 263458 nosy: serhiy.storchaka priority: normal severity: normal status: open title: Factor out common bytes and bytearray implementation type: enhancement versions: Python 3.6 Added file: http://bugs.python.org/file42465/bytes_methods.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 15 05:43:21 2016 From: report at bugs.python.org (Berker Peksag) Date: Fri, 15 Apr 2016 09:43:21 +0000 Subject: [New-bugs-announce] [issue26766] Redundant check in bytearray_mod Message-ID: <1460713401.07.0.177357500269.issue26766@psf.upfronthosting.co.za> New submission from Berker Peksag: I noticed this while looking at issue 26764. bytearray_mod() and bytearray_format() both have checks for PyByteArray_Check(v). The check in bytearray_format() looks redundant to me. Here is a patch. 
---------- components: Interpreter Core files: bytearray_mod.diff keywords: patch messages: 263462 nosy: berker.peksag, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Redundant check in bytearray_mod type: enhancement versions: Python 3.6 Added file: http://bugs.python.org/file42466/bytearray_mod.diff _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 15 06:10:32 2016 From: report at bugs.python.org (Serhiy Storchaka) Date: Fri, 15 Apr 2016 10:10:32 +0000 Subject: [New-bugs-announce] [issue26767] Inconsistant error messages for failed attribute modification Message-ID: <1460715032.51.0.36886496046.issue26767@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: >>> class I(int): ... @property ... def a(self): pass ... @property ... def b(self): pass ... @b.setter ... def b(self, value): pass ... @property ... def c(self): pass ... @c.deleter ... def c(self): pass ... >>> obj = I() >>> del obj.numerator Traceback (most recent call last): File "", line 1, in AttributeError: attribute 'numerator' of 'int' objects is not writable >>> del obj.to_bytes Traceback (most recent call last): File "", line 1, in AttributeError: to_bytes >>> del obj.from_bytes Traceback (most recent call last): File "", line 1, in AttributeError: from_bytes >>> del obj.a Traceback (most recent call last): File "", line 1, in AttributeError: can't delete attribute >>> del obj.b Traceback (most recent call last): File "", line 1, in AttributeError: can't delete attribute >>> del obj.y Traceback (most recent call last): File "", line 1, in AttributeError: y >>> obj.numerator = 1 Traceback (most recent call last): File "", line 1, in AttributeError: attribute 'numerator' of 'int' objects is not writable >>> obj.a = 1 Traceback (most recent call last): File "", line 1, in AttributeError: can't set attribute >>> obj.c = 1 Traceback (most recent call last): File "", line 1, in AttributeError: can't set attribute >>> >>> obj = 1 >>> del obj.numerator Traceback (most recent call last): File "", line 1, in AttributeError: attribute 'numerator' of 'int' objects is not writable >>> del obj.to_bytes Traceback (most recent call last): File "", line 1, in AttributeError: 'int' object attribute 'to_bytes' is read-only >>> del obj.from_bytes Traceback (most recent call last): File "", line 1, in AttributeError: 'int' object attribute 'from_bytes' is read-only >>> del obj.y Traceback (most recent call last): File "", line 1, in AttributeError: 'int' object has no attribute 'y' >>> obj.numerator = 1 Traceback (most recent call last): File "", line 1, in AttributeError: attribute 'numerator' of 'int' objects is not writable >>> obj.to_bytes = 1 Traceback (most recent call last): File "", line 1, in AttributeError: 'int' object attribute 'to_bytes' is read-only >>> obj.from_bytes = 1 Traceback (most recent call last): File "", line 1, in AttributeError: 'int' object attribute 'from_bytes' is read-only >>> obj.y = 1 Traceback (most recent call last): File "", line 1, in AttributeError: 'int' object has no attribute 'y' Different error messages are used in errors when try to modify non-existing or read-only attribute. This depends on the existing __dict__ and the way how the attribute is resolved. 
* just the attribute name * "'%.50s' object has no attribute '%U'" * "'%.50s' object attribute '%U' is read-only" * "attribute '%V' of '%.100s' objects is not writable" I think it would be nice to unify error messages and make them more specific. ---------- components: Interpreter Core messages: 263464 nosy: serhiy.storchaka priority: normal severity: normal status: open title: Inconsistant error messages for failed attribute modification type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 15 07:21:51 2016 From: report at bugs.python.org (Ivan Pozdeev) Date: Fri, 15 Apr 2016 11:21:51 +0000 Subject: [New-bugs-announce] [issue26768] Fix instructions at WindowsCompilers for MSVC/SDKs Message-ID: <1460719311.17.0.00683602776456.issue26768@psf.upfronthosting.co.za> New submission from Ivan Pozdeev: Current instructions at https://wiki.python.org/moin/WindowsCompilers for a number of items are insufficient to make things work out of the box. This has lead to widespread confusion and a lot of vastly different and invariably hacky/unreliable/unmaintainable "alternative" guides on the Net (see e.g. the sheer volume of crappy advice at http://stackoverflow.com/questions/2817869/error-unable-to-find-vcvarsall-bat and http://stackoverflow.com/questions/4676728/value-error-trying-to-install-python-for-windows-extensions). The first patch fixes that for SDKs 6.1,7.0,7.1 (details are in the patch's Subject). The second one mentions VS Express' limitation that leads to an obscure error in distutils which resulted in https://bugs.python.org/issue7511 . Unlike other (all?) instructions circling around the Net, these are NOT hacks and are intended to be official recommendations. I tested them to work with 2.7 and 3.2 on an x32 with no prior development tools installed. I also checked other instructions applicable to these versions to be okay. I didn't touch the ''mingw'' section because, according to https://bugs.python.org/issue4709 , it can't really be officially supported as it is now. ---------- assignee: docs at python components: Build, Documentation, Windows files: 0001-fix-winsdk.patch keywords: patch messages: 263474 nosy: Ivan.Pozdeev, docs at python, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Fix instructions at WindowsCompilers for MSVC/SDKs type: behavior versions: Python 2.7, Python 3.2, Python 3.3, Python 3.4 Added file: http://bugs.python.org/file42468/0001-fix-winsdk.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 15 08:59:01 2016 From: report at bugs.python.org (STINNER Victor) Date: Fri, 15 Apr 2016 12:59:01 +0000 Subject: [New-bugs-announce] [issue26769] Python 2.7: make private file descriptors non inheritable Message-ID: <1460725141.7.0.638632202912.issue26769@psf.upfronthosting.co.za> New submission from STINNER Victor: By default, subprocess.Popen doesn't close file descriptors (close_fds=False by default) and so a child process inherit all file descriptors open by the parent process. See the PEP 446 for the long rationale. I propose to partially backport the PEP 446: only make *private* file descriptors non-inheritable. The private file descriptor of os.urandom() is already marked as non-inheritable. To be clear: my patch doesn't affect applications using fork(). 
Only child processes created with fork+exec (ex: using subprocess) are impacted. Since modified file descriptors are private (not accessible from the Python scope), I don't think that the change can cause any backward compatibility issue. I'm not 100% sure that it's worth to take the risk of introducing bugs (backward incompatible change?), since it looks like very few users complained of leaked file descriptors. I'm not sure that the modified functions contain sensitive file descriptors. -- I chose to add Python/fileutils.c (and Include/fileutils.h) to add the new functions: /* Try to make the file descriptor non-inheritable. Ignore errors. */ PyAPI_FUNC(void) _Py_try_set_non_inheritable(int fd); /* Wrapper to fopen() which tries to make the file non-inheritable on success. Ignore errors on trying to set the file descriptor non-inheritable. */ PyAPI_FUNC(FILE*) _Py_fopen(const char *path, const char *mode); I had to modify PCbuild/pythoncore.vcxproj and Makefile.pre.in to add fileutils.c/h. I chose to mimick Python 3 Python/fileutils.c. Tell me if you prefer to put the new functions in an existing file (I don't see where such function should be put). File descriptors made non inheritable: * linuxaudiodev.open() * mmap.mmap(fd, 0): this function duplicates the file descriptor, the duplicated file descriptor is set to non-inheritable * mmap.mmap(-1, ...): on platforms without MAP_ANONYMOUS, the function opens /dev/zero, its file descriptor is set to non-inheritable * ossaudiodev.open() * ossaudiodev.openmixer() * select.epoll() * select.kqueue() * sunaudiodev.open() Other functions using fopen() have been modified to use _Py_fopen(): the file is closed a few lines below and not returned to the Python scope. The patch also reuses _Py_try_set_non_inheritable(fd) in Modules/posixmodule.c and Python/random.c. Not modified: * _hotshot: the file descriptor can get retrieved with _hotshot.ProfilerType.fileno(). * find_module() of Python/import.c: imp.find_module() uses the FILE* to create a Python file object which is returned Note: I don't think that os.openpty() should be modified to limit the risk of breaking the backward compatibility. This issue is a much more generic issue than the change #10897. ---------- files: set_inheritable.patch keywords: patch messages: 263488 nosy: benjamin.peterson, haypo, martin.panter, serhiy.storchaka priority: normal severity: normal status: open title: Python 2.7: make private file descriptors non inheritable type: resource usage versions: Python 2.7 Added file: http://bugs.python.org/file42471/set_inheritable.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 15 09:21:00 2016 From: report at bugs.python.org (STINNER Victor) Date: Fri, 15 Apr 2016 13:21:00 +0000 Subject: [New-bugs-announce] [issue26770] _Py_set_inheritable(): do nothing if the FD_CLOEXEC close is already set/cleared Message-ID: <1460726460.32.0.344896145272.issue26770@psf.upfronthosting.co.za> New submission from STINNER Victor: Attached patch is avoids a syscall in os.set_inheritable() if the FD_CLOEXEC flag is already set/cleared. The change only impacts platforms using fcntl() in _Py_set_inheritable(). Windows has a different implementation, and Linux uses ioctl() for example. The same "optimization" is used in socket.socket.setblocking(): see the issue #19827. 
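In Python terms, the fcntl() path that this change touches looks roughly like the following sketch (an illustration of the FD_CLOEXEC logic in _Py_set_inheritable(), not the actual C patch):

    import fcntl

    def set_inheritable(fd, inheritable):
        # Sketch only: read the current descriptor flags, compute the new
        # ones, and skip the second fcntl() syscall if nothing would change.
        flags = fcntl.fcntl(fd, fcntl.F_GETFD)
        if inheritable:
            new_flags = flags & ~fcntl.FD_CLOEXEC
        else:
            new_flags = flags | fcntl.FD_CLOEXEC
        if new_flags == flags:
            return  # FD_CLOEXEC already in the requested state
        fcntl.fcntl(fd, fcntl.F_SETFD, new_flags)

The early return is the whole point of the proposal: when the flag is already set (or already cleared), the F_SETFD call is redundant.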
---------- files: set_inheritable_fcntl.patch keywords: patch messages: 263493 nosy: haypo priority: normal severity: normal status: open title: _Py_set_inheritable(): do nothing if the FD_CLOEXEC close is already set/cleared type: performance versions: Python 3.6 Added file: http://bugs.python.org/file42472/set_inheritable_fcntl.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 15 12:00:47 2016 From: report at bugs.python.org (Benjamin Berg) Date: Fri, 15 Apr 2016 16:00:47 +0000 Subject: [New-bugs-announce] [issue26771] python-config.sh.in INCDIR does not match python version if exec_prefix != prefix Message-ID: <1460736047.35.0.090254006662.issue26771@psf.upfronthosting.co.za> New submission from Benjamin Berg: The script contains: INCDIR="-I$includedir/python${VERSION}${ABIFLAGS}" PLATINCDIR="-I$includedir/python${VERSION}${ABIFLAGS}" But looking at the sysconfig module we have: 'include': '{installed_base}/include/python{py_version_short}{abiflags}', 'platinclude': '{installed_platbase}/include/python{py_version_short}{abiflags}', which resolves from: _CONFIG_VARS['installed_base'] = _BASE_PREFIX _CONFIG_VARS['platbase'] = _EXEC_PREFIX So one is based on prefix, and the other on exec_prefix. I am actually not sure right now how I could properly reconcile these in the shell script version, but if I simply patch the makefile to install the python version, then everything works fine. ---------- components: Cross-Build messages: 263505 nosy: Alex.Willmer, benzea priority: normal severity: normal status: open title: python-config.sh.in INCDIR does not match python version if exec_prefix != prefix versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 15 14:51:46 2016 From: report at bugs.python.org (Rex Dwyer) Date: Fri, 15 Apr 2016 18:51:46 +0000 Subject: [New-bugs-announce] [issue26772] regex.ENHANCEMATCH crashes interpreter Message-ID: <1460746306.07.0.135965165356.issue26772@psf.upfronthosting.co.za> New submission from Rex Dwyer: regex.findall(r'((brown)|(lazy)){1<=e<=3} ((dog)|(fox)){1<=e<=3}', 'The quick borwn fax jumped over the lzy hog', regex.ENHANCEMATCH) crashes interpreter. regex.__version__ => 2.4.85 python version '3.4.2 (v3.4.2:ab2c023a9432, Oct 6 2014, 22:15:05) [MSC v.1600 32 bit (Intel)]' ---------- components: Regular Expressions messages: 263515 nosy: Rex Dwyer, ezio.melotti, mrabarnett priority: normal severity: normal status: open title: regex.ENHANCEMATCH crashes interpreter type: crash _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 15 15:43:36 2016 From: report at bugs.python.org (Paul Ellenbogen) Date: Fri, 15 Apr 2016 19:43:36 +0000 Subject: [New-bugs-announce] [issue26773] Shelve works inconsistently when carried over to child processes Message-ID: <1460749416.45.0.0472811645443.issue26773@psf.upfronthosting.co.za> New submission from Paul Ellenbogen: If a shelve is opened, then the processed forked, sometime the shelve will appear to work in the child, and other times it will throw a KeyError. I suspect the order of element access may trigger the issue. I have included a python script that will exhibit the error. It may need to be run a few times. 
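A hypothetical minimal reproducer along the lines described (the attached shelve_process.py is not shown here; the paths and sizes below are made up):

    import os
    import shelve
    import tempfile

    path = os.path.join(tempfile.mkdtemp(), "data")
    db = shelve.open(path)
    for i in range(100):
        db[str(i)] = list(range(i))

    pid = os.fork()
    if pid == 0:
        # Child: depending on timing and access order, this either works
        # or raises KeyError, matching the inconsistency described above.
        print(db["50"])
        os._exit(0)
    os.waitpid(pid, 0)
    db.close()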
If shelve is not meant to be inherited by the child process in this way, it should consistently throw an error (probably not a KeyError) on any use, including the first. This way it can be caught in the child, and the shelve can potentially be reopened in the child. A current workaround is to find all places where a process may fork, and reopen any shelves in the child process after the fork. This may work for most smaller scripts. This could become tedious in more complex applications that fork in multiple places and open shelves in multiple places. ------------------------------------------------------- Running #!/usr/bin/env python3 import multiprocessing import platform import sys print(sys.version) print(multiprocessing.cpu_count()) print(platform.platform()) outputs: 3.4.3+ (default, Oct 14 2015, 16:03:50) [GCC 5.2.1 20151010] 8 Linux-4.2.0-34-generic-x86_64-with-Ubuntu-15.10-wily ---------- components: Interpreter Core files: shelve_process.py messages: 263522 nosy: Paul Ellenbogen priority: normal severity: normal status: open title: Shelve works inconsistently when carried over to child processes versions: Python 3.4, Python 3.5 Added file: http://bugs.python.org/file42475/shelve_process.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 15 23:29:24 2016 From: report at bugs.python.org (Larry Hastings) Date: Sat, 16 Apr 2016 03:29:24 +0000 Subject: [New-bugs-announce] [issue26774] Elide Py_atomic fences when WITH_THREAD is disabled? Message-ID: <1460777364.25.0.14094466129.issue26774@psf.upfronthosting.co.za> New submission from Larry Hastings: Right now the atomic access fence macros in pyatomic.h are unconditional. This means that they're active even when you "./configure --without-threads". If Python thread support is disabled, surely we don't need to ensure atomic access to variables, because there aren't any other threads to compete with. Shouldn't we add #ifdef WITH_THREAD /* current code goes here */ #else #define _Py_atomic_load_relaxed(x) (x) /* etc */ #endif ? ---------- messages: 263537 nosy: haypo, jyasskin, larry priority: low severity: normal stage: test needed status: open title: Elide Py_atomic fences when WITH_THREAD is disabled? type: performance versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 16 02:54:29 2016 From: report at bugs.python.org (Luiz Poleto) Date: Sat, 16 Apr 2016 06:54:29 +0000 Subject: [New-bugs-announce] [issue26775] Improve test coverage on urllib.parse Message-ID: <1460789669.45.0.432751253755.issue26775@psf.upfronthosting.co.za> New submission from Luiz Poleto: urllib.parse has two functions, parse_qs and parse_qsl, to parse a query string and return its parameters/values as a dictionary or a list, respectively. However, the unit tests only test parse_qsl, which is also incomplete since both parse_qs and parse_qsl support & and ; as separators for key=value pairs and there are only test scenarios using &. The attached patch adds a new test for parse_qs as well as new scenarios including ; as separator.
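For reference, this is the separator behaviour those scenarios exercise; as the report notes, both & and ; are accepted as pair separators:

    from urllib.parse import parse_qs, parse_qsl

    print(parse_qs("a=1;a=2&b=3"))   # {'a': ['1', '2'], 'b': ['3']}
    print(parse_qsl("a=1;a=2&b=3"))  # [('a', '1'), ('a', '2'), ('b', '3')]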
---------- components: Tests files: urllib.parse_test_coverage.patch keywords: patch messages: 263538 nosy: luiz.poleto priority: normal severity: normal status: open title: Improve test coverage on urllib.parse type: enhancement versions: Python 3.6 Added file: http://bugs.python.org/file42477/urllib.parse_test_coverage.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 16 04:02:38 2016 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 16 Apr 2016 08:02:38 +0000 Subject: [New-bugs-announce] [issue26776] Determining the failure of C API call is ambiguous Message-ID: <1460793758.51.0.304392536592.issue26776@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: C API functions returns a special value unambiguously signaling about a raised exception (NULL or -1). But in some cases this is ambiguous, because the special value is a legitimate value (e.g. -1 for PyLong_AsLong() or NULL for PyDict_GetItem()). Needed to use PyErr_Occurred() to distinguish between successful and failed call. The problem is that if PyLong_AsLong() is called when the exception is set, successful call returned -1 is interpreted as failed. Since it is happen in very rare case, this bug is usually unnoticed. Attached experimental patch makes some functions like PyLong_AsLong() always failing if called with an exception set. Some tests are failed with it applied: test_compile test_datetime test_io test_os test_symtable test_syntax test_xml_etree_c. ---------- components: Interpreter Core files: check_error_occurred.patch keywords: patch messages: 263540 nosy: haypo, serhiy.storchaka priority: normal severity: normal status: open title: Determining the failure of C API call is ambiguous Added file: http://bugs.python.org/file42478/check_error_occurred.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 16 08:35:09 2016 From: report at bugs.python.org (STINNER Victor) Date: Sat, 16 Apr 2016 12:35:09 +0000 Subject: [New-bugs-announce] [issue26777] test_asyncio: test_timeout_disable() fails randomly Message-ID: <1460810109.63.0.0563263496249.issue26777@psf.upfronthosting.co.za> New submission from STINNER Victor: On the "AMD64 FreeBSD 9.x 3.5" buildbot, test_timeout_disable() fails randomly. 
http://buildbot.python.org/all/builders/AMD64%20FreeBSD%209.x%203.5/builds/701/steps/test/logs/stdio ====================================================================== FAIL: test_timeout_disable (test.test_asyncio.test_tasks.TimeoutTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/home/buildbot/python/3.5.koobs-freebsd9/build/Lib/test/test_asyncio/test_tasks.py", line 2399, in test_timeout_disable self.loop.run_until_complete(go()) File "/usr/home/buildbot/python/3.5.koobs-freebsd9/build/Lib/asyncio/base_events.py", line 379, in run_until_complete return future.result() File "/usr/home/buildbot/python/3.5.koobs-freebsd9/build/Lib/asyncio/futures.py", line 274, in result raise self._exception File "/usr/home/buildbot/python/3.5.koobs-freebsd9/build/Lib/asyncio/tasks.py", line 240, in _step result = coro.send(None) File "/usr/home/buildbot/python/3.5.koobs-freebsd9/build/Lib/test/test_asyncio/test_tasks.py", line 2398, in go self.assertTrue(0.09 < dt < 0.11, dt) AssertionError: False is not true : 0.11916078114882112 ---------- components: Tests, asyncio keywords: buildbot messages: 263550 nosy: gvanrossum, haypo, yselivanov priority: normal severity: normal status: open title: test_asyncio: test_timeout_disable() fails randomly versions: Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 16 08:49:34 2016 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 16 Apr 2016 12:49:34 +0000 Subject: [New-bugs-announce] [issue26778] More typo fixes Message-ID: <1460810974.92.0.431649847374.issue26778@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Proposed patch fixes a number of typos (mainly about misusing "a/an") in the docs, docstrings, comments and error messages. 
---------- files: typos.patch keywords: patch messages: 263552 nosy: martin.panter, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: More typo fixes type: enhancement versions: Python 2.7, Python 3.5, Python 3.6 Added file: http://bugs.python.org/file42484/typos.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 16 09:08:12 2016 From: report at bugs.python.org (Sriram Rajagopalan) Date: Sat, 16 Apr 2016 13:08:12 +0000 Subject: [New-bugs-announce] [issue26779] pdb continue followed by an exception in the same frame shows incorrect frame linenumber Message-ID: <1460812092.64.0.640871964672.issue26779@psf.upfronthosting.co.za> New submission from Sriram Rajagopalan: Consider this simple python program - 1 #!/usr/bin/python 2 3 import pdb 4 import sys 5 import traceback 6 7 def trace_exception( type, value, tb ): 8 traceback.print_tb( tb ) 9 pdb.post_mortem( tb ) 10 11 sys.excepthook = trace_exception 12 13 def func(): 14 print ( "Am here in func" ) 15 pdb.set_trace() 16 print ( "Am here after pdb" ) 17 print ( "Am going to assert now" ) 18 assert False 19 print ( "Am here after exception" ) 20 21 def main(): 22 func() 23 24 if __name__ == "__main__": 25 main() On running this program - % ./python /tmp/test.py Am here in func > /tmp/test.py(16)func() -> print ( "Am here after pdb" ) (Pdb) c Am here after pdb Am going to assert now File "/tmp/test.py", line 25, in main() File "/tmp/test.py", line 22, in main func() File "/tmp/test.py", line 16, in func print ( "Am here after pdb" ) > /tmp/test.py(16)func() -> print ( "Am here after pdb" ) >>>> This should have been at the line corresponding to "Am going to assert now" (Pdb) This seems to be an bug ( due to a performance consideration ) with the way python bdb's set_continue() has been implemented - https://hg.python.org/cpython/file/2.7/Lib/bdb.py#l227 def set_continue(self): # Don't stop except at breakpoints or when finished self._set_stopinfo(self.botframe, None, -1) if not self.breaks: # no breakpoints; run without debugger overhead sys.settrace(None) frame = sys._getframe().f_back while frame and frame is not self.botframe: del frame.f_trace frame = frame.f_back Basically what happens after "c" in a "(Pdb)" prompt is that bdb optimizes for the case where there are no more break points found by cleaning up the trace callback from all the frames. However, all of this happens in the context of tracing itself and hence the trace_dispatch function in https://hg.python.org/cpython/file/2.7/Lib/bdb.py#l45 still returns back the trace_dispatch as the new system trace function. 
For more details on sys.settrace(), check https://docs.python.org/2/library/sys.html#sys.settrace Check the function trace_trampoline at https://hg.python.org/cpython/file/2.7/Python/sysmodule.c#l353 which sets f->f_trace back to result at https://hg.python.org/cpython/file/2.7/Python/sysmodule.c#l377 Now, check the function PyFrame_GetLineNumber() which is used by the traceback to get the frame line number https://hg.python.org/cpython/file/2.7/Objects/frameobject.c#l63 int PyFrame_GetLineNumber(PyFrameObject *f) { if (f->f_trace) return f->f_lineno; else return PyCode_Addr2Line(f->f_code, f->f_lasti); } Basically this function returns the stored f->f_lineno when f->f_trace is set. The fix is fortunately simple - Just set self.trace_dispatch to None if pdb set_continue decides to run without debugger overhead. ---------- components: Library (Lib) files: bdbfix.patch keywords: patch messages: 263553 nosy: Sriram Rajagopalan priority: normal severity: normal status: open title: pdb continue followed by an exception in the same frame shows incorrect frame linenumber type: behavior versions: Python 2.7, Python 3.2, Python 3.3, Python 3.4, Python 3.5, Python 3.6 Added file: http://bugs.python.org/file42486/bdbfix.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 16 10:06:44 2016 From: report at bugs.python.org (Brandon Rhodes) Date: Sat, 16 Apr 2016 14:06:44 +0000 Subject: [New-bugs-announce] [issue26780] Illustrate both binary operator conventions in PEP-8 Message-ID: <1460815604.66.0.184242188494.issue26780@psf.upfronthosting.co.za> New submission from Brandon Rhodes: I am delighted to see that PEP-8 has pivoted to breaking long formulae before, rather than after, each binary operator! But I would like to pivot the PEP away from citing my own PyCon Canada talk as the authority on the matter, and toward citing Knuth himself. It would also be an enhancement for the PEP to show both options and make an argument for the practice, instead of simply asserting that one is better than the other. I therefore propose the attached patch. 
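Concretely, the two conventions being contrasted look something like this (an illustrative sketch with made-up names, not text from the attached patch):

    # Break after binary operators (the style PEP 8 used to show):
    income = (gross_wages +
              taxable_interest -
              ira_deduction)

    # Break before binary operators (Knuth's convention, now recommended):
    income = (gross_wages
              + taxable_interest
              - ira_deduction)

The second form keeps each operator next to the operand it applies to, which is the argument usually credited to Knuth.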
---------- assignee: docs at python components: Documentation files: pep8-knuth.patch keywords: patch messages: 263554 nosy: barry, brandon-rhodes, docs at python, gvanrossum priority: normal severity: normal status: open title: Illustrate both binary operator conventions in PEP-8 Added file: http://bugs.python.org/file42487/pep8-knuth.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 16 10:11:10 2016 From: report at bugs.python.org (Aviv Palivoda) Date: Sat, 16 Apr 2016 14:11:10 +0000 Subject: [New-bugs-announce] [issue26781] os.walk max_depth Message-ID: <1460815870.93.0.432472188169.issue26781@psf.upfronthosting.co.za> New submission from Aviv Palivoda: I am suggesting to add max_depth argument to os.walk. I think this is very useful for two cases. The trivial one is when someone wants to walk on a directory tree up to specific depth. The second one is when you follow symlinks and wish to avoid infinite loop. The patch add the max_depth both to os.walk and os.fwalk. ---------- components: Library (Lib) files: os-walk-max-depth.patch keywords: patch messages: 263556 nosy: loewis, palaviv priority: normal severity: normal status: open title: os.walk max_depth type: enhancement versions: Python 3.6 Added file: http://bugs.python.org/file42490/os-walk-max-depth.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 16 10:23:28 2016 From: report at bugs.python.org (Martin Panter) Date: Sat, 16 Apr 2016 14:23:28 +0000 Subject: [New-bugs-announce] [issue26782] subprocess.__all__ incomplete on Windows Message-ID: <1460816608.8.0.0902250991116.issue26782@psf.upfronthosting.co.za> New submission from Martin Panter: After enabling test__all__() in test_subprocess on Windows (see Issue 10838), I find that STARTUPINFO is missing from __all__, and there is a class Handle that is ambiguous. Handle doesn?t seem to be documented, so I propose to add it to the intentionally-excluded list. In Python 3.5 I will fix the test to exclude STARTUPINFO from __all__. ---------- components: Windows files: subprocess-all.patch keywords: patch messages: 263557 nosy: martin.panter, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal stage: patch review status: open title: subprocess.__all__ incomplete on Windows versions: Python 3.6 Added file: http://bugs.python.org/file42491/subprocess-all.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 16 10:24:27 2016 From: report at bugs.python.org (Aviv Palivoda) Date: Sat, 16 Apr 2016 14:24:27 +0000 Subject: [New-bugs-announce] [issue26783] test_os.WalkTests.test_walk_topdown don't test fwalk and Bytes Message-ID: <1460816667.71.0.632457277114.issue26783@psf.upfronthosting.co.za> New submission from Aviv Palivoda: test_walk_topdown call os.walk directly instead of using self.walk. This test currently run 3 time's while checking the same thing. The test should use self.walk to check fwalk and Bytes as well. 
---------- components: Tests files: os-test-walk-topdown-use-self-walk.patch keywords: patch messages: 263558 nosy: loewis, palaviv priority: normal severity: normal status: open title: test_os.WalkTests.test_walk_topdown don't test fwalk and Bytes versions: Python 3.6 Added file: http://bugs.python.org/file42492/os-test-walk-topdown-use-self-walk.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 16 12:48:11 2016 From: report at bugs.python.org (Marcus) Date: Sat, 16 Apr 2016 16:48:11 +0000 Subject: [New-bugs-announce] [issue26784] regular expression problem at umlaut handling Message-ID: <1460825291.89.0.971564767313.issue26784@psf.upfronthosting.co.za> New submission from Marcus: Working with this example string "E-112233-555-11 | Bl?h - Bl?h" with the following code leads under python 2.7.10 (OSX) to an exception whereas the same code works under python 3.5.1 (OSX). s = "E-112233-555-11 | Bl?h - Bl?h" expr = re.compile(r"(?P<p>[A-Z]{1}-[0-9]{0,}(-[0-9]{0,}(-[0-9]{0,})?)?)?(( [|] )?(?P<a>[\s\w]*)?)? - (?P<j>[\s\w]*)?",re.UNICODE) res = re.match(expr,s) a = (res.group('p'), res.group('a'), res.group('j')) print(a) When I change the first umlaut in "Bl?h" from ? to ? it works as expected on python 2 and 3. A change from ? to ? however leads to a crash again. Ideas? ---------- messages: 263567 nosy: arbyter priority: normal severity: normal status: open title: regular expression problem at umlaut handling type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 16 14:44:53 2016 From: report at bugs.python.org (Hrvoje Abraham) Date: Sat, 16 Apr 2016 18:44:53 +0000 Subject: [New-bugs-announce] [issue26785] repr of -nan value should contain the sign Message-ID: <1460832293.23.0.989531230725.issue26785@psf.upfronthosting.co.za> New submission from Hrvoje Abraham: repr of -nan value should contain the sign so the round-trip could be assured. NaN value sign (bit) could be seen as not relevant or even uninterpretable information, but it is actually used in real-life situations, the fact substantiated by section 6.3 of IEEE-754 2008 standard. >>> from math import copysign >>> x = float("-nan") >>> copysign(1.0, x) -1.0 This is correct. Also proves the value contains the sign information. >>> repr(x) nan Not correct. Should be '-nan'. ---------- components: Interpreter Core messages: 263576 nosy: ahrvoje priority: normal severity: normal status: open title: repr of -nan value should contain the sign type: behavior versions: Python 2.7, Python 3.2, Python 3.3, Python 3.4, Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 16 18:06:30 2016 From: report at bugs.python.org (Ivan Pozdeev) Date: Sat, 16 Apr 2016 22:06:30 +0000 Subject: [New-bugs-announce] [issue26786] bdist_msi duplicates directories with names in ALL CAPS to a bogus location Message-ID: <1460844390.43.0.520967711704.issue26786@psf.upfronthosting.co.za> New submission from Ivan Pozdeev: If a package has directories with names in ALL CAPS, distutils.commands.bdist_msi creates properties for them that are also in all caps. Such properties are handled specially by MSI and are called "public properties" (http://www.advancedinstaller.com/user-guide/properties.html). Due to the way bdist_msi-produced .msi's work, this ultimately results in subtrees of these directories being duplicated to a bogus location (the root directory of the drive on which the .msi being installed is). E.g. in the attached example, all \Lib\mercurial\locale\\LC_MESSAGES subtrees got duplicated to D:\Lib\. See https://bz.mercurial-scm.org/show_bug.cgi?id=5192 for details (including a high-level description of how bdist_msi packages work). 
---------- components: Distutils, Library (Lib), Windows files: mercurial-3.3.2.log.gz messages: 263591 nosy: Ivan.Pozdeev, dstufft, eric.araujo, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: bdist_msi duplicates directories with names in ALL CAPS to a bogus location type: behavior versions: Python 2.7, Python 3.2, Python 3.3, Python 3.4, Python 3.5, Python 3.6 Added file: http://bugs.python.org/file42495/mercurial-3.3.2.log.gz _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 17 02:21:25 2016 From: report at bugs.python.org (Gregory P. Smith) Date: Sun, 17 Apr 2016 06:21:25 +0000 Subject: [New-bugs-announce] [issue26787] test_distutils fails when configured --with-lto Message-ID: <1460874085.6.0.334729464026.issue26787@psf.upfronthosting.co.za> New submission from Gregory P. Smith: When configured using './configure --with-lto' (added in issue25702) and doing a 'make profile-opt' build, test_distutils fails: ====================================================================== FAIL: test_sysconfig_compiler_vars (distutils.tests.test_sysconfig.SysconfigTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/greg/sandbox/python/cpython/3.5/Lib/distutils/tests/test_sysconfig.py", line 156, in test_sysconfig_compiler_vars sysconfig.get_config_var('LDSHARED')) AssertionError: 'gcc -pthread -shared -flto -fuse-linker-plugin -ffat-lto-obje[20 chars]none' != 'gcc -pthread -shared' - gcc -pthread -shared -flto -fuse-linker-plugin -ffat-lto-objects -flto-partition=none + gcc -pthread -shared ====================================================================== FAIL: test_sysconfig_module (distutils.tests.test_sysconfig.SysconfigTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/greg/sandbox/python/cpython/3.5/Lib/distutils/tests/test_sysconfig.py", line 133, in test_sysconfig_module sysconfig.get_config_var('LDFLAGS')) AssertionError: '-flto -fuse-linker-plugin -ffat-lto-objects -flto-partition=none' != '' - -flto -fuse-linker-plugin -ffat-lto-objects -flto-partition=none + ---------- components: Build messages: 263598 nosy: alecsandru.patrascu, gregory.p.smith priority: normal severity: normal status: open title: test_distutils fails when configured --with-lto versions: Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 17 02:29:59 2016 From: report at bugs.python.org (Gregory P. Smith) Date: Sun, 17 Apr 2016 06:29:59 +0000 Subject: [New-bugs-announce] [issue26788] test_gdb fails all tests on a profile-opt build configured --with-lto Message-ID: <1460874599.33.0.6029536171.issue26788@psf.upfronthosting.co.za> New submission from Gregory P. Smith: cpython/build35.lto$ ./python ../3.5/Lib/test/test_gdb.py GDB version 7.10: GNU gdb (Ubuntu 7.10-1ubuntu2) 7.10 ... 
====================================================================== FAIL: test_tuples (__main__.PrettyPrintTests) Verify the pretty-printing of tuples ---------------------------------------------------------------------- Traceback (most recent call last): File "../3.5/Lib/test/test_gdb.py", line 359, in test_tuples self.assertGdbRepr(tuple(), '()') File "../3.5/Lib/test/test_gdb.py", line 279, in assertGdbRepr gdb_repr, gdb_output = self.get_gdb_repr('id(' + ascii(val) + ')') File "../3.5/Lib/test/test_gdb.py", line 255, in get_gdb_repr self.fail('Unexpected gdb output: %r\n%s' % (gdb_output, gdb_output)) AssertionError: Unexpected gdb output: 'Breakpoint 1 at 0x4cc310\n[Thread debugging using libthread_db enabled]\nUsing host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".\n\nBreakpoint 1, builtin_id ()\n#0 builtin_id ()\n' Breakpoint 1 at 0x4cc310 [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1". Breakpoint 1, builtin_id () #0 builtin_id () I don't know the right thing to do here. This might depend on compiler, linker, arch and gdb version? Are it's LTO executables debuggable? It's not clear to me that we can do anything about this. We may just want to skip the test when configured --with-lto. Or perhaps this is an indication that this specific toolchain's LTO build (x86_64, ubuntu wily gcc 5.2.1 and gdb 7.1.0) has issues and shouldn't be used? ---------- components: Build messages: 263599 nosy: alecsandru.patrascu, gregory.p.smith priority: normal severity: normal status: open title: test_gdb fails all tests on a profile-opt build configured --with-lto versions: Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 17 11:12:07 2016 From: report at bugs.python.org (Matthias Urlichs) Date: Sun, 17 Apr 2016 15:12:07 +0000 Subject: [New-bugs-announce] [issue26789] Please do not log during shutdown Message-ID: <1460905927.74.0.222378341822.issue26789@psf.upfronthosting.co.za> New submission from Matthias Urlichs: ? or, if you do, ignore errors. This is during program shutdown. Unfortunately, I am unable to create a trivial example which exhibits the order of destruction necessary to trigger this problem. 
Traceback (most recent call last): File "/usr/lib/python3.5/asyncio/tasks.py", line 93, in __del__ File "/usr/lib/python3.5/asyncio/base_events.py", line 1160, in call_exception_handler File "/usr/lib/python3.5/logging/__init__.py", line 1308, in error File "/usr/lib/python3.5/logging/__init__.py", line 1415, in _log File "/usr/lib/python3.5/logging/__init__.py", line 1425, in handle File "/usr/lib/python3.5/logging/__init__.py", line 1487, in callHandlers File "/usr/lib/python3.5/logging/__init__.py", line 855, in handle File "/usr/lib/python3.5/logging/__init__.py", line 1047, in emit File "/usr/lib/python3.5/logging/__init__.py", line 1037, in _open NameError: name 'open' is not defined ---------- components: asyncio messages: 263612 nosy: gvanrossum, haypo, smurfix, yselivanov priority: normal severity: normal status: open title: Please do not log during shutdown type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 17 12:12:00 2016 From: report at bugs.python.org (Ivan Pozdeev) Date: Sun, 17 Apr 2016 16:12:00 +0000 Subject: [New-bugs-announce] [issue26790] bdist_msi package duplicates everything to a bogus location when run with /passive or /q Message-ID: <1460909520.55.0.0242536179929.issue26790@psf.upfronthosting.co.za> New submission from Ivan Pozdeev: First, the background information so you understand what I am talking about. bdist_msi-produced packages work in the following way: The files to install are presented as at least 3 equivalent sets (implemented as Features): "Python" (for Python from registry), "PythonX" (for Python from a custom location) and a hidden but always selected "Python" - a "source" set. Files in them are linked together with DuplicateFiles, with "real" files in the "Python" set. "Python" has the installation target set to TARGETDIR that has the default value specified as "SourceDir" (practice shows it's initially set to the root of the drive where the .msi being installed is). The other two have default locations set to subdirectories of TARGETDIR, but PYTHON is changed to the root of the installed Python early on (at AppSearch stage and custom actions just after it in both InstallUISequence and InstallExecuteSequence) if an installtion is detected. Now, at SelectFeaturesDlg in InstallUISequence, TARGETDIR is changed to a location for one of the features selected for install (initially, "Python" is selected if an installation was found). Later, as a public property, it's passed to InstallExecuteSequence, acting as the new default value. Finally, the files are installed to TARGETDIR and _duplicated_ to locations for all other features that have been selected. So, if only one feature was selected (the common scenario), TARGETDIR is equal to its location, so no duplication takes place. If nothing was selected, everything is unpacked to the directory where the .msi is, like `tar' does. (The latter is extremely weird for an .msi but is quite in line with the logic for other types of `bdist_' packages.) ---- Now, the problem is: * the aforementioned TARGETDIR switch is implemented as an event handler in the SelectFeaturesDlg dialog. * if I run with /passive or /q, InstallUISequence isn't done, the dialog isn't shown, and the event never happens. * so TARGETDIR remains the default, and MSI installs everything to whatever that default happened to be, then duplicates to the correct location. 
---- To fix this, we need to somehow duplicate the Python detection and TARGETDIR switch to InstallExecuteSequence, avoiding overwriting the results of earlier actions. Current workaround is to pass "TARGETDIR=" to msiexec. ---------- components: Distutils, Library (Lib), Windows files: mercurial-3.7.3.log.gz messages: 263615 nosy: Ivan.Pozdeev, dstufft, eric.araujo, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: bdist_msi package duplicates everything to a bogus location when run with /passive or /q type: behavior versions: Python 2.7, Python 3.2, Python 3.3, Python 3.4, Python 3.5, Python 3.6 Added file: http://bugs.python.org/file42499/mercurial-3.7.3.log.gz _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 17 12:58:17 2016 From: report at bugs.python.org (Renato Alves) Date: Sun, 17 Apr 2016 16:58:17 +0000 Subject: [New-bugs-announce] [issue26791] shutil.move fails to move symlink (Invalid cross-device link) Message-ID: <1460912297.17.0.227992133358.issue26791@psf.upfronthosting.co.za> New submission from Renato Alves: Hi everyone, I'm not really sure if this is a new issue but digging through the bug reports from the past I couldn't find an answer. There's http://bugs.python.org/issue1438480 but this seems to be a different issue. I also found http://bugs.python.org/issue9993 that addressed problems with symlinks but didn't correct the behavior reported here. The problem can be visualized with the following code. Code fails on python 2.7 as well as python 3.4+. Not tested in python <2.7 and <3.4. import shutil import os TMPDIR = "/tmp/tmpdir" TESTLINK = "test_dir" if not os.path.isdir(TMPDIR): os.mkdir(TMPDIR) if not os.path.islink(TESTLINK): os.symlink(TMPDIR, TESTLINK) shutil.move(TESTLINK, TMPDIR) When executed it gives me: % python3 test.py Traceback (most recent call last): File "test.py", line 14, in shutil.move(TESTLINK, TMPDIR) File "/usr/lib64/python3.4/shutil.py", line 516, in move os.rename(src, dst) OSError: [Errno 18] Invalid cross-device link: 'test_dir' -> '/tmp/tmpdir' This happens because /tmp is: tmpfs on /tmp type tmpfs (rw,nosuid,nodev,noatime,nodiratime) In the past the recommendation to handle this problem was to stop using os.rename and use shutil.move instead. This was even discussed in a bug report - http://bugs.python.org/issue14848 If one searches for this exception there's plenty of advice [1][2][3][4] in the same direction. However, given that shutil.move uses os.rename internally, the problem returns. On the other end doing the equivalent action in the shell with 'mv' works fine. 
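A hedged sketch of a link-aware workaround at the application level (illustrative only, not the proposed fix for shutil itself):

    import os
    import shutil

    def move_link_aware(src, dst):
        # Recreate a symlink at the destination instead of renaming it,
        # which avoids the cross-device os.rename() call entirely.
        if os.path.islink(src):
            target = os.readlink(src)
            if os.path.isdir(dst):
                dst = os.path.join(dst, os.path.basename(src))
            os.symlink(target, dst)
            os.unlink(src)
        else:
            shutil.move(src, dst)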
[1] - http://stackoverflow.com/a/15300474 [2] - https://mail.python.org/pipermail/python-list/2005-February/342892.html [3] - http://www.thecodingforums.com/threads/errno-18-invalid-cross-device-link-using-os-rename.341597/ [4] - https://github.com/pypa/pip/issues/103 ---------- components: Library (Lib) messages: 263616 nosy: Unode priority: normal severity: normal status: open title: shutil.move fails to move symlink (Invalid cross-device link) versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 17 21:55:02 2016 From: report at bugs.python.org (Antony Lee) Date: Mon, 18 Apr 2016 01:55:02 +0000 Subject: [New-bugs-announce] [issue26792] docstrings of runpy.run_{module, path} are rather sparse Message-ID: <1460944502.67.0.955110104065.issue26792@psf.upfronthosting.co.za> New submission from Antony Lee: $ pydoc runpy run_module(mod_name, init_globals=None, run_name=None, alter_sys=False) Execute a module's code without importing it Returns the resulting top level namespace dictionary run_path(path_name, init_globals=None, run_name=None) Execute code located at the specified filesystem location Returns the resulting top level namespace dictionary The file path may refer directly to a Python script (i.e. one that could be directly executed with execfile) or else it may refer to a zipfile or directory containing a top level __main__.py script. The meaning of the arguments should be documented (e.g. by copy-pasting the html docs). (And some sentences are missing final dots.) ---------- assignee: docs at python components: Documentation messages: 263638 nosy: Antony.Lee, docs at python priority: normal severity: normal status: open title: docstrings of runpy.run_{module,path} are rather sparse versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 18 00:51:59 2016 From: report at bugs.python.org (Steven Adams) Date: Mon, 18 Apr 2016 04:51:59 +0000 Subject: [New-bugs-announce] [issue26793] uuid causing thread issues when forking using os.fork py3.4+ Message-ID: <1460955119.44.0.370193254434.issue26793@psf.upfronthosting.co.za> New submission from Steven Adams: I've ran into a strange issue after trying to port a project to support py 3.x The app uses a double os.fork to run in the background. On py 3.4+ it seems that when you have an import uuid statement it causes threading.threads to always return false on is_alive().. Here is an example of the issue. You can see i've imported uuid. This script should fork into the background and stay alive for at least 3 loops (3 seconds) but it dies after 1 loop as self._thread.is_alive() return False?? http://paste.pound-python.org/show/WbDkqPqu94zEstHG6Xl1/ This does NOT happen in py 2.7 or py3.3. 
Only occurs py3.4+ ---------- components: Interpreter Core messages: 263640 nosy: Steven Adams priority: normal severity: normal status: open title: uuid causing thread issues when forking using os.fork py3.4+ versions: Python 3.4, Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 18 03:47:44 2016 From: report at bugs.python.org (Jacek Pliszka) Date: Mon, 18 Apr 2016 07:47:44 +0000 Subject: [New-bugs-announce] [issue26794] curframe can be None in pdb.py Message-ID: <1460965664.24.0.92280915097.issue26794@psf.upfronthosting.co.za> New submission from Jacek Pliszka: In /usr/lib64/python2.7/pdb.py in Pdb.default there is line: globals = self.curframe.f_globals curframe can be None so the line should be replaced by: globals = getattr(self.curframe, "f_globals", None) if hasattr(self, 'curframe') else None or something should be done in different way in setup ---------- components: Library (Lib) messages: 263652 nosy: Jacek.Pliszka priority: normal severity: normal status: open title: curframe can be None in pdb.py type: crash versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 18 09:48:11 2016 From: report at bugs.python.org (Marat Sharafutdinov) Date: Mon, 18 Apr 2016 13:48:11 +0000 Subject: [New-bugs-announce] [issue26795] Fix PEP 344 Python version Message-ID: <1460987291.12.0.969662098571.issue26795@psf.upfronthosting.co.za> New submission from Marat Sharafutdinov: Should be 3.0 instead of 2.5 ---------- assignee: docs at python components: Documentation files: pep-0344-version.patch keywords: patch messages: 263667 nosy: decaz, docs at python priority: normal severity: normal status: open title: Fix PEP 344 Python version type: enhancement Added file: http://bugs.python.org/file42505/pep-0344-version.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 18 10:43:59 2016 From: report at bugs.python.org (Hans Lawrenz) Date: Mon, 18 Apr 2016 14:43:59 +0000 Subject: [New-bugs-announce] [issue26796] BaseEventLoop.run_in_executor shouldn't specify max_workers for default Executor Message-ID: <1460990639.79.0.271352232569.issue26796@psf.upfronthosting.co.za> New submission from Hans Lawrenz: In issue 21527 the concurrent.futures.ThreadPoolExecutor was changed to have a default value for max_workers. When asyncio.base_events.BaseEventLoop.run_in_executor creates a default ThreadPoolExecutor it specifies a value of 5 for max_workers, presumably because at the time it was written ThreadPoolExecutor didn't have a default for max_workers. This is confusing because on reading the documentation for ThreadPoolExecutor one might assume that the default specified there is what will be used if a default executor isn't supplied via BaseEventLoop.set_default_executor. I propose that BaseEventLoop.run_in_executor be changed to not supply a default for max_workers. If this isn't acceptable, a note ought to be put in the run_in_executor documentation. 
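Until that changes, one workaround is to install an explicit default executor so the ThreadPoolExecutor default for max_workers applies; a short sketch (the loop setup is illustrative):

    import asyncio
    from concurrent.futures import ThreadPoolExecutor

    loop = asyncio.get_event_loop()
    # Without this, run_in_executor(None, ...) lazily creates its own
    # ThreadPoolExecutor(max_workers=5) instead of using the class default.
    loop.set_default_executor(ThreadPoolExecutor())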
---------- components: asyncio files: run_in_executor_max_workers.patch keywords: patch messages: 263669 nosy: Hans Lawrenz, gvanrossum, haypo, yselivanov priority: normal severity: normal status: open title: BaseEventLoop.run_in_executor shouldn't specify max_workers for default Executor type: behavior versions: Python 3.5 Added file: http://bugs.python.org/file42506/run_in_executor_max_workers.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 18 14:35:54 2016 From: report at bugs.python.org (Yury Selivanov) Date: Mon, 18 Apr 2016 18:35:54 +0000 Subject: [New-bugs-announce] [issue26797] Segafault in _PyObject_Alloc Message-ID: <1461004554.65.0.108278995171.issue26797@psf.upfronthosting.co.za> New submission from Yury Selivanov: I'm working on an implementation of asyncio event loop on top of libuv [1]. One of my tests crashes on Mac OS X with a segfault [2]. The problem is that it's not consistent -- looks like it depends on size of uvloop so binary, or/and amount of objects allocated in program. I can't reproduce the problem on a debug build, or write a test for it, it is really a weird edge-case. But I'm certain that we have a bug in our memory allocator. Here's a screenshot of crash log [3], and here's an lldb disassembly of crash location [4]. Now, what's going on in [2] is: 1. uvloop sets a sigint signal handler the moment it starts the loop 2. uvloop start to execute a coroutine, which blocks on a long "time.sleep(..)" call 3. sigint handler receives a SIGINT and calls PyErr_SetInterrupt 4. CPython interrupts the code, KeyboardInterrupt is instantiated and raised 5. CPython starts to render the traceback to print it to stderr, and this is where it tries to allocate some object, and this is where we hit a bad-access in _PyObject_Alloc. I'd really appreciate any ideas on what's going on here and how we can fix this. [1] https://github.com/magicstack/uvloop [2] https://github.com/MagicStack/uvloop/blob/v0.4.6/tests/test_signals.py#L14 [3] http://imgur.com/6GcE92X [4] https://gist.github.com/1st1/b46f3702aeb1b57fe4ad32b19ed63c3f ---------- messages: 263678 nosy: haypo, ned.deily, serhiy.storchaka, yselivanov priority: normal severity: normal stage: test needed status: open title: Segafault in _PyObject_Alloc type: crash versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 18 14:40:12 2016 From: report at bugs.python.org (Zooko Wilcox-O'Hearn) Date: Mon, 18 Apr 2016 18:40:12 +0000 Subject: [New-bugs-announce] [issue26798] add BLAKE2 to hashlib Message-ID: <1461004812.44.0.872226457841.issue26798@psf.upfronthosting.co.za> New submission from Zooko Wilcox-O'Hearn: (Disclosure: I'm one of the authors of BLAKE2.) Please include BLAKE2 in hashlib. It well-suited for hashing long inputs (e.g. files), because it is substantially faster than SHA-3, SHA-2, SHA-1, or MD5 while also being safer than SHA-2, SHA-1, or MD5. BLAKE2 and/or its relatives, BLAKE, ChaCha20, and Salsa20, have gotten extensive cryptographic peer review. It is widely used in modern applications and widely supported in modern crypto libraries (https://en.wikipedia.org/wiki/BLAKE_%28hash_function%29#BLAKE2_uses). 
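For a feel of the interface involved, the wrapper modules listed below already expose a hashlib-style API; a sketch using pyblake2 (parameters are that wrapper's, not a stdlib API):

    import pyblake2

    h = pyblake2.blake2b(b"some long input", digest_size=32)
    print(h.hexdigest())

    # Keyed hashing is built in, which plain SHA-2 does not offer directly:
    mac = pyblake2.blake2s(b"message", key=b"secret key")
    print(mac.hexdigest())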
Here is the official reference implementation: https://github.com/BLAKE2 Here are some Python modules (wrappers or implementations): * https://github.com/buggywhip/blake2_py * https://github.com/dchest/pyblake2 * https://github.com/darjeeling/python-blake2 ---------- components: Library (Lib) messages: 263679 nosy: Zooko.Wilcox-O'Hearn, dstufft priority: normal severity: normal status: open title: add BLAKE2 to hashlib _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 18 17:10:40 2016 From: report at bugs.python.org (Thomas) Date: Mon, 18 Apr 2016 21:10:40 +0000 Subject: [New-bugs-announce] [issue26799] gdb support fails with "Invalid cast." Message-ID: <1461013840.26.0.744881925452.issue26799@psf.upfronthosting.co.za> New submission from Thomas: Trying to use any kind of python gdb integration results in the following error: (gdb) py-bt Traceback (most recent call first): Python Exception Invalid cast.: Error occurred in Python command: Invalid cast. I have tracked it down to the _type_... globals, and I am able to fix it with the following commands: (gdb) pi >>> # Look up the gdb.Type for some standard types: ... _type_char_ptr = gdb.lookup_type('char').pointer() # char* >>> _type_unsigned_char_ptr = gdb.lookup_type('unsigned char').pointer() # unsigned char* >>> _type_void_ptr = gdb.lookup_type('void').pointer() # void* >>> _type_unsigned_short_ptr = gdb.lookup_type('unsigned short').pointer() >>> _type_unsigned_int_ptr = gdb.lookup_type('unsigned int').pointer() After this, it works correctly. I was able to workaround it by making a fix_globals that resets the globals on each gdb.Command. I do not understand why the originally initialized types are not working properly. It feels like gdb-inception trying to debug python within a gdb that debugs cpython while executing python code. I have tried this using hg/default cpython (--with-pydebug --without-pymalloc --with-valgrind --enable-shared) 1) System install of gdb 7.11, linked against system libpython 3.5.1. 2) Custom install of gdb 7.11.50.20160411-git, the debug cpython I am trying to debug ---------- components: Demos and Tools messages: 263690 nosy: tilsche priority: normal severity: normal status: open title: gdb support fails with "Invalid cast." type: crash versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 18 17:34:55 2016 From: report at bugs.python.org (Philip Jenvey) Date: Mon, 18 Apr 2016 21:34:55 +0000 Subject: [New-bugs-announce] [issue26800] Don't accept bytearray as filenames part 2 Message-ID: <1461015295.39.0.246147143774.issue26800@psf.upfronthosting.co.za> New submission from Philip Jenvey: Basically a reopen of the older issue8485 with the same name. It was decided there to drop support for bytearray filenames -- partly because of the complexity of handling buffers but it was also deemed to just not make much sense. This regressed or crept back into the posix module with the big path_converter changes for 3.3: https://hg.python.org/cpython/file/ee9921b29fd8/Lib/test/test_posix.py#l411 IMHO this functionality should be deprecated/removed per the original discussion, or does someone want to reopen the debate? The os module docs (and path_converter's own docs) explicitly advertise handling of str or bytes, not bytearrays or buffers. Even os.fsencode rejects bytearrays/buffers. 
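A sketch of the fix_globals workaround mentioned above, simply packaging the lookups already shown so they can be re-run before each command (hypothetical helper; assumes it lives next to those globals in Tools/gdb/libpython.py):

    import gdb

    def fix_globals():
        # Refresh the cached gdb.Type globals so they are valid for the
        # objfile currently being debugged.
        global _type_char_ptr, _type_unsigned_char_ptr, _type_void_ptr
        global _type_unsigned_short_ptr, _type_unsigned_int_ptr
        _type_char_ptr = gdb.lookup_type('char').pointer()
        _type_unsigned_char_ptr = gdb.lookup_type('unsigned char').pointer()
        _type_void_ptr = gdb.lookup_type('void').pointer()
        _type_unsigned_short_ptr = gdb.lookup_type('unsigned short').pointer()
        _type_unsigned_int_ptr = gdb.lookup_type('unsigned int').pointer()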
Related to issue26754 -- further inconsistencies around filename handling ---------- assignee: larry components: Interpreter Core keywords: 3.3regression messages: 263694 nosy: Ronan.Lamy, haypo, larry, pitrou, pjenvey, serhiy.storchaka priority: normal severity: normal status: open title: Don't accept bytearray as filenames part 2 type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 18 19:43:57 2016 From: report at bugs.python.org (Emanuel Barry) Date: Mon, 18 Apr 2016 23:43:57 +0000 Subject: [New-bugs-announce] [issue26801] Fix shutil.get_terminal_size() to catch AttributeError Message-ID: <1461023037.12.0.975016313479.issue26801@psf.upfronthosting.co.za> New submission from Emanuel Barry: `shutil.get_terminal_size()` will sometimes propagate `AttributeError: module has not attribute 'get_terminal_size'` to the caller. The call checks for NameError, which makes no sense, so I guess it must be an oversight. Attached patch fixes it. (diff was generated with git, I don't know if it works with Mercurial, but it's a trivial change anyway) ---------- components: Library (Lib) files: get_terminal_size.diff keywords: patch messages: 263701 nosy: ebarry priority: normal severity: normal stage: patch review status: open title: Fix shutil.get_terminal_size() to catch AttributeError type: behavior Added file: http://bugs.python.org/file42511/get_terminal_size.diff _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 18 21:10:12 2016 From: report at bugs.python.org (Joe Jevnik) Date: Tue, 19 Apr 2016 01:10:12 +0000 Subject: [New-bugs-announce] [issue26802] Avoid copy in call_function_var when no extra stack args are passed Message-ID: <1461028212.35.0.591299573852.issue26802@psf.upfronthosting.co.za> New submission from Joe Jevnik: When star unpacking positions into a function we can avoid a copy iff there are no extra arguments coming from the stack. Syntactically this looks like: `f(*args)` This reduces the overhead of the call by about 25% (~30ns on my machine) based on some light profiling of just * unpacking. I can imagine this having a slight impact on code that uses this feature in a loop. The cost is minimal when we need to make the copy because the int != 0 check is dwarfed by the allocation. I did not notice a significant change between the slow path and the original code. The macro benchmark suite did not show a difference on my machine but this is not much more complex code. 
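For reference, here are the two call shapes the patch distinguishes, at the Python level; only the first one can take the new no-copy path described above:

def f(*args):
    return args

args = (1, 2, 3)
f(*args)      # bare star-unpacking: no extra positional arguments from the stack
f(0, *args)   # an extra positional before *args: the copying path is still taken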
---------- components: Interpreter Core files: call-function-var.patch keywords: patch messages: 263702 nosy: llllllllll priority: normal severity: normal status: open title: Avoid copy in call_function_var when no extra stack args are passed versions: Python 3.6 Added file: http://bugs.python.org/file42512/call-function-var.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 19 03:35:16 2016 From: report at bugs.python.org (Xavier de Gaye) Date: Tue, 19 Apr 2016 07:35:16 +0000 Subject: [New-bugs-announce] [issue26803] syslog logging handler fails with address in unix abstract namespace Message-ID: <1461051316.11.0.100147214694.issue26803@psf.upfronthosting.co.za> New submission from Xavier de Gaye: The traceback: Traceback (most recent call last): File "Lib/logging/handlers.py", line 917, in emit self.socket.sendto(msg, self.address) TypeError: getsockaddrarg: AF_INET address must be tuple, not bytes The attached patch reproduces the issue with a test case and fixes it. ---------- components: Library (Lib) files: unix_abstract_namespace.patch keywords: patch messages: 263715 nosy: vinay.sajip, xdegaye priority: normal severity: normal status: open title: syslog logging handler fails with address in unix abstract namespace type: behavior versions: Python 3.6 Added file: http://bugs.python.org/file42513/unix_abstract_namespace.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 19 04:18:29 2016 From: report at bugs.python.org (Hans-Peter Jansen) Date: Tue, 19 Apr 2016 08:18:29 +0000 Subject: [New-bugs-announce] [issue26804] Prioritize lowercase proxy variables in urllib.request Message-ID: <1461053909.86.0.0123928931795.issue26804@psf.upfronthosting.co.za> New submission from Hans-Peter Jansen: During programming a function, that replaces a wget call, I noticed, that something is wrong with urllibs proxy handling. I usually use the scheme "http_proxy= wget -N -nd URL" when I need to bypass the proxy. Hence I was pretty confused, that this doesn't work with python(3). Creating an empty ProxyHandler isn't the real Mc Coy either. Diving into the issue, I found getproxies_environment, but couldn't make much sense out of its behavior, up until I noticed, that my OS (openSUSE ) creates both variants of environment variables behind the scenes: uppercase and lowercase. Consequence: python3 needs the scheme "http_proxy= HTTP_PROXY= python3 ..." Since I, like everyone else, prefer gentle tones over the loud, and want to spare this surprise for others in the future, I propose the attached patch. Process environment variables in two passes, first uppercase, then lowercase, allowing an empty lowercase value to overrule any uppercase value. Please consider applying this. 
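A minimal pure-Python sketch of the proposed two-pass behaviour (this is only an illustration, not the attached diff; the real getproxies_environment() has more details to take care of):

import os

def getproxies_environment_sketch():
    proxies = {}
    # first pass: uppercase variables (HTTP_PROXY, FTP_PROXY, ...)
    for name, value in os.environ.items():
        if name.isupper() and name.endswith('_PROXY') and value:
            proxies[name[:-6].lower()] = value
    # second pass: lowercase variables win, and an explicitly *empty*
    # lowercase variable removes whatever the uppercase spelling set
    for name, value in os.environ.items():
        if name.islower() and name.endswith('_proxy'):
            scheme = name[:-6]
            if value:
                proxies[scheme] = value
            else:
                proxies.pop(scheme, None)
    return proxies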
---------- components: Extension Modules files: prioritize_lowercase_proxy_vars_in_urllib_request.diff keywords: patch messages: 263720 nosy: frispete priority: normal severity: normal status: open title: Prioritize lowercase proxy variables in urllib.request type: behavior versions: Python 3.5 Added file: http://bugs.python.org/file42516/prioritize_lowercase_proxy_vars_in_urllib_request.diff _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 19 08:25:44 2016 From: report at bugs.python.org (Paul Moore) Date: Tue, 19 Apr 2016 12:25:44 +0000 Subject: [New-bugs-announce] [issue26805] Refer to types.SimpleNamespace in namedtuple documentation Message-ID: <1461068744.46.0.957777609901.issue26805@psf.upfronthosting.co.za> New submission from Paul Moore: People often look towards collections.namedtuple when all they actually want is "named attribute" access to a collection of values, without needing a tuple subclass, or positional access. In these cases, types.SimpleNamespace may be a better fit. I suggest adding the following pointer to the top of the namedtuple documentation: """ For simple uses, where the only requirement is to be able to refer to a set of values by name using attribute-style access, the :ref:`types.SimpleNamespace` type may be a suitable alternative to using a namedtuple. """ ---------- assignee: docs at python components: Documentation messages: 263733 nosy: docs at python, paul.moore priority: normal severity: normal status: open title: Refer to types.SimpleNamespace in namedtuple documentation type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 19 18:39:04 2016 From: report at bugs.python.org (Terry J. Reedy) Date: Tue, 19 Apr 2016 22:39:04 +0000 Subject: [New-bugs-announce] [issue26806] IDLE not displaying RecursionError tracebacks Message-ID: <1461105544.08.0.0145590373976.issue26806@psf.upfronthosting.co.za> New submission from Terry J. Reedy: Test program: import sys sys.setrecursionlimit(20) def f(): return f() f() F:\Python\mypy>python tem.py Traceback (most recent call last): File "tem.py", line 4, in f() File "tem.py", line 3, in f def f(): return f() ... RecursionError: maximum recursion depth exceeded In 2.7.11, the error is caught and the user process restarted. ======================= RESTART: F:\Python\mypy\tem.py ======================= =============================== RESTART: Shell =============================== >>> In 3.5.1, the user process hangs, ^C does not work, and a restart explicitly requested either with Shell => Restart or running another program. This behavior is either peculiar to this test case, or a regression, as I remember getting proper RecursionError tracebacks in the past. 
---------- assignee: terry.reedy messages: 263785 nosy: terry.reedy priority: normal severity: normal stage: test needed status: open title: IDLE not displaying RecursionError tracebacks type: behavior versions: Python 2.7, Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 20 04:18:03 2016 From: report at bugs.python.org (Robert Collins) Date: Wed, 20 Apr 2016 08:18:03 +0000 Subject: [New-bugs-announce] [issue26807] mock_open()().readline() fails at EOF Message-ID: <1461140283.7.0.219960203136.issue26807@psf.upfronthosting.co.za> New submission from Robert Collins: >>> import unittest.mock >>> o = unittest.mock.mock_open(read_data="fred") >>> f = o() >>> f.read() 'fred' >>> f.read() '' >>> f.readlines() [] >>> f.readline() Traceback (most recent call last): File "", line 1, in File "/home/robertc/work/cpython/Lib/unittest/mock.py", line 935, in __call__ return _mock_self._mock_call(*args, **kwargs) File "/home/robertc/work/cpython/Lib/unittest/mock.py", line 994, in _mock_call result = next(effect) StopIteration ---------- components: Library (Lib) messages: 263814 nosy: rbcollins priority: normal severity: normal stage: test needed status: open title: mock_open()().readline() fails at EOF type: behavior versions: Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 20 07:07:30 2016 From: report at bugs.python.org (Alexey Gorshkov) Date: Wed, 20 Apr 2016 11:07:30 +0000 Subject: [New-bugs-announce] [issue26808] wsgiref.simple_server breaks unicode in URIs Message-ID: <1461150450.27.0.360588809006.issue26808@psf.upfronthosting.co.za> New submission from Alexey Gorshkov: example code is in attachment example URI is (for example): http://127.0.0.1:8005/???? ---------- components: Extension Modules files: t.py messages: 263819 nosy: animus priority: normal severity: normal status: open title: wsgiref.simple_server breaks unicode in URIs type: behavior versions: Python 3.5 Added file: http://bugs.python.org/file42532/t.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 20 07:36:22 2016 From: report at bugs.python.org (leewz) Date: Wed, 20 Apr 2016 11:36:22 +0000 Subject: [New-bugs-announce] [issue26809] `string` exposes ChainMap from `collections` Message-ID: <1461152182.96.0.373784885739.issue26809@psf.upfronthosting.co.za> New submission from leewz: I don't know if this kind of thing matters, but `from string import ChainMap` works (imports from `collections). It's used internally by `string`. This was done when ChainMap was made official: https://github.com/python/cpython/commit/2886808d3809d69a6e9b360380080140b95df0b6#diff-4db7f78c8ac9907c7ad6231730d7900c You can see in the above commit that `configparser` had `_ChainMap` changed to `ChainMap as _ChainMap`, but this was not done in `string`. 
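For illustration, the two import styles in question; the second is what the configparser commit above used, and is presumably what string.py would switch to:

# current string.py style: ChainMap ends up as an importable, public-looking name
from collections import ChainMap

# configparser-style fix: keep the helper clearly private
from collections import ChainMap as _ChainMap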
---------- components: Library (Lib) messages: 263824 nosy: leewz priority: normal severity: normal status: open title: `string` exposes ChainMap from `collections` type: behavior versions: Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 20 12:04:28 2016 From: report at bugs.python.org (unsec treedee) Date: Wed, 20 Apr 2016 16:04:28 +0000 Subject: [New-bugs-announce] [issue26810] inconsistent garbage collector behavior across platforms when using ctypes data-structures Message-ID: <1461168268.36.0.0880478693475.issue26810@psf.upfronthosting.co.za> New submission from unsec treedee: The garbage collector is not behaving consistently across platforms for python 2.7.11. I realize that the example code and style is not proper :-) On the Mac OSX platform this code runs without the garbage collector "cleaning house" and there is no resulting crash from a NULL pointer. On the Linux platform the garbage collector decides to "clean house" (deallocates the object) resulting in a NULL pointer which is not handled correctly by the c-function code (some legacy stuff) and causes a segmentation fault. Temporarily disabling the garbage collector and enabling it later on allows a workaround (valid or not) that is consistent on all platforms. Improper coding and style aside... the issue I am reporting is the inconsistent behaviour of the garbage collector. I am looking for consistency across platforms (same result on all platforms). :-) ---------- components: ctypes files: gc_snippet_report_pybug.py messages: 263846 nosy: unsec treedee priority: normal severity: normal status: open title: inconsistent garbage collector behavior across platforms when using ctypes data-structures type: behavior versions: Python 2.7 Added file: http://bugs.python.org/file42539/gc_snippet_report_pybug.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 20 20:20:16 2016 From: report at bugs.python.org (random832) Date: Thu, 21 Apr 2016 00:20:16 +0000 Subject: [New-bugs-announce] [issue26811] segfault due to null pointer in tuple Message-ID: <1461198016.05.0.40690695807.issue26811@psf.upfronthosting.co.za> New submission from random832: I was writing something that iterates over all objects in gc.get_objects(). I don't know precisely where the offending tuple is coming from, but nearby objects in gc.get_objects() appear to be related to frozen_importlib. A tuple exists somewhere with a null pointer in it. Attempting to iterate over it or access its item results in a segmentation fault. I confirmed this on both Linux and OSX. 
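The attached fnt.py is not reproduced here; roughly, the kind of scan described above looks like the following (on an affected build, touching the bad tuple's items is what crashes the process):

import gc

for obj in gc.get_objects():
    if type(obj) is tuple:
        for item in obj:   # faults when a NULL slot is reached
            pass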
Python 3.6.0a0 (default:778ccbe3cf74, Apr 20 2016, 20:17:38) ---------- components: Interpreter Core files: fnt.py messages: 263866 nosy: random832 priority: normal severity: normal status: open title: segfault due to null pointer in tuple versions: Python 3.6 Added file: http://bugs.python.org/file42545/fnt.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 21 00:27:19 2016 From: report at bugs.python.org (Yih-En Andrew Ban) Date: Thu, 21 Apr 2016 04:27:19 +0000 Subject: [New-bugs-announce] [issue26812] ExtendedInterpolation drops user-defined 'vars' during _interpolate_some() recursion Message-ID: <1461212839.44.0.193061483216.issue26812@psf.upfronthosting.co.za> New submission from Yih-En Andrew Ban: In Python 3.5.1, configparser.ExtendedInterpolation will drop the user-defined 'vars' passed in via ConfigParser.get(vars=...) due to a bug when recursing beyond depth 1 in ExtendedInterpolation._interpolate_some(). Line 509 of configparser.py currently reads: 'dict(parser.items(sect, raw=True))' This appears to be a mistake and should instead be: 'map', which includes the user-defined vars. The following code will trigger an InterpolationMissingOptionError without the fix, and runs as expected with the fix: from configparser import * parser = ConfigParser( interpolation=ExtendedInterpolation() ) parser.add_section( 'Section' ) parser.set( 'Section', 'first', '${userdef}' ) parser.set( 'Section', 'second', '${first}' ) parser[ 'Section' ].get( 'second', vars={'userdef': 'userval'} ) ---------- components: Library (Lib) messages: 263876 nosy: lukasz.langa, yab-arz priority: normal severity: normal status: open title: ExtendedInterpolation drops user-defined 'vars' during _interpolate_some() recursion type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 21 01:51:03 2016 From: report at bugs.python.org (Ken Miura) Date: Thu, 21 Apr 2016 05:51:03 +0000 Subject: [New-bugs-announce] [issue26813] Wrong Japanese translation of "Adverb" on Documentation Message-ID: <1461217863.36.0.239729021285.issue26813@psf.upfronthosting.co.za> New submission from Ken Miura: In Japanese Python document, English word "Adverb" is translated to "????", but it's "??" actually. http://docs.python.jp/2/library/re.html#finding-all-adverbs http://docs.python.jp/2/library/re.html#finding-all-adverbs-and-their-positions ---------- assignee: docs at python components: Documentation messages: 263881 nosy: Ken Miura, docs at python priority: normal severity: normal status: open title: Wrong Japanese translation of "Adverb" on Documentation _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 21 04:57:21 2016 From: report at bugs.python.org (STINNER Victor) Date: Thu, 21 Apr 2016 08:57:21 +0000 Subject: [New-bugs-announce] [issue26814] Add a new _PyObject_CallStack() function which avoids the creation of a tuple or dict for arguments Message-ID: <1461229041.59.0.994194592206.issue26814@psf.upfronthosting.co.za> New submission from STINNER Victor: Attached patch adds the following new function: PyObject* _PyObject_CallStack(PyObject *func, PyObject **stack, int na, int nk); where na is the number of positional arguments and nk is the number of (key, pair) arguments stored in the stack. 
Example of C code to call a function with one positional argument: PyObject *stack[1]; stack[0] = arg; return _PyObject_CallStack(func, stack, 1, 0); Simple, isn't it? The difference with PyObject_Call() is that this API avoids the creation of a tuple and a dictionary to pass parameters to functions when possible. Currently, the temporary tuple and dict can be avoided when calling Python functions (nice, isn't it?) and C functions declared with METH_O (not the most common API, but many functions are declared like that). The patch only modifies property_descr_set() to test the feature, but I'm sure that *a lot* of C code can be modified to use this new function to benefit from its optimization. Should we make this new _PyObject_CallStack() function official: call it PyObject_CallStack() (without the underscore prefix), or experiment with it in CPython 3.6 and decide later to make it public? If it's made private, it will require a large replacement patch later to replace all calls to _PyObject_CallStack() with PyObject_CallStack() (strip the underscore prefix). The next step is to add a new METH_STACK flag to pass parameters to C functions using a similar API (PyObject **stack, int na, int nk) and modify Argument Clinic to use this new API. Thanks to Larry Hastings, who gave me the idea at a previous edition of PyCon US ;-) This issue was created after the discussion on issue #26811, which is about a micro-optimization in property_descr_set() to avoid the creation of a tuple: it caches a private tuple inside property_descr_set(). ---------- files: call_stack.patch keywords: patch messages: 263899 nosy: haypo, larry, rhettinger, serhiy.storchaka priority: normal severity: normal status: open title: Add a new _PyObject_CallStack() function which avoids the creation of a tuple or dict for arguments type: performance versions: Python 3.6 Added file: http://bugs.python.org/file42549/call_stack.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 21 05:49:15 2016 From: report at bugs.python.org (STINNER Victor) Date: Thu, 21 Apr 2016 09:49:15 +0000 Subject: [New-bugs-announce] [issue26815] SIGBUS in test_ssl.test_dealloc_warn() on "AMD64 FreeBSD 10.0 3.x" buildbot Message-ID: <1461232155.49.0.392894323476.issue26815@psf.upfronthosting.co.za> New submission from STINNER Victor: Oh oh, that's not good. test_ssl crashed in test_dealloc_warn() on the "AMD64 FreeBSD 10.0 3.x" buildbot.
http://buildbot.python.org/all/builders/AMD64%20FreeBSD%2010.0%203.x/builds/4348/steps/test/logs/stdio [ 39/400] test_ssl Fatal Python error: Bus error Current thread 0x0000000802006400 (most recent call first): File "/usr/home/buildbot/python/3.x.koobs-freebsd10/build/Lib/test/support/__init__.py", line 1444 in gc_collect File "/usr/home/buildbot/python/3.x.koobs-freebsd10/build/Lib/test/test_ssl.py", line 599 in test_dealloc_warn File "/usr/home/buildbot/python/3.x.koobs-freebsd10/build/Lib/unittest/case.py", line 600 in run File "/usr/home/buildbot/python/3.x.koobs-freebsd10/build/Lib/unittest/case.py", line 648 in __call__ File "/usr/home/buildbot/python/3.x.koobs-freebsd10/build/Lib/unittest/suite.py", line 122 in run File "/usr/home/buildbot/python/3.x.koobs-freebsd10/build/Lib/unittest/suite.py", line 84 in __call__ File "/usr/home/buildbot/python/3.x.koobs-freebsd10/build/Lib/unittest/suite.py", line 122 in run File "/usr/home/buildbot/python/3.x.koobs-freebsd10/build/Lib/unittest/suite.py", line 84 in __call__ File "/usr/home/buildbot/python/3.x.koobs-freebsd10/build/Lib/unittest/runner.py", line 176 in run File "/usr/home/buildbot/python/3.x.koobs-freebsd10/build/Lib/test/support/__init__.py", line 1802 in _run_suite File "/usr/home/buildbot/python/3.x.koobs-freebsd10/build/Lib/test/support/__init__.py", line 1836 in run_unittest File "/usr/home/buildbot/python/3.x.koobs-freebsd10/build/Lib/test/test_ssl.py", line 3364 in test_main File "/usr/home/buildbot/python/3.x.koobs-freebsd10/build/Lib/test/libregrtest/runtest.py", line 162 in runtest_inner File "/usr/home/buildbot/python/3.x.koobs-freebsd10/build/Lib/test/libregrtest/runtest.py", line 115 in runtest File "/usr/home/buildbot/python/3.x.koobs-freebsd10/build/Lib/test/libregrtest/runtest_mp.py", line 69 in run_tests_slave File "/usr/home/buildbot/python/3.x.koobs-freebsd10/build/Lib/test/libregrtest/main.py", line 379 in main File "/usr/home/buildbot/python/3.x.koobs-freebsd10/build/Lib/test/libregrtest/main.py", line 433 in main File "/usr/home/buildbot/python/3.x.koobs-freebsd10/build/Lib/test/libregrtest/main.py", line 455 in main_in_temp_cwd File "/usr/home/buildbot/python/3.x.koobs-freebsd10/build/Lib/test/regrtest.py", line 39 in File "/usr/home/buildbot/python/3.x.koobs-freebsd10/build/Lib/runpy.py", line 85 in _run_code File "/usr/home/buildbot/python/3.x.koobs-freebsd10/build/Lib/runpy.py", line 184 in _run_module_as_main Traceback (most recent call last): File "/usr/home/buildbot/python/3.x.koobs-freebsd10/build/Lib/runpy.py", line 184, in _run_module_as_main "__main__", mod_spec) File "/usr/home/buildbot/python/3.x.koobs-freebsd10/build/Lib/runpy.py", line 85, in _run_code exec(code, run_globals) File "/usr/home/buildbot/python/3.x.koobs-freebsd10/build/Lib/test/__main__.py", line 3, in regrtest.main_in_temp_cwd() File "/usr/home/buildbot/python/3.x.koobs-freebsd10/build/Lib/test/libregrtest/main.py", line 455, in main_in_temp_cwd main() File "/usr/home/buildbot/python/3.x.koobs-freebsd10/build/Lib/test/libregrtest/main.py", line 433, in main Regrtest().main(tests=tests, **kwargs) File "/usr/home/buildbot/python/3.x.koobs-freebsd10/build/Lib/test/libregrtest/main.py", line 392, in main self.run_tests() File "/usr/home/buildbot/python/3.x.koobs-freebsd10/build/Lib/test/libregrtest/main.py", line 354, in run_tests run_tests_multiprocess(self) File "/usr/home/buildbot/python/3.x.koobs-freebsd10/build/Lib/test/libregrtest/runtest_mp.py", line 212, in run_tests_multiprocess raise Exception(msg) Exception: Child error 
on test_ssl: Exit code -10 *** Error code 1 ---------- messages: 263905 nosy: haypo priority: normal severity: normal status: open title: SIGBUS in test_ssl.test_dealloc_warn() on "AMD64 FreeBSD 10.0 3.x" buildbot type: crash versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 21 07:01:52 2016 From: report at bugs.python.org (Xiang Zhang) Date: Thu, 21 Apr 2016 11:01:52 +0000 Subject: [New-bugs-announce] [issue26816] Make concurrent.futures.Executor an abc Message-ID: <1461236512.66.0.771218032413.issue26816@psf.upfronthosting.co.za> New submission from Xiang Zhang: The documentation tells that concurrent.futures.Executor is an abstract class. Also PEP3148 tells so and says concurrent.futures.Executor.submit is an abstract method and must be implemented by Executor subclasses. I think using abc.ABCMeta here is a good choice. I propose a patch. The patch also remove some unnecessary object base class. ---------- files: make_concurrent_futures_Executor_an_abc.patch keywords: patch messages: 263911 nosy: bquinlan, xiang.zhang priority: normal severity: normal status: open title: Make concurrent.futures.Executor an abc Added file: http://bugs.python.org/file42551/make_concurrent_futures_Executor_an_abc.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 21 09:31:25 2016 From: report at bugs.python.org (Thomas Guettler) Date: Thu, 21 Apr 2016 13:31:25 +0000 Subject: [New-bugs-announce] [issue26817] Docs for StringIO should link to io.BytesIO Message-ID: <1461245485.84.0.279459146406.issue26817@psf.upfronthosting.co.za> New submission from Thomas Guettler: I think a warning at the top of StringIO docs is needed. And it should link to io.BytesIO. Maybe even deprecate StringIO and cStringIO in Python2? StringIO docs: https://docs.python.org/2/library/stringio.html io.BytesIO docs: https://docs.python.org/2/library/io.html#io.BytesIO I would like to see this at the top of StringIO: {{{ Please use io.BytesIO and io.StringIO since this module is not supported in Python3 }} ---------- assignee: docs at python components: Documentation messages: 263917 nosy: docs at python, guettli priority: normal severity: normal status: open title: Docs for StringIO should link to io.BytesIO versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 21 10:28:39 2016 From: report at bugs.python.org (Berker Peksag) Date: Thu, 21 Apr 2016 14:28:39 +0000 Subject: [New-bugs-announce] [issue26818] trace CLI doesn't respect -s option Message-ID: <1461248919.65.0.631004346458.issue26818@psf.upfronthosting.co.za> New submission from Berker Peksag: I noticed this while triaging issue 9317. Using traceme.py from that issue, $ ./python -m trace -c -s traceme.py returns nothing. It seems like it's a regression caused by https://github.com/python/cpython/commit/17e5da46a733a1a05072a277bc81ffa885f0c204 With trace_cli_summary.diff applied: $ ./python -m trace -c -s traceme.py lines cov% module (path) 1 100% trace (/home/berker/projects/cpython/default/Lib/trace.py) 6 100% traceme (traceme.py) trace_cli_summary.diff also fixes issue 10541. 
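For reference, a rough sketch of the programmatic equivalent of -c -s through the trace module's documented API ('traceme' below is a hypothetical stand-in for the script under test):

import trace

tracer = trace.Trace(count=1, trace=0)
tracer.run('import traceme')
results = tracer.results()
# summary=True is the programmatic counterpart of the -s option
results.write_results(show_missing=True, summary=True, coverdir='.')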
---------- components: Library (Lib) files: trace_cli_cummary.diff keywords: patch messages: 263921 nosy: belopolsky, berker.peksag priority: normal severity: normal stage: patch review status: open title: trace CLI doesn't respect -s option type: behavior versions: Python 3.5, Python 3.6 Added file: http://bugs.python.org/file42556/trace_cli_cummary.diff _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 21 13:34:01 2016 From: report at bugs.python.org (Fulvio Esposito) Date: Thu, 21 Apr 2016 17:34:01 +0000 Subject: [New-bugs-announce] [issue26819] _ProactorReadPipeTransport pause_reading()/resume_reading() broken if called before any read is perfored Message-ID: <1461260041.01.0.731551553745.issue26819@psf.upfronthosting.co.za> New submission from Fulvio Esposito: Calling pause_reading()/resume_reading() on a _ProactorReadPipeTransport will result in an InvalidStateError('Result is not ready.') from a future if no read has been issued yet. The reason is that resume_reading() will schedule _loop_reading() a second time on the event loop. For example, currently aiomysql always fails to connect using a ProactorEventLoop on Windows because it calls pause_reading()/resume_reading() to set TCP_NODELAY on the socket just after connecting and before any read is performed. ---------- components: asyncio files: pause_resume_test.py messages: 263927 nosy: Fulvio Esposito, gvanrossum, haypo, yselivanov priority: normal severity: normal status: open title: _ProactorReadPipeTransport pause_reading()/resume_reading() broken if called before any read is perfored type: behavior versions: Python 3.5 Added file: http://bugs.python.org/file42559/pause_resume_test.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 21 14:36:47 2016 From: report at bugs.python.org (Josh Rosenberg) Date: Thu, 21 Apr 2016 18:36:47 +0000 Subject: [New-bugs-announce] [issue26820] Prevent uses of format string based PyObject_Call* that do not produce tuple required by docs Message-ID: <1461263807.63.0.49330886523.issue26820@psf.upfronthosting.co.za> New submission from Josh Rosenberg: PyObject_CallMethod explicitly documents that "The C arguments are described by a Py_BuildValue() format string that should produce a tuple." While PyObject_CallFunction doesn't document this requirement, it has the same behavior, and the same failures, as does the undocumented _PyObject_CallMethodId. The issue is that, should someone pass a format string of "O", then the type of the subsequent argument determines the effect in a non-obvious way; when the argument comes from a caller, and the intent was to pass a single argument, this means that if the caller passes a non-tuple sequence, everything works, while passing a tuple tries to pass the contents of the tuple as sequential arguments. This inconsistency was the cause of both #26478 and #21209 (maybe others). Assuming the API can't/shouldn't be changed, it should still be an error when a format string of "O" is passed and the argument is a non-tuple (because you've violated the spec; the result of BuildValue was not a tuple). Instead call_function_tail is silently rewrapping non-tuple args in a single element tuple. I'm proposing that, in debug builds (and ideally release builds too), abstract.c's call_function_tail treat the "non-tuple" case as an error, rather than rewrapping in a single element tuple. 
This still allows the use case where the function is used inefficiently, but correctly (where the format string is "O" and the value is *always* a tuple that's supposed to be varargs; it should really just use PyObject_CallFunctionObjArgs/PyObject_CallMethodObjArgs/PyObject_CallObject or Id based optimized versions, but it's legal). But it will make the majority of cases where a user provided argument could be tuple or not fail fast, rather than silently behave themselves *until* they receive a tuple and misbehave. Downside: It will require code changes for cases like PyObject_CallFunction(foo, "k", myunsigned);, where there was no risk of misbehavior, but those cases were also violating the spec, and should be fixable by changing the format string to wrap the single value in parens, e.g. "(k)". ---------- components: Interpreter Core messages: 263929 nosy: haypo, josh.r priority: normal severity: normal status: open title: Prevent uses of format string based PyObject_Call* that do not produce tuple required by docs versions: Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 21 15:07:41 2016 From: report at bugs.python.org (Jonathan Booth) Date: Thu, 21 Apr 2016 19:07:41 +0000 Subject: [New-bugs-announce] [issue26821] array module "minimum size in bytes" table is wrong for int/long Message-ID: <1461265661.42.0.761639464385.issue26821@psf.upfronthosting.co.za> New submission from Jonathan Booth: https://docs.python.org/3.5/library/array.html describes the 'I' and 'i' typecodes as being minimum-size in bytes of 2. The interpreter disagrees: >>> import array >>> a = array.array('i') >>> a.itemsize 4 There is also a bug with the 'L' and 'l' long typecodes, which document as min-size of 4 bytes but are 8 bytes in the interpreter. That could be a bug in cPython itself though, as if 'L' should be 8 bytes, that disagrees with the type code sizing from the struct module, where it is 4 bytes, just like integers. I checked documentation for all versions of python and it matches -- I did not check all python interpreters to see they match, but 2.7 and 3.5 did. ---------- assignee: docs at python components: Documentation messages: 263931 nosy: Jonathan Booth, docs at python priority: normal severity: normal status: open title: array module "minimum size in bytes" table is wrong for int/long versions: Python 2.7, Python 3.2, Python 3.3, Python 3.4, Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 21 15:23:41 2016 From: report at bugs.python.org (Serhiy Storchaka) Date: Thu, 21 Apr 2016 19:23:41 +0000 Subject: [New-bugs-announce] [issue26822] itemgetter/attrgetter/methodcaller objects ignore keyword arguments Message-ID: <1461266621.67.0.0664681491354.issue26822@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: itemgetter(), attrgetter() and methodcaller() objects require one argument. They raise TypeError if less or more than one positional argument is provided. But they totally ignore any keyword arguments. >>> import operator >>> f = operator.itemgetter(1) >>> f('abc', spam=3) 'b' >>> f = operator.attrgetter('index') >>> f('abc', spam=3) >>> f = operator.methodcaller('upper') >>> f('abc', spam=3) 'ABC' Proposed patch makes these objects raise TypeError if keyword arguments are provided. 
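A pure-Python sketch of the intended behaviour for the single-item case (the real objects are implemented in C in Modules/_operator.c, and the error message below is only illustrative):

class itemgetter_sketch:
    def __init__(self, item):
        self._item = item

    def __call__(self, obj, *args, **kwargs):
        # reject both extra positional and any keyword arguments
        if args or kwargs:
            raise TypeError('itemgetter expected 1 argument')
        return obj[self._item]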
---------- components: Extension Modules files: operator_getters_kwargs.patch keywords: patch messages: 263933 nosy: serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: itemgetter/attrgetter/methodcaller objects ignore keyword arguments type: behavior versions: Python 2.7, Python 3.5, Python 3.6 Added file: http://bugs.python.org/file42560/operator_getters_kwargs.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 21 22:36:25 2016 From: report at bugs.python.org (Emanuel Barry) Date: Fri, 22 Apr 2016 02:36:25 +0000 Subject: [New-bugs-announce] [issue26823] Shrink recursive tracebacks Message-ID: <1461292585.22.0.159887337407.issue26823@psf.upfronthosting.co.za> New submission from Emanuel Barry: I recently suggested on Python-ideas ( https://mail.python.org/pipermail/python-ideas/2016-April/039899.html ) to shrink long tracebacks if they were all the same noise (recursive calls). Seeing as the idea had a good reception, I went ahead and implemented a small patch for this. It doesn't keep track of call chains, and I'm not sure if we want to implement that, as the performance decrease needed to store all of that might not be worth it. But then again, if an error happened performance is probably not a concern in this case. I've never really coded in C before, so feedback is very much welcome. ---------- components: Interpreter Core files: short_tracebacks.patch keywords: patch messages: 263948 nosy: ebarry priority: normal severity: normal stage: patch review status: open title: Shrink recursive tracebacks versions: Python 3.6 Added file: http://bugs.python.org/file42561/short_tracebacks.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 22 00:54:50 2016 From: report at bugs.python.org (Xiang Zhang) Date: Fri, 22 Apr 2016 04:54:50 +0000 Subject: [New-bugs-announce] [issue26824] Make some macros use Py_TYPE Message-ID: <1461300890.73.0.522492647874.issue26824@psf.upfronthosting.co.za> New submission from Xiang Zhang: According to PEP3123, all accesses to ob_refcnt and ob_type MUST cast the object pointer to PyObject* (unless the pointer is already known to have that type), and SHOULD use the respective accessor macros. I find that there are still some macros in Python use (obj)->ob_type. Though right now they may not impose any error, but as macros, they may be used with arguments not of type PyObject* later and introduce errors. So I think change them to use Py_TYPE is not a bad idea. 
---------- files: change_some_macros_using_Py_TYPE.patch keywords: patch messages: 263956 nosy: serhiy.storchaka, xiang.zhang priority: normal severity: normal status: open title: Make some macros use Py_TYPE Added file: http://bugs.python.org/file42563/change_some_macros_using_Py_TYPE.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 22 02:54:50 2016 From: report at bugs.python.org (ganix) Date: Fri, 22 Apr 2016 06:54:50 +0000 Subject: [New-bugs-announce] [issue26825] Variable defined in exec(code) unreachable inside function call with visible name in dir() results Message-ID: <1461308090.16.0.606527103016.issue26825@psf.upfronthosting.co.za> New submission from ganix: Here is code that shows what happened: >>> def func(): exec('ans=1') print(dir()) return ans >>> func() ['ans'] Traceback (most recent call last): File "", line 1, in func() File "", line 4, in func return ans NameError: name 'ans' is not defined I tried this code in version 2.7; there it works as expected. ---------- components: Argument Clinic messages: 263967 nosy: 324857679, larry priority: normal severity: normal status: open title: Variable defined in exec(code) unreachable inside function call with visible name in dir() results type: crash versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 22 07:40:39 2016 From: report at bugs.python.org (Marcos Dione) Date: Fri, 22 Apr 2016 11:40:39 +0000 Subject: [New-bugs-announce] [issue26826] Expose new copy_file_range() syscall in os module and use it to improve shutil.copy() Message-ID: <1461325239.41.0.30054024313.issue26826@psf.upfronthosting.co.za> New submission from Marcos Dione: copy_file_range() was introduced in the Linux kernel in version 4.5 (mid-March 2016). This new syscall allows copying data from one fd to another without passing through user space, improving speed in most cases. You can read more about it here: https://lwn.net/Articles/659523/ I intend to start working on adding a binding for it in the os module and then, if it's available, use it in shutil.copy() to improve its efficiency. I have a couple of questions: If the syscall is not available, should I implement a user-space alternative, or should the method not exist at all? ---------- components: Library (Lib) messages: 264000 nosy: StyXman priority: normal severity: normal status: open title: Expose new copy_file_range() syscall in os module and use it to improve shutil.copy() type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 22 08:08:15 2016 From: report at bugs.python.org (Herbert) Date: Fri, 22 Apr 2016 12:08:15 +0000 Subject: [New-bugs-announce] [issue26827] PyObject *PyInit_myextention -> PyMODINIT_FUNC PyInit_myextention Message-ID: <1461326895.36.0.898462963597.issue26827@psf.upfronthosting.co.za> New submission from Herbert: I think PyObject *PyInit_myextention(void) should be PyMODINIT_FUNC PyInit_myextention(void) on https://docs.python.org/3/howto/cporting.html#module-initialization-and-state It didn't work for me until I made that replacement; before the change importing failed with a message about 'undefined PyInit_myextention'. However, when I used nm to inspect the .so object file, I found PyInit_myextention in it (but probably with the wrong return type).
Moreover, whenever I would remove that same .so, importing resulted in a different error complaining that the module does not exist (strongly suggesting that I did not mix up .so files). Good luck! ---------- assignee: docs at python components: Documentation messages: 264005 nosy: docs at python, prinsherbert priority: normal severity: normal status: open title: PyObject *PyInit_myextention -> PyMODINIT_FUNC PyInit_myextention versions: Python 3.2 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 22 08:12:40 2016 From: report at bugs.python.org (STINNER Victor) Date: Fri, 22 Apr 2016 12:12:40 +0000 Subject: [New-bugs-announce] [issue26828] Implement __length_hint__() on map() and filter() to optimize list(map) and list(filter) Message-ID: <1461327160.24.0.621419084727.issue26828@psf.upfronthosting.co.za> New submission from STINNER Victor: When I compared the performance of filter() and map() between Python 2.7 and 3.4, I noticed a huge performance drop in Python 3! http://bugs.python.org/issue26814#msg264003 I haven't yet analyzed exactly why Python 3 is so much slower (almost 100% slower in the case of filter!). Maybe it's because filter() returns a list on Python 2, whereas filter() returns an iterator on Python 3. In Python 2, filter() and map() use _PyObject_LengthHint(seq, 8) to create the result list. Why don't we do the same in Python 3? filter.__length_hint__() and map.__length_hint__() would return seq.__length_hint__() of the input sequence, or return 8. It's a weak estimation, but it can help a lot to reduce the number of realloc() calls while the list is slowly growing. See also PEP 424 -- A method for exposing a length hint. Note: issue #26814 (fastcall) does make filter() and map() faster on Python 3.6 compared to Python 2.7, but it's not directly related to this issue. IMHO using the length hint would make list(filter) and list(map) even faster. ---------- messages: 264007 nosy: alex, haypo, serhiy.storchaka priority: normal severity: normal status: open title: Implement __length_hint__() on map() and filter() to optimize list(map) and list(filter) type: performance versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 22 10:03:54 2016 From: report at bugs.python.org (Ethan Furman) Date: Fri, 22 Apr 2016 14:03:54 +0000 Subject: [New-bugs-announce] [issue26829] update docs: when creating classes a new dict is created for the final class object Message-ID: <1461333834.83.0.0321229546915.issue26829@psf.upfronthosting.co.za> New submission from Ethan Furman: https://docs.python.org/3/reference/datamodel.html#creating-the-class-object This section should mention that the final class is created with a new dict(), and all key/value pairs from the dict used during creation are copied over.
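A small demonstration of the point being requested for the docs: mutating the namespace mapping after the class has been created does not affect the class, because the class object gets its own fresh dict:

class Meta(type):
    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        namespace['added_later'] = True   # mutate the original mapping afterwards
        return cls

class C(metaclass=Meta):
    x = 1

print(hasattr(C, 'added_later'))   # False -- the class copied the namespace at creation time
print(C.x)                         # 1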
---------- assignee: docs at python components: Documentation messages: 264016 nosy: docs at python, ethan.furman priority: normal severity: normal status: open title: update docs: when creating classes a new dict is created for the final class object versions: Python 3.4, Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 22 12:59:17 2016 From: report at bugs.python.org (Francisco Couzo) Date: Fri, 22 Apr 2016 16:59:17 +0000 Subject: [New-bugs-announce] [issue26830] Refactor Tools/scripts/google.py Message-ID: <1461344357.96.0.182735682536.issue26830@psf.upfronthosting.co.za> Changes by Francisco Couzo : ---------- files: scripts_google.patch keywords: patch nosy: franciscouzo priority: normal severity: normal status: open title: Refactor Tools/scripts/google.py type: enhancement versions: Python 3.6 Added file: http://bugs.python.org/file42572/scripts_google.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 22 18:37:37 2016 From: report at bugs.python.org (Hans-Peter Jansen) Date: Fri, 22 Apr 2016 22:37:37 +0000 Subject: [New-bugs-announce] [issue26831] ConfigParser parsing failures with default_section and ExtendedInterpolation options Message-ID: <1461364657.99.0.761573071574.issue26831@psf.upfronthosting.co.za> New submission from Hans-Peter Jansen: ConfigParser fails in interesting ways, when using default_section and ExtendedInterpolation options. Running the attached script results in: ConfigParser() with expected result: global: [('loglevel', 'WARNING'), ('logfile', '-')] section1: [('key_a', 'value'), ('key_b', 'morevalue')] section2: [('key_c', 'othervalue'), ('key_d', 'differentvalue')] ConfigParser(default_section='global') mangles section separation: section1: [('loglevel', 'WARNING'), ('logfile', '-'), ('key_a', 'value'), ('key_b', 'morevalue')] section2: [('loglevel', 'WARNING'), ('logfile', '-'), ('key_c', 'othervalue'), ('key_d', 'differentvalue')] ConfigParser(interpolation=ExtendedInterpolation) fails with strange error: Traceback (most recent call last): File "configparser-test.py", line 36, in print_sections(cp) File "configparser-test.py", line 21, in print_sections cp.read_string(__doc__) File "/usr/lib64/python3.4/configparser.py", line 696, in read_string self.read_file(sfile, source) File "/usr/lib64/python3.4/configparser.py", line 691, in read_file self._read(f, source) File "/usr/lib64/python3.4/configparser.py", line 1089, in _read self._join_multiline_values() File "/usr/lib64/python3.4/configparser.py", line 1101, in _join_multiline_values name, val) TypeError: before_read() missing 1 required positional argument: 'value' while it is expected to behave identical. 
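For comparison, a sketch that passes an ExtendedInterpolation *instance* rather than the class; the 'before_read() missing 1 required positional argument' TypeError above is what one would expect if the class object itself ends up being used as the parser's interpolation, since its methods are then called unbound (the configuration text here is a shortened stand-in for the attached configparser-test.py):

from configparser import ConfigParser, ExtendedInterpolation

cp = ConfigParser(default_section='global',
                  interpolation=ExtendedInterpolation())   # note the (): an instance
cp.read_string("""\
[global]
loglevel = WARNING
logfile = -

[section1]
key_a = value
key_b = more${key_a}
""")
for section in cp.sections():
    print(section, cp.items(section))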
---------- components: Library (Lib) files: configparser-test.py messages: 264031 nosy: frispete priority: normal severity: normal status: open title: ConfigParser parsing failures with default_section and ExtendedInterpolation options versions: Python 3.4 Added file: http://bugs.python.org/file42573/configparser-test.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Apr 22 23:45:50 2016 From: report at bugs.python.org (Gabriel Mesquita Cangussu) Date: Sat, 23 Apr 2016 03:45:50 +0000 Subject: [New-bugs-announce] [issue26832] ProactorEventLoop doesn't support stdin/stdout nor files with connect_read_pipe/connect_write_pipe Message-ID: <1461383150.45.0.258316694453.issue26832@psf.upfronthosting.co.za> New submission from Gabriel Mesquita Cangussu: The documentation of asyncio specifies that the methods connect_read_pipe and connect_write_pipe are available on Windows with the ProactorEventLoop. The documentation then says that those methods accept file-like objects on the pipe parameter. However, the methods doesn't seem to work with stdio or any disk file under Windows. The following example catches this problem: import asyncio import sys class MyProtocol(asyncio.Protocol): def connection_made(self, transport): print('connection established') def data_received(self, data): print('received: {!r}'.format(data.decode())) def connection_lost(self, exc): print('lost connection') if sys.platform.startswith('win32'): loop = asyncio.ProactorEventLoop() else: loop = asyncio.SelectorEventLoop() coro = loop.connect_read_pipe(MyProtocol, sys.stdin) loop.run_until_complete(coro) loop.run_forever() loop.close() This code when executed on Ubuntu have the desired behavior, but under Windows 10 it gives OSError: [WinError 6] The handle is invalid. The complete output is this: c:\Users\Gabriel\Documents\Python Scripts>python async_pipe.py connection established Fatal read error on pipe transport protocol: <__main__.MyProtocol object at 0x000001970EB2FAC8> transport: <_ProactorReadPipeTransport fd=0> Traceback (most recent call last): File "C:\Program Files\Python35\lib\asyncio\proactor_events.py", line 195, in _loop_reading self._read_fut = self._loop._proactor.recv(self._sock, 4096) File "C:\Program Files\Python35\lib\asyncio\windows_events.py", line 425, in recv self._register_with_iocp(conn) File "C:\Program Files\Python35\lib\asyncio\windows_events.py", line 606, in _register_with_iocp _overlapped.CreateIoCompletionPort(obj.fileno(), self._iocp, 0, 0) OSError: [WinError 6] Identificador inv?lido lost connection I think that the documentation should state that there is no support for disk files and stdio with the methods in question and also state what exactly they support (an example would be nice). And, of course, better support for watching file descriptors on Windows on future Python releases would be nice. 
Thank you, Gabriel ---------- components: Windows, asyncio messages: 264043 nosy: Gabriel Mesquita Cangussu, gvanrossum, haypo, paul.moore, steve.dower, tim.golden, yselivanov, zach.ware priority: normal severity: normal status: open title: ProactorEventLoop doesn't support stdin/stdout nor files with connect_read_pipe/connect_write_pipe type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 23 04:32:05 2016 From: report at bugs.python.org (Thomas) Date: Sat, 23 Apr 2016 08:32:05 +0000 Subject: [New-bugs-announce] [issue26833] returning ctypes._SimpleCData objects from callbacks Message-ID: <1461400325.28.0.01385438687.issue26833@psf.upfronthosting.co.za> New submission from Thomas: If a callback function returns a ctypes._SimpleCData object, it will fail with a type error and complain that it expects a basic type. Using the qsort example: def py_cmp_func(a, b): print(a.contents, b.contents) return c_int(0) > TypeError: an integer is required (got type c_int) > Exception ignored in: This is somewhat surprising as it is totally fine to pass a c_int (or an int) as an c_int argument. But this is really an issue for subclasses of fundamental data types: (sticking with qsort for simplicity, full example attached) class CmpRet(c_int): pass cmp_ctype = CFUNCTYPE(CmpRet, POINTER(c_int), POINTER(c_int)) def py_cmp_func(a, b): print(a.contents, b.contents) return CmpRet(0) > TypeError: an integer is required (got type CmpRet) > Exception ignored in: This is inconsistent with the no transparent argument/return type conversion rule for subclasses. Consider for instance an enum with a specific underlying type. A subclass (with __eq__ on value) from the corresponding ctype can be useful to provide a typesafe way to pass / receive those from C. Due to the described behavior, this doesn't work for callbacks. This is related to #5710, that discusses composite types. ---------- files: callback_ret_sub.py messages: 264056 nosy: tilsche priority: normal severity: normal status: open title: returning ctypes._SimpleCData objects from callbacks type: behavior versions: Python 3.5 Added file: http://bugs.python.org/file42575/callback_ret_sub.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Apr 23 15:27:41 2016 From: report at bugs.python.org (Christian Heimes) Date: Sat, 23 Apr 2016 19:27:41 +0000 Subject: [New-bugs-announce] [issue26834] Add truncated SHA512/224 and SHA512/256 Message-ID: <1461439661.19.0.970129415182.issue26834@psf.upfronthosting.co.za> New submission from Christian Heimes: SHA512/224 and SHA512/256 are truncated versions of SHA512. Just like SHA384 they use the same algorithm but different initial values and a smaller digest. I took the start vectors and test values from libtomcrypt. Like in my blake2 branch I have add tp_new to the types and removed the old factory methods. Now it is possible to instantiate the types. 
The code is also in my github fork https://github.com/tiran/cpython/tree/feature/sha512truncated ---------- components: Extension Modules files: cpython-cheimes-0001-Add-truncate-SHA512-224-and-SHA512-256-hash-algorith.patch keywords: patch messages: 264068 nosy: christian.heimes, gregory.p.smith priority: normal severity: normal stage: patch review status: open title: Add truncated SHA512/224 and SHA512/256 type: enhancement versions: Python 3.6 Added file: http://bugs.python.org/file42578/cpython-cheimes-0001-Add-truncate-SHA512-224-and-SHA512-256-hash-algorith.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 24 09:53:19 2016 From: report at bugs.python.org (Christian Heimes) Date: Sun, 24 Apr 2016 13:53:19 +0000 Subject: [New-bugs-announce] [issue26835] Add file-sealing ops to fcntl Message-ID: <1461505999.3.0.387096384665.issue26835@psf.upfronthosting.co.za> New submission from Christian Heimes: The file-sealing ops are useful for memfd_create(). The new syscall and ops are only available on Linux with a recent kernel. http://man7.org/linux/man-pages/man2/fcntl.2.html http://man7.org/linux/man-pages/man2/memfd_create.2.html Code: #include #ifndef F_ADD_SEALS /* * Set/Get seals */ #define F_ADD_SEALS (F_LINUX_SPECIFIC_BASE + 9) #define F_GET_SEALS (F_LINUX_SPECIFIC_BASE + 10) /* * Types of seals */ #define F_SEAL_SEAL 0x0001 /* prevent further seals from being set */ #define F_SEAL_SHRINK 0x0002 /* prevent file from shrinking */ #define F_SEAL_GROW 0x0004 /* prevent file from growing */ #define F_SEAL_WRITE 0x0008 /* prevent writes */ /* (1U << 31) is reserved for signed error codes */ #endif /* F_ADD_SEALS */ ---------- assignee: christian.heimes components: Extension Modules messages: 264108 nosy: christian.heimes priority: normal severity: normal stage: needs patch status: open title: Add file-sealing ops to fcntl type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 24 09:56:03 2016 From: report at bugs.python.org (Christian Heimes) Date: Sun, 24 Apr 2016 13:56:03 +0000 Subject: [New-bugs-announce] [issue26836] Add memfd_create to os module Message-ID: <1461506163.13.0.0180353008827.issue26836@psf.upfronthosting.co.za> New submission from Christian Heimes: Add memfd_create() and constants MFD_ALLOW_SEALING, MFD_CLOEXEC to the os module. A glibc wrapper for memfd_create() is not available yet but the interface has been standardized. http://man7.org/linux/man-pages/man2/memfd_create.2.html https://dvdhrm.wordpress.com/tag/memfd/ ---------- assignee: christian.heimes components: Extension Modules messages: 264109 nosy: christian.heimes priority: normal severity: normal stage: needs patch status: open title: Add memfd_create to os module type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 24 11:32:26 2016 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 24 Apr 2016 15:32:26 +0000 Subject: [New-bugs-announce] [issue26837] assertSequenceEqual() raises BytesWarning when format message Message-ID: <1461511946.3.0.425674817359.issue26837@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: assertSequenceEqual() raises BytesWarning when format failure report. 
See for example http://buildbot.python.org/all/builders/AMD64%20OpenIndiana%203.x/builds/10575/steps/test/logs/stdio : ====================================================================== ERROR: test_close_fds_0_1 (test.test_subprocess.POSIXProcessTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "/export/home/buildbot/64bits/3.x.cea-indiana-amd64/build/Lib/test/test_subprocess.py", line 1741, in test_close_fds_0_1 self.check_close_std_fds([0, 1]) File "/export/home/buildbot/64bits/3.x.cea-indiana-amd64/build/Lib/test/test_subprocess.py", line 1727, in check_close_std_fds self.assertEqual((out, err), (b'apple', b'orange')) File "/export/home/buildbot/64bits/3.x.cea-indiana-amd64/build/Lib/unittest/case.py", line 820, in assertEqual assertion_func(first, second, msg=msg) File "/export/home/buildbot/64bits/3.x.cea-indiana-amd64/build/Lib/unittest/case.py", line 1029, in assertTupleEqual self.assertSequenceEqual(tuple1, tuple2, msg, seq_type=tuple) File "/export/home/buildbot/64bits/3.x.cea-indiana-amd64/build/Lib/unittest/case.py", line 967, in assertSequenceEqual (i, item1, item2)) BytesWarning: str() on a bytes instance ====================================================================== Proposed patch fixes message formatting and adds tests for assertions that can emit BytesWarning. ---------- components: Library (Lib), Tests files: unittest_assert_bytes_warning.patch keywords: patch messages: 264110 nosy: ezio.melotti, michael.foord, rbcollins, serhiy.storchaka priority: normal severity: normal status: open title: assertSequenceEqual() raises BytesWarning when format message type: behavior versions: Python 2.7, Python 3.5, Python 3.6 Added file: http://bugs.python.org/file42581/unittest_assert_bytes_warning.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 24 11:58:51 2016 From: report at bugs.python.org (Alan Jenkins) Date: Sun, 24 Apr 2016 15:58:51 +0000 Subject: [New-bugs-announce] [issue26838] sax.xmlreader.InputSource.setCharacterStream() does not work? Message-ID: <1461513531.9.0.223983001389.issue26838@psf.upfronthosting.co.za> New submission from Alan Jenkins: python3-3.4.3-5.fc23-x86_64 So far I spelunked here. Starting from . I experimented with using setCharacterStream() instead of setByteStream() setCharacterStream() is shown in documentation but exercising it fails >>> help(InputSource) | setCharacterStream(self, charfile) | Set the character stream for this input source. (The stream | must be a Python 2.0 Unicode-wrapped file-like that performs | conversion to Unicode strings.) | | If there is a character stream specified, the SAX parser will | ignore any byte stream and will not attempt to open a URI | connection to the system identifier. 
Actually using an InputSource set up this way errors out as follows: File "/home/alan/.local/lib/python3.4/site-packages/feedparser-5.2.1-py3.4.egg/feedparser/api.py", line 236, in parse File "/usr/lib64/python3.4/site-packages/drv_libxml2.py", line 146, in parse source = saxutils.prepare_input_source(source) File "/usr/lib64/python3.4/xml/sax/saxutils.py", line 355, in prepare_input_source sysidfilename = os.path.join(basehead, sysid) File "/usr/lib64/python3.4/posixpath.py", line 79, in join if b.startswith(sep): AttributeError: 'NoneType' object has no attribute 'startswith' because the character stream is not actually used: def prepare_input_source(source, base=""): """This function takes an InputSource and an optional base URL and returns a fully resolved InputSource object ready for reading.""" if isinstance(source, str): source = xmlreader.InputSource(source) elif hasattr(source, "read"): f = source source = xmlreader.InputSource() source.setByteStream(f) if hasattr(f, "name") and isinstance(f.name, str): source.setSystemId(f.name) if source.getByteStream() is None: sysid = source.getSystemId() basehead = os.path.dirname(os.path.normpath(base)) sysidfilename = os.path.join(basehead, sysid) ---------- components: XML messages: 264111 nosy: sourcejedi priority: normal severity: normal status: open title: sax.xmlreader.InputSource.setCharacterStream() does not work? versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 24 15:04:14 2016 From: report at bugs.python.org (Matthias Klose) Date: Sun, 24 Apr 2016 19:04:14 +0000 Subject: [New-bugs-announce] [issue26839] python always calls getrandom() at start, causing long hang after boot Message-ID: <1461524654.13.0.869541747252.issue26839@psf.upfronthosting.co.za> New submission from Matthias Klose: [forwarded from https://bugs.debian.org/822431] This regression / change of behaviour was seen between 20160330 and 20160417 on the 3.5 branch. The only check-in which could affect this is the fix for issue #26735. 3.5.1-11 = 20160330 3.5.1-12 = 20160417 Martin writes: """ I just debugged the adt-virt-qemu failure with python 3.5.1-11 and tracked it down to python3.5 hanging for a long time when it gets called before the kernel initializes its RNG (which can take a minute in VMs which have low entropy sources). With 3.5.1-10: $ strace -e getrandom python3 -c 'True' +++ exited with 0 +++ With -11: $ strace -e getrandom python3 -c 'True' getrandom("\300\0209\26&v\232\264\325\217\322\303:]\30\212Q\314\244\257t%\206\"", 24, 0) = 24 +++ exited with 0 +++ When you do this with -11 right after booting a VM, the getrandom() can block for a long time, until the kernel initializes its random pool: 11:21:36.118034 getrandom("/V#\200^O*HD+D_\32\345\223M\205a\336/\36x\335\246", 24, 0) = 24 11:21:57.939999 ioctl(0, TCGETS, 0x7ffde1d152a0) = -1 ENOTTY (Inappropriate ioctl for device) [ 1.549882] [TTM] Initializing DMA pool allocator [ 39.586483] random: nonblocking pool is initialized (Note the time stamps in the strace in the first paragraph) This is really unfriendly -- it essentially means that you stop being able to use python3 early in the boot process or even early after booting. It would be better to initialize that random stuff lazily, until/if things actually need it. 
In the diff between -10 and -11 I do seem some getrandom() fixes to supply the correct buffer size (but that should be irrelevant as in -10 getrandom() wasn't called in the first place), and a new call which should apply to Solaris only (#ifdef sun), so it's not entirely clear where that comes from or how to work around it. It's very likely that this is the same cause as for #821877, but the description of that is both completely different and also very vague, so I file this separately for now. """ ---------- components: Interpreter Core messages: 264121 nosy: doko, haypo priority: normal severity: normal status: open title: python always calls getrandom() at start, causing long hang after boot versions: Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 24 15:20:08 2016 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 24 Apr 2016 19:20:08 +0000 Subject: [New-bugs-announce] [issue26840] Hidden test in test_heapq Message-ID: <1461525608.59.0.682456524569.issue26840@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: There are two methods test_get_only in TestErrorHandling in test_heapq. The latter hides the former. There is only one test_get_only in 2.7. ---------- assignee: rhettinger components: Tests messages: 264123 nosy: rhettinger, serhiy.storchaka priority: normal severity: normal stage: needs patch status: open title: Hidden test in test_heapq type: behavior versions: Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 24 16:24:42 2016 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 24 Apr 2016 20:24:42 +0000 Subject: [New-bugs-announce] [issue26841] Hidden test in ctypes tests Message-ID: <1461529482.09.0.936239527723.issue26841@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: There are two methods named "test_errors" in FunctionTestCase in Lib/ctypes/test/test_functions.py. The latter hides the former. ---------- components: Tests messages: 264127 nosy: amaury.forgeotdarc, belopolsky, meador.inge, serhiy.storchaka priority: normal severity: normal status: open title: Hidden test in ctypes tests type: behavior versions: Python 2.7, Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 24 19:27:22 2016 From: report at bugs.python.org (Edward Segall) Date: Sun, 24 Apr 2016 23:27:22 +0000 Subject: [New-bugs-announce] [issue26842] Python Tutorial 4.7.1: Need to explain default parameter lifetime Message-ID: <1461540442.59.0.34324123093.issue26842@psf.upfronthosting.co.za> New submission from Edward Segall: I am using the tutorial to learn Python. I know many other languages, and I've taught programming language theory, but even so I found the warning in Section 4.7.1 about Default Argument Values to be confusing. After I spent some effort investigating what actually happens, I realized that the warning is incomplete. I'll suggest a fix below, after explaining what concerns me. Here is the warning in question: ----------------------------------------------------------------- Important warning: The default value is evaluated only once. This makes a difference when the default is a mutable object such as a list, dictionary, or instances of most classes. ... 
def f(a, L=[]): L.append(a) return L print(f(1)) print(f(2)) print(f(3)) This will print [1] [1, 2] [1, 2, 3] ----------------------------------------------------------------- It's clear from this example that values are carried from one function invocation to another. That's pretty unusual behavior for a "traditional" function, but it's certainly not unheard of -- in C/C++/Java, you can preserve state across invocations by declaring that a local variable has static lifetime. When using this capability, though, it's essential to understand exactly what's happening -- or at least well enough to anticipate its behavior under a range of conditions. I don't believe the warning and example are sufficient to convey such an understanding. After playing with it for a while, I've concluded the following: "regular" local variables have the usual behavior (called "automatic" lifetime in C/C++ jargon), as do the function's formal parameters, EXCEPT when a default value is defined. Each default value is stored in a location that has static lifetime, and THAT is the reason it matters that (per the warning) the expression defining the default value is evaluated only once. This is very unfamiliar behavior -- I don't think I have used another modern language with this feature. So I think it's important that the explanation be very clear. I would like to suggest revising the warning and example to something more like the following: ----------------------------------------------------------------- Important warning: When you define a function with a default argument value, the expression defining the default value is evaluated only once, but the resultant value persists as long as the function is defined. If this value is a mutable object such as a list, dictionary, or instance of most classes, it is possible to change that object after the function is defined, and if you do that, the new (mutated) value will subsequently be used as the default value. For example, the following function accepts two arguments: def f(a, L=[]): L.append(a) return L This function is defined with a default value for its second formal parameter, called L. The expression that defines the default value denotes an empty list. When the function is defined, this expression is evaluated once. The resultant list is saved as the default value for L. Each time the function is called, it appends the first argument to the second one by invoking the second argument's append method. If we call the function with two arguments, the default value is not used. Instead, the list that is passed in as the second argument is modified. However, if we call the function with one argument, the default value is modified. Consider the following sequence of calls. First, we define a list and pass it in each time as the second argument. This list accumulates the first arguments, as follows: myList=[] print(f(0, myList)) print(f(1, myList)) This will print: [0] [0, 1] As you can see, myList is being used to accumulate the values passed in to the first as the first argument. If we then use the default value by passing in only one argument, as follows: print(f(2)) print(f(3)) we will see: [2] [2, 3] Here, the two invocations appended values to to the default list. Let's continue, this time going back to myList: print(f(4,myList)) Now the result will be: [0, 1, 4] because myList still contains the earlier values. 
The default value still has its earlier values too, as we can see here: print(f(5)) [2, 3, 5] To summarize, there are two distinct cases: 1) When the function is invoked with an argument list that includes a value for L, that L (the one being passed in) is changed by the function. 2) When the function is invoked with an argument list that does not include a value for L, the default value for L is changed, and that change persists through future invocations. ----------------------------------------------------------------- I hope this is useful. I realize it is much longer than the original. I had hoped to make it shorter, but when I did I found I was glossing over important details. ---------- assignee: docs at python components: Documentation messages: 264135 nosy: docs at python, esegall priority: normal severity: normal status: open title: Python Tutorial 4.7.1: Need to explain default parameter lifetime type: enhancement versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Apr 24 21:58:44 2016 From: report at bugs.python.org (Joshua Landau) Date: Mon, 25 Apr 2016 01:58:44 +0000 Subject: [New-bugs-announce] [issue26843] tokenize does not include Other_ID_Start or Other_ID_Continue in identifier Message-ID: <1461549524.26.0.280315753344.issue26843@psf.upfronthosting.co.za> New submission from Joshua Landau: This is effectively a continuation of https://bugs.python.org/issue9712. The line in Lib/tokenize.py Name = r'\w+' must be changed to a regular expression that accepts Other_ID_Start at the start and Other_ID_Continue elsewhere. Hence tokenize does not accept '??'. See the reference here: https://docs.python.org/3.5/reference/lexical_analysis.html#identifiers I'm unsure whether unicode normalization (aka the `xid` properties) needs to be dealt with too. Credit to toriningen from http://stackoverflow.com/a/29586366/1763356. ---------- components: Library (Lib) messages: 264145 nosy: Joshua.Landau priority: normal severity: normal status: open title: tokenize does not include Other_ID_Start or Other_ID_Continue in identifier type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 25 02:09:06 2016 From: report at bugs.python.org (Lev Maximov) Date: Mon, 25 Apr 2016 06:09:06 +0000 Subject: [New-bugs-announce] [issue26844] Wrong error message during import Message-ID: <1461564546.15.0.190869698481.issue26844@psf.upfronthosting.co.za> New submission from Lev Maximov: Error message was supposedly copy-pasted without change. Makes it pretty unintuinive to debug. Fix attached. 
---------- components: Library (Lib) files: error.diff keywords: patch messages: 264157 nosy: Lev Maximov priority: normal severity: normal status: open title: Wrong error message during import type: enhancement versions: Python 2.7, Python 3.2, Python 3.3, Python 3.4, Python 3.5, Python 3.6 Added file: http://bugs.python.org/file42584/error.diff _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 25 15:56:07 2016 From: report at bugs.python.org (ProgVal) Date: Mon, 25 Apr 2016 19:56:07 +0000 Subject: [New-bugs-announce] [issue26845] Misleading variable name in exception handling Message-ID: <1461614167.42.0.470146541912.issue26845@psf.upfronthosting.co.za> New submission from ProgVal: In Python/errors.c, PyErr_Restore is defined this way: void PyErr_Restore(PyObject *type, PyObject *value, PyObject *traceback) In Python/ceval.c, in the END_FINALLY case, it is called like this: PyErr_Restore(status, exc, tb); I believe 'exc' should be renamed to 'val'. Indeed, END_FINALLY pops values from the stack like this: PyObject *status = POP(); // ... else if (PyExceptionClass_Check(status)) { PyObject *exc = POP(); PyObject *tb = POP(); PyErr_Restore(status, exc, tb); And, they are pushed like this, in fast_block_end: PUSH(tb); PUSH(val); PUSH(exc); ---------- components: Interpreter Core messages: 264198 nosy: Valentin.Lorentz priority: normal severity: normal status: open title: Misleading variable name in exception handling versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 25 16:47:32 2016 From: report at bugs.python.org (Stefan Krah) Date: Mon, 25 Apr 2016 20:47:32 +0000 Subject: [New-bugs-announce] [issue26846] Workaround for non-standard stdlib.h on Android Message-ID: <1461617252.68.0.745100271283.issue26846@psf.upfronthosting.co.za> New submission from Stefan Krah: Android's stdlib.h pollutes the namespace by including a memory.h header. ---------- assignee: skrah components: Build messages: 264199 nosy: skrah, xdegaye priority: normal severity: normal status: open title: Workaround for non-standard stdlib.h on Android type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Apr 25 23:15:40 2016 From: report at bugs.python.org (Luke) Date: Tue, 26 Apr 2016 03:15:40 +0000 Subject: [New-bugs-announce] [issue26847] filter docs unclear wording Message-ID: <1461640540.65.0.428299683802.issue26847@psf.upfronthosting.co.za> New submission from Luke: The current docs for both filter and itertools.filterfalse use the following wording (emphasis added): all elements that are false are removed returns the items that are false This could be confusing for a new Python programmer, because the actual behaviour is that elements are equality-compared, not identity-compared.
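For reference, a quick check of the current behaviour of both functions (my own example, not from the docs):

from itertools import filterfalse

data = [0, 1, '', 'a', [], [0]]
print(list(filter(None, data)))       # [1, 'a', [0]]
print(list(filterfalse(None, data)))  # [0, '', []]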
Suggested wording: "are equal to False" https://docs.python.org/3.5/library/functions.html#filter https://docs.python.org/3.5/library/itertools.html#itertools.filterfalse ---------- assignee: docs at python components: Documentation messages: 264206 nosy: docs at python, unfamiliarplace priority: normal severity: normal status: open title: filter docs unclear wording type: enhancement versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 26 00:40:34 2016 From: report at bugs.python.org (Jack O'Connor) Date: Tue, 26 Apr 2016 04:40:34 +0000 Subject: [New-bugs-announce] [issue26848] asyncio.subprocess's communicate() method mishandles empty input bytes Message-ID: <1461645634.01.0.295749785908.issue26848@psf.upfronthosting.co.za> New submission from Jack O'Connor: Setting stdin=PIPE and then calling communicate(b"") should close the child's stdin immediately, similar to stdin=DEVNULL. Instead, communicate() treats b"" like None and leaves the child's stdin open, which makes the child hang forever if it tries to read anything. I have a PR open with a fix and a test: https://github.com/python/cpython/pull/33 ---------- components: asyncio messages: 264212 nosy: gvanrossum, haypo, oconnor663, yselivanov priority: normal severity: normal status: open title: asyncio.subprocess's communicate() method mishandles empty input bytes type: behavior versions: Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 26 05:24:50 2016 From: report at bugs.python.org (Xavier de Gaye) Date: Tue, 26 Apr 2016 09:24:50 +0000 Subject: [New-bugs-announce] [issue26849] android does not support versioning in SONAME Message-ID: <1461662690.12.0.966004358302.issue26849@psf.upfronthosting.co.za> New submission from Xavier de Gaye: When python is cross-compiled for android with --enable-shared, the following error occurs: # python -c "import socket" Fatal Python error: PyThreadState_Get: no current thread This also occurs when importing subprocess, asyncore or asyncio but not when importing posix (not a shared library). This is fixed by building python without soname versioning, although I have no idea why a problem with the android loader would cause this error. Patch attached. Some references to the android loader and soname versioning: https://code.google.com/p/android/issues/detail?id=55868 https://groups.google.com/forum/#!msg/android-ndk/_UhNpRJlA1k/hbryqzEgN94J ---------- components: Cross-Build files: soname_versioning.patch keywords: patch messages: 264241 nosy: Alex.Willmer, xdegaye priority: normal severity: normal status: open title: android does not support versioning in SONAME type: behavior versions: Python 3.6 Added file: http://bugs.python.org/file42596/soname_versioning.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 26 07:35:23 2016 From: report at bugs.python.org (STINNER Victor) Date: Tue, 26 Apr 2016 11:35:23 +0000 Subject: [New-bugs-announce] [issue26850] PyMem_RawMalloc(): update also sys.getallocatedblocks() in debug mode Message-ID: <1461670523.49.0.555580906198.issue26850@psf.upfronthosting.co.za> New submission from STINNER Victor: I modified PyMem_Malloc() to use the pymalloc allocator in the issue #26249. 
This change helped to find a memory leak in test_format that I introduced in Python 3.6: http://bugs.python.org/issue26249#msg264174 This memory leak gave me an idea: PyMem_RawMalloc() should also update sys.getallocatedblocks() (number of currently allocated blocks). It would help to find memory leaks using "python -m test -R 3:3" in extension modules using PyMem_RawMalloc() (and not PyMem_Malloc() or PyObject_Malloc()). Attached patch uses an atomic variable _Py_AllocatedBlocks, but only in debug mode. I chose to only change the variable in debug mode to: * not impact performances * I don't know if atomic variables are well supported (especially the "var++" operation) * I don't know yet the impact of this change (how sys.getallocatedblocks() is used). (The patch would be simpler if the release mode would also be impacted.) The patch changes _PyObject_Alloc() and _PyObject_Free() in debug mode to only account allocations directly made by pymalloc, to let PyMem_RawMalloc() and PyMem_RawFree() update the _Py_AllocatedBlocks variable. In release mode, _PyObject_Alloc() and _PyObject_Free() are responsible to update the _Py_AllocatedBlocks variable for allocations delegated to PyMem_RawMalloc() and PyMem_RawFree(). ---------- files: pymem_rawmalloc_blocks.patch keywords: patch messages: 264250 nosy: haypo, serhiy.storchaka priority: normal severity: normal status: open title: PyMem_RawMalloc(): update also sys.getallocatedblocks() in debug mode type: enhancement versions: Python 3.6 Added file: http://bugs.python.org/file42599/pymem_rawmalloc_blocks.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 26 08:20:11 2016 From: report at bugs.python.org (Xavier de Gaye) Date: Tue, 26 Apr 2016 12:20:11 +0000 Subject: [New-bugs-announce] [issue26851] android compilation and link flags Message-ID: <1461673211.54.0.945034926098.issue26851@psf.upfronthosting.co.za> New submission from Xavier de Gaye: The attached patch: * Sets the recommended android compilation flags, see: http://developer.android.com/ndk/guides/standalone_toolchain.html#abi. Note that the android toolchains already set the -fpic flag, as shown with: arm-linux-androideabi-gcc -v -S main.c 2>&1 | grep main.c * Sets the Position independent executables (PIE) flag which is mandatory starting at API level 21 and supported starting with API level 16. ---------- components: Cross-Build files: build-flags.patch keywords: patch messages: 264266 nosy: Alex.Willmer, xdegaye priority: normal severity: normal status: open title: android compilation and link flags type: enhancement versions: Python 3.6 Added file: http://bugs.python.org/file42601/build-flags.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 26 08:43:31 2016 From: report at bugs.python.org (Xavier de Gaye) Date: Tue, 26 Apr 2016 12:43:31 +0000 Subject: [New-bugs-announce] [issue26852] add a COMPILEALL_FLAGS Makefile variable Message-ID: <1461674611.23.0.732409522866.issue26852@psf.upfronthosting.co.za> New submission from Xavier de Gaye: Add a COMPILEALL_FLAGS Makefile variable to allow setting this flag to have legacy locations for byte-code files and save space on mobile devices. Patch attached. The '-E' python command line option added to $(PYTHON_FOR_BUILD) in the patch is fixing a problem that should actually be entered in another issue. 
When cross-compiling, the PYTHON_FOR_BUILD command sets PYTHONPATH to the location of the cross-compiled shared libraries of the extension modules. So the compilation of modules that import extension modules fail without '-E'. ---------- components: Cross-Build files: compileall-flags.patch keywords: patch messages: 264272 nosy: Alex.Willmer, xdegaye priority: normal severity: normal status: open title: add a COMPILEALL_FLAGS Makefile variable type: enhancement versions: Python 3.6 Added file: http://bugs.python.org/file42603/compileall-flags.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 26 09:01:37 2016 From: report at bugs.python.org (Xavier de Gaye) Date: Tue, 26 Apr 2016 13:01:37 +0000 Subject: [New-bugs-announce] [issue26853] missing symbols in curses and readline modules on android Message-ID: <1461675697.75.0.0916516767531.issue26853@psf.upfronthosting.co.za> New submission from Xavier de Gaye: The android loader complains when shared libraries are not linked against their needed libraries (see also issue #21668). When ncurses is cross-compiled as a shared library, the curses and readline modules must be linked with libtinfow.so. The attached patch is a quick hack: setup.py spawns readelf to get the list of needed libraries, but this fails as the readelf path name is wrong (it is not the absolute path) and so cannot be used as is for a correct patch. ---------- components: Cross-Build files: curses_readline.patch keywords: patch messages: 264274 nosy: Alex.Willmer, xdegaye priority: normal severity: normal status: open title: missing symbols in curses and readline modules on android type: behavior versions: Python 3.6 Added file: http://bugs.python.org/file42604/curses_readline.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 26 09:07:45 2016 From: report at bugs.python.org (Xavier de Gaye) Date: Tue, 26 Apr 2016 13:07:45 +0000 Subject: [New-bugs-announce] [issue26854] missing header on android for the ossaudiodev module Message-ID: <1461676065.72.0.0543666103766.issue26854@psf.upfronthosting.co.za> New submission from Xavier de Gaye: On linux /usr/include/sys/soundcard.h includes /usr/include/linux/soundcard.h while on android (also a linux) there is only /usr/include/linux/soundcard.h Patch attached. ---------- components: Cross-Build files: ossaudiodev.patch keywords: patch messages: 264276 nosy: Alex.Willmer, xdegaye priority: normal severity: normal status: open title: missing header on android for the ossaudiodev module type: compile error versions: Python 3.6 Added file: http://bugs.python.org/file42605/ossaudiodev.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 26 09:18:31 2016 From: report at bugs.python.org (Xavier de Gaye) Date: Tue, 26 Apr 2016 13:18:31 +0000 Subject: [New-bugs-announce] [issue26855] add platform.android_ver() for android Message-ID: <1461676711.89.0.563475479331.issue26855@psf.upfronthosting.co.za> New submission from Xavier de Gaye: The attached patch misses a test case. Also how can we be sure that the '/system/build.prop' file may be guaranteed to exist on all android devices ? It is difficult to get a reliable information on the android infrastructure when the information does not relate to the java libraries. 
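A rough sketch of the kind of helper being proposed (the function name and the property-file parsing are my assumption, not the attached patch):

def android_ver(propfile='/system/build.prop'):
    """Return a dict of android build properties, e.g. 'ro.build.version.release'."""
    props = {}
    try:
        with open(propfile, encoding='utf-8', errors='replace') as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith('#') and '=' in line:
                    key, _, value = line.partition('=')
                    props[key.strip()] = value.strip()
    except OSError:
        pass  # not an android device, or the file is not readable
    return props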
---------- components: Cross-Build files: platform.patch keywords: patch messages: 264277 nosy: Alex.Willmer, xdegaye priority: normal severity: normal status: open title: add platform.android_ver() for android type: enhancement versions: Python 3.6 Added file: http://bugs.python.org/file42606/platform.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 26 09:29:05 2016 From: report at bugs.python.org (Xavier de Gaye) Date: Tue, 26 Apr 2016 13:29:05 +0000 Subject: [New-bugs-announce] [issue26856] android does not have pwd.getpwall() Message-ID: <1461677345.21.0.582848799296.issue26856@psf.upfronthosting.co.za> New submission from Xavier de Gaye: User ids on android are the ids of the applications and they are used to enforce the applications access rights. See the 'User IDs and File Access' section at http://developer.android.com/guide/topics/security/permissions.html. Most integers are existing user ids on android. This may explain why getpwall() is missing. Patch attached. ---------- components: Library (Lib) files: pwd.patch keywords: patch messages: 264279 nosy: xdegaye priority: normal severity: normal status: open title: android does not have pwd.getpwall() type: behavior versions: Python 3.6 Added file: http://bugs.python.org/file42608/pwd.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 26 09:33:19 2016 From: report at bugs.python.org (Xavier de Gaye) Date: Tue, 26 Apr 2016 13:33:19 +0000 Subject: [New-bugs-announce] [issue26857] gethostbyname_r() is broken on android Message-ID: <1461677599.23.0.153884294057.issue26857@psf.upfronthosting.co.za> New submission from Xavier de Gaye: HAVE_GETHOSTBYNAME_R is defined on android API 21, but importing the _socket module fails with: ImportError: dlopen failed: cannot locate symbol "gethostbyaddr_r" referenced by "_socket.cpython-36m-i386-linux-gnu.so" Patch attached. The patch does not take into account the fact that this may be fixed in future versions of android. ---------- components: Cross-Build files: socketmodule.patch keywords: patch messages: 264280 nosy: Alex.Willmer, xdegaye priority: normal severity: normal status: open title: gethostbyname_r() is broken on android type: behavior versions: Python 3.6 Added file: http://bugs.python.org/file42609/socketmodule.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 26 09:37:22 2016 From: report at bugs.python.org (Xavier de Gaye) Date: Tue, 26 Apr 2016 13:37:22 +0000 Subject: [New-bugs-announce] [issue26858] setting SO_REUSEPORT fails on android Message-ID: <1461677842.81.0.574068471289.issue26858@psf.upfronthosting.co.za> New submission from Xavier de Gaye: Android defines SO_REUSEPORT on android API 21 but setting this option in the asyncio tests raises OSError: [Errno 92] Protocol not available. The attached patch assumes there is a platform.android_ver() function to detect that this is the android platform. The patch does not take into account the fact that this may be fixed in future versions of android. 
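One possible runtime probe (a sketch, not the attached patch) that the tests could use instead of only checking that the constant exists:

import socket

def reuse_port_works():
    # SO_REUSEPORT may be defined by the headers yet rejected by the kernel
    if not hasattr(socket, 'SO_REUSEPORT'):
        return False
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        try:
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
        except OSError:
            return False
    return True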
---------- components: Cross-Build files: test.asyncio.patch keywords: patch messages: 264281 nosy: Alex.Willmer, xdegaye priority: normal severity: normal status: open title: setting SO_REUSEPORT fails on android type: behavior versions: Python 3.6 Added file: http://bugs.python.org/file42610/test.asyncio.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 26 09:42:17 2016 From: report at bugs.python.org (Xavier de Gaye) Date: Tue, 26 Apr 2016 13:42:17 +0000 Subject: [New-bugs-announce] [issue26859] unittest fails with "Start directory is not importable" Message-ID: <1461678137.48.0.743276761028.issue26859@psf.upfronthosting.co.za> New submission from Xavier de Gaye: unittest fails to load tests when the tests are in a package that has an __init__.pyc file and no __init__.py file. Patch attached. ---------- components: Library (Lib) files: unittest.patch keywords: patch messages: 264283 nosy: xdegaye priority: normal severity: normal status: open title: unittest fails with "Start directory is not importable" type: behavior versions: Python 3.6 Added file: http://bugs.python.org/file42611/unittest.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 26 09:45:06 2016 From: report at bugs.python.org (Aviv Palivoda) Date: Tue, 26 Apr 2016 13:45:06 +0000 Subject: [New-bugs-announce] [issue26860] os.walk and os.fwalk yield namedtuple instead of tuple Message-ID: <1461678306.82.0.694875654943.issue26860@psf.upfronthosting.co.za> New submission from Aviv Palivoda: I am suggesting that os.walk and os.fwalk yield a namedtuple instead of the regular tuple they currently yield. The use case for this change can be seen in the following example: def walk_wrapper(walk_it): for dir_entry in walk_it: if dir_entry[0] == "aaa": yield dir_entry Because walk_it can be either os.walk or os.fwalk, I need to access dir_entry via index. My change will allow me to change this function to: def walk_wrapper(walk_it): for dir_entry in walk_it: if dir_entry.dirpath == "aaa": yield dir_entry Which is clearer and more readable.
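In the meantime the same effect can be had with a small wrapper (a sketch; the field names below are my choice, not the patch's):

import os
from collections import namedtuple

WalkResult = namedtuple('WalkResult', 'dirpath dirnames filenames')

def named_walk(top):
    for dirpath, dirnames, filenames in os.walk(top):
        yield WalkResult(dirpath, dirnames, filenames)

# usage:
# for entry in named_walk('.'):
#     if entry.dirpath == "aaa":
#         ...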
---------- components: Library (Lib) files: os-walk-result-namedtuple.patch keywords: patch messages: 264285 nosy: loewis, palaviv priority: normal severity: normal status: open title: os.walk and os.fwalk yield namedtuple instead of tuple type: enhancement versions: Python 3.6 Added file: http://bugs.python.org/file42612/os-walk-result-namedtuple.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 26 09:47:15 2016 From: report at bugs.python.org (Vukasin Felbab) Date: Tue, 26 Apr 2016 13:47:15 +0000 Subject: [New-bugs-announce] [issue26861] shutil.copyfile() doesn't close the opened files Message-ID: <1461678435.64.0.223785102956.issue26861@psf.upfronthosting.co.za> New submission from Vukasin Felbab: shutil.copyfile() doesn't close the opened files, so it is vulnerable to IO Error 24 (too many open files). Actually, the src and dst files should be closed after the copy. ---------- components: IO messages: 264286 nosy: vocdetnojz priority: normal severity: normal status: open title: shutil.copyfile() doesn't close the opened files type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 26 09:49:48 2016 From: report at bugs.python.org (Xavier de Gaye) Date: Tue, 26 Apr 2016 13:49:48 +0000 Subject: [New-bugs-announce] [issue26862] SYS_getdents64 does not need to be defined on android API 21 Message-ID: <1461678588.46.0.828463582716.issue26862@psf.upfronthosting.co.za> New submission from Xavier de Gaye: Revert the changeset committed at issue #20307 as the compilation does not fail anymore on android API level 21. Patch attached. ---------- components: Cross-Build files: posixmodule.patch keywords: patch messages: 264287 nosy: Alex.Willmer, xdegaye priority: normal severity: normal status: open title: SYS_getdents64 does not need to be defined on android API 21 type: compile error versions: Python 3.6 Added file: http://bugs.python.org/file42613/posixmodule.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 26 10:24:20 2016 From: report at bugs.python.org (Xavier de Gaye) Date: Tue, 26 Apr 2016 14:24:20 +0000 Subject: [New-bugs-announce] [issue26863] android lacks some declarations for the posix module Message-ID: <1461680660.31.0.0876884221413.issue26863@psf.upfronthosting.co.za> New submission from Xavier de Gaye: 'AT_EACCESS' is not declared although HAVE_FACCESSAT is defined, and 'I_PUSH' is not declared. Patch attached. The patch does not take into account the fact that this may be fixed in future versions of android.
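At the Python level the AT_EACCESS flag surfaces through os.access(..., effective_ids=True), which gives a quick check of whether the constant is usable on a given build (a sketch, not part of the patch):

import os

if os.access in os.supports_effective_ids:
    # effective_ids=True maps to faccessat() with AT_EACCESS on POSIX builds
    print(os.access('/etc/passwd', os.R_OK, effective_ids=True))
else:
    print('effective_ids (AT_EACCESS) not supported on this build')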
---------- components: Cross-Build files: posixmodule.patch keywords: patch messages: 264291 nosy: Alex.Willmer, xdegaye priority: normal severity: normal status: open title: android lacks some declarations for the posix module type: compile error versions: Python 3.6 Added file: http://bugs.python.org/file42614/posixmodule.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 26 10:43:47 2016 From: report at bugs.python.org (Daniel Morrison) Date: Tue, 26 Apr 2016 14:43:47 +0000 Subject: [New-bugs-announce] [issue26864] urllib.request no_proxy check differs from curl Message-ID: <1461681827.76.0.743158989172.issue26864@psf.upfronthosting.co.za> New submission from Daniel Morrison: The no_proxy environment variable works in python as a case sensitive suffix check. Curl handles this variable as a case insensitive hostname check. Case sensitivity appears to be in conflict with the DNS Case Insensitivity RFC (https://tools.ietf.org/html/rfc4343). While the suffix check is documented (https://docs.python.org/3/library/urllib.request.html), this seems to be problematic and inconsistent with other tools on the system. I believe the ideal solution would be to have proxy_bypass be a method of ProxyHandler so that it can be overridden without dependence on undocumented methods. This would also allow for the requested behavior to be added without breaking backwards compatibility. ---------- components: Library (Lib) messages: 264297 nosy: Daniel Morrison priority: normal severity: normal status: open title: urllib.request no_proxy check differs from curl type: enhancement versions: Python 2.7, Python 3.2, Python 3.3, Python 3.4, Python 3.5, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 26 11:36:51 2016 From: report at bugs.python.org (Xavier de Gaye) Date: Tue, 26 Apr 2016 15:36:51 +0000 Subject: [New-bugs-announce] [issue26865] toward the support of the android platform Message-ID: <1461685011.99.0.617135447086.issue26865@psf.upfronthosting.co.za> New submission from Xavier de Gaye: This issue lists issues that may have to be fixed in the perspective of a future support of the android platform. 
build issue #26849: android does not support versioning in SONAME issue #26851: android compilation and link flags issue #26852: add a COMPILEALL_FLAGS Makefile variable curses, readline issue #26853: missing symbols in curses and readline modules on android ossaudiodev issue #26854: missing header on android for the ossaudiodev module platform issue #16353: add function to os module for getting path to default shell issue #26855: add platform.android_ver() for android pwd issue #26856: android does not have pwd.getpwall() socketmodule issue #26857: gethostbyname_r() is broken on android asyncio tests issue #26858: setting SO_REUSEPORT fails on android unittest issue #26859: unittest fails with "Start directory is not importable" posixmodule issue #26862: SYS_getdents64 does not need to be defined on android API 21 issue #26863: android lacks some declaration for the posix module ---------- components: Cross-Build messages: 264310 nosy: Alex.Willmer, xdegaye priority: normal severity: normal status: open title: toward the support of the android platform type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 26 12:01:58 2016 From: report at bugs.python.org (Tom Middleton) Date: Tue, 26 Apr 2016 16:01:58 +0000 Subject: [New-bugs-announce] [issue26866] Inconsistent environment in Windows using "Open With" Message-ID: <1461686518.69.0.791676363676.issue26866@psf.upfronthosting.co.za> New submission from Tom Middleton: I have found that the execution of python scripts is inconsistent from the following methods: >From Explorer: 1) Right-Click->Open with->python.exe 2) Right-Click->Open (assuming python.exe being the "default" application) 3) Right-Click->Open with->"Choose default program..."->python.exe 4) Double-Click the script from the command prompt: 4) python 5) Of those listed, #1 opens the script with the CWD as c:\Windows\System32\ The remainder open the script with the CWD (as from os.getcwd()) as the current directory of the executed script. The issue arose when attempting to open a file in the same directory as the script. The issue of how to access a file in the same directory as the script isn't the point of this issue however it is that it is inconsistent across methods that would seem to be identical to the user. A use case for Open with->python.exe is if say you want your default behavior to be open with an IDE. The following other issues I found may be relevant. #22121 #25450 Some digging found this registry entry which may be relevant. 
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts\.py ---------- components: Installation, Windows messages: 264315 nosy: busfault, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Inconsistent environment in Windows using "Open With" type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Apr 26 23:34:14 2016 From: report at bugs.python.org (Xiang Zhang) Date: Wed, 27 Apr 2016 03:34:14 +0000 Subject: [New-bugs-announce] [issue26867] test_ssl test_options fails on ubuntu 16.04 Message-ID: <1461728054.24.0.63704060185.issue26867@psf.upfronthosting.co.za> Changes by Xiang Zhang : ---------- nosy: xiang.zhang priority: normal severity: normal status: open title: test_ssl test_options fails on ubuntu 16.04 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 27 01:36:44 2016 From: report at bugs.python.org (Berker Peksag) Date: Wed, 27 Apr 2016 05:36:44 +0000 Subject: [New-bugs-announce] [issue26868] Incorrect check for return value of PyModule_AddObject in _csv.c Message-ID: <1461735404.8.0.653639688918.issue26868@psf.upfronthosting.co.za> New submission from Berker Peksag: This is probably harmless, but Modules/_csv.c has the following code: Py_INCREF(&Dialect_Type); if (PyModule_AddObject(module, "Dialect", (PyObject *)&Dialect_Type)) return NULL; However, PyModule_AddObject returns only -1 and 0. It also doesn't decref Dialect_Type if it returns -1, so I guess the more correct code should be: Py_INCREF(&Dialect_Type); if (PyModule_AddObject(module, "Dialect", (PyObject *)&Dialect_Type) == -1) { Py_DECREF(&Dialect_Type); return NULL; } The same pattern can be found in a few more modules. ---------- components: Extension Modules files: csv.diff keywords: patch messages: 264350 nosy: berker.peksag priority: low severity: normal stage: patch review status: open title: Incorrect check for return value of PyModule_AddObject in _csv.c type: behavior versions: Python 3.5, Python 3.6 Added file: http://bugs.python.org/file42623/csv.diff _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 27 03:47:47 2016 From: report at bugs.python.org (Thomas Guettler) Date: Wed, 27 Apr 2016 07:47:47 +0000 Subject: [New-bugs-announce] [issue26869] unittest longMessage docs Message-ID: <1461743267.94.0.620093650485.issue26869@psf.upfronthosting.co.za> New submission from Thomas Guettler: The first sentence of the longMessage docs is confusing: https://docs.python.org/3/library/unittest.html#unittest.TestCase.longMessage > If set to True then .... This implies, between the lines, that the default is False. But that was the case long ago, in Python 2. In Python 3 the default is True (which I prefer to the old default). I think the docs should state the current default explicitly. Also, the term "normal message" is not defined. For newcomers the "normal message" is what you get if you don't change the default, not the behaviour of the Python 2 version :-) I think "normal message" should be replaced with "short message" or "diff message" ... I am unsure. What do you think?
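A small sketch showing what the flag actually controls (my example, not from the docs):

import unittest

class T(unittest.TestCase):
    longMessage = True   # the Python 3 default

    def test_demo(self):
        # with longMessage = True the standard "1 != 2" text is kept and
        # "custom note" is appended; with False only the custom message
        # is shown
        self.assertEqual(1, 2, "custom note")

unittest.main()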
---------- assignee: docs at python components: Documentation messages: 264359 nosy: docs at python, guettli priority: normal severity: normal status: open title: unittest longMessage docs versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 27 03:52:14 2016 From: report at bugs.python.org (Tyler Crompton) Date: Wed, 27 Apr 2016 07:52:14 +0000 Subject: [New-bugs-announce] [issue26870] Unexpected call to readline's add_history in call_readline Message-ID: <1461743534.82.0.668302064476.issue26870@psf.upfronthosting.co.za> New submission from Tyler Crompton: I was implementing a REPL using the readline module and noticed that there are extraneous calls to readline's add_history function in call_readline[1]. This was a problem because there were some lines, that, based on their compositions, I might not want in the history. Figuring out why I was getting two entries for every The function call has been around ever since Python started supporting GNU Readline (first appeared in Python 1.4 or so, I believe)[2]. This behavior doesn't seem to be documented anywhere. I can't seem to find any code that depends on a line that is read in by call_readline to be added to the history. I guess the user might rely on the interactive interpreter to use the history feature. Beyond that, I can't think of any critical purpose for it. There are four potential workarounds: 1. Don't use the input function. Unfortunately, this is a non-solution as it prevents one from using Readline/libedit for input operations. 2. Don't use Readline/libedit. For the same reasons, this isn't a good solution. 3. Evaluate get_current_history_length() and store its result. Evaluate input(). Evaluate get_current_history_length() again. If the length changed, execute readline.remove_history_item(readline.get_current_history_length() - 1). Note that one can't assume that the length will change after a call to input, because blank lines aren't added to the history. This isn't an ideal solution for obvious reasons. It's a bit convoluted. 4. Use some clever combination of readline.get_line_buffer, tty.setcbreak, termios.tcgetattr, termios.tcsetattr, msvcrt.getwche, and try-except-finally blocks. Besides the obvious complexities in this solution, this isn't particularly platform-independent. I think that it's fair to say that none of the above options are desirable. So let's discuss potential solutions. 1. Remove this feature from call_readline. Not only will this cause a regression in the interactive interpreter, many people rely on this behavior when using the readline module. 2. Dynamically swap histories (or readline configurations in general) between readline-capable calls to input and prompts in the interactive interpreter. This would surely be too fragile and add unnecessary overhead. 3. Document this behavior and leave the code alone. I wouldn't say that this is a solution, but it would at least help other developers that would fall in the same trap that I did. 4. Add a keyword argument to input to instruct call_readline to not add the line to the history. Personally, this seems a bit dirty. 5. Add a readline function in the readline module that doesn't rely on call_readline. Admittedly, the implementation would have to look eerily similar to call_readline, so perhaps there could be a flag on call_readline. However, that would require touching a few files that don't seem to be particularly related. 
But a new function might be confusing since call_readline sounds like a name that you'd give such a function. I think that the last option would be a pretty clean change that would cause the fewest issues (if any) for existing code bases. Regardless of the implementation details, I think that this would be the best route -- to add a Python function called readline to the readline module. I would imagine that this would be an easy change/addition. I'm attaching a sample script that demonstrates the described issue. [1]: https://github.com/python/cpython/blob/fa3fc6d78ee0ce899c9c828e796b66114400fbf0/Modules/readline.c#L1283 [2]: https://github.com/python/cpython/commit/e59c3ba80888034ef0e4341567702cd91e7ef70d ---------- components: Library (Lib) files: readline_history.py messages: 264360 nosy: Tyler Crompton priority: normal severity: normal status: open title: Unexpected call to readline's add_history in call_readline type: behavior versions: Python 2.7, Python 3.2, Python 3.3, Python 3.4, Python 3.5, Python 3.6 Added file: http://bugs.python.org/file42626/readline_history.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 27 13:52:51 2016 From: report at bugs.python.org (Serhiy Storchaka) Date: Wed, 27 Apr 2016 17:52:51 +0000 Subject: [New-bugs-announce] [issue26871] Change weird behavior of PyModule_AddObject() Message-ID: <1461779571.56.0.960686965696.issue26871@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: PyModule_AddObject() has weird and counterintuitive behavior. It steals a reference only on success. The caller is responsible for decref'ing it on error. This behavior was not documented and is inconsistent with the behavior of other functions stealing a reference (PyList_SetItem() and PyTuple_SetItem()). It seems most developers don't use this function correctly, since in only a few places in the stdlib is a reference decref'ed explicitly after a PyModule_AddObject() failure. This weird behavior was first reported in issue1782, and changing it was proposed. Related bugs in PyModule_AddIntConstant() and PyModule_AddStringConstant() were fixed, but the behavior of PyModule_AddObject() was not changed and not documented. This issue is opened for gradually changing the behavior of PyModule_AddObject(). The proposed patch introduces a new macro PY_MODULE_ADDOBJECT_CLEAN that controls the behavior of PyModule_AddObject(), as PY_SSIZE_T_CLEAN controls the behavior of the PyArg_Parse* functions. If the macro is defined before including "Python.h", PyModule_AddObject() steals a reference unconditionally. Otherwise it steals a reference only on success, and the caller is responsible for decref'ing it on error (current behavior). This needs minimal changes to source code if PyModule_AddObject() was used incorrectly (i.e. as documented), and keeps the code that explicitly decrefs a reference after a PyModule_AddObject() failure working correctly. Use of PyModule_AddObject() without defining PY_MODULE_ADDOBJECT_CLEAN is declared deprecated (or we can defer this to 3.7). In the distant future (after dropping the support of 2.7) the old behavior will be dropped. See also a discussion on Python-Dev: http://comments.gmane.org/gmane.comp.python.devel/157545 .
---------- components: Extension Modules, Interpreter Core files: pymodule_addobject.patch keywords: patch messages: 264384 nosy: serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Change weird behavior of PyModule_AddObject() type: enhancement versions: Python 3.6 Added file: http://bugs.python.org/file42632/pymodule_addobject.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 27 16:51:25 2016 From: report at bugs.python.org (sorin) Date: Wed, 27 Apr 2016 20:51:25 +0000 Subject: [New-bugs-announce] [issue26872] Default ConfigParser in python is not able to load values having percent in them Message-ID: <1461790285.78.0.75016384493.issue26872@psf.upfronthosting.co.za> New submission from sorin: The ConfigParser issue with % (percent) is taking on huge proportions because it has serious implications downstream. One such example is the fact that it breaks virtualenv in such a way that if you create a virtual env in a path that contains a percent, at some point you will end up with a virtualenv where you can install only about 50% of existing python packages (serious ones like numpy or pandas will fail to install or even to perform a simple python setup.py egg_info on them). This is related to distutils, which is trying to use the ConfigParser to load the python PATH (which now contains the percent). Switching to RawConfigParser does resolve the problem but this seems like an almost impossible undertaking because of the huge number of occurrences in the wild. You will find that the only class that is able to load a value with a percent inside is RawConfigParser, and I don't think that this is normal. Here is some code I created to exemplify the defective behaviour: https://github.com/ssbarnea/test-configparser/blob/master/tests/test-configparser.py#L21 The code is executed by Travis with multiple versions of python; see one result example: https://travis-ci.org/ssbarnea/test-configparser/jobs/126145032 My personal impression is that the decision to process the % (percent) and to change the behaviour between Python 2 and 3 was a very unfortunate one. The Ini/Cfg file specification does not specify the percent as an escape character. Introduction of the %(VAR) feature sounds more like a bug than a feature in this case. ---------- components: Distutils messages: 264402 nosy: dstufft, eric.araujo, sorin priority: normal severity: normal status: open title: Default ConfigParser in python is not able to load values having percent in them type: compile error versions: Python 3.4, Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Apr 27 21:49:22 2016 From: report at bugs.python.org (Nathan Williams) Date: Thu, 28 Apr 2016 01:49:22 +0000 Subject: [New-bugs-announce] [issue26873] xmlrpclib raises when trying to convert an int to string when unicode is available Message-ID: <1461808162.86.0.277160090083.issue26873@psf.upfronthosting.co.za> New submission from Nathan Williams: I am using xmlrpclib against an internal xmlrpc server.
One of the responses returns integer values, and it raises an exception in "_stringify" The code for _stringify is (xmlrpclib.py:180 in python2.7): if unicode: def _stringify(string): # convert to 7-bit ascii if possible try: return string.encode("ascii") except UnicodeError: return string else: def _stringify(string): return string So when "unicode" is available, .encode is called on the parameter (which are the returned objects from the server) which fails for ints. Without the unicode path it works fine, proven with the following monkey-patch: xmlrpclib._stringify = lambda s: s I am using the above patch as a workaround, but a fix to the library should be straightforward, simply checking for AttributeError in the except clause would solve it while retaining the existing functionality. The traceback: Traceback (most recent call last): File "", line 1, in File "/usr/lib/python2.6/xmlrpclib.py", line 1199, in __call__ return self.__send(self.__name, args) File "/usr/lib/python2.6/xmlrpclib.py", line 1489, in __request verbose=self.__verbose File "/usr/lib/python2.6/xmlrpclib.py", line 1253, in request return self._parse_response(h.getfile(), sock) File "/usr/lib/python2.6/xmlrpclib.py", line 1387, in _parse_response p.feed(response) File "/usr/lib/python2.6/xmlrpclib.py", line 601, in feed self._parser.Parse(data, 0) File "/usr/lib/python2.6/xmlrpclib.py", line 868, in end return f(self, join(self._data, "")) File "/usr/lib/python2.6/xmlrpclib.py", line 935, in end_struct dict[_stringify(items[i])] = items[i+1] File "/usr/lib/python2.6/xmlrpclib.py", line 176, in _stringify return string.encode("ascii") AttributeError: 'int' object has no attribute 'encode' ---------- components: Library (Lib) messages: 264407 nosy: Nathan Williams priority: normal severity: normal status: open title: xmlrpclib raises when trying to convert an int to string when unicode is available type: crash versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 28 12:08:50 2016 From: report at bugs.python.org (mbarao) Date: Thu, 28 Apr 2016 16:08:50 +0000 Subject: [New-bugs-announce] [issue26874] Docstring error in divmod function Message-ID: <1461859730.5.0.461826994455.issue26874@psf.upfronthosting.co.za> New submission from mbarao: The documentation of the divmod function says that a tuple ((x-x%y)/y, x%y) is returned, but this is not correct anymore for python3. I think it should be ((x-x%y)//y, x%y) where an integer division is used. ---------- assignee: docs at python components: Documentation messages: 264434 nosy: docs at python, mbarao priority: normal severity: normal status: open title: Docstring error in divmod function type: enhancement versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 28 13:37:13 2016 From: report at bugs.python.org (Xiang Zhang) Date: Thu, 28 Apr 2016 17:37:13 +0000 Subject: [New-bugs-announce] [issue26875] mmap doc gives wrong code example Message-ID: <1461865033.17.0.77697230941.issue26875@psf.upfronthosting.co.za> New submission from Xiang Zhang: The code given in mmap doc import mmap with mmap.mmap(-1, 13) as mm: mm.write("Hello world!") should be mm.write(b"Hello world!") The *b* is left out and then causes exception. 
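For completeness, the corrected snippet runs as expected (a quick check, not part of the attached patch):

import mmap

with mmap.mmap(-1, 13) as mm:
    mm.write(b"Hello world!")
    mm.seek(0)
    print(mm.read(12))   # b'Hello world!'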
---------- assignee: docs at python components: Documentation files: mmap_doc.patch keywords: patch messages: 264438 nosy: docs at python, xiang.zhang priority: normal severity: normal status: open title: mmap doc gives wrong code example Added file: http://bugs.python.org/file42641/mmap_doc.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 28 14:05:19 2016 From: report at bugs.python.org (Rohit Jamuar) Date: Thu, 28 Apr 2016 18:05:19 +0000 Subject: [New-bugs-announce] [issue26876] Extend MSVCCompiler class to respect environment variables Message-ID: <1461866719.82.0.2605655084.issue26876@psf.upfronthosting.co.za> New submission from Rohit Jamuar: The UnixCompiler class respects flags (CC, LD, AR, CFLAGS and LDFLAGS) set in the environment, whereas the MSVCCompiler class does not. This change allows building CPython and any module that invokes distutils to utilize flags and executables that have been set in the environment. Inclusion of this change would ensure MSVCCompiler's behavior is the same as that of UnixCompiler and would also allow using a different compiler / linker / archiver on Windows without the necessity of implementing separate compiler classes -- using environment variables it should be possible to use a separate set of build executables, for example icl, clang, etc. ---------- components: Distutils files: msvc_respect_env_flags.patch keywords: patch messages: 264439 nosy: dstufft, eric.araujo, r.david.murray, rohitjamuar, zach.ware priority: normal severity: normal status: open title: Extend MSVCCompiler class to respect environment variables type: enhancement versions: Python 2.7, Python 3.5 Added file: http://bugs.python.org/file42642/msvc_respect_env_flags.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 28 17:44:42 2016 From: report at bugs.python.org (=?utf-8?b?0JzQsNGA0Log0JrQvtGA0LXQvdCx0LXRgNCz?=) Date: Thu, 28 Apr 2016 21:44:42 +0000 Subject: [New-bugs-announce] [issue26877] tarfile use wrong code when read from fileobj Message-ID: <1461879882.57.0.719679075431.issue26877@psf.upfronthosting.co.za> New submission from Марк Коренберг: tarfile.py: _FileInFile(): (near line 687) b = self.fileobj.read(length) if len(b) != length: raise ReadError("unexpected end of data") The read() API does not guarantee that it will read `length` bytes. So, if fileobj reads less than requested, that is not an error (!) In my case it was a pipe... ---------- components: Library (Lib) messages: 264450 nosy: mmarkk priority: normal severity: normal status: open title: tarfile use wrong code when read from fileobj type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Apr 28 18:25:26 2016 From: report at bugs.python.org (DqASe) Date: Thu, 28 Apr 2016 22:25:26 +0000 Subject: [New-bugs-announce] [issue26878] Allow doctest to deep copy globals Message-ID: <1461882326.3.0.694986363749.issue26878@psf.upfronthosting.co.za> New submission from DqASe: Currently doctest makes shallow copies of the environment or globs argument. This is somewhat asymmetrical: on the one hand, a test cannot see variables created by another, but on the other, it can alter mutable objects. This is inconvenient e.g.
when documenting several methods that change an object (say, obj.append(), then obj.insert()) - one would hope that the results are independent of the order in which the tests are executed. An option to make deep copies of the variables in the context, instead of shallow ones, would in my opinion solve the issue cleanly.

---------- components: Library (Lib) messages: 264452 nosy: DqASe priority: normal severity: normal status: open title: Allow doctest to deep copy globals type: enhancement _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Apr 28 19:39:25 2016 From: report at bugs.python.org (Rogi) Date: Thu, 28 Apr 2016 23:39:25 +0000 Subject: [New-bugs-announce] [issue26879] new message Message-ID: <00004ebffd0d$54a76e66$dd6b35d1$@linuxmail.org>

New submission from Rogi: Hello! You have a new message, please read rogi at linuxmail.org

---------- messages: 264455 nosy: Rogi priority: normal severity: normal status: open title: new message _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Fri Apr 29 02:29:20 2016 From: report at bugs.python.org (Serhiy Storchaka) Date: Fri, 29 Apr 2016 06:29:20 +0000 Subject: [New-bugs-announce] [issue26880] Remove redundant checks from set.__init__ Message-ID: <1461911360.58.0.917817733289.issue26880@psf.upfronthosting.co.za>

New submission from Serhiy Storchaka: set.__init__ has checks PyAnySet_Check(self) and PySet_Check(self). They are redundant since set.__init__ can't be called for a non-set.

>>> set.__init__(frozenset(), ())
Traceback (most recent call last):
  File "", line 1, in 
TypeError: descriptor '__init__' requires a 'set' object but received a 'frozenset'

Am I missing something?

---------- assignee: rhettinger components: Interpreter Core files: set_init.patch keywords: patch messages: 264467 nosy: rhettinger, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Remove redundant checks from set.__init__ type: performance versions: Python 3.6 Added file: http://bugs.python.org/file42644/set_init.patch _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Fri Apr 29 02:50:49 2016 From: report at bugs.python.org (STINNER Victor) Date: Fri, 29 Apr 2016 06:50:49 +0000 Subject: [New-bugs-announce] [issue26881] modulefinder should reuse the dis module Message-ID: <1461912649.38.0.754834410981.issue26881@psf.upfronthosting.co.za>

New submission from STINNER Victor: The scan_opcodes_25() method of the modulefinder module implements a disassembler of Python bytecode. The implementation is incomplete; it doesn't support EXTENDED_ARG. I suggest dropping the disassembler and reusing the dis module. See also issue #26647, "Wordcode", which changes the bytecode.
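To illustrate the direction (this is only a sketch, not the shape of any actual patch, and the helper name is made up), a dis-based scan of a code object could look roughly like this; dis.get_instructions() already folds EXTENDED_ARG into the decoded arguments:

import dis

def scan_imports(code):
    # Rough sketch: yield (module_name, fromlist) for each IMPORT_NAME.
    # The compiler emits LOAD_CONST <level>, LOAD_CONST <fromlist>,
    # IMPORT_NAME <name>, so the fromlist is the previous instruction's argval.
    instructions = list(dis.get_instructions(code))
    for index, instruction in enumerate(instructions):
        if instruction.opname == 'IMPORT_NAME':
            yield instruction.argval, instructions[index - 1].argval

code = compile("import os.path\nfrom sys import argv, path", "<example>", "exec")
print(list(scan_imports(code)))
# [('os.path', None), ('sys', ('argv', 'path'))]

The real modulefinder logic tracks much more state (import levels, the names stored afterwards, and so on), but reusing dis would remove the hand-rolled opcode scanning.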
---------- messages: 264468 nosy: Demur Rumed, haypo, serhiy.storchaka priority: normal severity: normal status: open title: modulefinder should reuse the dis module versions: Python 3.6 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Fri Apr 29 04:12:01 2016 From: report at bugs.python.org (=?utf-8?b?0JDQu9C10LrRgdCw0L3QtNGAINCS0LjQvdC+0LPRgNCw0LTQvtCy?=) Date: Fri, 29 Apr 2016 08:12:01 +0000 Subject: [New-bugs-announce] [issue26882] The Python process stops responding immediately after starting Message-ID: <1461917521.83.0.656374171868.issue26882@psf.upfronthosting.co.za>

New submission from Александр Виноградов: I start a Python x86 subprocess in a Windows 7 virtual machine from another console application with the command line:

c:\python35\python.exe -c print('hello')

Immediately after startup the process stops responding and hangs forever. If it is run with the -version parameter, Python prints the version information to the console normally.

---------- components: Interpreter Core files: python.dmp messages: 264475 nosy: Александр Виноградов priority: normal severity: normal status: open title: The Python process stops responding immediately after starting type: behavior versions: Python 3.5 Added file: http://bugs.python.org/file42648/python.dmp _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Fri Apr 29 05:54:24 2016 From: report at bugs.python.org (Stefan Forstenlechner) Date: Fri, 29 Apr 2016 09:54:24 +0000 Subject: [New-bugs-announce] [issue26883] input() call blocks multiprocessing Message-ID: <1461923664.77.0.748212868392.issue26883@psf.upfronthosting.co.za>

New submission from Stefan Forstenlechner: If input() is called right away after applying a single job to multiprocessing.Pool or submitting one to concurrent.futures.ProcessPoolExecutor, then the processes are not started. If multiple jobs are submitted, everything works fine. This only seems to be a problem on Windows (probably only 10) and Python version 3.x. See my stackoverflow question: http://stackoverflow.com/questions/36919678/python-multiprocessing-pool-does-not-start-right-away

---------- components: Windows messages: 264480 nosy: Stefan Forstenlechner, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: input() call blocks multiprocessing type: behavior versions: Python 3.4, Python 3.5 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Fri Apr 29 06:32:10 2016 From: report at bugs.python.org (Xavier de Gaye) Date: Fri, 29 Apr 2016 10:32:10 +0000 Subject: [New-bugs-announce] [issue26884] cross-compilation of extension module links to the wrong python library Message-ID: <1461925930.98.0.579147295527.issue26884@psf.upfronthosting.co.za>

New submission from Xavier de Gaye: configure for the cross compilation is run with '--enable-shared --with-pydebug'. The cross-compilation fails attempting to link the extension module objects with a non-existent libpython3.6m instead of libpython3.6dm, when the native python that is used to run setup.py had not been configured with --with-pydebug. The attached patch fixes this problem.
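As an illustration of where the mismatch can come from (the printed values are only examples, not taken from the report): the library name used at link time reflects the ABI flags of the interpreter that runs setup.py, and a native non-debug build reports different flags than the debug target:

import sys
import sysconfig

# On a --with-pydebug build the ABI flags include 'd', giving a library
# name such as libpython3.6dm; a plain build reports 'm' and libpython3.6m.
print(sys.abiflags)
print(sysconfig.get_config_var('LDLIBRARY'))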
---------- components: Cross-Build files: build.patch keywords: patch messages: 264484 nosy: Alex.Willmer, xdegaye priority: normal severity: normal status: open title: cross-compilation of extension module links to the wrong python library type: compile error versions: Python 3.6 Added file: http://bugs.python.org/file42650/build.patch _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Fri Apr 29 11:33:03 2016 From: report at bugs.python.org (Serhiy Storchaka) Date: Fri, 29 Apr 2016 15:33:03 +0000 Subject: [New-bugs-announce] [issue26885] Add parsing support for more types in xmlrpc Message-ID: <1461943983.79.0.576979457392.issue26885@psf.upfronthosting.co.za>

New submission from Serhiy Storchaka: The Apache XML-RPC server supports additional data types (http://ws.apache.org/xmlrpc/types.html). The proposed patch adds support for parsing some of these types: "ex:nil", "ex:i1", "ex:i2", "ex:i8", "ex:biginteger", "ex:float", "ex:bigdecimal". "nil" and "i8" without a prefix were already supported, but the support for "i8" was not documented. Support for "ex:dateTime" can be added after resolving issue15873.

---------- components: Library (Lib) files: xmlrpc_extensions.patch keywords: patch messages: 264504 nosy: loewis, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Add parsing support for more types in xmlrpc type: enhancement versions: Python 3.6 Added file: http://bugs.python.org/file42652/xmlrpc_extensions.patch _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Fri Apr 29 12:29:35 2016 From: report at bugs.python.org (Peter L) Date: Fri, 29 Apr 2016 16:29:35 +0000 Subject: [New-bugs-announce] [issue26886] Cross-compiling python:3.5.x fails with "Parser/pgen: Parser/pgen: cannot execute binary file" Message-ID: <1461947375.28.0.628311795437.issue26886@psf.upfronthosting.co.za>

New submission from Peter L: Cross-compiling python-3.5.x fails with "Parser/pgen: Parser/pgen: cannot execute binary file" (CBUILD="x86_64-pc-linux-gnu" and CHOST="armv7a-hardfloat-linux-gnueabi"). python-3.5.x requires "pgen" and "_freeze_importlib" to be compiled for the build machine and executed at build time; otherwise, it fails with "Parser/pgen: Parser/pgen: cannot execute binary file".

---------- components: Cross-Build messages: 264508 nosy: Alex.Willmer, Peter L2 priority: normal severity: normal status: open title: Cross-compiling python:3.5.x fails with "Parser/pgen: Parser/pgen: cannot execute binary file" type: compile error versions: Python 3.5 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Fri Apr 29 16:56:45 2016 From: report at bugs.python.org (Ron Barak) Date: Fri, 29 Apr 2016 20:56:45 +0000 Subject: [New-bugs-announce] [issue26887] Erratum in https://docs.python.org/2.6/library/multiprocessing.html Message-ID: <1461963405.27.0.843692247406.issue26887@psf.upfronthosting.co.za>

New submission from Ron Barak: Erratum in https://docs.python.org/2.6/library/multiprocessing.html (the word "make" is duplicated):

The chunksize argument is the same as the one used by the map() method. For very long iterables using a large value for chunksize can make >>>make<<< the job complete much faster than using the default value of 1.
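The quoted passage describes the optional chunksize parameter of Pool.map() and the related methods; a minimal usage sketch (the pool size and numbers here are arbitrary) might look like this:

from multiprocessing import Pool

def square(x):
    return x * x

if __name__ == '__main__':
    pool = Pool(processes=4)
    # For a long iterable, a chunksize well above the default of 1 reduces
    # the per-task scheduling overhead between the processes.
    results = pool.map(square, range(1000000), chunksize=10000)
    pool.close()
    pool.join()
    print(results[:5])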
---------- messages: 264520 nosy: ronbarak priority: normal severity: normal status: open title: Erratum in https://docs.python.org/2.6/library/multiprocessing.html versions: Python 2.7 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Fri Apr 29 17:14:58 2016 From: report at bugs.python.org (Aleksander Gajewski) Date: Fri, 29 Apr 2016 21:14:58 +0000 Subject: [New-bugs-announce] [issue26888] Multiple memory leaks after raw Py_Initialize and Py_Finalize. Message-ID: <1461964498.64.0.249239175864.issue26888@psf.upfronthosting.co.za>

New submission from Aleksander Gajewski: There are a lot of memory leaks detected by AddressSanitizer (used with gcc-6.1). The sample program with its CMakeLists and output can be found in the attachment. The exact list of memory leaks is in log_3_python_test.txt. I am using Python 3.5.1 compiled from sources (placed in /opt/python).

---------- components: Library (Lib) files: python_leak.tar messages: 264522 nosy: Aleksander Gajewski priority: normal severity: normal status: open title: Multiple memory leaks after raw Py_Initialize and Py_Finalize. type: resource usage versions: Python 3.5 Added file: http://bugs.python.org/file42655/python_leak.tar _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Fri Apr 29 17:39:15 2016 From: report at bugs.python.org (Serhiy Storchaka) Date: Fri, 29 Apr 2016 21:39:15 +0000 Subject: [New-bugs-announce] [issue26889] Improve Doc/library/xmlrpc.client.rst Message-ID: <1461965955.82.0.756722302068.issue26889@psf.upfronthosting.co.za>

New submission from Serhiy Storchaka: The proposed patch makes minor improvements to Doc/library/xmlrpc.client.rst.

---------- components: Extension Modules files: docs_xmlrpc_client.patch keywords: patch messages: 264524 nosy: effbot, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Improve Doc/library/xmlrpc.client.rst type: enhancement versions: Python 3.5, Python 3.6 Added file: http://bugs.python.org/file42656/docs_xmlrpc_client.patch _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Fri Apr 29 23:33:07 2016 From: report at bugs.python.org (Sebastien Bourdeauducq) Date: Sat, 30 Apr 2016 03:33:07 +0000 Subject: [New-bugs-announce] [issue26890] inspect.getsource gets source copy on disk even when module has not been reloaded Message-ID: <1461987187.54.0.900828586529.issue26890@psf.upfronthosting.co.za>

New submission from Sebastien Bourdeauducq: The fix for https://bugs.python.org/issue1218234 is a bit overzealous. If the module has not been reloaded, calling a function, for example, will execute code that is older than what inspect.getsource returns.

---------- components: Library (Lib) messages: 264538 nosy: sebastien.bourdeauducq priority: normal severity: normal status: open title: inspect.getsource gets source copy on disk even when module has not been reloaded versions: Python 3.5 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Sat Apr 30 00:36:37 2016 From: report at bugs.python.org (Larry Hastings) Date: Sat, 30 Apr 2016 04:36:37 +0000 Subject: [New-bugs-announce] [issue26891] CPython doesn't work when you disable refcounting Message-ID: <1461990997.54.0.540088358275.issue26891@psf.upfronthosting.co.za>

New submission from Larry Hastings: So here's a strange one.
I want to do some mysterious experiments with CPython. So I disabled refcount changes in CPython: I changed Py_INCREF and Py_DECREF so they expand to nothing. I had to change some other macros to match (SETREF, XSETREF, and the Py_RETURN_* ones) to fix some compiler errors and warnings. Also, to prevent the str object from making in-place edits, I changed _Py_NewReference so that the initial reference count for all objects is 2.

CPython builds, then gets to the "generate-posix-vars" step and fails with this output:

./python -E -S -m sysconfig --generate-posix-vars ;\
if test $? -ne 0 ; then \
        echo "generate-posix-vars failed" ; \
        rm -f ./pybuilddir.txt ; \
        exit 1 ; \
fi
Fatal Python error: Py_Initialize: Unable to get the locale encoding
Traceback (most recent call last):
  File "", line 1078, in _path_importer_cache
KeyError: '/usr/local/lib/python36.zip'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "", line 979, in _find_and_load
  File "", line 964, in _find_and_load_unlocked
  File "", line 903, in _find_spec
  File "", line 1137, in find_spec
  File "", line 1108, in _get_spec
  File "", line 1080, in _path_importer_cache
  File "", line 1056, in _path_hooks
  File "", line 1302, in path_hook_for_FileFinder
  File "", line 96, in _path_isdir
  File "", line 81, in _path_is_mode_type
  File "", line 75, in _path_stat
AttributeError: module 'posix' has no attribute 'stat'
Aborted (core dumped)
generate-posix-vars failed
Makefile:598: recipe for target 'pybuilddir.txt' failed
make: *** [pybuilddir.txt] Error 1

I'm stumped. Why should CPython be dependent on reference counts actually changing? I figured I'd just leak memory like crazy, not change behavior.

Attached is my patch against current trunk (1ceb91974dc4) in case you want to try it yourself. Testing was done on Ubuntu 15.10 64-bit, gcc 5.2.1.
---------- components: Interpreter Core files: larry.turn.off.refcounts.1.diff.txt messages: 264541 nosy: brett.cannon, larry priority: low severity: normal stage: needs patch status: open title: CPython doesn't work when you disable refcounting type: behavior Added file: http://bugs.python.org/file42660/larry.turn.off.refcounts.1.diff.txt _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Sat Apr 30 06:04:43 2016 From: report at bugs.python.org (Chi Hsuan Yen) Date: Sat, 30 Apr 2016 10:04:43 +0000 Subject: [New-bugs-announce] [issue26892] debuglevel not honored in urllib Message-ID: <1462010683.91.0.12780176416.issue26892@psf.upfronthosting.co.za>

New submission from Chi Hsuan Yen: The following test program:

import sys

try:
    import urllib.request as urllib_request
except ImportError:
    import urllib2 as urllib_request

print(sys.version)
handler = urllib_request.HTTPSHandler(debuglevel=1)
opener = urllib_request.build_opener(handler)
print(opener.open('https://httpbin.org/user-agent').read().decode('utf-8'))

works as expected in Python 2:

$ python2 test_urllib_debuglevel.py
2.7.11 (default, Mar 31 2016, 06:18:34) [GCC 5.3.0]
send: 'GET /user-agent HTTP/1.1\r\nAccept-Encoding: identity\r\nHost: httpbin.org\r\nConnection: close\r\nUser-Agent: Python-urllib/2.7\r\n\r\n'
reply: 'HTTP/1.1 200 OK\r\n'
header: Server: nginx
header: Date: Sat, 30 Apr 2016 10:02:32 GMT
header: Content-Type: application/json
header: Content-Length: 40
header: Connection: close
header: Access-Control-Allow-Origin: *
header: Access-Control-Allow-Credentials: true
{
  "user-agent": "Python-urllib/2.7"
}

But the verbose output is unavailable in Python 3.x:

$ ./python test_urllib_debuglevel.py
3.6.0a0 (default:1ceb91974dc4, Apr 30 2016, 17:44:57) [GCC 5.3.0]
{
  "user-agent": "Python-urllib/3.6"
}

Patch attached.

---------- components: Library (Lib) files: urllib_debuglevel.patch keywords: patch messages: 264547 nosy: Chi Hsuan Yen priority: normal severity: normal status: open title: debuglevel not honored in urllib type: behavior versions: Python 3.6 Added file: http://bugs.python.org/file42662/urllib_debuglevel.patch _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Sat Apr 30 08:35:26 2016 From: report at bugs.python.org (Julien Enche) Date: Sat, 30 Apr 2016 12:35:26 +0000 Subject: [New-bugs-announce] [issue26893] ValueError exception raised when using IntEnum with an attribute called "name" and @unique decorator Message-ID: <1462019726.06.0.0957572026567.issue26893@psf.upfronthosting.co.za>

New submission from Julien Enche: The linked file fails with the following error:

ValueError: duplicate values found in : id -> User.name, name -> User.name

This exception was not raised with Python 3.4.
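The attached enumtest.py is not included in the message, so the following is only a guess at a minimal definition consistent with the quoted error (the class name comes from the error text; the member values are assumptions):

from enum import IntEnum, unique

# Hypothetical reconstruction: an IntEnum with a member literally called
# "name"; on Python 3.5 the @unique check reportedly trips over it.
@unique
class User(IntEnum):
    id = 1
    name = 2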
---------- files: enumtest.py messages: 264554 nosy: Julien Enche priority: normal severity: normal status: open title: ValueError exception raised when using IntEnum with an attribute called "name" and @unique decorator type: behavior versions: Python 3.5 Added file: http://bugs.python.org/file42664/enumtest.py _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Sat Apr 30 10:45:06 2016 From: report at bugs.python.org (Memeplex) Date: Sat, 30 Apr 2016 14:45:06 +0000 Subject: [New-bugs-announce] [issue26894] Readline not aborting line edition on sigint Message-ID: <1462027506.98.0.545203031593.issue26894@psf.upfronthosting.co.za>

New submission from Memeplex: Maybe this is just a bug in ipython, but as it's closely related to http://bugs.python.org/issue23735 I'm reporting it here too, just in case.

My original report to bug-readline: using readline with ipython 4.1.2 and the TkAgg (or GTK3Agg) backend for matplotlib, I observed the following behavior:

1) Open ipython in pylab mode (ipython --pylab, or use the %pylab magic once inside the repl).
2) Type something.
3) Press Ctrl-C: line editing is not aborted as expected.

And this was Chet's answer: This is probably the result of the same signal handling issues as with SIGWINCH that we discussed a little more than a year ago. Readline catches the signal, sets a flag, and, when it's safe, resends it to the calling application. It expects that if the calling application catches SIGINT, it will take care of cleaning up the readline state. Sometimes applications don't want to kill the current line on SIGINT.

Notice it doesn't happen with the Qt5Agg backend.

---------- components: Extension Modules messages: 264559 nosy: memeplex priority: normal severity: normal status: open title: Readline not aborting line edition on sigint type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Sat Apr 30 11:02:11 2016 From: report at bugs.python.org (Simmo Saan) Date: Sat, 30 Apr 2016 15:02:11 +0000 Subject: [New-bugs-announce] [issue26895] regex matching on bytes considers zero byte as end Message-ID: <1462028531.93.0.527876280571.issue26895@psf.upfronthosting.co.za>

New submission from Simmo Saan: Regex functions on bytes consider a zero byte as the end and stop matching at that point. This is completely nonsensical since Python has no problems working with zero bytes otherwise. For example:

Matches as expected: re.match(b'a', b'abc')
Does not match unexpectedly: re.match(b'a', b'\x00abc')

---------- components: Regular Expressions messages: 264561 nosy: Simmo Saan, ezio.melotti, mrabarnett priority: normal severity: normal status: open title: regex matching on bytes considers zero byte as end type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Sat Apr 30 16:01:36 2016 From: report at bugs.python.org (Oren Milman) Date: Sat, 30 Apr 2016 20:01:36 +0000 Subject: [New-bugs-announce] [issue26896] mix-up with the terms 'importer', 'finder', 'loader' in the import system and related code Message-ID: <1462046496.61.0.778323150916.issue26896@psf.upfronthosting.co.za>

New submission from Oren Milman: The proposed changes:
1. It seems there is some mix-up with the terms 'importer' and 'finder' (and rarely also 'loader') in the import system and in related code (I guess most of it is just relics from the time before PEP 302). The rationale is simply https://docs.python.org/3/glossary.html#term-importer, which means (as I understand it) that the only place where 'importer' is appropriate is where the described object is guaranteed to be both a finder and a loader object (i.e. currently only any of the following: BuiltinImporter, FrozenImporter, ZipImporter, mock_modules, mock_spec, TestingImporter).

Note: at first I pondered also changing local variable names and even the name of a static C function, so I posted a question to the core-mentorship mailing list, and ultimately accepted (of course) the answer - https://mail.python.org/mailman/private/core-mentorship/2016-April/003541.html - fix only docs.

These proposed changes are in the following files:
- Python/import.c (also changed the line saying that get_path_importer returning None tells the caller it should fall back to the built-in import mechanism, as it is no longer true, according to https://docs.python.org/3/reference/import.html#path-entry-finders. As I understand it, the latter is indeed the right one)
- Doc/c-api/import.rst (also changed the parallel doc of the aforementioned comment in Python/import.c)
- Lib/pkgutil.py
- Doc/library/pkgutil.rst
- Lib/runpy.py (also changed the function comment of _get_module_details, which specified wrong return values)
- Lib/test/test_pkgutil.py

2. While scanning every CPython file that contains the string 'importer', Anaconda (a Sublime package for Python) found two local variable assignments which were never used. I commented both out (and added a comment stating why). I would be happy to change those two tiny fixes if needed.

These proposed changes are in the following files:
- Lib/test/test_importlib/import_/test_meta_path.py
- Lib/test/test_importlib/util.py

Diff: the patch diff is attached.

Tests: I built the changed *.rst files, and they looked fine. I played a little with the interpreter, and everything worked as usual. In addition, I ran 'python -m test' (on my 64-bit Windows 10) before and after applying the patch, and got quite the same output. The outputs of both runs are attached.

---------- components: Library (Lib) files: issue.diff keywords: patch messages: 264576 nosy: Oren Milman priority: normal severity: normal status: open title: mix-up with the terms 'importer', 'finder', 'loader' in the import system and related code versions: Python 3.6 Added file: http://bugs.python.org/file42666/issue.diff _______________________________________ Python tracker _______________________________________
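As a small illustration of the distinction the report relies on (the module name queried here is chosen arbitrarily): BuiltinImporter implements both the finder and the loader halves of the protocol, which is what makes 'importer' the right word for it, while a meta path finder such as PathFinder only finds and leaves loading to the loader carried by the returned spec:

from importlib.machinery import BuiltinImporter, PathFinder

# BuiltinImporter is both a finder and a loader, hence "importer".
print(hasattr(BuiltinImporter, 'find_spec'), hasattr(BuiltinImporter, 'exec_module'))

# PathFinder only finds; the loader travels on the spec it returns.
spec = PathFinder.find_spec('email')
print(type(spec).__name__, type(spec.loader).__name__)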