From report at bugs.python.org Mon Sep 1 02:44:46 2014 From: report at bugs.python.org (Tim Chase) Date: Mon, 01 Sep 2014 00:44:46 +0000 Subject: [New-bugs-announce] [issue22319] mailbox.MH chokes on directories without .mh_sequences Message-ID: <1409532286.19.0.0096423454666.issue22319@psf.upfronthosting.co.za> New submission from Tim Chase: If a mailbox.MH() object is created by pointing at a path that exists but doesn't contain a ".mh_sequences" file, it raises an exception upon iteration over .{iter,}items() rather than gracefully assuming that the file is empty. I encountered this by pointing it at a Claws Mail IMAP-cache folder (which claims to store its messages in MH format, but doesn't place a .mh_sequences file in those folders) only to have it raise an exception. To replicate: $ mkdir empty $ python >>> import mailbox >>> for msg in mailbox.MH('empty').values(): pass I suspect this could simply wrap the "f = open(os.path.join(self._path, '.mh_sequences'), 'r')" and following lines in a check to ignore the file if it doesn't exist (returning the empty "results"). See http://www.claws-mail.org/faq/index.php/General_Information#How_does_Claws_Mail_store_mails.3F ---------- components: Library (Lib) messages: 226197 nosy: gumnos priority: normal severity: normal status: open title: mailbox.MH chokes on directories without .mh_sequences type: behavior versions: Python 2.7, Python 3.1, Python 3.2 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 1 03:47:13 2014 From: report at bugs.python.org (Hideaki Muranami) Date: Mon, 01 Sep 2014 01:47:13 +0000 Subject: [New-bugs-announce] [issue22320] Invalid link in General Python FAQ Message-ID: <1409536033.16.0.941096686527.issue22320@psf.upfronthosting.co.za> New submission from Hideaki Muranami: The "Developer FAQ" link on https://docs.python.org/3.4/faq/general.html#how-do-i-obtain-a-copy-of-the-python-source is invalid.
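The workaround suggested in issue 22319 above could be sketched roughly as follows. This is a hypothetical helper, not the actual mailbox.MH.get_sequences() code, and the parsing of the sequences file is simplified; on Python 2 the except clause would be IOError with an errno.ENOENT check rather than FileNotFoundError:

```python
import os

def read_mh_sequences(path):
    """Return the sequences of an MH folder as a dict, treating a
    missing .mh_sequences file as empty (sketch of the proposed fix)."""
    results = {}
    try:
        f = open(os.path.join(path, '.mh_sequences'), 'r')
    except FileNotFoundError:
        # No .mh_sequences (e.g. a Claws Mail IMAP cache folder):
        # behave as if the file were empty instead of raising.
        return results
    with f:
        for line in f:
            name, _, spec = line.partition(':')
            if spec:
                results[name.strip()] = spec.split()
    return results
```

Calling this on a directory with no .mh_sequences file simply yields an empty dict, which is the graceful behaviour the report asks for.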
---------- assignee: docs at python components: Documentation messages: 226199 nosy: docs at python, mnamihdk priority: normal severity: normal status: open title: Invalid link in General Python FAQ versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 1 12:03:22 2014 From: report at bugs.python.org (Dima Tisnek) Date: Mon, 01 Sep 2014 10:03:22 +0000 Subject: [New-bugs-announce] [issue22321] odd result for datetime.time.strftime("%s") Message-ID: <1409565802.22.0.948210004323.issue22321@psf.upfronthosting.co.za> New submission from Dima Tisnek: $ python2 -c 'import datetime; print datetime.time(10, 44, 11).strftime("%s")' -2208955189 $ python3 -c 'import datetime; print (datetime.time(10, 44, 11).strftime("%s"))' -2208955189 So apparently datetime.time(...).strftime("%s") (semantically "seconds since the Unix epoch") assumes Jan 1, 1900 for the missing date part. However, the datetime module doesn't allow subtracting time objects, i.e. no assumption about the date is made there, even though assuming the "same date" could be reasonable. ---------- components: Extension Modules messages: 226224 nosy: Dima.Tisnek priority: normal severity: normal status: open title: odd result for datetime.time.strftime("%s") type: behavior versions: Python 2.7, Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 1 17:11:58 2014 From: report at bugs.python.org (Peter Wu) Date: Mon, 01 Sep 2014 15:11:58 +0000 Subject: [New-bugs-announce] [issue22322] Zip files created by `git archive` result in a SyntaxError (due to comment?) Message-ID: <1409584318.1.0.656275318382.issue22322@psf.upfronthosting.co.za> New submission from Peter Wu: Files created by `git archive` are not understood by the Python interpreter. This could be caused by the additional comment (for the commit hash) in the file.
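Regarding issue 22321 above: strftime on a bare time object fills in 1900-01-01 for the missing date fields, so "%s" (a platform extension meaning seconds since the Unix epoch) is computed relative to that date. A quick sanity check of the magnitude; the remaining few thousand seconds of difference from the reported -2208955189 would come from the local UTC offset, since "%s" is evaluated in local time:

```python
from datetime import datetime

# strftime on a bare time behaves as if the date were 1900-01-01;
# the offset of that date from the Unix epoch explains the large
# negative value in the report
delta = datetime(1900, 1, 1, 10, 44, 11) - datetime(1970, 1, 1)
assert delta.total_seconds() == -2208950149.0
```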
echo 'print(1)' > __main__.py git init && git add __main__.py && git commit -m init git archive --format=zip HEAD > y.zip python y.zip Packing it with `zip x.zip __main__.py && python x.zip` works. ---------- components: Extension Modules messages: 226230 nosy: lekensteyn priority: normal severity: normal status: open title: Zip files created by `git archive` result in a SyntaxError (due to comment?) type: behavior versions: Python 2.7, Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 1 23:32:14 2014 From: report at bugs.python.org (STINNER Victor) Date: Mon, 01 Sep 2014 21:32:14 +0000 Subject: [New-bugs-announce] [issue22323] Rewrite PyUnicode_AsWideChar() and PyUnicode_AsWideCharString() Message-ID: <1409607134.14.0.210351686888.issue22323@psf.upfronthosting.co.za> New submission from STINNER Victor: I would like to deprecate PyUnicode_AsUnicode(), see the issue #22271 for the rationale (hint: memory footprint). The first step is to rewrite PyUnicode_AsWideChar() and PyUnicode_AsWideCharString() to not call PyUnicode_AsUnicode() anymore. Attached patch implements this. The code is based on PyUnicode_AsUnicode(), but it's more tricky because PyUnicode_AsWideChar() can truncate the string, and PyUnicode_AsUnicode() does not copy characters if kind == sizeof(wchar_t); PyASCIIObject.wstr "just" points to the data. I hate PyUnicode_AsWideChar(), but we must keep it for backward compatibility :-) It would be possible to write an optimized PyUnicode_AsWideCharString() which computes the length, allocates memory and writes the wide characters, but I don't want to have 3 functions converting a Python string to a wide character string. There are already PyUnicode_AsUnicodeAndSize() and unicode_aswidechar() (+ unicode_aswidechar_len()).
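Back to issue 22322: the zipfile module itself round-trips an archive comment without trouble, which supports the hypothesis that the failure is specific to how the interpreter locates the end-of-central-directory record when executing a zip, not to zip comments in general. A minimal sketch; the comment bytes below merely stand in for the commit hash that `git archive` appends:

```python
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as zf:
    zf.writestr('__main__.py', 'print(1)')
    # git archive stores the commit hash as the archive comment;
    # zipfile writes it out when the archive is closed
    zf.comment = b'0123456789abcdef0123456789abcdef01234567'

with zipfile.ZipFile(buf) as zf:
    assert zf.comment == b'0123456789abcdef0123456789abcdef01234567'
    assert zf.read('__main__.py') == b'print(1)'
```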
---------- messages: 226244 nosy: haypo, loewis priority: normal severity: normal status: open title: Rewrite PyUnicode_AsWideChar() and PyUnicode_AsWideCharString() type: enhancement versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 2 00:39:50 2014 From: report at bugs.python.org (STINNER Victor) Date: Mon, 01 Sep 2014 22:39:50 +0000 Subject: [New-bugs-announce] [issue22324] Use PyUnicode_AsWideCharString() instead of PyUnicode_AsUnicode() Message-ID: <1409611190.39.0.820072774539.issue22324@psf.upfronthosting.co.za> New submission from STINNER Victor: I would like to deprecate PyUnicode_AsUnicode(), see the issue #22271 for the rationale (hint: memory footprint). To deprecate PyUnicode_AsUnicode(), we should stop using it internally. The attached patch is a work-in-progress patch, untested on Windows (only tested on Linux). It gives an idea of how many files should be modified. TODO: * Modify posixmodule.c: I don't understand how the Argument Clinic generates the call to PyUnicode_AsUnicode() when the parameter type is declared as "unicode". What is the "unicode" type? Where is the code generating the call to PyUnicode_AsUnicode()? * Modify a few other files. 
---------- files: wchar.patch keywords: patch messages: 226247 nosy: haypo, loewis priority: normal severity: normal status: open title: Use PyUnicode_AsWideCharString() instead of PyUnicode_AsUnicode() type: enhancement versions: Python 3.5 Added file: http://bugs.python.org/file36522/wchar.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 2 12:03:47 2014 From: report at bugs.python.org (Constantino Antunes) Date: Tue, 02 Sep 2014 10:03:47 +0000 Subject: [New-bugs-announce] [issue22325] wrong subtraction result Message-ID: <1409652227.24.0.50311509349.issue22325@psf.upfronthosting.co.za> New submission from Constantino Antunes: I was using python as my calculator when I got a result which had to be rounded to be the value I expected. So I made some tests with a simple case. Given that 0.38 - 0.20 = 0.18, then 1000.38 - 1000.20 should be also 0.18. Here is this calculation done on three different systems: Python 2.7.5 (default, Mar 9 2014, 22:15:05) [GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> 0.38 - 0.20 0.18 >>> test = lambda x: (x+0.38) - (x+0.20) >>> test(0) 0.18 >>> test(1000) 0.17999999999994998 >>> test(1000000) 0.18000000005122274 >>> test(1000000000) 0.1799999475479126 >>> test(1000000000000) 0.1800537109375 >>> test(1000000000000000) 0.125 --- Python 2.7.3 (default, Mar 13 2014, 11:03:55) [GCC 4.7.2] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> 0.38 - 0.20 0.18 >>> test = lambda x: (x+0.38) - (x+0.20) >>> test(0) 0.18 >>> test(1000) 0.17999999999994998 >>> test(1000000000000000) 0.125 --- Python 2.4.3 (#1, Oct 23 2012, 22:02:41) [GCC 4.1.2 20080704 (Red Hat 4.1.2-54)] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
>>> 0.38 - 0.20 0.17999999999999999 >>> test = lambda x: (x+0.38) - (x+0.20) >>> test(0) 0.17999999999999999 >>> test(1000) 0.17999999999994998 >>> test(1000000000000000) 0.125 ---------- messages: 226270 nosy: Constantino.Antunes priority: normal severity: normal status: open title: wrong subtraction result versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 2 14:00:06 2014 From: report at bugs.python.org (Frank Thommen) Date: Tue, 02 Sep 2014 12:00:06 +0000 Subject: [New-bugs-announce] [issue22326] tempfile.TemporaryFile fails on NFS v4 filesystems Message-ID: <1409659206.56.0.836891924559.issue22326@psf.upfronthosting.co.za> New submission from Frank Thommen: Hi, tempfile.TemporaryFile fails on NFS v4 filesystems. Assume the following mounts: $ mount [...] psi:/volumes/vol1 on /mnt/nfsv4 type nfs4 (rw,addr=xx.xx.xx.xx) psi:/volumes/vol1 on /mnt/nfsv3 type nfs (rw,addr=xx.xx.xx.xx) [...] $ and the following script "testtmpfile.py": --------------- #! env python import tempfile def _is_writable_dir_unnamed(p): try: t = tempfile.TemporaryFile(dir=p) t.write('1') t.close() except OSError: return False else: return True def _is_writable_dir_named(p): try: t = tempfile.NamedTemporaryFile(dir=p) t.write('1') t.close() except OSError: return False else: return True if not _is_writable_dir_unnamed("."): print "(unnamed) . is not writable" else: print "(unnamed) OK" if not _is_writable_dir_named("."): print "(named) . is not writable" else: print "(named) OK" --------------- Then you'll find the following behaviour: $ pwd /mnt/nfsv4 $ /g/software/bin/python-2.7 /tmp/testtmpfile.py (unnamed) . is not writable (named) OK $ $ pwd /mnt/nfsv3 $ /g/software/bin/python-2.7 /tmp/testtmpfile.py (unnamed) OK (named) OK $ Additionally in the failing case, a - writable - temporary file named "tmp*" is left in the directory. 
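Back to issue 22325 (the subtraction report above): this is not a subtraction bug but ordinary binary floating-point representation error, amplified by cancellation as the magnitude of x grows. The decimal module gives the expected result at any magnitude; a short sketch:

```python
from decimal import Decimal

# 0.38 and 0.20 have no exact binary representation; the rounding
# error of x+0.38 and x+0.20 grows with x, and the subtraction
# exposes it (the effect the report observes)
assert (1000.38 - 1000.20) != 0.18

# decimal arithmetic is exact for these operands at any magnitude
assert Decimal('1000.38') - Decimal('1000.20') == Decimal('0.18')
assert Decimal('1000000000000.38') - Decimal('1000000000000.20') == Decimal('0.18')
```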
Observed on CentOS 5.10 with kernel 2.6.18-371.11.1.el5 and on CentOS 6.5 with kernel 2.6.32-431.23.3.el6.x86_64. The problem appears with Python 2.4, 2.6 and 2.7. Cheers frank ---------- messages: 226271 nosy: drosera priority: normal severity: normal status: open title: tempfile.TemporaryFile fails on NFS v4 filesystems versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 2 22:40:31 2014 From: report at bugs.python.org (Barry A. Warsaw) Date: Tue, 02 Sep 2014 20:40:31 +0000 Subject: [New-bugs-announce] [issue22327] test_gdb failures on Ubuntu 14.10 Message-ID: <1409690431.38.0.272932349651.issue22327@psf.upfronthosting.co.za> New submission from Barry A. Warsaw: Lots of them, just like this: ====================================================================== FAIL: test_NULL_ob_type (test.test_gdb.PrettyPrintTests) Ensure that a PyObject* with NULL ob_type is handled gracefully ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/barry/projects/python/cpython/Lib/test/test_gdb.py", line 449, in test_NULL_ob_type 'set v->ob_type=0') File "/home/barry/projects/python/cpython/Lib/test/test_gdb.py", line 420, in assertSane cmds_after_breakpoint=cmds_after_breakpoint) File "/home/barry/projects/python/cpython/Lib/test/test_gdb.py", line 206, in get_gdb_repr import_site=import_site) File "/home/barry/projects/python/cpython/Lib/test/test_gdb.py", line 184, in get_stack_trace self.assertEqual(unexpected_errlines, []) AssertionError: Lists differ: ["Got object file from memory but can't read symbols: File truncated."] != [] First list contains 1 additional elements. First extra element 0: Got object file from memory but can't read symbols: File truncated. 
- ["Got object file from memory but can't read symbols: File truncated."] + [] ---------- components: Tests messages: 226280 nosy: barry priority: normal severity: normal status: open title: test_gdb failures on Ubuntu 14.10 versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 2 22:55:18 2014 From: report at bugs.python.org (Markus Unterwaditzer) Date: Tue, 02 Sep 2014 20:55:18 +0000 Subject: [New-bugs-announce] [issue22328] ur'foo\d' raises SyntaxError Message-ID: <1409691318.67.0.463109559126.issue22328@psf.upfronthosting.co.za> New submission from Markus Unterwaditzer: The string literal `ur'foo\d'` causes a SyntaxError while the equivalent `r'foo\d'` causes the correct string to be produced. ---------- components: Interpreter Core messages: 226281 nosy: untitaker priority: normal severity: normal status: open title: ur'foo\d' raises SyntaxError type: behavior versions: Python 3.1, Python 3.2, Python 3.3, Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 3 02:26:07 2014 From: report at bugs.python.org (Llelan D.) Date: Wed, 03 Sep 2014 00:26:07 +0000 Subject: [New-bugs-announce] [issue22329] Windows installer can't recover partially installed state Message-ID: <1409703967.84.0.334306520264.issue22329@psf.upfronthosting.co.za> New submission from Llelan D.: Python v3.4.1 x64 on Windows 7 x64. If the python installation directory is deleted, the installer can not remove, change, or repair the installation. When I run the python-3.4.1.amd64.msi installer and choose Remove, it gives me a dialog saying a required file is missing about halfway through. It gives me no clue as to what this file is. If I choose Repair, it gives me a dialog saying "The specified account already exists" about halfway through. Totally cryptic. 
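On issue 22328 above: this is by design in Python 3. PEP 414 restored the u'...' prefix but deliberately not the combined ur'...'; a plain raw string is the equivalent spelling. A quick check, using compile() so the SyntaxError can be observed programmatically:

```python
import re

# r'...' alone gives the same raw string that ur'...' gave on Python 2
assert re.match(r'foo\d', 'foo7')

# compiling the ur'' literal reproduces the SyntaxError
try:
    compile("ur'foo\\d'", '<string>', 'eval')
    raised = False
except SyntaxError:
    raised = True
assert raised
```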
If I choose Change and either select that all features or no features will be installed, it gives me a dialog saying "The specified account already exists" about halfway through. It turns out that the installer is relying on both files in the installation directory and a Windows Installer key. If the required files are missing, the installer refuses to either remove or repair. If the Windows Installer key still exists, the installer refuses to re-install. When this key is removed, an install can then be done. To be safe, a remove should then be done to clear up any problems and then another clean install. Both of these requirements violate good MSI practice. You are *NEVER* to use a Windows Installer key as an indication of the installed state because that list can be, and often is, easily corrupted. The installer should always be able to perform a complete repair and especially remove without requiring *ANY* installed files or registry keys. This installer desperately needs a complete re-write. It should use its own key to indicate whether the application is installed but should not depend on it in case of a partially installed/removed state, should not require any installed file or registry key to fully repair or remove the application, should be able to re-install no matter the state of a previous installation, and should query the user if any information required is missing from the installation or registry. In other words, the normal MSI installer guidelines.
---------- components: Installation messages: 226291 nosy: LlelanD, steve.dower priority: normal severity: normal status: open title: Windows installer can't recover partially installed state type: behavior versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 3 11:04:29 2014 From: report at bugs.python.org (Masahiro Konishi) Date: Wed, 03 Sep 2014 09:04:29 +0000 Subject: [New-bugs-announce] [issue22330] PyOS_mystricmp is broken Message-ID: <1409735069.35.0.121112016094.issue22330@psf.upfronthosting.co.za> New submission from Masahiro Konishi: int x = PyOS_mystricmp("foo", "none"); expected: x < 0 actual: x == 0 while (*s1 && (tolower((unsigned)*s1++) == tolower((unsigned)*s2++))) { ; } return (tolower((unsigned)*s1) - tolower((unsigned)*s2)); The while-loop is finished when *s1 != *s2 (ex. *s1 == 'f', *s2 == 'n'), but s1 and s2 already point to next characters (ex. *s1 == 'o', *s2 == 'o'), so PyOS_mystricmp returns difference between these characters. ---------- components: Interpreter Core files: pystrcmp.c.patch keywords: patch messages: 226303 nosy: kakkoko priority: normal severity: normal status: open title: PyOS_mystricmp is broken type: behavior versions: Python 3.4 Added file: http://bugs.python.org/file36529/pystrcmp.c.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 3 23:31:48 2014 From: report at bugs.python.org (STINNER Victor) Date: Wed, 03 Sep 2014 21:31:48 +0000 Subject: [New-bugs-announce] [issue22331] test_io.test_interrupted_write_text() hangs on the buildbot FreeBSD 7.2 Message-ID: <1409779908.87.0.4046116696.issue22331@psf.upfronthosting.co.za> New submission from STINNER Victor: http://buildbot.python.org/all/builders/x86%20FreeBSD%207.2%203.4/builds/332/steps/test/logs/stdio Log: --- [157/389/1] test_io Timeout (1:00:00)! 
Thread 0x2a4f0790 (most recent call first): File "/usr/home/db3l/buildarea/3.4.bolen-freebsd7/build/Lib/test/test_io.py", line 3257 in _read File "/usr/home/db3l/buildarea/3.4.bolen-freebsd7/build/Lib/threading.py", line 869 in run File "/usr/home/db3l/buildarea/3.4.bolen-freebsd7/build/Lib/threading.py", line 921 in _bootstrap_inner File "/usr/home/db3l/buildarea/3.4.bolen-freebsd7/build/Lib/threading.py", line 889 in _bootstrap Thread 0x28401040 (most recent call first): File "/usr/home/db3l/buildarea/3.4.bolen-freebsd7/build/Lib/threading.py", line 1077 in _wait_for_tstate_lock File "/usr/home/db3l/buildarea/3.4.bolen-freebsd7/build/Lib/threading.py", line 1061 in join File "/usr/home/db3l/buildarea/3.4.bolen-freebsd7/build/Lib/test/test_io.py", line 3278 in check_interrupted_write File "/usr/home/db3l/buildarea/3.4.bolen-freebsd7/build/Lib/test/test_io.py", line 3302 in test_interrupted_write_text File "/usr/home/db3l/buildarea/3.4.bolen-freebsd7/build/Lib/unittest/case.py", line 577 in run File "/usr/home/db3l/buildarea/3.4.bolen-freebsd7/build/Lib/unittest/case.py", line 625 in __call__ File "/usr/home/db3l/buildarea/3.4.bolen-freebsd7/build/Lib/unittest/suite.py", line 125 in run File "/usr/home/db3l/buildarea/3.4.bolen-freebsd7/build/Lib/unittest/suite.py", line 87 in __call__ File "/usr/home/db3l/buildarea/3.4.bolen-freebsd7/build/Lib/unittest/suite.py", line 125 in run File "/usr/home/db3l/buildarea/3.4.bolen-freebsd7/build/Lib/unittest/suite.py", line 87 in __call__ File "/usr/home/db3l/buildarea/3.4.bolen-freebsd7/build/Lib/unittest/suite.py", line 125 in run File "/usr/home/db3l/buildarea/3.4.bolen-freebsd7/build/Lib/unittest/suite.py", line 87 in __call__ File "/usr/home/db3l/buildarea/3.4.bolen-freebsd7/build/Lib/unittest/runner.py", line 168 in run File "/usr/home/db3l/buildarea/3.4.bolen-freebsd7/build/Lib/test/support/__init__.py", line 1750 in _run_suite File "/usr/home/db3l/buildarea/3.4.bolen-freebsd7/build/Lib/test/support/__init__.py", line 
1784 in run_unittest File "/usr/home/db3l/buildarea/3.4.bolen-freebsd7/build/Lib/test/regrtest.py", line 1279 in test_runner File "/usr/home/db3l/buildarea/3.4.bolen-freebsd7/build/Lib/test/regrtest.py", line 1280 in runtest_inner File "/usr/home/db3l/buildarea/3.4.bolen-freebsd7/build/Lib/test/regrtest.py", line 967 in runtest File "/usr/home/db3l/buildarea/3.4.bolen-freebsd7/build/Lib/test/regrtest.py", line 763 in main File "/usr/home/db3l/buildarea/3.4.bolen-freebsd7/build/Lib/test/regrtest.py", line 1564 in main_in_temp_cwd File "/usr/home/db3l/buildarea/3.4.bolen-freebsd7/build/Lib/test/__main__.py", line 3 in File "/usr/home/db3l/buildarea/3.4.bolen-freebsd7/build/Lib/runpy.py", line 85 in _run_code File "/usr/home/db3l/buildarea/3.4.bolen-freebsd7/build/Lib/runpy.py", line 170 in _run_module_as_main --- See also the issue #11859 (fixed). ---------- components: Tests keywords: buildbot messages: 226326 nosy: haypo priority: normal severity: normal status: open title: test_io.test_interrupted_write_text() hangs on the buildbot FreeBSD 7.2 versions: Python 3.4, Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 3 23:44:43 2014 From: report at bugs.python.org (STINNER Victor) Date: Wed, 03 Sep 2014 21:44:43 +0000 Subject: [New-bugs-announce] [issue22332] test_multiprocessing_main_handling fail on buildbot "x86 FreeBSD 6.4 3.x" Message-ID: <1409780683.92.0.966783842009.issue22332@psf.upfronthosting.co.za> New submission from STINNER Victor: The test requires SemLock which is not supported on FreeBSD 6.4. The whole test_multiprocessing_main_handling should be skipped on this platform. 
http://buildbot.python.org/all/builders/x86%20FreeBSD%206.4%203.x/builds/5010/steps/test/logs/stdio ====================================================================== FAIL: test_zipfile (test.test_multiprocessing_main_handling.SpawnCmdLineTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/home/db3l/buildarea/3.x.bolen-freebsd/build/Lib/test/test_multiprocessing_main_handling.py", line 213, in test_zipfile self._check_script(zip_name) File "/usr/home/db3l/buildarea/3.x.bolen-freebsd/build/Lib/test/test_multiprocessing_main_handling.py", line 153, in _check_script rc, out, err = assert_python_ok(*run_args, __isolated=False) File "/usr/home/db3l/buildarea/3.x.bolen-freebsd/build/Lib/test/script_helper.py", line 69, in assert_python_ok return _assert_python(True, *args, **env_vars) File "/usr/home/db3l/buildarea/3.x.bolen-freebsd/build/Lib/test/script_helper.py", line 55, in _assert_python "stderr follows:\n%s" % (rc, err.decode('ascii', 'ignore'))) AssertionError: Process return code is 1, stderr follows: Traceback (most recent call last): File "/usr/home/db3l/buildarea/3.x.bolen-freebsd/build/Lib/multiprocessing/synchronize.py", line 29, in from _multiprocessing import SemLock, sem_unlink ImportError: cannot import name 'SemLock' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/home/db3l/buildarea/3.x.bolen-freebsd/build/Lib/runpy.py", line 170, in _run_module_as_main "__main__", mod_spec) File "/usr/home/db3l/buildarea/3.x.bolen-freebsd/build/Lib/runpy.py", line 85, in _run_code exec(code, run_globals) File "/tmp/tmpwgjabqgk/test_zip.zip/__main__.py", line 16, in File "/usr/home/db3l/buildarea/3.x.bolen-freebsd/build/Lib/multiprocessing/context.py", line 118, in Pool context=self.get_context()) File "/usr/home/db3l/buildarea/3.x.bolen-freebsd/build/Lib/multiprocessing/pool.py", line 150, in __init__ self._setup_queues() File 
"/usr/home/db3l/buildarea/3.x.bolen-freebsd/build/Lib/multiprocessing/pool.py", line 243, in _setup_queues self._inqueue = self._ctx.SimpleQueue() File "/usr/home/db3l/buildarea/3.x.bolen-freebsd/build/Lib/multiprocessing/context.py", line 111, in SimpleQueue return SimpleQueue(ctx=self.get_context()) File "/usr/home/db3l/buildarea/3.x.bolen-freebsd/build/Lib/multiprocessing/queues.py", line 319, in __init__ self._rlock = ctx.Lock() File "/usr/home/db3l/buildarea/3.x.bolen-freebsd/build/Lib/multiprocessing/context.py", line 65, in Lock from .synchronize import Lock File "/usr/home/db3l/buildarea/3.x.bolen-freebsd/build/Lib/multiprocessing/synchronize.py", line 34, in " function, see issue 3770.") ImportError: This platform lacks a functioning sem_open implementation, therefore, the required synchronization primitives needed will not function, see issue 3770. ---------- components: Tests keywords: buildbot messages: 226329 nosy: haypo priority: normal severity: normal status: open title: test_multiprocessing_main_handling fail on buildbot "x86 FreeBSD 6.4 3.x" versions: Python 3.4, Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 4 09:32:41 2014 From: report at bugs.python.org (STINNER Victor) Date: Thu, 04 Sep 2014 07:32:41 +0000 Subject: [New-bugs-announce] [issue22333] test_threaded_import.test_parallel_meta_path() failed on x86 Windows7 3.x Message-ID: <1409815961.59.0.0325236414674.issue22333@psf.upfronthosting.co.za> New submission from STINNER Victor: ====================================================================== FAIL: test_parallel_meta_path (test.test_threaded_import.ThreadedImportTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\test_threaded_import.py", line 133, in test_parallel_meta_path self.check_parallel_module_init() 
File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\test_threaded_import.py", line 121, in check_parallel_module_init self.assertTrue(done.wait(60)) AssertionError: False is not true ---------- components: Tests keywords: buildbot messages: 226341 nosy: haypo priority: normal severity: normal status: open title: test_threaded_import.test_parallel_meta_path() failed on x86 Windows7 3.x versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 4 09:52:44 2014 From: report at bugs.python.org (STINNER Victor) Date: Thu, 04 Sep 2014 07:52:44 +0000 Subject: [New-bugs-announce] [issue22334] test_tcl.test_split() fails on "x86 FreeBSD 7.2 3.x" buildbot Message-ID: <1409817164.01.0.942615937948.issue22334@psf.upfronthosting.co.za> New submission from STINNER Victor: The test was added by the issue #18101 (changeset 9486c07929a1). On FreeBSD 7.2, Tcl version is 8.6b1 (seen in the test output). 
http://buildbot.python.org/all/builders/x86%20FreeBSD%207.2%203.x/builds/5560/steps/test/logs/stdio ====================================================================== FAIL: test_split (test.test_tcl.TclTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/home/db3l/buildarea/3.x.bolen-freebsd7/build/Lib/test/test_tcl.py", line 563, in test_split self.assertEqual(split(arg), res, msg=arg) AssertionError: Tuples differ: (12, '\u20ac', b'\xe2\x82\xac', (3.4,)) != ('12', '\u20ac', '\xe2\x82\xac', '3.4') First differing element 0: 12 12 - (12, '\u20ac', b'\xe2\x82\xac', (3.4,)) + ('12', '\u20ac', '\xe2\x82\xac', '3.4') : 12 \u20ac \xe2\x82\xac 3.4 ====================================================================== FAIL: test_splitlist (test.test_tcl.TclTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/home/db3l/buildarea/3.x.bolen-freebsd7/build/Lib/test/test_tcl.py", line 513, in test_splitlist self.assertEqual(splitlist(arg), res, msg=arg) AssertionError: Tuples differ: (12, '\u20ac', b'\xe2\x82\xac', (3.4,)) != ('12', '\u20ac', '\xe2\x82\xac', '3.4') First differing element 0: 12 12 - (12, '\u20ac', b'\xe2\x82\xac', (3.4,)) + ('12', '\u20ac', '\xe2\x82\xac', '3.4') : 12 \u20ac \xe2\x82\xac 3.4 ---------- components: Tests, Tkinter keywords: buildbot messages: 226343 nosy: haypo, serhiy.storchaka priority: normal severity: normal status: open title: test_tcl.test_split() fails on "x86 FreeBSD 7.2 3.x" buildbot _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 4 12:38:57 2014 From: report at bugs.python.org (swanson) Date: Thu, 04 Sep 2014 10:38:57 +0000 Subject: [New-bugs-announce] [issue22335] Python 3: Segfault instead of MemoryError when bytearray too big Message-ID: <1409827137.12.0.977974506247.issue22335@psf.upfronthosting.co.za> New 
submission from swanson: On Python 3, but not Python 2, you crash with a Segmentation Fault instead of getting a MemoryError as expected. It seems to only be a problem with bytearray, not with other things like tuple: $ python3 Python 3.4.0 (default, Apr 11 2014, 13:05:18) [GCC 4.8.2] on linux Type "help", "copyright", "credits" or "license" for more information. >>> bytearray(0x7FFFFFFF) Segmentation fault (core dumped) $ compare to: $ python Python 2.7.6 (default, Mar 22 2014, 22:59:38) [GCC 4.8.2] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> bytearray(0x7FFFFFFF) Traceback (most recent call last): File "", line 1, in MemoryError >>> $ python3 Python 3.4.0 (default, Apr 11 2014, 13:05:18) [GCC 4.8.2] on linux Type "help", "copyright", "credits" or "license" for more information. >>> (0,)*0x7FFFFFFF Traceback (most recent call last): File "", line 1, in MemoryError >>> ---------- messages: 226356 nosy: swanson priority: normal severity: normal status: open title: Python 3: Segfault instead of MemoryError when bytearray too big type: crash versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 4 17:46:10 2014 From: report at bugs.python.org (STINNER Victor) Date: Thu, 04 Sep 2014 15:46:10 +0000 Subject: [New-bugs-announce] [issue22336] _tkinter should use Python PyMem_Malloc() instead of Tcl ckalloc() Message-ID: <1409845570.99.0.729089959618.issue22336@psf.upfronthosting.co.za> New submission from STINNER Victor: The PyMem_Malloc(size) function has a well defined behaviour: if size is 0, a pointer different than NULL is returned. It looks like ckalloc(size) returns NULL if the size is NULL: see issue #21951 (bug on AIX). Moreover, memory allocated by ckalloc() is not traced by the new tracemalloc module! Attached patch replaces calls to ckalloc() and ckfree() with PyMem_Malloc() and PyMem_Free(). 
It may fix the issue #21951 on AIX. There is still a call to ckfree() in Tkapp_SplitList(). This memory block is allocated internally by Tcl, not directly by _tkinter.c. ---------- files: pymem.patch keywords: patch messages: 226363 nosy: David.Edelsohn, haypo, serhiy.storchaka priority: normal severity: normal status: open title: _tkinter should use Python PyMem_Malloc() instead of Tcl ckalloc() type: enhancement versions: Python 3.5 Added file: http://bugs.python.org/file36533/pymem.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 4 23:40:20 2014 From: report at bugs.python.org (DenisCrnic) Date: Thu, 04 Sep 2014 21:40:20 +0000 Subject: [New-bugs-announce] [issue22337] Modulus not returning proper remainder Message-ID: <1409866820.23.0.262547020156.issue22337@psf.upfronthosting.co.za> New submission from DenisCrnic: Modulus is returning the wrong remainder when working with negative numbers. For example 34%-3 should return 1, but it returns -2. 34/-3=-11, remainder: 1, proof: -11*(-3)+1=34 <--- Math logic (works in Java, for example) but Python does it like this: 34/-3=-12, remainder: -2, proof: -12*(-3)+(-2)=34 <--- Python logic (works) I know that Python looks for the multiple of the divisor that is closest to a (for a%b), but in this case that's not right: it should seek the closest such integer that is lower than a, in this case 33. If the explanation is not clear enough, reply and I'll take more time to make it clear. Maybe my theory is wrong and Python's semantics work better than Java's.
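On issue 22337: Python's % is defined by floored division (the remainder takes the sign of the divisor), while Java's % uses truncated division (the remainder takes the sign of the dividend); both satisfy the invariant a == q*b + r. math.fmod follows the C/Java convention, so both behaviours are available. A short sketch:

```python
import math

# Floored division: the quotient rounds toward negative infinity,
# so the remainder carries the sign of the divisor
q, r = divmod(34, -3)
assert (q, r) == (-12, -2)
assert 34 == q * -3 + r        # invariant a == q*b + r holds
assert 34 % -3 == -2

# math.fmod uses truncated division, matching C and Java's %
assert math.fmod(34, -3) == 1.0
```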
---------- files: Screenshot_1.png messages: 226381 nosy: DenisCrnic priority: normal severity: normal status: open title: Modulus not returning proper remainder type: behavior versions: Python 3.4 Added file: http://bugs.python.org/file36537/Screenshot_1.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 5 01:27:17 2014 From: report at bugs.python.org (STINNER Victor) Date: Thu, 04 Sep 2014 23:27:17 +0000 Subject: [New-bugs-announce] [issue22338] test_json crash on memory allocation failure Message-ID: <1409873237.86.0.801776520079.issue22338@psf.upfronthosting.co.za> New submission from STINNER Victor: Using pyfailmalloc, I'm able to reproduce the crash seen on a buildbot. Attached patch fixes two bugs in error handlers. http://buildbot.python.org/all/builders/AMD64%20OpenIndiana%203.x/builds/8557/steps/test/logs/stdio [191/390] test_urllib2net Fatal Python error: Segmentation fault Current thread 0x0000000000000001 (most recent call first): File "/export/home/buildbot/64bits/3.x.cea-indiana-amd64/build/Lib/json/encoder.py", line 317 in _iterencode_list File "/export/home/buildbot/64bits/3.x.cea-indiana-amd64/build/Lib/json/encoder.py", line 420 in _iterencode File "/export/home/buildbot/64bits/3.x.cea-indiana-amd64/build/Lib/json/encoder.py", line 430 in _iterencode File "/export/home/buildbot/64bits/3.x.cea-indiana-amd64/build/Lib/json/encoder.py", line 317 in _iterencode_list File "/export/home/buildbot/64bits/3.x.cea-indiana-amd64/build/Lib/json/encoder.py", line 420 in _iterencode File "/export/home/buildbot/64bits/3.x.cea-indiana-amd64/build/Lib/json/encoder.py", line 430 in _iterencode File "/export/home/buildbot/64bits/3.x.cea-indiana-amd64/build/Lib/json/encoder.py", line 317 in _iterencode_list File "/export/home/buildbot/64bits/3.x.cea-indiana-amd64/build/Lib/json/encoder.py", line 420 in _iterencode File 
[... the frames at encoder.py line 317 (_iterencode_list) and lines 420 and 430 (_iterencode) repeat many more times in the flattened buildbot log, interleaved with the "[192/390] test_json" progress marker; the deeply recursive traceback is elided here ...]
_iterencode
  File "/export/home/buildbot/64bits/3.x.cea-indiana-amd64/build/Lib/json/encoder.py", line 430 in _iterencode
  File "/export/home/buildbot/64bits/3.x.cea-indiana-amd64/build/Lib/json/encoder.py", line 317 in _iterencode_list
...
Traceback (most recent call last):
  File "/export/home/buildbot/64bits/3.x.cea-indiana-amd64/build/Lib/runpy.py", line 170, in _run_module_as_main
    "__main__", mod_spec)
  File "/export/home/buildbot/64bits/3.x.cea-indiana-amd64/build/Lib/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/export/home/buildbot/64bits/3.x.cea-indiana-amd64/build/Lib/test/__main__.py", line 3, in <module>
    regrtest.main_in_temp_cwd()
  File "/export/home/buildbot/64bits/3.x.cea-indiana-amd64/build/Lib/test/regrtest.py", line 1564, in main_in_temp_cwd
    main()
  File "/export/home/buildbot/64bits/3.x.cea-indiana-amd64/build/Lib/test/regrtest.py", line 738, in main
    raise Exception("Child error on {}: {}".format(test, result[1]))
Exception: Child error on test_json: Exit code -11
make: *** [buildbottest] Error 1
program finished with exit code 2
---------- files: json.patch keywords: patch messages: 226390 nosy: haypo priority: normal severity: normal status: open title: test_json crash on memory allocation failure Added file: http://bugs.python.org/file36542/json.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 5 02:50:11 2014 From: report at bugs.python.org (=?utf-8?q?Kiss_Gy=C3=B6rgy?=) Date: Fri, 05 Sep 2014 00:50:11 +0000 Subject: [New-bugs-announce] [issue22339] Incorrect behavior when subclassing enum.Enum Message-ID: <1409878211.89.0.422808973529.issue22339@psf.upfronthosting.co.za> New submission from Kiss György: There is a small inconvenience in the ``enum`` module.
When I subclass ``enum.Enum`` and redefine the ``value`` dynamic attribute, the aliasing behavior doesn't work correctly, because ``member.value`` is used in some places instead of ``member._value_``. I attached a patch where I fixed all these places. This causes no harm to the internal workings, but makes subclassing behave correctly. ---------- components: Library (Lib) files: enum.patch keywords: patch messages: 226392 nosy: Walkman priority: normal severity: normal status: open title: Incorrect behavior when subclassing enum.Enum type: enhancement versions: Python 3.4 Added file: http://bugs.python.org/file36543/enum.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 5 12:45:10 2014 From: report at bugs.python.org (STINNER Victor) Date: Fri, 05 Sep 2014 10:45:10 +0000 Subject: [New-bugs-announce] [issue22340] Fix Python 3 warnings in Python 2 tests Message-ID: <1409913910.6.0.350974401324.issue22340@psf.upfronthosting.co.za> New submission from STINNER Victor: Running the Python 2 test suite with "python -3 -Wd" displays a lot of DeprecationWarning warnings. Just one example: /home/haypo/prog/python/2.7/Lib/test/test_ssl.py:2368: DeprecationWarning: urllib.urlopen() has been removed in Python 3.0 in favor of urllib2.urlopen() Attached patch fixes most of them (maybe all).
---------- components: Tests files: fix_py3k_warn.patch keywords: patch messages: 226418 nosy: haypo priority: normal severity: normal status: open title: Fix Python 3 warnings in Python 2 tests versions: Python 2.7 Added file: http://bugs.python.org/file36547/fix_py3k_warn.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 5 13:55:29 2014 From: report at bugs.python.org (Martin Panter) Date: Fri, 05 Sep 2014 11:55:29 +0000 Subject: [New-bugs-announce] [issue22341] Python 3 crc32 documentation clarifications Message-ID: <1409918129.4.0.321897542374.issue22341@psf.upfronthosting.co.za> New submission from Martin Panter: This is regarding the Python 3 documentation for binascii.crc32(). It repeatedly recommends correcting the sign by doing crc32() & 0xFFFFFFFF, but it is not immediately clear why. Only after reading the Python 2 documentation does one realise that the value is always unsigned for Python 3, so you only really need the workaround if you want to support earlier Pythons. I also suggest documenting the initial CRC input value, which is zero. Suggested wording:

binascii.crc32(data[, crc])
    Compute CRC-32, the 32-bit checksum of 'data', starting with the given 'crc'. The default initial value is zero. The algorithm is consistent with the ZIP file checksum. Since the algorithm is designed for use as a checksum algorithm, it is not suitable for use as a general hash algorithm. Use as follows:

    print(binascii.crc32(b"hello world"))
    # Or, in two pieces:
    crc = binascii.crc32(b"hello", 0)
    crc = binascii.crc32(b" world", crc)
    print('crc32 = {:#010x}'.format(crc))

I would simply drop the notice box with the workaround, because I gather that the Python 3 documentation generally omits Python 2 details. (There are no 'new in version 2.4' tags, for instance.) Otherwise, clarify if 'packed binary format' is a reference to the 'struct' module, or something else.
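The incremental usage shown in the suggested wording can be checked directly (an editorial verification, not part of the proposed doc text):

```python
import binascii

# Chaining crc32 over two pieces gives the same result as one call,
# because the running CRC is passed back in as the starting value
# (which defaults to 0).
whole = binascii.crc32(b"hello world")
crc = binascii.crc32(b"hello", 0)
crc = binascii.crc32(b" world", crc)
print('crc32 = {:#010x}'.format(crc))
assert crc == whole
assert 0 <= crc <= 0xFFFFFFFF   # always unsigned on Python 3
```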
Similar fixes are probably appropriate for zlib.crc32() and zlib.adler32(). Also, what is the relationship between binascii.crc32() and zlib.crc32()? I vaguely remember reading that 'zlib' is not always available, so I tend to use 'binascii' instead. Is there any advantage in using the 'zlib' version? The 'hashlib' documentation points to 'zlib' without mentioning 'binascii' at all. ---------- assignee: docs at python components: Documentation messages: 226419 nosy: docs at python, vadmium priority: normal severity: normal status: open title: Python 3 crc32 documentation clarifications versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 5 16:12:14 2014 From: report at bugs.python.org (Gael Robin) Date: Fri, 05 Sep 2014 14:12:14 +0000 Subject: [New-bugs-announce] [issue22342] Fix typo in PEP 380 -- Syntax for Delegating to a Subgenerator Message-ID: <1409926334.06.0.657946256906.issue22342@psf.upfronthosting.co.za> New submission from Gael Robin: The first line of the Refactoring Principle subsection of the Rationale section contains the following typo: "It should be possible to take an section of code" should be "It should be possible to take a section of code" ---------- assignee: docs at python components: Documentation messages: 226425 nosy: Gael.Robin, docs at python priority: normal severity: normal status: open title: Fix typo in PEP 380 -- Syntax for Delegating to a Subgenerator type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 5 22:24:08 2014 From: report at bugs.python.org (Kevin Christopher Henry) Date: Fri, 05 Sep 2014 20:24:08 +0000 Subject: [New-bugs-announce] [issue22343] Install bash activate script on Windows when using venv Message-ID: <1409948648.7.0.892236136888.issue22343@psf.upfronthosting.co.za> New submission from Kevin Christopher Henry:
When I use venv to create a new virtual environment in Windows I'm given two activate scripts, a .bat file and a .ps1 file (which is consistent with the documentation). However, bash (and probably the other shells as well) works just fine in Windows under Cygwin. Since you have these scripts anyway, please include them in the Windows virtual environment (as virtualenv did). ---------- components: Windows messages: 226452 nosy: marfire priority: normal severity: normal status: open title: Install bash activate script on Windows when using venv type: enhancement versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 5 23:02:07 2014 From: report at bugs.python.org (py.user) Date: Fri, 05 Sep 2014 21:02:07 +0000 Subject: [New-bugs-announce] [issue22344] Reorganize unittest.mock docs into linear manner Message-ID: <1409950927.16.0.630847623783.issue22344@psf.upfronthosting.co.za> New submission from py.user: As it stands, this documentation is very inconvenient to learn from. It is not compact (it could be reduced by a factor of 2 or even 3). It repeats itself: many examples are copy-pasted unchanged. And it refers to terms that are only defined further down the page, so people have to start at the bottom to understand what it is talking about at the top. Remember the Zen: "If the implementation is hard to explain, it's a bad idea."
- Move the MagicMock description to the point between the Mock and PropertyMock descriptions
- Remove "copy-paste" examples
- Rename print to print() and StringIO to io
- Split the Autospeccing section into several chunks
---------- assignee: docs at python components: Documentation messages: 226456 nosy: docs at python, py.user priority: normal severity: normal status: open title: Reorganize unittest.mock docs into linear manner type: enhancement versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 5 23:13:19 2014 From: report at bugs.python.org (=?utf-8?q?Zbyszek_J=C4=99drzejewski-Szmek?=) Date: Fri, 05 Sep 2014 21:13:19 +0000 Subject: [New-bugs-announce] [issue22345] https://docs.python.org/release/1.4/ returns 403 Message-ID: <1409951599.8.0.482931497215.issue22345@psf.upfronthosting.co.za> New submission from Zbyszek Jędrzejewski-Szmek: This is the last link on https://www.python.org/doc/versions/. ---------- assignee: docs at python components: Documentation messages: 226457 nosy: docs at python, zbysz priority: normal severity: normal status: open title: https://docs.python.org/release/1.4/ returns 403 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Sep 6 03:29:13 2014 From: report at bugs.python.org (Thomas Kluyver) Date: Sat, 06 Sep 2014 01:29:13 +0000 Subject: [New-bugs-announce] [issue22346] asyncio documentation does not mention provisional status Message-ID: <1409966953.11.0.406607337237.issue22346@psf.upfronthosting.co.za> New submission from Thomas Kluyver: From PEP 411: """ A package will be marked provisional by a notice in its documentation page and its docstring. The following paragraph will be added as a note at the top of the documentation page: The package has been included in the standard library on a provisional basis.
Backwards incompatible changes (up to and including removal of the package) may occur if deemed necessary by the core developers. """ PEP 3156 says that asyncio is in provisional status until Python 3.5, but the main asyncio page in the docs does not even mention this, let alone carry the new warning: https://docs.python.org/3/library/asyncio.html I freely admit this is nitpicking, but if the idea of provisional status is to be taken seriously, I think asyncio, as a very high profile new package, should set a good example of it. ---------- assignee: docs at python components: Documentation, asyncio messages: 226463 nosy: docs at python, gvanrossum, haypo, takluyver, yselivanov priority: normal severity: normal status: open title: asyncio documentation does not mention provisional status versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Sep 6 04:52:37 2014 From: report at bugs.python.org (Martin Panter) Date: Sat, 06 Sep 2014 02:52:37 +0000 Subject: [New-bugs-announce] [issue22347] mimetypes.guess_type("//example.com") misinterprets host name as file name Message-ID: <1409971957.28.0.64915967452.issue22347@psf.upfronthosting.co.za> New submission from Martin Panter: The documentation says that guess_type() takes a URL, but: >>> mimetypes.guess_type("http://example.com") ('application/x-msdownload', None) I suspect the MS download is a reference to *.com files (like DOS's command.com). My current workaround is to strip out the host name from the URL, since I cannot imagine it would be useful for determining the content type. I am also stripping the fragment part. An argument could probably be made for stripping the ';parameters' and '?query' parts as well. >>> # Workaround for mimetypes.guess_type("//example.com") ... # interpreting host name as file name ...
url = urlparse("http://example.com") >>> url = net.url_replace(url, netloc="", fragment="") >>> url 'http://' >>> mimetypes.guess_type(url, strict=False) (None, None) ---------- components: Library (Lib) messages: 226467 nosy: vadmium priority: normal severity: normal status: open title: mimetypes.guess_type("//example.com") misinterprets host name as file name type: behavior versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Sep 6 14:53:47 2014 From: report at bugs.python.org (Martin Richard) Date: Sat, 06 Sep 2014 12:53:47 +0000 Subject: [New-bugs-announce] [issue22348] Documentation of asyncio.StreamWriter.drain() Message-ID: <1410008027.32.0.320660376013.issue22348@psf.upfronthosting.co.za> New submission from Martin Richard: Hi, Following the discussion on the python-tulip group, I'd like to propose a patch for the documentation of StreamWriter.drain(). This patch aims to give a better description of what drain() is intended to do, and when to use it. In particular, it highlights the fact that calling drain() does not mean that any write operation will be performed, nor is required to be called. 
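The write()/drain() pattern the patch documents can be sketched as follows. This is an editorial illustration using the modern asyncio.run() API (3.7+) rather than the 3.4-era yield from style, and the loopback echo server is invented purely for the example:

```python
import asyncio

async def main():
    async def echo(reader, writer):
        data = await reader.read(100)
        writer.write(data)
        await writer.drain()       # pause until the transport buffer drains
        writer.close()

    # Port 0 asks the OS for any free port on the loopback interface.
    server = await asyncio.start_server(echo, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]

    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    writer.write(b"ping")
    # drain() only waits when the buffer is above the high-water mark;
    # it does not itself perform (or force) a write.
    await writer.drain()
    reply = await reader.read(100)
    writer.close()
    server.close()
    await server.wait_closed()
    return reply

print(asyncio.run(main()))  # b'ping'
```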
---------- components: asyncio files: asyncio-streams-drain-doc.patch hgrepos: 273 keywords: patch messages: 226487 nosy: gvanrossum, haypo, martius, yselivanov priority: normal severity: normal status: open title: Documentation of asyncio.StreamWriter.drain() type: behavior versions: Python 3.4 Added file: http://bugs.python.org/file36561/asyncio-streams-drain-doc.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Sep 6 21:58:37 2014 From: report at bugs.python.org (Thomas Kluyver) Date: Sat, 06 Sep 2014 19:58:37 +0000 Subject: [New-bugs-announce] [issue22349] Remove more unnecessary version checks from distutils Message-ID: <1410033517.64.0.25782139484.issue22349@psf.upfronthosting.co.za> New submission from Thomas Kluyver: Following on from issue 22200, this removes some more code in distutils that checks which Python version it's running on. As part of the standard library, distutils should always be running on the version of Python which it ships with. 
---------- components: Distutils files: rm-more-distutils-version-checks.patch keywords: patch messages: 226510 nosy: dstufft, eric.araujo, takluyver priority: normal severity: normal status: open title: Remove more unnecessary version checks from distutils versions: Python 3.5 Added file: http://bugs.python.org/file36562/rm-more-distutils-version-checks.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 7 05:40:19 2014 From: report at bugs.python.org (Martin Panter) Date: Sun, 07 Sep 2014 03:40:19 +0000 Subject: [New-bugs-announce] [issue22350] nntplib file write failure causes exception from QUIT command Message-ID: <1410061219.03.0.0230581897355.issue22350@psf.upfronthosting.co.za> New submission from Martin Panter: The following code triggers an NNTPProtocolError, as long as the body is large enough to cause an intermediate flush of the output file. The reason, I think, is that the body() method aborts in the middle of reading the BODY response, and when the context manager exits, a QUIT command is attempted, which continues to read the BODY response.

>>> with NNTP("localhost") as nntp:
...     nntp.body("", file="/dev/full")
...
Traceback (most recent call last):
  File "/usr/lib/python3.4/nntplib.py", line 491, in _getlongresp
    file.write(line)
OSError: [Errno 28] No space left on device

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
  File "/usr/lib/python3.4/nntplib.py", line 757, in body
    return self._artcmd(cmd, file)
  File "/usr/lib/python3.4/nntplib.py", line 727, in _artcmd
    resp, lines = self._longcmd(line, file)
  File "/usr/lib/python3.4/nntplib.py", line 518, in _longcmd
    return self._getlongresp(file)
  File "/usr/lib/python3.4/nntplib.py", line 504, in _getlongresp
    openedFile.close()
OSError: [Errno 28] No space left on device

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
  File "/usr/lib/python3.4/nntplib.py", line 367, in __exit__
    self.quit()
  File "/usr/lib/python3.4/nntplib.py", line 936, in quit
    resp = self._shortcmd('QUIT')
  File "/usr/lib/python3.4/nntplib.py", line 512, in _shortcmd
    return self._getresp()
  File "/usr/lib/python3.4/nntplib.py", line 459, in _getresp
    raise NNTPProtocolError(resp)
nntplib.NNTPProtocolError:

It is hard to work around because the context manager and quit() methods seem to be the only public interfaces for closing the connection, and they both try to do a QUIT command first. I am thinking of something equivalent to this for a workaround, however it is a bit hacky and may not be reliable in all cases:

nntp = NNTP("localhost")
abort = False
try:
    ...
    try:
        nntp.body("", file="/dev/full")
    except (NNTPTemporaryError, NNTPPermanentError):
        raise  # NNTP connection still intact
    except:
        abort = True
        raise
    ...
finally:
    try:
        nntp.quit()
    except NNTPError:
        # Connection cleaned up despite exception
        if not abort:
            raise

Perhaps the 'nntplib' module could abort the connection itself if any command does not complete according to the protocol.
Or at the very least, provide an API to manually abort the connection without poking at the internals. ---------- components: Library (Lib) messages: 226526 nosy: vadmium priority: normal severity: normal status: open title: nntplib file write failure causes exception from QUIT command versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 7 06:07:08 2014 From: report at bugs.python.org (Martin Panter) Date: Sun, 07 Sep 2014 04:07:08 +0000 Subject: [New-bugs-announce] [issue22351] NNTP constructor exception leaves socket for garbage collector Message-ID: <1410062828.01.0.535616885323.issue22351@psf.upfronthosting.co.za> New submission from Martin Panter: If the nntplib.NNTP constructor fails, it often leaves the connection and socket open until the garbage collector cleans them up and emits a ResourceWarning:

>>> try:
...     NNTP("localhost")
... except Exception as err:
...     print(repr(err))
...
NNTPTemporaryError('400 Service temporarily unavailable',)
>>> gc.collect()
__main__:1: ResourceWarning: unclosed
12

This happens for error responses that are expected by the protocol, e.g. the service unavailable response above, or authentication errors. It also happens for other exceptions such as EOFError if the connection is closed with no response.
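The cleanup the report asks for amounts to closing the socket in the constructor's error path instead of leaving it to the garbage collector. A hypothetical sketch (this Client class and its greeting check are invented for illustration; it is not nntplib's actual code):

```python
import socket

class Client:
    """Hypothetical stand-in for a protocol client such as nntplib.NNTP."""

    def __init__(self, greeting_ok=True):
        self.sock = socket.socket()
        try:
            # Stands in for reading the server greeting: a 400 response
            # or EOFError would be raised here in a real client.
            if not greeting_ok:
                raise ValueError("400 Service temporarily unavailable")
        except Exception:
            # Explicit cleanup before re-raising: no socket is left open,
            # so no ResourceWarning later from the garbage collector.
            self.sock.close()
            raise

try:
    Client(greeting_ok=False)
except ValueError as err:
    print(repr(err))
```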
---------- components: Library (Lib) messages: 226528 nosy: vadmium priority: normal severity: normal status: open title: NNTP constructor exception leaves socket for garbage collector type: resource usage versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 7 06:58:11 2014 From: report at bugs.python.org (Nick Coghlan) Date: Sun, 07 Sep 2014 04:58:11 +0000 Subject: [New-bugs-announce] [issue22352] Ensure opcode names and args fit in disassembly output Message-ID: <1410065891.58.0.12430207232.issue22352@psf.upfronthosting.co.za> New submission from Nick Coghlan: While exploring display options for issue 11822, I found that the new matrix multiplication opcode names (BINARY_MATRIX_MULTIPLY and INPLACE_MATRIX_MULTIPLY) don't fit in the nominal field width in the disassembly output (which is currently 20 characters). These two clock in at 22 and 23 characters respectively. In practice, they do fit, since neither takes an argument, which effectively allows an extra 5 characters (while still looking neat) and unlimited characters if we ignore expanding past the column of opcode arguments. However, it would be good to: 1. Factor out the opname and oparg sizes to private class attributes on dis.Instruction 2. Have a test in test_dis that scans dis.opname and ensures all opcodes < dis.HAVE_ARGUMENT have names shorter than the combined length of the two fields, and that all opcodes >= HAVE_ARGUMENT will fit in the opname field, even with an argument present. Having such a test will ensure any newly added opcodes can be displayed without any problems, rather than anyone having to remember to check manually.
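The proposed test_dis scan might look roughly like this. The 20- and 5-character widths are the ones assumed in the report, not constants exposed by dis, and the exact opcode set (including the matrix-multiply opcodes) varies by Python version:

```python
import dis

OPNAME_WIDTH = 20   # nominal opname column width assumed by the report
OPARG_WIDTH = 5     # room freed up when an opcode takes no argument

# Collect names that would not fit in their allotted column.
wide = []
for op, name in enumerate(dis.opname):
    if name.startswith('<'):   # placeholder for an unassigned opcode slot
        continue
    if op >= dis.HAVE_ARGUMENT:
        limit = OPNAME_WIDTH                 # must leave room for the oparg
    else:
        limit = OPNAME_WIDTH + OPARG_WIDTH   # no argument column needed
    if len(name) > limit:
        wide.append((name, len(name)))

print(wide)   # empty if every opcode name fits under these assumptions
```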
---------- assignee: ncoghlan components: Library (Lib) messages: 226530 nosy: ncoghlan priority: low severity: normal status: open title: Ensure opcode names and args fit in disassembly output type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 7 14:35:32 2014 From: report at bugs.python.org (Mateusz Dobrowolny) Date: Sun, 07 Sep 2014 12:35:32 +0000 Subject: [New-bugs-announce] [issue22353] re.findall() documentation lacks information about finding THE LAST iteration of a repeated capturing group (greedy) Message-ID: <1410093332.63.0.87948506622.issue22353@psf.upfronthosting.co.za> New submission from Mateusz Dobrowolny: Python 3.4.1, Windows. help(re.findall) shows me:

findall(pattern, string, flags=0)
    Return a list of all non-overlapping matches in the string. If one or more capturing groups are present in the pattern, return a list of groups; this will be a list of tuples if the pattern has more than one group. Empty matches are included in the result.

It seems like there is information missing regarding greedy groups, i.e. (regular_expression)*

Please take a look at my example:

-------------EXAMPLE-------------
import re

text = 'To configure your editing environment, use the Editor settings page and its child pages. There is also a ' \
       'Quick Switch Scheme command that lets you change color schemes, themes, keymaps, etc. with a couple of ' \
       'keystrokes.'
print('Text to be searched: \n' + text)
print('\nSearching method: re.findall()')

regexp_result = re.findall(r'\w+(\s+\w+)', text)
print('\nRegexp rule: r\'\w+(\s+\w+)\' \nFound: ' + str(regexp_result))
print('This works as expected: findall() returns a list of groups (\s+\w+), and the groups are from non-overlapping matches.')

regexp_result = re.findall(r'\w+(\s+\w+)*', text)
print('\nHow about making the group greedy? Here we go: \nRegexp rule: r\'\w+(\s+\w+)*\' \nFound: ' + str(regexp_result))
print('This is a little bit unexpected for me: findall() returns THE LAST MATCHING group only, parsing from left to right.')

regexp_result_list = re.findall(r'(\w+(\s+\w+)*)', text)
first_group = list(i for i, j in regexp_result_list)
print('\nThe solution is to put an extra group around the whole RE: \nRegexp rule: r\'(\w+(\s+\w+)*)\' \nFound: ' + str(first_group))
print('So finally I can get all the strings I am looking for, just as expected from the FINDALL method, by accessing the first element of each tuple.')
----------END OF EXAMPLE-------------

I found the solution when practicing on this page: http://regex101.com/#python

Entering:

REGULAR EXPRESSION: \w+(\s+\w+)*
TEST STRING: To configure your editing environment, use the Editor settings page and its child pages. There is also a Quick Switch Scheme command that lets you change color schemes, themes, keymaps, etc. with a couple of keystrokes.

it showed me on the right side, with nice color-coding:

1st Capturing group (\s+\w+)*
Quantifier: Between zero and unlimited times, as many times as possible, giving back as needed [greedy]
Note: A repeated capturing group will only capture the last iteration. Put a capturing group around the repeated group to capture all iterations or use a non-capturing group instead if you're not interested in the data

I think some information regarding repeated groups should be included in the Python documentation as well.

BTW: I have one extra question. Searching for 'findall' in this tracker I found this issue: http://bugs.python.org/issue3384 It looks like the information about ordering is no longer in the 3.4.1 documentation. Shouldn't this be there?
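[Editor's note] A minimal, self-contained version of the behaviour described in this report, with verified outputs (this condensed example is an editorial addition, not part of the original message):

```python
import re

text = "one two three"

# A repeated capturing group only keeps its last iteration:
print(re.findall(r'\w+(\s+\w+)*', text))    # [' three']

# Wrapping the whole pattern in a group captures the full match; the inner
# repeated group still only holds its last iteration:
print(re.findall(r'(\w+(\s+\w+)*)', text))  # [('one two three', ' three')]

# Keeping just the outer group of each tuple gives the full matches:
print([outer for outer, _ in re.findall(r'(\w+(\s+\w+)*)', text)])
# ['one two three']
```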
Kind Regards ---------- assignee: docs at python components: Documentation messages: 226534 nosy: Mateusz.Dobrowolny, docs at python priority: normal severity: normal status: open title: re.findall() documentation lacks information about finding THE LAST iteration of a repeated capturing group (greedy) versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 7 15:17:02 2014 From: report at bugs.python.org (Christian Kleineidam) Date: Sun, 07 Sep 2014 13:17:02 +0000 Subject: [New-bugs-announce] [issue22354] Highlite tabs in the IDLE Message-ID: <1410095822.07.0.751550923398.issue22354@psf.upfronthosting.co.za> New submission from Christian Kleineidam: Python accepts both tabs and spaces. Code that mixes tabs and spaces can lead to problems. Especially beginners who are new to Python can be confused if they copy some code and it doesn't work as they expect because of invisible whitespace. Beginners are also more likely to use the editor that comes with IDLE instead of a more specialised editor. If IDLE highlighted the fact that tabs are used instead of spaces, it would be easier to spot the issue. I therefore suggest that IDLE highlight tabs in both shell mode and editor mode. Possible ways to highlight them: a light grey <-->
(at the beginning of the tab) PyCharm style error underlining ---------- components: IDLE messages: 226535 nosy: Christian.Kleineidam priority: normal severity: normal status: open title: Highlite tabs in the IDLE type: enhancement versions: Python 2.7, Python 3.1, Python 3.2, Python 3.3, Python 3.4, Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 7 16:45:41 2014 From: report at bugs.python.org (Iestyn Elfick) Date: Sun, 07 Sep 2014 14:45:41 +0000 Subject: [New-bugs-announce] [issue22355] inconsistent results with inspect.getsource() / inspect.getsourcelines() Message-ID: <1410101141.24.0.25482778458.issue22355@psf.upfronthosting.co.za> New submission from Iestyn Elfick: The functions inspect.getsource() and inspect.getsourcelines() return inconsistent results for frames corresponding to class definitions within a function. Test code: import sys import inspect def case1(): class C: def __init__(self): pass c = C() def case2(): a = 1 class C: def __init__(self): pass c = C() def case3(): def fn(): pass class C: def __init__(self): pass c = C() def trace(frame,event,arg): code = frame.f_code print('name:',code.co_name) print('source:\n',inspect.getsource(code),'\n') for case in ('case1','case2','case3'): print('#####',case) call = getattr(sys.modules[__name__],case) sys.settrace(trace) try: call() finally: sys.settrace(None) Result: ##### case1 name: case1 source: def case1(): class C: def __init__(self): pass c = C() name: C source: def case1(): class C: def __init__(self): pass c = C() name: __init__ source: def __init__(self): pass ##### case2 name: case2 source: def case2(): a = 1 class C: def __init__(self): pass c = C() name: C source: def case2(): a = 1 class C: def __init__(self): pass c = C() name: __init__ source: def __init__(self): pass ##### case3 name: case3 source: def case3(): def fn(): pass class C: def __init__(self): pass c = C() name: C source: def 
fn(): pass name: __init__ source: def __init__(self): pass The source listed for frames named 'C' (the class creation code) is not consistent across all three cases. It could be considered incorrect in all cases as it does not correspond only to the class definition source lines. ---------- components: Library (Lib) messages: 226537 nosy: isedev priority: normal severity: normal status: open title: inconsistent results with inspect.getsource() / inspect.getsourcelines() type: behavior versions: Python 3.3 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 7 19:20:44 2014 From: report at bugs.python.org (Akira Li) Date: Sun, 07 Sep 2014 17:20:44 +0000 Subject: [New-bugs-announce] [issue22356] mention explicitly that stdlib assumes gmtime(0) epoch is 1970 Message-ID: <1410110444.5.0.966357232963.issue22356@psf.upfronthosting.co.za> New submission from Akira Li: See discussion on Python-ideas https://mail.python.org/pipermail/python-ideas/2014-September/029228.html ---------- assignee: docs at python components: Documentation files: docs-time-epoch_is_1970.diff keywords: patch messages: 226539 nosy: akira, docs at python priority: normal severity: normal status: open title: mention explicitly that stdlib assumes gmtime(0) epoch is 1970 type: behavior versions: Python 2.7, Python 3.4, Python 3.5 Added file: http://bugs.python.org/file36567/docs-time-epoch_is_1970.diff _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 7 23:23:12 2014 From: report at bugs.python.org (Iestyn Elfick) Date: Sun, 07 Sep 2014 21:23:12 +0000 Subject: [New-bugs-announce] [issue22357] inspect module documentation make no reference to __qualname__ attribute Message-ID: <1410124992.98.0.0231359390306.issue22357@psf.upfronthosting.co.za> New submission from Iestyn Elfick: The documentation for the 'inspect' module should list the 
'__qualname__' attribute for 'method', 'function' and 'builtin' types in section '29.12.1 Types and members'. ---------- assignee: docs at python components: Documentation messages: 226545 nosy: docs at python, isedev priority: normal severity: normal status: open title: inspect module documentation make no reference to __qualname__ attribute type: behavior versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 8 01:15:51 2014 From: report at bugs.python.org (Attila Fazekas) Date: Sun, 07 Sep 2014 23:15:51 +0000 Subject: [New-bugs-announce] [issue22358] Unnecessary JUMP_FORWARD(0) (NOP) in if statements without else or elif Message-ID: <1410131751.3.0.286807905275.issue22358@psf.upfronthosting.co.za> New submission from Attila Fazekas: The following example function compiles to bytecode which contains, an unnecessary JUMP_FORWARD 0 instruction: def func(): if a: pass Actual: dis.dis(func) 2 0 LOAD_GLOBAL 0 (a) 3 POP_JUMP_IF_FALSE 9 6 JUMP_FORWARD 0 (to 9) >> 9 LOAD_CONST 0 (None) 12 RETURN_VALUE Expected: dis.dis(func) 2 0 LOAD_GLOBAL 0 (a) 3 POP_JUMP_IF_FALSE 6 >> 6 LOAD_CONST 0 (None) 9 RETURN_VALUE The above JUMP_FORWARD instruction increases the code size and also has a negative performance effect. I do not see any reason to have the extra NOP in the byte code in this case. *** The attached patch removes this NOP generation from the code compilation part, so it will take effect by default. I had a little trouble when the code compiled from ast, because the If.orelse had a different content. (NULL vs. zero sizes asdl_seq) * The generated Assembly code updated in dis unit test. * The compilation test updated to test a real 'if' by using a variable in the condition. 
(The True and False is not a variable anymore) ---------- components: Interpreter Core files: python_nop_ifelse.patch keywords: patch messages: 226547 nosy: Attila.Fazekas priority: normal severity: normal status: open title: Unnecessary JUMP_FORWARD(0) (NOP) in if statements without else or elif type: performance versions: Python 2.7, Python 3.1, Python 3.2, Python 3.3, Python 3.4, Python 3.5 Added file: http://bugs.python.org/file36569/python_nop_ifelse.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 8 10:44:29 2014 From: report at bugs.python.org (Jonas Wagner) Date: Mon, 08 Sep 2014 08:44:29 +0000 Subject: [New-bugs-announce] [issue22359] Remove incorrect uses of recursive make Message-ID: <1410165869.76.0.203082844365.issue22359@psf.upfronthosting.co.za> New submission from Jonas Wagner: The attached patch fixes issues with Python's Makefile, which manifest when doing parallel builds. The Makefile invoked "make" recursively for some targets. This caused some files (which were depended upon by multiple targets) to be built by both the original "make" and the sub-"make". Besides duplicate work, this caused failed builds with non-threadsafe compilers. The proposed patch removes recursive calls to "make", and instead builds all targets in the same "make" process. 
---------- components: Build files: makefile_parallel.patch keywords: patch messages: 226563 nosy: Sjlver priority: normal severity: normal status: open title: Remove incorrect uses of recursive make type: compile error versions: Python 3.5 Added file: http://bugs.python.org/file36570/makefile_parallel.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 8 11:29:02 2014 From: report at bugs.python.org (Christoph Wruck) Date: Mon, 08 Sep 2014 09:29:02 +0000 Subject: [New-bugs-announce] [issue22360] Adding manually offset parameter to str/bytes split function Message-ID: <1410168542.37.0.66219638693.issue22360@psf.upfronthosting.co.za> New submission from Christoph Wruck: Currently we have a "split" function which splits a str/bytestr into chunks of its underlying data. This works great for the most trivial jobs. But there is no way to pass an offset parameter into the split function to indicate the next "user-defined" starting index. Currently the next starting position is built from the last starting position (of the found sep.) + separator length + 1. It should be possible to manipulate the next starting index by changing this behavior into: last starting position (of the found sep.) + separator length + OFFSET. NOTE: The slicing start index (for the substring) stays untouched. This would help in splitting sequences with one or more consecutive separators. The following demonstrates the actual behavior:

>>> s = 'abc;;def;hij'
>>> s.split(';')
['abc', '', 'def', 'hij']

This works fine for both str/bytes values. The following demonstrates an "offset variant" of the split function:

>>> s = 'abc;;def;hij'
>>> s.split(';', offset=1)
['abc', ';def', 'hij']

The behavior of the maxcount/None sep. parameters should generate the same output as before.
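[Editor's note] The proposed semantics can be modelled in pure Python; this is a sketch of the requested behaviour, not the C implementation the report says would live in split.h, and the function name is invented:

```python
def split_offset(s, sep, offset=0):
    """Like str.split(sep), but resume searching for the next separator
    'offset' characters past the end of the one just found.
    offset=0 reproduces the current str.split behaviour."""
    parts = []
    start = search = 0
    while True:
        i = s.find(sep, search)
        if i == -1:
            parts.append(s[start:])
            return parts
        parts.append(s[start:i])
        start = i + len(sep)      # the slice start is unchanged, as proposed
        search = start + offset   # only the search position is shifted

print(split_offset('abc;;def;hij', ';'))            # ['abc', '', 'def', 'hij']
print(split_offset('abc;;def;hij', ';', offset=1))  # ['abc', ';def', 'hij']
```

With offset=1 the separator at the skipped position is simply carried into the next chunk, which matches the report's expected output for consecutive separators.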
A change would affect (as far as I can see): - split.h - split_char/rsplit_char - split/rsplit ---------- messages: 226564 nosy: cwr priority: normal severity: normal status: open title: Adding manually offset parameter to str/bytes split function type: enhancement versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 8 12:29:45 2014 From: report at bugs.python.org (Luca Falavigna) Date: Mon, 08 Sep 2014 10:29:45 +0000 Subject: [New-bugs-announce] [issue22361] Ability to join() threads in concurrent.futures.ThreadPoolExecutor Message-ID: <1410172185.62.0.239101325071.issue22361@psf.upfronthosting.co.za> New submission from Luca Falavigna: I have a program which waits for external events (mostly pyinotify events), and when an event occurs a new worker is created using concurrent.futures.ThreadPoolExecutor. The following snippet shows briefly what my program does:

from time import sleep
from concurrent.futures import ThreadPoolExecutor

def func():
    print("start")
    sleep(10)
    print("stop")

ex = ThreadPoolExecutor(1)
# New workers will be scheduled when an event
# is triggered (i.e. pyinotify events)
ex.submit(func)
# Dummy sleep
sleep(60)

When func() is complete, I'd like the underlying thread to be terminated. I realize I could call ex.shutdown() to achieve this, but that would prevent me from adding new workers when new events occur. Not calling ex.shutdown() leaves idle threads that pile up considerably:

(gdb) run test.py
Starting program: /usr/bin/python3.4-dbg test.py
[Thread debugging using libthread_db enabled]
[New Thread 0x7ffff688e700 (LWP 17502)]
start
stop
^C
Program received signal SIGINT, Interrupt.
0x00007ffff6e41963 in select () from /lib/x86_64-linux-gnu/libc.so.6
(gdb) info threads
  Id   Target Id         Frame
  2    Thread 0x7ffff688e700 (LWP 17502) "python3.4-dbg" 0x00007ffff7bce420 in sem_wait () from /lib/x86_64-linux-gnu/libpthread.so.0
* 1    Thread 0x7ffff7ff1700 (LWP 17501) "python3.4-dbg" 0x00007ffff6e41963 in select () from /lib/x86_64-linux-gnu/libc.so.6
(gdb)

Would it be possible to add a new method (or a ThreadPoolExecutor option) which allows joining the underlying thread when the worker function returns? ---------- components: Library (Lib) messages: 226569 nosy: dktrkranz priority: normal severity: normal status: open title: Ability to join() threads in concurrent.futures.ThreadPoolExecutor type: enhancement versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 8 13:07:20 2014 From: report at bugs.python.org (Serhiy Storchaka) Date: Mon, 08 Sep 2014 11:07:20 +0000 Subject: [New-bugs-announce] [issue22362] Warn about octal escapes > 0o377 in re Message-ID: <1410174440.93.0.911211496637.issue22362@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Currently the re module accepts octal escapes from \400 to \777, but ignores the highest bit.

>>> re.search(r'\542', 'abc')
<_sre.SRE_Match object; span=(1, 2), match='b'>

This behavior looks surprising and is inconsistent with the regex module, which preserves the highest bit. Such escaping is not portable across different regular expression engines. I propose adding a warning when the octal escape value is larger than 0o377. Here is a preliminary patch which adds a UserWarning. Or maybe it would be better to emit a DeprecationWarning and then replace it with a ValueError in future releases?
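[Editor's note] The surprising match above is just the high bits being masked off: 0o542 & 0o377 is 0o142, the code point of 'b'. The snippet below shows the arithmetic and probes which behaviour the running interpreter has (later versions moved, as this issue proposed, from silent masking to an explicit error):

```python
import re

# 0o542 does not fit in a byte; re at the time silently masked it to 0o142.
assert 0o542 & 0o377 == 0o142 == ord('b')

# Probe the running interpreter: old versions match 'b', newer ones reject
# the escape outright, which is the direction argued for in this report.
try:
    masked = re.search(r'\542', 'abc') is not None
except re.error:
    masked = False   # escape rejected with an explicit error
print('silently masked' if masked else 'rejected')
```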
---------- components: Library (Lib), Regular Expressions files: re_octal_escape_overflow.patch keywords: patch messages: 226570 nosy: ezio.melotti, mrabarnett, pitrou, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Warn about octal escapes > 0o377 in re type: enhancement versions: Python 3.5 Added file: http://bugs.python.org/file36571/re_octal_escape_overflow.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 8 13:46:27 2014 From: report at bugs.python.org (Zacrath) Date: Mon, 08 Sep 2014 11:46:27 +0000 Subject: [New-bugs-announce] [issue22363] argparse AssertionError with add_mutually_exclusive_group and help=SUPPRESS Message-ID: <1410176787.11.0.945188170372.issue22363@psf.upfronthosting.co.za> New submission from Zacrath: Executing the attached script causes an AssertionError. Traceback (most recent call last): File "bug.py", line 18, in parser.format_usage() File "/usr/lib/python3.4/argparse.py", line 2318, in format_usage return formatter.format_help() File "/usr/lib/python3.4/argparse.py", line 287, in format_help help = self._root_section.format_help() File "/usr/lib/python3.4/argparse.py", line 217, in format_help func(*args) File "/usr/lib/python3.4/argparse.py", line 338, in _format_usage assert ' '.join(opt_parts) == opt_usage AssertionError The script was tested in a clean Python installation. If any of the arguments are removed, there is no AssertionError exception. If "help=SUPPRESS" is removed, there is no AssertionError exception. This bug appears to have existed since Python 3.2, the first version that included argparse. 
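[Editor's note] The attached bug.py is not included in the archive. A plausible reconstruction of its shape is below; the option names are invented, and the snippet tolerates interpreters where the assertion has since been fixed. It exercises the reported code path: format_usage() with a mutually exclusive group containing a SUPPRESS-ed option:

```python
import argparse

parser = argparse.ArgumentParser(prog='bug')
parser.add_argument('--alpha', help=argparse.SUPPRESS)
group = parser.add_mutually_exclusive_group()
group.add_argument('--beta', help=argparse.SUPPRESS)
group.add_argument('--gamma')

try:
    usage = parser.format_usage()
    failed = False          # fixed interpreters format the usage line
except AssertionError:
    failed = True           # the behaviour reported here (3.2 through 3.4)
print('AssertionError raised' if failed else usage.strip())
```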
---------- files: bug.py messages: 226572 nosy: Zacrath, bethard priority: normal severity: normal status: open title: argparse AssertionError with add_mutually_exclusive_group and help=SUPPRESS type: behavior versions: Python 2.7, Python 3.1, Python 3.2, Python 3.3, Python 3.4, Python 3.5 Added file: http://bugs.python.org/file36572/bug.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 8 14:22:56 2014 From: report at bugs.python.org (Serhiy Storchaka) Date: Mon, 08 Sep 2014 12:22:56 +0000 Subject: [New-bugs-announce] [issue22364] Unify error messages of re and regex Message-ID: <1410178976.27.0.421938794422.issue22364@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: In some cases standard re module and third-party regex modules raise exceptions with different error messages. 1. re.match(re.compile('.'), 'A', re.I) re: Cannot process flags argument with a compiled pattern regex: can't process flags argument with a compiled pattern 2. re.compile('(?P 3. re.compile('(?Pa)(?P=foo_123') re: unterminated name regex: missing ) 4. regex.sub('(?Px)', r'\g 5. re.sub('(?Px)', r'\g<', 'xx') re: unterminated group name regex: bad group name 6. re.sub('(?Px)', r'\g', 'xx') re: bad character in group name regex: bad group name 7. re.sub('(?Px)', r'\g<-1>', 'xx') re: negative group number regex: bad group name 8. re.compile('(?Pa)(?P=!)') re: bad character in backref group name '!' regex: bad group name 9. re.sub('(?Px)', r'\g', 'xx') re: missing group name regex: missing < 10. re.compile('a\\') re.sub('x', '\\', 'x') re: bogus escape (end of line) regex: bad escape 11. re.compile(r'\1') re: bogus escape: '\1' regex: unknown group 12. re.compile('[a-') re: unexpected end of regular expression regex: bad set 13. re.sub(b'.', 'b', b'c') re: expected bytes, bytearray, or an object with the buffer interface, str found regex: expected bytes instance, str found 14. 
re.compile(r'\w', re.UNICODE | re.ASCII) re: ASCII and UNICODE flags are incompatible regex: ASCII, LOCALE and UNICODE flags are mutually incompatible 15. re.compile('(abc') re: unbalanced parenthesis regex: missing ) 16. re.compile('abc)') re: unbalanced parenthesis regex: trailing characters in pattern 17. re.compile(r'((.)\1+)') re: cannot refer to open group regex: can't refer to an open group Looks as in one case re messages are better, and in other cases regex messages are better. In any case it would be good to unify error messages in both modules. ---------- components: Library (Lib), Regular Expressions messages: 226575 nosy: ezio.melotti, mrabarnett, pitrou, serhiy.storchaka priority: normal severity: normal status: open title: Unify error messages of re and regex type: enhancement versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 8 16:54:46 2014 From: report at bugs.python.org (Ralph Broenink) Date: Mon, 08 Sep 2014 14:54:46 +0000 Subject: [New-bugs-announce] [issue22365] SSLContext.load_verify_locations(cadata) does not accept CRLs Message-ID: <1410188086.75.0.333692035483.issue22365@psf.upfronthosting.co.za> New submission from Ralph Broenink: Issue #18138 added support for the cadata argument in SSLContext.load_verify_locations. However, this argument does not support certificate revocation lists (CRLs) to be added (at least not in PEM format): ssl.SSLError: [PEM: NO_START_LINE] no start line (_ssl.c:2633) The documentation of this method is rather vague on this subject and does not state explicitly this is not allowed: This method can also load certification revocation lists (CRLs) in PEM or or DER format. In order to make use of CRLs, SSLContext.verify_flags must be configured properly. I think CRLs should be allowed to be loaded using the cadata argument. 
However, the documentation could use some polishing too: "At least one of cafile or capath must be specified." is outdated since the introduction of cadata. ---------- components: Extension Modules messages: 226582 nosy: Ralph.Broenink priority: normal severity: normal status: open title: SSLContext.load_verify_locations(cadata) does not accept CRLs versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 8 19:45:52 2014 From: report at bugs.python.org (Alex Gaynor) Date: Mon, 08 Sep 2014 17:45:52 +0000 Subject: [New-bugs-announce] [issue22366] urllib.request.urlopen should take a "context" (SSLContext) argument Message-ID: <1410198352.63.0.605714024836.issue22366@psf.upfronthosting.co.za> New submission from Alex Gaynor: Instead of the ca* arguments it currently takes, these can all be encapsulated into an SSLContext argument, which the underlying http.client already supports. ---------- components: Library (Lib) messages: 226594 nosy: alex, christian.heimes, dstufft, giampaolo.rodola, janssen, orsenthil, pitrou priority: normal severity: normal status: open title: urllib.request.urlopen should take a "context" (SSLContext) argument versions: Python 3.4, Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 8 22:18:49 2014 From: report at bugs.python.org (Andrew Lutomirski) Date: Mon, 08 Sep 2014 20:18:49 +0000 Subject: [New-bugs-announce] [issue22367] Please add F_OFD_SETLK, etc support to fcntl.lockf Message-ID: <1410207529.54.0.778133775349.issue22367@psf.upfronthosting.co.za> New submission from Andrew Lutomirski: Linux 3.15 and newer support a vastly superior API for file locking, in which locks are owned by open file descriptions instead of by processes. This is how everyone seems to expect POSIX locks to work, but now they can finally work that way.
Please add some interface to these locks to fcntl.lockf. One option would be to use them by default and to fall back to standard POSIX locks if they're not available. I don't know whether this would break existing code. See http://man7.org/linux/man-pages/man2/fcntl.2.html for details. ---------- components: Library (Lib) messages: 226610 nosy: Andrew.Lutomirski priority: normal severity: normal status: open title: Please add F_OFD_SETLK, etc support to fcntl.lockf type: enhancement versions: Python 2.7, Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 9 18:21:20 2014 From: report at bugs.python.org (Serhiy Storchaka) Date: Tue, 09 Sep 2014 16:21:20 +0000 Subject: [New-bugs-announce] [issue22369] "context management protocol" vs "context manager protocol" Message-ID: <1410279680.04.0.486393108686.issue22369@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Both terms are used in the documentation. Currently "context manager" wins (with score 225:19 in rst files). We should unify terminology across the documentation. 
---------- assignee: docs at python components: Documentation messages: 226643 nosy: docs at python, eric.araujo, ezio.melotti, georg.brandl, serhiy.storchaka priority: normal severity: normal status: open title: "context management protocol" vs "context manager protocol" versions: Python 2.7, Python 3.4, Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 9 18:24:51 2014 From: report at bugs.python.org (Antony Lee) Date: Tue, 09 Sep 2014 16:24:51 +0000 Subject: [New-bugs-announce] [issue22370] pathlib OS detection Message-ID: <1410279891.7.0.749304743952.issue22370@psf.upfronthosting.co.za> New submission from Antony Lee: Currently, pathlib contains the following check for the OS in the import section: try: import nt except ImportError: nt = None else: if sys.getwindowsversion()[:2] >= (6, 0): from nt import _getfinalpathname else: supports_symlinks = False _getfinalpathname = None I would like to suggest to switch this on testing for the value of `os.name` (as `PurePath.__new__` does), or possibly testing whether `sys.getwindowsversion` exists: the `nt` module is not publicly defined, so it wouldn't be unreasonable to have a file named `nt.py` on an Unix system (where this shouldn't cause any problems), in which case importing `pathlib` raises an AttributeError at the `getwindowsversion` line. 
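[Editor's note] The suggested check can be sketched as follows. The helper name is invented here, and the version test mirrors the snippet quoted in the report:

```python
import os
import sys

def windows_supports_symlink_resolution():
    """Decide Windows-ness from os.name instead of importing the private
    'nt' module, so that a stray nt.py on a Unix system cannot interfere."""
    if os.name != 'nt':
        return False
    # Vista (6.0) and later expose the final-path API pathlib needs.
    return sys.getwindowsversion()[:2] >= (6, 0)

print(windows_supports_symlink_resolution())
```

Testing os.name has the advantage over hasattr(sys, 'getwindowsversion') that it is the same check PurePath.__new__ already performs, so the two code paths cannot disagree.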
---------- components: Library (Lib) messages: 226644 nosy: Antony.Lee priority: normal severity: normal status: open title: pathlib OS detection versions: Python 3.4, Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 9 20:00:20 2014 From: report at bugs.python.org (Matthias Klose) Date: Tue, 09 Sep 2014 18:00:20 +0000 Subject: [New-bugs-announce] [issue22371] tests failing with -uall and http_proxy and https_proxy set Message-ID: <1410285620.78.0.374799566495.issue22371@psf.upfronthosting.co.za> New submission from Matthias Klose: there are some tests failing when http_proxy and https_proxy is set and the network resource is enabled. I didn't analyze things, but I assume there needs some more fine-grained control about the network resource. the log of such a run is attached. the system doesn't have any outward connection besides the http_proxy and https_proxies. ---------- files: python3.4-net-tests messages: 226650 nosy: doko priority: normal severity: normal status: open title: tests failing with -uall and http_proxy and https_proxy set Added file: http://bugs.python.org/file36584/python3.4-net-tests _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 9 20:28:41 2014 From: report at bugs.python.org (Riccardo) Date: Tue, 09 Sep 2014 18:28:41 +0000 Subject: [New-bugs-announce] [issue22373] PyArray_FromAny tries to deallocate double: 12 (d) Message-ID: <8938213B-EE74-4E68-B8F4-F30C79AF45B5@vodafone.it> New submission from Riccardo: Hi, I found this strange behaviour of PyArray_FromAny that manifest only inside a parallel region of openmp. 
I am using python 2.7.4 and numpy 1.8.0 *** Reference count error detected an attempt was made to deallocate 12 (d) *** and this is due to the PyArray_FromAny function that apparently does something also to the reference of NPY_DOUBLE, or invisibly uses some kind of global variable, generating a race condition in a parallel implementation? If I put PyArray_FromAny in a critical region the error never pops up. Best Regards, Riccardo P.s. I tried to send you more details and a piece of code but some characters are not accepted by your e-mail, ask me if you want more details ---------- messages: 226655 nosy: Il_Fox priority: normal severity: normal status: open title: PyArray_FromAny tries to deallocate double: 12 (d) _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 9 22:58:12 2014 From: report at bugs.python.org (Terry J. Reedy) Date: Tue, 09 Sep 2014 20:58:12 +0000 Subject: [New-bugs-announce] [issue22374] Replace contextmanager example and improve explanation Message-ID: <1410296292.35.0.34327651623.issue22374@psf.upfronthosting.co.za> New submission from Terry J. Reedy: https://docs.python.org/3/library/contextlib.html#contextlib.contextmanager The current html contextmanager example is 'not recommended' for actual use, because there are better ways to accomplish the same goal. To me, it is also unsatisfactory in that the context management is only metaphorical (conceptual) and not actual. I propose the following as a replacement. It actually manages context and is, I believe, both useful and currently* the best way to accomplish the goal of temporarily monkeypatching a module. It was directly inspired by #20752, see msg226657, though there have been other issues where monkeypatching as a solution has been discussed.
---
from contextlib import contextmanager
import itertools as its

@contextmanager
def mp(ob, attr, new):
    old = getattr(ob, attr)
    setattr(ob, attr, new)
    yield
    setattr(ob, attr, old)

def f(): pass

print(its.count, its.cycle)
with mp(its, 'count', f), mp(its, 'cycle', f):
    print(its.count, its.cycle)
print(its.count, its.cycle)
#
#
#
---

I am aware that the above does not follow the current style, which I dislike, of confusingly mixing together batch file code and interactive input code. I think the above is how the example should be written. It would work even better if Sphinx gave comment lines a different background color. (A '##' prefix could be used to differentiate code comments from commented-out output lines.) In the same section, I find the following paragraph a bit confusing (perhaps it tries to say too much): "contextmanager() uses ContextDecorator so the context managers it creates can be used as decorators as well as in with statements. When used as a decorator, a new generator instance is implicitly created on each function call (this allows the otherwise 'one-shot' context managers created by contextmanager() to meet the requirement that context managers support multiple invocations in order to be used as decorators)." I am guessing that this means, among other things, that ContextDecorator is necessary and sufficient to use mp twice in one with statement. I intentionally added the double use to the example to make this possibility clear. * The only better way I know of would be if mp (spelled out as 'monkeypatch') were considered useful enough to be added to contextlib.
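[Editor's aside, not part of the proposal] One detail worth noting about the proposed mp(): if the body of the with block raises, the exception is delivered at the yield and the old attribute is never restored. A try/finally variant closes that hole:

```python
from contextlib import contextmanager

@contextmanager
def mp(ob, attr, new):
    """Temporarily replace ob.attr with new; restore on exit."""
    old = getattr(ob, attr)
    setattr(ob, attr, new)
    try:
        yield
    finally:
        setattr(ob, attr, old)   # restored even if the with body raises
```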
---------- assignee: docs at python components: Documentation messages: 226661 nosy: docs at python, ncoghlan, terry.reedy priority: normal severity: normal status: open title: Replace contextmanager example and improve explanation versions: Python 2.7, Python 3.4, Python 3.5 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Tue Sep 9 23:50:05 2014 From: report at bugs.python.org (Alan Evangelista) Date: Tue, 09 Sep 2014 21:50:05 +0000 Subject: [New-bugs-announce] [issue22375] urllib2.urlopen().read().splitlines() opening a directory in a FTP server randomly returns incorrect results Message-ID: <1410299405.34.0.167812211508.issue22375@psf.upfronthosting.co.za>

New submission from Alan Evangelista:

Examples in Python command line:

Try 1
-----
>>> import urllib2
>>> urllib2.urlopen('ftp://:@/packages/repodata').read().splitlines()
Output:

Try 2
-----
>>> import urllib2
>>> urllib2.urlopen('ftp://:@/packages/repodata').read().splitlines()
Output: []

If I split the urllib2.urlopen().read().splitlines() statement in 2 statements (urllib2.urlopen() and read().splitlines()), I always get correct results.
>>> import urllib2
>>> a = urllib2.urlopen('ftp://alan_infinite:pass4root@9.8.234.55/packages/repodata')
>>> a.read().splitlines()
Output:

Verified in Python 2.6.6 in RHEL 6.4, Python 2.7.x in RHEL 7 and Python 2.7.3 in Ubuntu 14.04

---------- components: Library (Lib) messages: 226662 nosy: alanoe priority: normal severity: normal status: open title: urllib2.urlopen().read().splitlines() opening a directory in a FTP server randomly returns incorrect results type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Tue Sep 9 23:53:40 2014 From: report at bugs.python.org (Alan Evangelista) Date: Tue, 09 Sep 2014 21:53:40 +0000 Subject: [New-bugs-announce] [issue22376] urllib2.urlopen().read().splitlines() opening a directory in a FTP server randomly returns incorrect result Message-ID: <1410299620.11.0.341288273378.issue22376@psf.upfronthosting.co.za>

New submission from Alan Evangelista:

Examples in Python command line:

Try 1
-----
>>> import urllib2
>>> urllib2.urlopen('ftp://:@/packages/repodata').read().splitlines()
Output:

Try 2
-----
>>> import urllib2
>>> urllib2.urlopen('ftp://:@/packages/repodata').read().splitlines()
Output: []

If I split the urllib2.urlopen().read().splitlines() statement in 2 statements (urllib2.urlopen() and read().splitlines()), I always get correct results.
>>> import urllib2
>>> a = urllib2.urlopen('ftp://:@/packages/repodata')
>>> a.read().splitlines()
Output:

Verified in Python 2.6.6 in RHEL 6.4, Python 2.7.x in RHEL 7 and Python 2.7.3 in Ubuntu 14.04

---------- components: Library (Lib) messages: 226664 nosy: alanoe priority: normal severity: normal status: open title: urllib2.urlopen().read().splitlines() opening a directory in a FTP server randomly returns incorrect result type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Wed Sep 10 00:24:15 2014 From: report at bugs.python.org (Ram Rachum) Date: Tue, 09 Sep 2014 22:24:15 +0000 Subject: [New-bugs-announce] [issue22377] %Z in strptime doesn't match EST and others Message-ID: <1410301455.57.0.912701656833.issue22377@psf.upfronthosting.co.za>

New submission from Ram Rachum:

The documentation for %Z ( https://docs.python.org/3/library/datetime.html#strftime-strptime-behavior ) says it matches `EST` among others, but in practice it doesn't:

Python 3.4.0 (v3.4.0:04f714765c13, Mar 16 2014, 19:25:23) [MSC v.1600 64 bit (AMD64)] on win32
Type "copyright", "credits" or "license()" for more information.
DreamPie 1.2.1
>>> import datetime
>>> datetime.datetime.strptime('2016-12-04 08:00:00 UTC', '%Y-%m-%d %H:%M:%S %Z')
0: datetime.datetime(2016, 12, 4, 8, 0)
>>> datetime.datetime.strptime('2016-12-04 08:00:00 EST', '%Y-%m-%d %H:%M:%S %Z')
Traceback (most recent call last):
  File "", line 1, in
    datetime.datetime.strptime('2016-12-04 08:00:00 EST', '%Y-%m-%d %H:%M:%S %Z')
  File "C:\Python34\lib\_strptime.py", line 500, in _strptime_datetime
    tt, fraction = _strptime(data_string, format)
  File "C:\Python34\lib\_strptime.py", line 337, in _strptime
    (data_string, format))
ValueError: time data '2016-12-04 08:00:00 EST' does not match format '%Y-%m-%d %H:%M:%S %Z'
>>>

---------- components: Library (Lib) messages: 226668 nosy: cool-RR priority: normal severity: normal status: open title: %Z in strptime doesn't match EST and others type: behavior versions: Python 3.4 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Wed Sep 10 03:42:53 2014 From: report at bugs.python.org (jpv) Date: Wed, 10 Sep 2014 01:42:53 +0000 Subject: [New-bugs-announce] [issue22378] SO_MARK support for Linux Message-ID: <1410313373.25.0.741471917751.issue22378@psf.upfronthosting.co.za>

New submission from jpv:

Please add support for SO_MARK in the socket module. From man 7 socket:

SO_MARK (since Linux 2.6.25): Set the mark for each packet sent through this socket (similar to the netfilter MARK target but socket-based). Changing the mark can be used for mark-based routing without netfilter or for packet filtering. Setting this option requires the CAP_NET_ADMIN capability.
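For illustration, a minimal sketch of what using this option could look like. The value 36 for SO_MARK is taken from the Linux asm-generic socket headers and is an assumption here (the fallback is for builds that do not export the constant); as the man page says, setting the mark needs CAP_NET_ADMIN, so the set may be refused.

```python
import socket

# SO_MARK is 36 in the Linux asm-generic socket headers; older Pythons
# did not export it, so fall back to the raw value (assumption).
SO_MARK = getattr(socket, "SO_MARK", 36)

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    # Requires CAP_NET_ADMIN; as an unprivileged user this raises OSError.
    s.setsockopt(socket.SOL_SOCKET, SO_MARK, 1)
    print('mark set')
except OSError:
    print('setting SO_MARK needs CAP_NET_ADMIN')
finally:
    s.close()
```

Either outcome demonstrates the call shape; the mark itself only matters to the kernel's routing and netfilter layers.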
---------- components: Extension Modules files: python-so_mark.patch keywords: patch messages: 226673 nosy: jpv priority: normal severity: normal status: open title: SO_MARK support for Linux type: enhancement versions: Python 2.7, Python 3.4 Added file: http://bugs.python.org/file36588/python-so_mark.patch _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Wed Sep 10 04:54:51 2014 From: report at bugs.python.org (Yongzhi Pan) Date: Wed, 10 Sep 2014 02:54:51 +0000 Subject: [New-bugs-announce] [issue22379] Empty exception message of str.join Message-ID: <1410317691.24.0.990266868285.issue22379@psf.upfronthosting.co.za>

New submission from Yongzhi Pan:

In the 2.7 branch, the exception message of str.join is missing when the argument is a non-sequence:

>>> ' '.join(None)
Traceback (most recent call last):
  File "", line 1, in
TypeError

I fix this with a patch. After this:

>>> ' '.join(None)
Traceback (most recent call last):
  File "", line 1, in
TypeError: can only join an iterable

I also add a test for this case. Can the test also be added to the 3.x branches?
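A quick check one can run on a 3.x interpreter, showing that the 3.x branches already carry the message the patch backports to 2.7:

```python
# str.join on a non-iterable in Python 3 already reports a useful message.
try:
    ' '.join(None)
except TypeError as exc:
    print(exc)  # message mentions that only an iterable can be joined
```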
---------- components: ctypes files: str_join_exception_message.diff keywords: patch messages: 226675 nosy: fossilet priority: normal severity: normal status: open title: Empty exception message of str.join type: enhancement versions: Python 2.7 Added file: http://bugs.python.org/file36589/str_join_exception_message.diff _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 10 07:07:24 2014 From: report at bugs.python.org (Elizabeth Myers) Date: Wed, 10 Sep 2014 05:07:24 +0000 Subject: [New-bugs-announce] [issue22380] Y2K compliance section in FAQ is 14 years too old Message-ID: <1410325644.2.0.420860619285.issue22380@psf.upfronthosting.co.za> New submission from Elizabeth Myers: As seen at https://docs.python.org/3/faq/general.html#is-python-y2k-year-2000-compliant; this is 2014 - Y2K compliance hasn't been a relevant topic for, well, 14 years, and I doubt this is a "frequently asked question" nowadays. The "As of August 2003" portion is even out of date (11 years old!). IMHO this ought to be taken out of the docs, unless this is still something people are asking about. ---------- assignee: docs at python components: Documentation messages: 226678 nosy: Elizacat, docs at python priority: normal severity: normal status: open title: Y2K compliance section in FAQ is 14 years too old type: enhancement versions: Python 2.7, Python 3.1, Python 3.2, Python 3.3, Python 3.4, Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 10 11:34:35 2014 From: report at bugs.python.org (Matthias Klose) Date: Wed, 10 Sep 2014 09:34:35 +0000 Subject: [New-bugs-announce] [issue22381] update zlib in 2.7 to 1.2.8 Message-ID: <1410341675.94.0.179025133528.issue22381@psf.upfronthosting.co.za> New submission from Matthias Klose: I'd like to update zlib in 2.7 to 1.2.8. 
zlib isn't used at all for posix builds, because these require a system-installed zlib. However, I don't know what it is used for on Windows and MacOSX. Please could somebody check? My rationale for the update is that the .exe files require a zlib, and I'd like to build these on Linux with a mingw cross compiler. The patch brings zlib to the status as found on the 3.4 branch.

---------- assignee: ronaldoussoren components: Build, Macintosh, Windows files: zlib-update.diff keywords: patch messages: 226688 nosy: doko, hynek, ned.deily, ronaldoussoren, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: update zlib in 2.7 to 1.2.8 versions: Python 2.7 Added file: http://bugs.python.org/file36591/zlib-update.diff _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Wed Sep 10 20:30:15 2014 From: report at bugs.python.org (william tonkin) Date: Wed, 10 Sep 2014 18:30:15 +0000 Subject: [New-bugs-announce] [issue22382] sqlite3 connection built from apsw connection should raise IntegrityError, not DatabaseError Message-ID: <1410373815.55.0.143596528672.issue22382@psf.upfronthosting.co.za>

New submission from william tonkin:

python
Python 2.7.6 (default, Dec 23 2013, 13:16:30) [GCC 4.1.2 20080704 (Red Hat 4.1.2-54)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>

----- test script -----
import apsw
import sqlite3

print 'sqlite3.version:', sqlite3.version
print 'sqlite3.sqlite_version:', sqlite3.sqlite_version
print 'apsw.apswversion:', apsw.apswversion()

sqlite3_not_from_apsw = sqlite3.connect(':memory:')
apsw_conn = apsw.Connection(':memory:')
sqlite3_from_apsw = sqlite3.connect(apsw_conn)

cursor_not_from_apsw = sqlite3_not_from_apsw.cursor()
cursor_from_apsw = sqlite3_from_apsw.cursor()

sqlite3_not_from_apsw.execute(
    "create table foo ( a timestamp check(datetime(a) is not null))")
try:
    sqlite3_not_from_apsw.execute(
        "insert into foo values (?)", ('',))
except sqlite3.DatabaseError as foo:
    print 'not from apsw, repr(foo)', repr(foo)

sqlite3_from_apsw.execute(
    "create table foo ( a timestamp check(datetime(a) is not null))")
try:
    sqlite3_from_apsw.execute(
        "insert into foo values (?)", ('',))
except sqlite3.DatabaseError as foo:
    print 'from apsw, repr(foo)', repr(foo)
-------------

output:

sqlite3.version: 2.6.0
sqlite3.sqlite_version: 3.8.2
apsw.apswversion: 3.8.2-r1
not from apsw, repr(foo) IntegrityError('CHECK constraint failed: foo',)
from apsw, repr(foo) DatabaseError('CHECK constraint failed: foo',)
--------

Note that when the sqlite3 connection is built from an apsw connection the insert statement that violates the check constraint raises 'DatabaseError' and not 'IntegrityError'.
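The expected behavior, stripped down to a plain (non-apsw) connection, can be shown in a few lines: a CHECK-constraint violation maps to IntegrityError, which is itself a subclass of DatabaseError, so catching DatabaseError works either way but the concrete type differs.

```python
import sqlite3

# The same CHECK-constraint violation through a native sqlite3 connection:
# SQLITE_CONSTRAINT is mapped to IntegrityError.
conn = sqlite3.connect(':memory:')
conn.execute("create table foo (a timestamp check(datetime(a) is not null))")
try:
    conn.execute("insert into foo values (?)", ('',))
except sqlite3.DatabaseError as exc:
    print(type(exc).__name__)  # IntegrityError for a native connection
conn.close()
```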
---------- messages: 226704 nosy: wtonkin priority: normal severity: normal status: open title: sqlite3 connection built from apsw connection should raise IntegrityError, not DatabaseError versions: Python 2.7 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Wed Sep 10 21:05:03 2014 From: report at bugs.python.org (Christian Kleineidam) Date: Wed, 10 Sep 2014 19:05:03 +0000 Subject: [New-bugs-announce] [issue22383] Crazy unicode : How g and ɡ look the same but are two different characters Message-ID: <1410375903.75.0.0668351290623.issue22383@psf.upfronthosting.co.za>

New submission from Christian Kleineidam:

g = 2
i = 2
ɡ = 1
a = g + i
a
>>> 4

Given the font on which this bug tracker runs it's possible to see why a is 4 and not 3. On the other hand there are plenty of fonts (such as Arial, Tahoma or Courier New) that display chr(103) and chr(609) the same way. If a programmer is not aware of the issue it will make it nearly impossible to spot bugs that come up when someone names variables or functions using chr(609). Python should either forbid people from using chr(609) to name functions and variables or treat it as a synonym of chr(103).

---------- components: Unicode messages: 226708 nosy: Christian.Kleineidam, ezio.melotti, haypo priority: normal severity: normal status: open title: Crazy unicode : How g and ɡ
look the same but are two different characters type: behavior versions: Python 3.1, Python 3.2, Python 3.3, Python 3.4, Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 10 22:14:34 2014 From: report at bugs.python.org (Aivar Annamaa) Date: Wed, 10 Sep 2014 20:14:34 +0000 Subject: [New-bugs-announce] [issue22384] Tk.report_callback_exception kills process when run with pythonw.exe Message-ID: <1410380074.61.0.464879354999.issue22384@psf.upfronthosting.co.za> New submission from Aivar Annamaa: Seems that the statement 'sys.stderr.write("Exception in Tkinter callback\n")' in Tk.report_callback_exception fails when the program is run with pythonw.exe, and brings down the whole process. A simple sample is attached. ---------- files: demo.py messages: 226712 nosy: Aivar.Annamaa priority: normal severity: normal status: open title: Tk.report_callback_exception kills process when run with pythonw.exe Added file: http://bugs.python.org/file36594/demo.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 11 01:55:01 2014 From: report at bugs.python.org (Nick Coghlan) Date: Wed, 10 Sep 2014 23:55:01 +0000 Subject: [New-bugs-announce] [issue22385] Allow 'x' and 'X' to accept bytes objects in string formatting Message-ID: <1410393301.25.0.193851592698.issue22385@psf.upfronthosting.co.za> New submission from Nick Coghlan: Inspired by the discussion in issue 9951, I believe it would be appropriate to extend the default handling of the "x" and "X" format characters to accept arbitrary bytes-like objects. 
The processing of these characters would be as follows:

"x": display a-f as lowercase digits
"X": display A-F as uppercase digits
"#": includes 0x prefix
".precision": chunks output, placing a space after every 'precision' bytes
",": uses a comma as the separator, rather than a space

Output order would match binascii.hexlify()

Examples:

format(b"xyz", "x") -> '78797a'
format(b"xyz", "X") -> '78797A'
format(b"xyz", "#x") -> '0x78797a'
format(b"xyz", ".1x") -> '78 79 7a'
format(b"abcdwxyz", ".4x") -> '61626364 7778797a'
format(b"abcdwxyz", "#.4x") -> '0x61626364 0x7778797a'
format(b"xyz", ",.1x") -> '78,79,7a'
format(b"abcdwxyz", ",.4x") -> '61626364,7778797a'
format(b"abcdwxyz", "#,.4x") -> '0x61626364,0x7778797a'

This approach makes it easy to inspect binary data, with the ability to inject regular spaces or commas to improve readability. Those are the basic features needed to support debugging. Anything more complicated than that, and we're starting to want something more like the struct module.

---------- components: Interpreter Core messages: 226733 nosy: ncoghlan priority: normal severity: normal status: open title: Allow 'x' and 'X' to accept bytes objects in string formatting versions: Python 3.5 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Sep 11 03:09:58 2014 From: report at bugs.python.org (Clark Boylan) Date: Thu, 11 Sep 2014 01:09:58 +0000 Subject: [New-bugs-announce] [issue22386] Python 3.4 logging.getLevelName() no longer maps string to level. Message-ID: <1410397798.62.0.061648555693.issue22386@psf.upfronthosting.co.za>

New submission from Clark Boylan:

Prior to http://hg.python.org/cpython/rev/5629bf4c6bba?revcount=60 logging.getLevelName(lvl) would map string lvl args like 'INFO' to the integer level value 20. After this change the string to int level mapping is removed and you can only map level to string.
This removes the only public method for doing this mapping in the logging module.

Old Behavior:
>>> logging.getLevelName('INFO')
20
>>> logging.getLevelName(20)
'INFO'
>>>

New Behavior:
>>> logging.getLevelName('INFO')
'Level INFO'
>>> logging.getLevelName(20)
'INFO'
>>> logging.getLevelName(logging.INFO)
'INFO'
>>>

The old behavior is valuable because it allows you to sanity check log levels provided as strings before attempting to use them. It seems that without this public mapping you have to rely on Logger.setLevel(lvl) throwing a ValueError, which it seems to do only in 2.7 and greater.

---------- components: Library (Lib) messages: 226739 nosy: cboylan priority: normal severity: normal status: open title: Python 3.4 logging.getLevelName() no longer maps string to level. type: behavior versions: Python 3.4 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Sep 11 10:15:45 2014 From: report at bugs.python.org (Antony Lee) Date: Thu, 11 Sep 2014 08:15:45 +0000 Subject: [New-bugs-announce] [issue22387] Making tempfile.NamedTemporaryFile a class Message-ID: <1410423345.31.0.535332998594.issue22387@psf.upfronthosting.co.za>

New submission from Antony Lee:

Currently, tempfile.TemporaryFile and tempfile.NamedTemporaryFile are functions, not classes, despite what their names suggest, preventing subclassing. It would arguably be not so easy to make TemporaryFile a class, as its return value is whatever "_io.open" returns, which can be of various types, but NamedTemporaryFile can trivially be converted into a class by reusing the body of _TemporaryFileWrapper (which is not used elsewhere).

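The situation described can be verified directly: NamedTemporaryFile is a plain function, and the wrapper type it returns is a private class of the tempfile module.

```python
import inspect
import tempfile

# NamedTemporaryFile looks like a class but is a plain function,
# so it cannot be subclassed.
print(inspect.isfunction(tempfile.NamedTemporaryFile))

with tempfile.NamedTemporaryFile() as f:
    # The returned object is an instance of the private wrapper class.
    print(type(f).__name__)
```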
---------- components: Library (Lib) messages: 226752 nosy: Antony.Lee priority: normal severity: normal status: open title: Making tempfile.NamedTemporaryFile a class versions: Python 3.5 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Sep 11 14:31:15 2014 From: report at bugs.python.org (Serhiy Storchaka) Date: Thu, 11 Sep 2014 12:31:15 +0000 Subject: [New-bugs-announce] [issue22388] Unify style of "Contributed by" notes Message-ID: <1410438675.22.0.0998241704025.issue22388@psf.upfronthosting.co.za>

New submission from Serhiy Storchaka:

Here is a patch which converts "Contributed by" notes in the whatsnews to the most prevalent style. This means:

* The previous sentence ends with a period and "Contributed" is capitalized.
* The note ends with a period, which is placed before the closing paren.
* The note is separated from the previous sentence by two spaces or a newline (as any sentences are).

The overwhelming majority of "Contributed by" notes are already written in this style (except in the unfinished 3.5 whatsnew).

---------- assignee: docs at python components: Documentation files: doc_contributed_by_style.patch keywords: patch messages: 226767 nosy: docs at python, r.david.murray, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Unify style of "Contributed by" notes type: enhancement versions: Python 3.4, Python 3.5 Added file: http://bugs.python.org/file36598/doc_contributed_by_style.patch _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Sep 11 16:25:13 2014 From: report at bugs.python.org (Barry A. Warsaw) Date: Thu, 11 Sep 2014 14:25:13 +0000 Subject: [New-bugs-announce] [issue22389] Generalize contextlib.redirect_stdout Message-ID: <1410445513.66.0.871890162607.issue22389@psf.upfronthosting.co.za>

New submission from Barry A.
Warsaw: redirect_stdout is almost exactly what I want, except I want to redirect stderr! redirect_stdout.__init__() should take a 'stream_name' argument (possibly keyword-only) which could be set to 'stderr'. I propose it be implemented as setattr(sys, stream_name, new_target)

---------- components: Library (Lib) messages: 226774 nosy: barry priority: normal severity: normal status: open title: Generalize contextlib.redirect_stdout versions: Python 3.5 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Sep 11 16:30:52 2014 From: report at bugs.python.org (STINNER Victor) Date: Thu, 11 Sep 2014 14:30:52 +0000 Subject: [New-bugs-announce] [issue22390] test.regrtest should complain if a test doesn't remove temporary files Message-ID: <1410445852.03.0.724241204359.issue22390@psf.upfronthosting.co.za>

New submission from STINNER Victor:

A change in test_glob of issue #13968 started to fail because a previous test created temporary files but didn't remove them. test.regrtest should at least emit a warning if the temporary directory used to run tests is not empty before removing it.

---------- components: Tests messages: 226777 nosy: haypo, serhiy.storchaka priority: normal severity: normal status: open title: test.regrtest should complain if a test doesn't remove temporary files versions: Python 3.5 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Sep 11 18:53:29 2014 From: report at bugs.python.org (Kevin Phillips) Date: Thu, 11 Sep 2014 16:53:29 +0000 Subject: [New-bugs-announce] [issue22391] MSILIB truncates last character in summary information stream Message-ID: <1410454409.06.0.0184961474822.issue22391@psf.upfronthosting.co.za>

New submission from Kevin Phillips:

I recently encountered a subtle bug with the msilib module's GetProperty method on the SummaryInformation class.
When retrieving string-typed properties from the stream, the last character in the string gets truncated, replaced by a null byte. I am using Python v3.2.5 64bit on Windows 7, and I've managed to reproduce the error with the following code snippet:

filename = "sample.msp"
patch_database = msilib.OpenDatabase(filename, msilib.MSIDBOPEN_READONLY | msilib.MSIDBOPEN_PATCHFILE)
summary_info = patch_database.GetSummaryInformation(20)
print (summary_info.GetProperty(msilib.PID_REVNUMBER))

The PID_REVNUMBER property returns the patch GUID for the Windows Installer patch file. In this example the GUID is returned properly; however, the character string is supposed to be delimited by curly braces - { }. Examination of the returned byte array shows that the leading curly brace is included but the final curly brace is not. Closer examination also shows that the last character in the byte array is \x00. While it is possible, in this situation, to circumvent the bug by simply removing the trailing bytes and replacing them with a closing curly brace, this may not be so easy to work around for general character strings if the last character in the sequence is not static. As such I'd highly recommend fixing this in the source for the msilib module.

---------- components: Library (Lib), Windows messages: 226789 nosy: Kevin.Phillips priority: normal severity: normal status: open title: MSILIB truncates last character in summary information stream type: behavior versions: Python 3.2 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Sep 11 20:52:32 2014 From: report at bugs.python.org (David Gilman) Date: Thu, 11 Sep 2014 18:52:32 +0000 Subject: [New-bugs-announce] [issue22392] Clarify documentation of __getinitargs__ Message-ID: <1410461552.96.0.589396040073.issue22392@psf.upfronthosting.co.za>

New submission from David Gilman:

Implementations of __getinitargs__ return a tuple of the positional arguments for __init__.
This wasn't initially apparent to me after reading the docs: I thought you were passing a tuple (args, kwargs) that would get called f(*args, **kwargs) and had to go to the pickle implementation to find out what you were supposed to do. The proposed documentation enhancement: mention that you're just supposed to return a tuple of positional args and that it doesn't support kwargs. ---------- assignee: docs at python components: Documentation messages: 226795 nosy: David.Gilman, docs at python priority: normal severity: normal status: open title: Clarify documentation of __getinitargs__ type: enhancement versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 12 00:33:06 2014 From: report at bugs.python.org (Dan O'Reilly) Date: Thu, 11 Sep 2014 22:33:06 +0000 Subject: [New-bugs-announce] [issue22393] multiprocessing.Pool shouldn't hang forever if a worker process dies unexpectedly Message-ID: <1410474786.78.0.264797717105.issue22393@psf.upfronthosting.co.za> New submission from Dan O'Reilly: This is essentially a dupe of issue9205, but it was suggested I open a new issue, since that one ended up being used to fix this same problem in concurrent.futures, and was subsequently closed. Right now, should a worker process in a Pool unexpectedly get terminated while a blocking Pool method is running (e.g. apply, map), the method will hang forever. This isn't a normal occurrence, but it does occasionally happen (either because someone sends a SIGTERM, or because of a bug in the interpreter or a C-extension). It would be preferable for multiprocessing to follow the lead of concurrent.futures.ProcessPoolExecutor when this happens, and abort all running tasks and close down the Pool. Attached is a patch that implements this behavior. 
Should a process in a Pool unexpectedly exit (meaning, *not* because of hitting the maxtasksperchild limit), the Pool will be closed/terminated and all cached/running tasks will raise a BrokenProcessPool exception. These changes also prevent the Pool from going into a bad state if the "initializer" function raises an exception (previously, the pool would end up infinitely starting new processes, which would immediately die because of the exception). One concern with the patch: The way timings are altered with these changes, the Pool seems to be particularly susceptible to issue6721 in certain cases. If processes in the Pool are being restarted due to maxtasksperchild just as the worker is being closed or joined, there is a chance the worker will be forked while some of the debug logging inside of Pool is running (and holding locks on either sys.stdout or sys.stderr). When this happens, the worker deadlocks on startup, which will hang the whole program. I believe the current implementation is susceptible to this as well, but I could reproduce it much more consistently with this patch. I think its rare enough in practice that it shouldn't prevent the patch from being accepted, but thought I should point it out. (I do think issue6721 should be addressed, or at the very least internal I/O locks should always reset after forking.) 
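The concurrent.futures behavior the report wants to port to multiprocessing.Pool can be demonstrated in a few lines: when a ProcessPoolExecutor worker dies abruptly, pending futures raise BrokenProcessPool instead of hanging. The example submits os._exit itself (a picklable builtin), simulating a worker killed mid-task.

```python
import os
from concurrent.futures import ProcessPoolExecutor
from concurrent.futures.process import BrokenProcessPool

if __name__ == '__main__':
    with ProcessPoolExecutor(max_workers=1) as executor:
        # The worker calls os._exit(1) and dies without any cleanup,
        # much like a process hit by SIGKILL or a crashing C extension.
        future = executor.submit(os._exit, 1)
        try:
            future.result()
        except BrokenProcessPool:
            print('BrokenProcessPool raised; the executor did not hang')
```

With multiprocessing.Pool before the attached patch, the equivalent apply() call would block forever instead of raising.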
---------- components: Library (Lib) files: multiproc_broken_pool.diff keywords: patch messages: 226805 nosy: dan.oreilly, jnoller, pitrou, sbt priority: normal severity: normal status: open title: multiprocessing.Pool shouldn't hang forever if a worker process dies unexpectedly type: enhancement versions: Python 3.5 Added file: http://bugs.python.org/file36603/multiproc_broken_pool.diff _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 12 16:37:26 2014 From: report at bugs.python.org (Brett Cannon) Date: Fri, 12 Sep 2014 14:37:26 +0000 Subject: [New-bugs-announce] [issue22394] Update documentation building to use venv and pip Message-ID: <1410532646.93.0.821580086561.issue22394@psf.upfronthosting.co.za> New submission from Brett Cannon: Now that we have ensurepip, is there any reason to not have the Doc/ Makefile create a venv for building the docs instead of requiring people to install sphinx into either their global Python interpreter or some venv outside of their checkout? Basically it would be like going back to the old Makefile of checking out the code but instead do a better isolation job and let pip manage fetching everything, updating the projects, etc. 
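A sketch of the venv half of such a Makefile target, written with the stdlib venv module; the directory name and the follow-up pip/sphinx-build commands shown in comments are assumptions, not the actual Doc/ build layout.

```python
import os
import tempfile
import venv

# Create an isolated environment, as a hypothetical "make venv" target might.
# (with_pip=True would bootstrap pip via ensurepip; skipped here to stay offline.)
target = os.path.join(tempfile.mkdtemp(), 'doc-venv')
venv.create(target, with_pip=False)

bindir = 'Scripts' if os.name == 'nt' else 'bin'
exe = 'python.exe' if os.name == 'nt' else 'python'
print(os.path.exists(os.path.join(target, bindir, exe)))

# The Makefile rule would then install and run sphinx from the venv, e.g.:
#   doc-venv/bin/python -m pip install sphinx
#   doc-venv/bin/sphinx-build -b html . build/html
```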
---------- assignee: docs at python components: Documentation messages: 226821 nosy: brett.cannon, docs at python priority: low severity: normal stage: needs patch status: open title: Update documentation building to use venv and pip type: enhancement versions: Python 3.5 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Fri Sep 12 18:49:47 2014 From: report at bugs.python.org (Justin Foo) Date: Fri, 12 Sep 2014 16:49:47 +0000 Subject: [New-bugs-announce] [issue22395] test_pathlib error for complex symlinks on Windows Message-ID: <1410540587.88.0.335092451275.issue22395@psf.upfronthosting.co.za>

New submission from Justin Foo:

The _check_complex_symlinks function compares paths for string equality instead of using the assertSame helper function. Patch attached.

---------- components: Tests messages: 226828 nosy: jfoo priority: normal severity: normal status: open title: test_pathlib error for complex symlinks on Windows type: enhancement versions: Python 3.5 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Fri Sep 12 20:20:03 2014 From: report at bugs.python.org (David Edelsohn) Date: Fri, 12 Sep 2014 18:20:03 +0000 Subject: [New-bugs-announce] [issue22396] AIX posix_fadvise and posix_fallocate Message-ID: <1410546003.27.0.984640019214.issue22396@psf.upfronthosting.co.za>

New submission from David Edelsohn:

As with Solaris and Issue10812, the test_posix fadvise and fallocate tests fail on AIX. Python is compiled with _LARGE_FILES, which changes the function signatures for posix_fadvise and posix_fallocate so that off_t is "long long" on a 32-bit system, passed in two registers. The Python call to those functions does not place the arguments in the correct registers, causing an EINVAL error. This patch fixes the failures in a similar way to the Solaris ZFS kludge for Issue10812.
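For reference, the call under test looks like this from Python; posix_fadvise takes two off_t arguments (offset and length), and the AIX failure is about how those 64-bit values are split across registers on a 32-bit _LARGE_FILES build. On a working platform the call is just an advisory hint to the kernel.

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
try:
    os.write(fd, b'x' * 8192)
    if hasattr(os, 'posix_fadvise'):  # not exposed on every platform
        # Hint that the range [0, 8192) will be needed soon.
        os.posix_fadvise(fd, 0, 8192, os.POSIX_FADV_WILLNEED)
        print('fadvise ok')
finally:
    os.close(fd)
    os.unlink(path)
```

On an affected AIX build, this call would fail with EINVAL even for valid arguments.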
---------- components: Tests files: 10812_aix.patch keywords: patch messages: 226834 nosy: David.Edelsohn, pitrou priority: normal severity: normal status: open title: AIX posix_fadvise and posix_fallocate type: behavior versions: Python 3.3, Python 3.4, Python 3.5 Added file: http://bugs.python.org/file36611/10812_aix.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 12 20:44:13 2014 From: report at bugs.python.org (David Edelsohn) Date: Fri, 12 Sep 2014 18:44:13 +0000 Subject: [New-bugs-announce] [issue22397] test_socket failure on AIX Message-ID: <1410547453.79.0.287492326275.issue22397@psf.upfronthosting.co.za> New submission from David Edelsohn: AIX has the same test_socket problem with FDPassSeparate as Darwin in Issue12958 so skip some tests. ---------- components: Library (Lib) files: 12958_aix.patch keywords: patch messages: 226837 nosy: David.Edelsohn, pitrou priority: normal severity: normal status: open title: test_socket failure on AIX type: behavior versions: Python 3.3, Python 3.4, Python 3.5 Added file: http://bugs.python.org/file36612/12958_aix.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 12 20:46:13 2014 From: report at bugs.python.org (Steve Dower) Date: Fri, 12 Sep 2014 18:46:13 +0000 Subject: [New-bugs-announce] [issue22398] Tools/msi enhancements for 2.7 Message-ID: <1410547573.26.0.0466053668999.issue22398@psf.upfronthosting.co.za> New submission from Steve Dower: This patch has some minor changes to the build scripts for Python 2.7 on Windows. They're fully tested on my build machine, but I wanted someone who's more familiar with how the buildbots are set up to either confirm that the Tools/msi scripts are not used or that the changes won't have an impact. The Tools/msi/msi.py changes to use environment variables are mostly to make my life easier. 
Apparently the old way was to actually modify the file before making an official release... The Tools/msi/msilib.py fix is necessary because of some new files that were added for 2.7.9. Technically it's a release blocker, though it won't actually hold anything up since I spotted it. ---------- assignee: steve.dower components: Installation, Windows files: Tool_msi_27.patch keywords: patch messages: 226839 nosy: pitrou, steve.dower, tim.golden, zach.ware priority: normal severity: normal stage: patch review status: open title: Tools/msi enhancements for 2.7 type: enhancement versions: Python 2.7 Added file: http://bugs.python.org/file36613/Tool_msi_27.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 12 22:27:48 2014 From: report at bugs.python.org (Philippe Dessauw) Date: Fri, 12 Sep 2014 20:27:48 +0000 Subject: [New-bugs-announce] [issue22399] Doc: missing anchor for dict in library/functions.html Message-ID: <1410553668.99.0.783608222954.issue22399@psf.upfronthosting.co.za> New submission from Philippe Dessauw: There is a missing anchor for the dict functions in the documentation at library/functions.html. It is present in the documentation of all Python versions. It seems to impact cross-referencing in Sphinx (using intersphinx).
---------- assignee: docs at python components: Documentation messages: 226845 nosy: docs at python, pdessauw priority: normal severity: normal status: open title: Doc: missing anchor for dict in library/functions.html type: enhancement versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 14 01:31:44 2014 From: report at bugs.python.org (Jeremy Kloth) Date: Sat, 13 Sep 2014 23:31:44 +0000 Subject: [New-bugs-announce] [issue22400] Stable API broken on Windows for PyUnicode_* Message-ID: <1410651104.92.0.853615369335.issue22400@psf.upfronthosting.co.za> New submission from Jeremy Kloth: When using any of the PyUnicode_* functions in an extension module compiled with Py_LIMITED_API defined, the resulting module cannot be imported due to: ImportError: DLL load failed: The specified procedure could not be found. Upon investigation, the error is in the EXPORTS for PC\python3.def. The PyUnicode_* functions still refer to the old PyUnicodeUCS2_* variants. ---------- components: Extension Modules, Windows messages: 226856 nosy: jkloth, loewis, steve.dower, zach.ware priority: normal severity: normal status: open title: Stable API broken on Windows for PyUnicode_* versions: Python 3.3, Python 3.4, Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 14 07:46:38 2014 From: report at bugs.python.org (paul j3) Date: Sun, 14 Sep 2014 05:46:38 +0000 Subject: [New-bugs-announce] [issue22401] argparse: 'resolve' conflict handler damages the actions of the parent parser Message-ID: <1410673598.78.0.274465754789.issue22401@psf.upfronthosting.co.za> New submission from paul j3: When there's a conflict involving an argument that was added via 'parents', and the conflict handler is 'resolve', the action in the parent parser may be damaged, rendering that parent unsuitable for further use. 
In this example, 2 parents have the same '--config' argument: parent1 = argparse.ArgumentParser(add_help=False) parent1.add_argument('--config') parent2 = argparse.ArgumentParser(add_help=False) parent2.add_argument('--config') parser = argparse.ArgumentParser(parents=[parent1,parent2], conflict_handler='resolve') The actions of the 3 parsers are (from the ._actions list): (id, dest, option_strings) parent1: [(3077384012L, 'config', [])] # empty option_strings parent2: [(3076863628L, 'config', ['--config'])] parser: [(3076864428L, 'help', ['-h', '--help']), (3076863628L, 'config', ['--config'])] # same id The 'config' Action from 'parent1' is first copied to 'parser' by reference (this is important). When 'config' from 'parent2' is copied, there's a conflict. '_handle_conflict_resolve()' attempts to remove the first Action, so it can add the second. But in the process it ends up deleting the 'option_strings' values from the original action. So now 'parent1' has an action in its 'optionals' argument group with an empty option_strings list. It would display as an 'optionals' but parse as a 'positionals'. 'parent1' can no longer be safely used as a parent for another (sub)parser, nor used as a parser itself. The same sort of thing would happen, if, as suggested in the documentation: "Sometimes (e.g. when using parents_) it may be useful to simply override any older arguments with the same option string." In test_argparse.py, 'resolve' is only tested once, with a simple case of two 'add_argument' statements. The 'parents' class tests a couple of cases of conflicting actions (for positionals and optionals), but does nothing with the 'resolve' handler. 
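The damage described above can be reproduced directly; this is an illustrative sketch that inspects the private `_actions` attribute:

```python
import argparse

parent1 = argparse.ArgumentParser(add_help=False)
parent1.add_argument('--config')
parent2 = argparse.ArgumentParser(add_help=False)
parent2.add_argument('--config')

# The 'resolve' handler strips '--config' from the action object that
# parent1 still holds by reference, leaving it with no option strings.
argparse.ArgumentParser(parents=[parent1, parent2],
                        conflict_handler='resolve')

action = parent1._actions[0]
print(action.dest, action.option_strings)  # config []
```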
------------------------------ Possible fixes:
- change the documentation to warn against reusing such a parent parser
- test the 'resolve' conflict handler more thoroughly
- rewrite this conflict handler so it does not modify the action in the parent
- possibly change the 'parents' mechanism so it does a deep copy of actions.

References:
http://stackoverflow.com/questions/25818651/argparse-conflict-resolver-for-options-in-subcommands-turns-keyword-argument-int
http://bugs.python.org/issue15271 argparse: repeatedly specifying the same argument ignores the previous ones
http://bugs.python.org/issue19462 Add remove_argument() method to argparse.ArgumentParser
http://bugs.python.org/issue15428 add "Name Collision" section to argparse docs
---------- assignee: docs at python components: Documentation, Library (Lib), Tests messages: 226862 nosy: docs at python, paul.j3 priority: normal severity: normal status: open title: argparse: 'resolve' conflict handler damages the actions of the parent parser type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 14 15:46:54 2014 From: report at bugs.python.org (Martin Panter) Date: Sun, 14 Sep 2014 13:46:54 +0000 Subject: [New-bugs-announce] [issue22406] uu-codec trailing garbage workaround is Python 2 code Message-ID: <1410702414.35.0.316849103636.issue22406@psf.upfronthosting.co.za> New submission from Martin Panter: The handler for the "Trailing garbage" error for "uu-codec" uses Python 2 code, while the copy in the "uu" module has the correct Python 3 code. Please change the line at https://hg.python.org/cpython/file/775453a7b85d/Lib/encodings/uu_codec.py#l57 to look like https://hg.python.org/cpython/file/775453a7b85d/Lib/uu.py#l148 In particular, drop ord() and use floor division. Better yet, maybe the code could be reused so that there is less duplication!
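A sketch of the suggested Python 3 fix as a standalone helper (the helper name is illustrative; the actual fix belongs in uu_codec.py): in Python 3, indexing a bytes object yields an int, so ord() is unnecessary, and the division must be floor division to keep the result an integer.

```python
import binascii

def a2b_uu_lenient(line):
    """Decode one uuencoded line, tolerating trailing garbage."""
    try:
        return binascii.a2b_uu(line)
    except binascii.Error:
        # Workaround for broken uuencoders: keep only the characters the
        # length byte says are needed.  line[0] is already an int in
        # Python 3, so no ord(); // keeps the slice index an integer.
        nbytes = (((line[0] - 32) & 63) * 4 + 5) // 3
        return binascii.a2b_uu(line[:nbytes])

print(a2b_uu_lenient(b"!,___\n"))  # b'3'
```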
Demonstration:

>>> codecs.decode(b"begin 666 \n!,___\n \nend\n", "uu-codec")
Traceback (most recent call last):
  File "/usr/lib/python3.4/encodings/uu_codec.py", line 54, in uu_decode
    data = binascii.a2b_uu(s)
binascii.Error: Trailing garbage

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.4/encodings/uu_codec.py", line 57, in uu_decode
    nbytes = (((ord(s[0])-32) & 63) * 4 + 5) / 3
TypeError: ord() expected string of length 1, but int found

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: decoding with 'uu-codec' codec failed (TypeError: ord() expected string of length 1, but int found)

>>> codecs.decode(b"begin 666 \n!,P \n \nend\n", "uu-codec")
b'3'  # Expected output for both cases

---------- components: Library (Lib) messages: 226870 nosy: vadmium priority: normal severity: normal status: open title: uu-codec trailing garbage workaround is Python 2 code type: behavior versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 14 17:43:18 2014 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 14 Sep 2014 15:43:18 +0000 Subject: [New-bugs-announce] [issue22407] re.LOCALE is nonsensical for Unicode Message-ID: <1410709398.9.0.134852560063.issue22407@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: The current implementation of re.LOCALE support for Unicode strings is nonsensical. It works correctly only on Latin1 locales (because the Unicode string is interpreted as a Latin1-decoded bytes string; all characters outside the UCS1 range are considered non-word characters); on other locales it gives strange and useless results.
>>> import re, locale
>>> locale.setlocale(locale.LC_CTYPE, 'ru_RU.cp1251')
'ru_RU.cp1251'
>>> re.match(br'\w', 'µ'.encode('cp1251'), re.L)
<_sre.SRE_Match object; span=(0, 1), match=b'\xb5'>
>>> re.match(r'\w', 'µ', re.L)
<_sre.SRE_Match object; span=(0, 1), match='µ'>
>>> re.match(br'\w', 'ё'.encode('cp1251'), re.L)
<_sre.SRE_Match object; span=(0, 1), match=b'\xb8'>
>>> re.match(r'\w', 'ё', re.L)

Proposed patch fixes re.LOCALE support for Unicode strings. It uses the wide-character equivalents of the C character functions (towlower(), iswalpha(), etc). The problem is that these functions do not exist in C89; they were introduced only in C99. GCC understands them; we should check other compilers. However, these functions are already used on FreeBSD and MacOS. ---------- components: Extension Modules, Library (Lib), Regular Expressions files: re_unicode_locale.patch keywords: patch messages: 226871 nosy: ezio.melotti, mrabarnett, pitrou, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: re.LOCALE is nonsensical for Unicode type: behavior versions: Python 2.7, Python 3.4, Python 3.5 Added file: http://bugs.python.org/file36615/re_unicode_locale.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 14 17:44:55 2014 From: report at bugs.python.org (Drekin) Date: Sun, 14 Sep 2014 15:44:55 +0000 Subject: [New-bugs-announce] [issue22408] Tkinter doesn't handle Unicode key events on Windows Message-ID: <1410709495.77.0.324762298735.issue22408@psf.upfronthosting.co.za> New submission from Drekin: Key events produced on Windows handle Unicode incorrectly when the Unicode character is produced by a dead-key combination. On my keyboard, (AltGr + M, a) produces several key events, the last of which contains char=="a"; however, it should contain "?". Also the dead-key sequence (\, a) should produce event.char=="?", however it contains "?".
---------- components: Tkinter, Unicode, Windows messages: 226872 nosy: Drekin, ezio.melotti, haypo priority: normal severity: normal status: open title: Tkinter doesn't handle Unicode key events on Windows versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 14 17:55:46 2014 From: report at bugs.python.org (=?utf-8?q?Brynjar_Sm=C3=A1ri_Bjarnason?=) Date: Sun, 14 Sep 2014 15:55:46 +0000 Subject: [New-bugs-announce] [issue22409] namedtuples bug between 3.3.2 and 3.4.1 Message-ID: <1410710146.94.0.736559829031.issue22409@psf.upfronthosting.co.za> New submission from Brynjar Smári Bjarnason: In Python 3.4.1 installed with Anaconda. I tried the following (expecting an OrderedDict as result):

>>> from collections import namedtuple
>>> NT = namedtuple("NT", ["a", "b"])
>>> nt = NT(1, 2)
>>> print(vars(nt))
{}

so the result is an empty dict. In Python 3.3.2 (downgraded in the same Anaconda environment) the result is:

>>> print(vars(nt))
OrderedDict([('a', 1), ('b', 2)])

---------- components: Distutils messages: 226873 nosy: binnisb, dstufft, eric.araujo priority: normal severity: normal status: open title: namedtuples bug between 3.3.2 and 3.4.1 type: behavior versions: Python 3.3, Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 14 18:23:23 2014 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 14 Sep 2014 16:23:23 +0000 Subject: [New-bugs-announce] [issue22410] Locale dependent regexps on different locales Message-ID: <1410711803.67.0.827066928752.issue22410@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Locale-specific case-insensitive regular expression matching works only when the pattern was compiled on the same locale as used for matching. Due to caching this can cause unexpected results.
Attached script demonstrates this (it requires two locales: ru_RU.koi8-r and ru_RU.cp1251). The output is:

locale ru_RU.koi8-r
b'1\xa3' ('1?') matches b'1\xb3' ('1?')
b'1\xa3' ('1?') doesn't match b'1\xbc' ('1?')
locale ru_RU.cp1251
b'1\xa3' ('1?') doesn't match b'1\xb3' ('1?')
b'1\xa3' ('1?') matches b'1\xbc' ('1?')
locale ru_RU.cp1251
b'2\xa3' ('2?') doesn't match b'2\xb3' ('2?')
b'2\xa3' ('2?') matches b'2\xbc' ('2?')
locale ru_RU.koi8-r
b'2\xa3' ('2?') doesn't match b'2\xb3' ('2?')
b'2\xa3' ('2?') matches b'2\xbc' ('2?')

b'\xa3' matches b'\xb3' on the KOI8-R locale if the pattern was compiled on the KOI8-R locale, and matches b'\xbc' if the pattern was compiled on the CP1251 locale. I see three possible ways to solve this issue:

1. Avoid caching of locale-dependent case-insensitive patterns. This will definitely decrease the performance of locale-dependent case-insensitive regexps (if the user doesn't use their own caching) and may slightly decrease the performance of other regexps.

2. Clear the precompiled regexp cache on every locale change. This may look simpler, but it is vulnerable to locale changes from extensions.

3. Do not lowercase characters at compile time (in locale-dependent case-insensitive patterns). This requires introducing a new opcode for case-insensitive matching, or at least rewriting the implementation of the current opcodes (less efficient). On the other hand, this is a more correct implementation than the current one. The problem is that this is incompatible with those distributions which update only the Python library but not the statically linked binary (e.g. Vim with Python support). Maybe there are some workarounds.
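A user-level mitigation sketch for the caching problem (the helper name is illustrative): clear re's pattern cache whenever the locale is changed, so locale-dependent patterns are recompiled under the new locale.

```python
import locale
import re

def set_ctype_locale(name):
    """Change LC_CTYPE and drop re's cached compiled patterns."""
    result = locale.setlocale(locale.LC_CTYPE, name)
    re.purge()  # invalidate the compiled-pattern cache
    return result
```

This trades the caching speedup for correctness, and as noted above it does not help against locale changes made behind Python's back by extension modules.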
---------- components: Extension Modules, Library (Lib), Regular Expressions files: re_locale_caching.py messages: 226874 nosy: ezio.melotti, mrabarnett, pitrou, serhiy.storchaka priority: normal severity: normal status: open title: Locale dependent regexps on different locales type: behavior versions: Python 2.7, Python 3.4, Python 3.5 Added file: http://bugs.python.org/file36616/re_locale_caching.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 14 19:57:04 2014 From: report at bugs.python.org (Joakim Karlsson) Date: Sun, 14 Sep 2014 17:57:04 +0000 Subject: [New-bugs-announce] [issue22411] Embedding Python on Windows Message-ID: <1410717424.69.0.586099783725.issue22411@psf.upfronthosting.co.za> New submission from Joakim Karlsson: When I embed Python 3.4 in an existing app, I run into a few issues when our app is built in debug mode. I build against the headers, libs and dlls that I get when installing python, I don't build python myself.

1. When I include python.h it will, via pyconfig.h, automatically attempt to link to python34_d.lib. This lib, along with its accompanying dll, is not included in the python installation. I'd rather explicitly link to the release lib than have header files implicitly select libs based on the _DEBUG flag. I ended up killing the #pragma comment(lib...) statements in pyconfig.h, and I can now link with the lib included.

2. Now when I build, I get a linker error telling me that '__imp__Py_RefTotal' and '__imp_Py_NegativeRefCount' cannot be found. This seems to be caused by Py_DEBUG being defined automatically in pyconfig.h as a result of _DEBUG being defined for my application. This causes the reference counting macros to use functionality that is only present in the debug version of python34.dll. I ended up including pyconfig.h first, undefed Py_DEBUG, and then including python.h. This seems rather clunky.
Keeping with "explicit is better than implicit", wouldn't it be better to having to explicitly link to the desired lib from the embedding app, and explicitly set Py_DEBUG without having it inferred from _DEBUG? That way the provided headers and libs would work right out of the box while still leaving me the option to get the debug behaviour if I need to. ---------- components: Windows messages: 226882 nosy: Joakim.Karlsson priority: normal severity: normal status: open title: Embedding Python on Windows type: compile error versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 14 23:15:35 2014 From: report at bugs.python.org (Martin Teichmann) Date: Sun, 14 Sep 2014 21:15:35 +0000 Subject: [New-bugs-announce] [issue22412] Towards an asyncio-enabled command line Message-ID: <1410729335.06.0.363715018256.issue22412@psf.upfronthosting.co.za> New submission from Martin Teichmann: This patch is supposed to facilitate using the asyncio package on the command line. It contains two things: First, a coroutine version of builtin.input, so that it can be called while a asyncio event loop is running. Secondly, it adds a new flag to builtin.compile which allows to use the yield and yield from statements on the module level, making compile always return a generator. 
The latter part will enable us to run commands like the following on the command line:

>>> from asyncio import sleep
>>> yield from sleep(3)

(This has been discussed on python-ideas, https://mail.python.org/pipermail/python-ideas/2014-September/029293.html) ---------- components: Interpreter Core, asyncio files: patch messages: 226887 nosy: Martin.Teichmann, gvanrossum, haypo, yselivanov priority: normal severity: normal status: open title: Towards an asyncio-enabled command line type: enhancement versions: Python 3.5 Added file: http://bugs.python.org/file36617/patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 15 05:04:03 2014 From: report at bugs.python.org (Martin Panter) Date: Mon, 15 Sep 2014 03:04:03 +0000 Subject: [New-bugs-announce] [issue22413] Bizarre StringIO(newline="\r\n") translation Message-ID: <1410750243.44.0.979805676788.issue22413@psf.upfronthosting.co.za> New submission from Martin Panter: I noticed that the newline translation in the io.StringIO class does not behave as I would expect:

>>> text = "NL\n" "CRLF\r\n" "CR\r" "EOF"
>>> s = StringIO(text, newline="\r\n")
>>> s.getvalue()
'NL\r\nCRLF\r\r\nCR\rEOF'  # Why is this not just equal to 'text'?
>>> tuple(s)
('NL\r\n', 'CRLF\r\r\n', 'CR\rEOF')  # Too many lines, butchered EOL sequence
>>> tuple(TextIOWrapper(BytesIO(text.encode("ascii")), "ascii", newline="\r\n"))
('NL\nCRLF\r\n', 'CR\rEOF')  # This seems more reasonable

Although I have never had a use for newline="\r", it also seems broken:

>>> tuple(StringIO(text, newline="\r"))
('NL\r', 'CRLF\r', '\r', 'CR\r', 'EOF')  # Way too many lines
>>> tuple(TextIOWrapper(BytesIO(text.encode("ascii")), "ascii", newline="\r"))
('NL\nCRLF\r', '\nCR\r', 'EOF')

The other newline options ("\n", "", and None) seem to behave correctly though.
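For comparison, disabling translation entirely does round-trip the buffer (a quick check):

```python
from io import StringIO

text = "NL\n" "CRLF\r\n" "CR\r" "EOF"
# newline="" disables translation: getvalue() returns the input unchanged.
s = StringIO(text, newline="")
print(s.getvalue() == text)  # True
```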
There seem to be quite a few bug reports to do with newline translation in StringIO, but I couldn't see anything specifically about this one. However the issue was mentioned at . I noticed there are test cases which appear to bless the current behaviour, as seen in the patch for Issue 20498. IMO these tests are wrong. ---------- components: IO messages: 226895 nosy: vadmium priority: normal severity: normal status: open title: Bizarre StringIO(newline="\r\n") translation type: behavior versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 15 08:11:27 2014 From: report at bugs.python.org (VUIUI) Date: Mon, 15 Sep 2014 06:11:27 +0000 Subject: [New-bugs-announce] [issue22414] I'd like to add you to my professional network on LinkedIn Message-ID: <1295080853.6845761.1410761481019.JavaMail.app@ela4-app2681.prod> New submission from VUIUI: Hi, I'd like to add you to my professional network on LinkedIn. - S?i Accept: http://www.linkedin.com/blink?simpleRedirect=38RdP0VczsRe3gQd38ScjsNejkZh4BKrSBQonhFtCVF9z5KjmdKiiQJfnBBiShBsC5EsOpQsSlRpRZBt6BSrCAZqSkConhzbmlQqnpKqiRQsSlRpORIrmkZpSVFqSdxsDgCpnhFtCV9pSlipn9Mfm4CsPgJc3ByumkPc6AJcSlKoT4PbjRBfP9SbSkLrmZzbCVFp6lHrCBIbDtTtOYLeDdMt7hE&msgID=I7888360485_1&markAsRead= You received an invitation to connect. LinkedIn will use your email address to make suggestions to our members in features like People You May Know.
Unsubscribe here: http://www.linkedin.com/blink?simpleRedirect=pT9Lhj8BrCZEt7BMhj8BsStRoz0Q9nhOrT1BszRIqm5JpipQs64MczxGcTdSpP9IcDoZp6Bx9z4Sc30OfmhF9z4JfmhFripPd2QMem9VpjcMqiQPpmVzsjcJfmhFpipQsSlRpRZBt6BSrCAZqSkCkjoPp4l7q5p6sCR6kk4ZrClHrRhAqmQCsSVRfngCsPgJc3ByumkPc6AJcSlKoT4PbjRBfP9SbSkLrmZzbCVFp6lHrCBIbDtTtOYLeDdMt7hE&msgID=I7888360485_1&markAsRead= Learn why we included this at the following link: http://www.linkedin.com/blink?simpleRedirect=e3wTd3RAimlIoSBQsC4Ct7dBtmtvpnhFtCVFfmJB9CNOlmlzqnpOpldOpmRLt7dRoPRx9DcQbj0VoDBBcP1FbjdBrCdNcOQZpjYOtyZBbSRLoOVKqmhBqSVFr2VTtTsLbPFMt7hE&msgID=I7888360485_1&markAsRead= © 2014, LinkedIn Corporation. 2029 Stierlin Ct. Mountain View, CA 94043, USA ---------- messages: 226897 nosy: VUIUI priority: normal severity: normal status: open title: I'd like to add you to my professional network on LinkedIn _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 15 10:07:14 2014 From: report at bugs.python.org (Serhiy Storchaka) Date: Mon, 15 Sep 2014 08:07:14 +0000 Subject: [New-bugs-announce] [issue22415] Fix re debugging output Message-ID: <1410768434.94.0.831625656606.issue22415@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Proposed patch fixes some issues in debugging output of the compiling of regular expression with the re.DEBUG flag. 1. Fixed the handling of the GROUPREF_EXISTS opcode. Example: >>> re.compile(r'(ab)(?(1)cd|ef)', re.DEBUG) Before patch ("yes" and "no" branches are not separated): subpattern 1 literal 97 literal 98 subpattern None groupref_exists 1 literal 99 literal 100 literal 101 literal 102 After patch: subpattern 1 literal 97 literal 98 subpattern None groupref_exists 1 literal 99 literal 100 or literal 101 literal 102 2. Got rid of trailing spaces in Python 3. 3. Used named opcode constants instead of inlined strings. 4. Simplified and modernized the code. 5. Updated test to cover more code. 
---------- components: Library (Lib), Regular Expressions files: re_debug.patch keywords: patch messages: 226903 nosy: ezio.melotti, mrabarnett, pitrou, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Fix re debugging output type: behavior versions: Python 2.7, Python 3.4, Python 3.5 Added file: http://bugs.python.org/file36620/re_debug.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 15 10:42:37 2014 From: report at bugs.python.org (Serhiy Storchaka) Date: Mon, 15 Sep 2014 08:42:37 +0000 Subject: [New-bugs-announce] [issue22416] Pickling compiled re patterns Message-ID: <1410770557.9.0.129599945661.issue22416@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: When a compiled pattern is pickled, re._compile() is used to reconstruct it. re._compile() is a private function and could be removed in the (long-term) future. I propose to use re.compile() instead. ---------- components: Library (Lib), Regular Expressions keywords: easy messages: 226905 nosy: ezio.melotti, mrabarnett, pitrou, serhiy.storchaka priority: normal severity: normal status: open title: Pickling compiled re patterns type: enhancement versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 15 14:34:29 2014 From: report at bugs.python.org (Nick Coghlan) Date: Mon, 15 Sep 2014 12:34:29 +0000 Subject: [New-bugs-announce] [issue22417] PEP 476: verify HTTPS certificates by default Message-ID: <1410784469.95.0.475937020841.issue22417@psf.upfronthosting.co.za> New submission from Nick Coghlan: Attached minimal patch updates http.client.HTTPSConnection to validate certs by default and adjusts test.test_httplib accordingly. It doesn't currently include any docs changes, or changes to urllib.
The process wide "revert to the old behaviour" hook is to monkeypatch the ssl module: ssl._create_default_https_context = ssl._create_unverified_context To monkeypatch the stdlib to validate *everything* (this one isn't new, just noting it for the record): ssl._create_stdlib_context = ssl.create_default_context ---------- files: pep476_minimal_implementation.diff keywords: patch messages: 226912 nosy: alex, larry, ncoghlan priority: high severity: normal status: open title: PEP 476: verify HTTPS certificates by default type: enhancement Added file: http://bugs.python.org/file36624/pep476_minimal_implementation.diff _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 15 15:36:07 2014 From: report at bugs.python.org (Jason Nadeau) Date: Mon, 15 Sep 2014 13:36:07 +0000 Subject: [New-bugs-announce] [issue22418] ipaddress.py new IPv6 Method for Solicited Multicast Address Message-ID: <1410788167.34.0.688511704657.issue22418@psf.upfronthosting.co.za> New submission from Jason Nadeau: I found it was useful to me to calculate and return an IPv6Address instance of the solicited multicast address for a particular IPv6Address. So I have created patch to do so. I am pretty new to programming in general so critiques are totally welcome. 
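For context (this is not the attached patch, and the function name is illustrative), the computation the method performs follows RFC 4291 section 2.7.1: take the low 24 bits of the unicast address and append them to the ff02::1:ff00:0/104 prefix.

```python
import ipaddress

def solicited_node_multicast(addr):
    """Sketch: derive the solicited-node multicast address for an IPv6
    address, i.e. ff02::1:ffXX:XXXX per RFC 4291 section 2.7.1."""
    a = ipaddress.IPv6Address(addr)
    low24 = int(a) & 0xFFFFFF  # last 24 bits of the unicast address
    prefix = int(ipaddress.IPv6Address("ff02::1:ff00:0"))
    return ipaddress.IPv6Address(prefix | low24)

print(solicited_node_multicast("fe80::2aa:ff:fe28:9c5a"))  # ff02::1:ff28:9c5a
```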
---------- components: Library (Lib) files: solicitedMulticastAddress.patch keywords: patch messages: 226916 nosy: Jason.Nadeau priority: normal severity: normal status: open title: ipaddress.py new IPv6 Method for Solicited Multicast Address type: enhancement versions: Python 3.1, Python 3.2, Python 3.3, Python 3.4, Python 3.5 Added file: http://bugs.python.org/file36625/solicitedMulticastAddress.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 15 20:06:26 2014 From: report at bugs.python.org (Devin Cook) Date: Mon, 15 Sep 2014 18:06:26 +0000 Subject: [New-bugs-announce] [issue22419] wsgiref request length Message-ID: <1410804386.26.0.756558523053.issue22419@psf.upfronthosting.co.za> New submission from Devin Cook: BaseHTTPRequestHandler limits request length to prevent DoS. WSGIRequestHandler should probably do the same. See: http://bugs.python.org/issue10714 ---------- components: Library (Lib) files: wsgiref_request_length.patch keywords: patch messages: 226931 nosy: devin priority: normal severity: normal status: open title: wsgiref request length type: security Added file: http://bugs.python.org/file36626/wsgiref_request_length.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 15 21:15:20 2014 From: report at bugs.python.org (Serhiy Storchaka) Date: Mon, 15 Sep 2014 19:15:20 +0000 Subject: [New-bugs-announce] [issue22420] Use print(file=sys.stderr) instead of sys.stderr.write() in IDLE Message-ID: <1410808520.06.0.74767353905.issue22420@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Proposed patch replaces sys.stderr.write() with print(file=sys.stderr) in IDLE for the same reason as in issue22384. Maybe this will eliminate some "crashes" when IDLE is run with pythonw.exe.
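The two idioms in question are equivalent for simple messages (an illustrative comparison, capturing stderr so the difference is visible):

```python
import io
import sys

buf = io.StringIO()
saved = sys.stderr
sys.stderr = buf
try:
    sys.stderr.write("message\n")      # explicit newline, str arguments only
    print("message", file=sys.stderr)  # newline and str() conversion for free
finally:
    sys.stderr = saved

print(buf.getvalue().splitlines())  # ['message', 'message']
```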
---------- components: IDLE files: idle_print_stderr.patch keywords: patch messages: 226934 nosy: serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Use print(file=sys.stderr) instead of sys.stderr.write() in IDLE versions: Python 2.7, Python 3.4, Python 3.5 Added file: http://bugs.python.org/file36627/idle_print_stderr.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 15 21:16:58 2014 From: report at bugs.python.org (Devin Cook) Date: Mon, 15 Sep 2014 19:16:58 +0000 Subject: [New-bugs-announce] [issue22421] securing pydoc server Message-ID: <1410808618.63.0.0937681366512.issue22421@psf.upfronthosting.co.za> New submission from Devin Cook: Several years ago a patch was applied to set the default binding of the pydoc server to "localhost" instead of "0.0.0.0". It appears that the issue was reintroduced in a5a3ae9be1fb. See previous issue: http://bugs.python.org/issue672656 $ ./python -m pydoc -b Server ready at http://localhost:35593/ Server commands: [b]rowser, [q]uit server> --- $ netstat -lnp | grep python tcp 0 0 0.0.0.0:35593 0.0.0.0:* LISTEN 2780/python As a sidenote, I'm not sure why the localhost lookup breaks the test case on my linux machine, but it does. 
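The intent of the fix can be illustrated with the underlying server class (a sketch; port 0 asks the OS for any free port):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Binding to "localhost" restricts the listener to the loopback interface,
# unlike the wildcard "" / "0.0.0.0" bind that netstat shows above.
server = HTTPServer(("localhost", 0), BaseHTTPRequestHandler)
print(server.server_address[0])  # 127.0.0.1
server.server_close()
```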
---------- components: Library (Lib) files: pydoc_server_addr.patch keywords: patch messages: 226935 nosy: devin priority: normal severity: normal status: open title: securing pydoc server type: security Added file: http://bugs.python.org/file36628/pydoc_server_addr.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 16 09:51:50 2014 From: report at bugs.python.org (Brunel) Date: Tue, 16 Sep 2014 07:51:50 +0000 Subject: [New-bugs-announce] [issue22422] IDLE closes all when in dropdown menu Message-ID: <1410853910.37.0.784487772035.issue22422@psf.upfronthosting.co.za> New submission from Brunel: While python shell is running if the user puts a period after a statement and waits for the dropdown list to pop up, then closes python shell the following will happen: IDLE will close all active windows immediately, regardless of any unsaved progress (which is lost.) ---------- components: IDLE messages: 226942 nosy: mandolout priority: normal severity: normal status: open title: IDLE closes all when in dropdown menu type: crash _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 16 10:37:10 2014 From: report at bugs.python.org (Serhiy Storchaka) Date: Tue, 16 Sep 2014 08:37:10 +0000 Subject: [New-bugs-announce] [issue22423] Errors in printing exceptions raised in a thread Message-ID: <1410856630.14.0.274476260629.issue22423@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Proposed patch fixes some bugs in printing exceptions in the threading module. 1. Fixed names of private variables in initialization. This caused unhandled AttributeError. The regression was introduced in changeset e71c3223810f. This part of the patch shouldn't be applied to 2.7. 2. Handled the case when sys.stderr is None. Perhaps this caused a crash when Python program run with pythonw.exe. 3. Added missed test. 
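A sketch of the defensive pattern from point 2 (the function name is illustrative): sys.stderr may be None, for example under pythonw.exe, so check before writing.

```python
import sys

def report(msg):
    # sys.stderr can be None (e.g. under pythonw.exe); write only when safe.
    if sys.stderr is not None:
        sys.stderr.write(msg + "\n")

saved = sys.stderr
sys.stderr = None                # simulate pythonw.exe
report("Exception in thread")    # silently dropped instead of raising
sys.stderr = saved
```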
---------- components: Library (Lib) files: threading_print_exception.patch keywords: patch messages: 226944 nosy: pitrou, serhiy.storchaka, terry.reedy priority: normal severity: normal stage: patch review status: open title: Errors in printing exceptions raised in a thread type: behavior versions: Python 2.7, Python 3.4, Python 3.5 Added file: http://bugs.python.org/file36629/threading_print_exception.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 16 16:05:42 2014 From: report at bugs.python.org (bono wang) Date: Tue, 16 Sep 2014 14:05:42 +0000 Subject: [New-bugs-announce] [issue22424] make: *** [Objects/unicodeobject.o] Error 1 Message-ID: <1410876342.53.0.936022553795.issue22424@psf.upfronthosting.co.za> New submission from bono wang: While upgrading from Python 2.6 to Python 3.3, the following error showed up:

# make && make install
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Modules/python.o ./Modules/python.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Parser/acceler.o Parser/acceler.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Parser/grammar1.o Parser/grammar1.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Parser/listnode.o Parser/listnode.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Parser/node.o Parser/node.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Parser/parser.o Parser/parser.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Parser/bitset.o Parser/bitset.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Parser/metagrammar.o Parser/metagrammar.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Parser/firstsets.o Parser/firstsets.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Parser/grammar.o Parser/grammar.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Parser/pgen.o Parser/pgen.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Parser/myreadline.o Parser/myreadline.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Parser/parsetok.o Parser/parsetok.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Parser/tokenizer.o Parser/tokenizer.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Objects/abstract.o Objects/abstract.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Objects/accu.o Objects/accu.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Objects/boolobject.o Objects/boolobject.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Objects/bytes_methods.o Objects/bytes_methods.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Objects/bytearrayobject.o Objects/bytearrayobject.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Objects/bytesobject.o Objects/bytesobject.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Objects/cellobject.o Objects/cellobject.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Objects/classobject.o Objects/classobject.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Objects/codeobject.o Objects/codeobject.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Objects/complexobject.o Objects/complexobject.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Objects/descrobject.o Objects/descrobject.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Objects/enumobject.o Objects/enumobject.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Objects/exceptions.o Objects/exceptions.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Objects/genobject.o Objects/genobject.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Objects/fileobject.o Objects/fileobject.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Objects/floatobject.o Objects/floatobject.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Objects/frameobject.o Objects/frameobject.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Objects/funcobject.o Objects/funcobject.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Objects/iterobject.o Objects/iterobject.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Objects/listobject.o Objects/listobject.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Objects/longobject.o Objects/longobject.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Objects/dictobject.o Objects/dictobject.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Objects/memoryobject.o Objects/memoryobject.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Objects/methodobject.o Objects/methodobject.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Objects/moduleobject.o Objects/moduleobject.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Objects/namespaceobject.o Objects/namespaceobject.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Objects/object.o Objects/object.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Objects/obmalloc.o Objects/obmalloc.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Objects/capsule.o Objects/capsule.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Objects/rangeobject.o Objects/rangeobject.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Objects/setobject.o Objects/setobject.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Objects/sliceobject.o Objects/sliceobject.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Objects/structseq.o Objects/structseq.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Objects/tupleobject.o Objects/tupleobject.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Objects/typeobject.o Objects/typeobject.c
gcc -pthread -c -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Objects/unicodeobject.o Objects/unicodeobject.c
gcc: Internal error: Killed (program cc1)
Please submit a full bug report.
See for instructions.
make: *** [Objects/unicodeobject.o] Error 1

I really don't know how to fix it. ---------- messages: 226952 nosy: 165559672 priority: normal severity: normal status: open title: make: *** [Objects/unicodeobject.o] Error 1 versions: Python 3.3 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 16 23:00:27 2014 From: report at bugs.python.org (Simon Weber) Date: Tue, 16 Sep 2014 21:00:27 +0000 Subject: [New-bugs-announce] [issue22425] 2to3 import fixer writes dotted_as_names into import_as_names Message-ID: <1410901227.13.0.545813784004.issue22425@psf.upfronthosting.co.za> New submission from Simon Weber: When dealing with implicit relative imports of the form "import as...", the 2to3 import fixer will rewrite them as "from . import as...". This is invalid syntax. Here's an example:

$ tree package/
package/
├── __init__.py
├── rootmod.py
└── subpackage
    ├── __init__.py
    └── mod.py

1 directory, 4 files

$ cat package/rootmod.py
# this is the only nonempty file
import subpackage.mod as my_name

$ python package/rootmod.py
$ 2to3 -w -f import package/
RefactoringTool: Refactored package/rootmod.py
--- package/rootmod.py (original)
+++ package/rootmod.py (refactored)
@@ -1 +1 @@
-import subpackage.mod as my_name
+from . import subpackage.mod as my_name
RefactoringTool: Files that were modified:
RefactoringTool: package/rootmod.py

$ python package/rootmod.py
File "package/rootmod.py", line 1
from .
import subpackage.mod as my_name
^
SyntaxError: invalid syntax

Probably the easiest way to rewrite this is "from .subpackage import mod as my_name". ---------- components: 2to3 (2.x to 3.x conversion tool) messages: 226965 nosy: simonmweber priority: normal severity: normal status: open title: 2to3 import fixer writes dotted_as_names into import_as_names type: behavior versions: Python 2.7, Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 16 23:59:06 2014 From: report at bugs.python.org (Akira Li) Date: Tue, 16 Sep 2014 21:59:06 +0000 Subject: [New-bugs-announce] [issue22426] strptime accepts the wrong '2010-06-01 MSK' string but rejects the right '2010-06-01 MSD' Message-ID: <1410904746.97.0.914535199618.issue22426@psf.upfronthosting.co.za> New submission from Akira Li:

>>> import os
>>> import time
>>> os.environ['TZ'] = 'Europe/Moscow'
>>> time.tzset()
>>> time.strptime('2010-06-01 MSK', '%Y-%m-%d %Z')
time.struct_time(tm_year=2010, tm_mon=6, tm_mday=1, tm_hour=0, tm_min=0, tm_sec=0, tm_wday=1, tm_yday=152, tm_isdst=0)
>>> time.strptime('2010-06-01 MSD', '%Y-%m-%d %Z')
Traceback (most recent call last):
  File "", line 1, in
  File "/usr/lib/python2.7/_strptime.py", line 467, in _strptime_time
    return _strptime(data_string, format)[0]
  File "/usr/lib/python2.7/_strptime.py", line 325, in _strptime
    (data_string, format))
ValueError: time data '2010-06-01 MSD' does not match format '%Y-%m-%d %Z'

datetime.strptime() and the Python 3 behavior are exactly the same. The correct name is MSD:

>>> from datetime import datetime, timezone
>>> dt = datetime(2010, 5, 31, 21, tzinfo=timezone.utc).astimezone()
>>> dt.strftime('%Y-%m-%d %Z')
'2010-06-01 MSD'

strptime() uses the current (wrong for the past date) time.tzname names despite the correct name being known to the system (as the example above demonstrates).
In general, it is impossible to validate a time zone abbreviation even if the time zone database is available:

- tzname may be ambiguous -- multiple zoneinfo matches (around one third of tznames in the tz database correspond to multiple UTC offsets, at the same or different times -- it is not unusual), i.e., any scheme that assumes that tzname is enough to get the UTC offset, such as Lib/email/_parsedate.py, is wrong.
- even if the zoneinfo is known, it may be misleading, e.g., HAST (Hawaii-Aleutian Standard Time) might be rejected because the Pacific/Honolulu zoneinfo uses HST. HAST corresponds to America/Adak (US/Aleutian) in tzdata (the UTC offset may be the same). It might be too rare to care.

Related: issue22377 ---------- components: Library (Lib) messages: 226966 nosy: akira priority: normal severity: normal status: open title: strptime accepts the wrong '2010-06-01 MSK' string but rejects the right '2010-06-01 MSD' type: behavior versions: Python 2.7, Python 3.4, Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 17 07:18:39 2014 From: report at bugs.python.org (Jack O'Connor) Date: Wed, 17 Sep 2014 05:18:39 +0000 Subject: [New-bugs-announce] [issue22427] TemporaryDirectory attempts to clean up twice Message-ID: <1410931119.87.0.05617613507.issue22427@psf.upfronthosting.co.za> New submission from Jack O'Connor: The following little script prints (but ignores) a FileNotFoundError.
import tempfile

def generator():
    with tempfile.TemporaryDirectory():
        yield

g = generator()
next(g)

Exception ignored in:
Traceback (most recent call last):
  File "gen.py", line 6, in generator
  File "/usr/lib/python3.4/tempfile.py", line 691, in __exit__
  File "/usr/lib/python3.4/tempfile.py", line 697, in cleanup
  File "/usr/lib/python3.4/shutil.py", line 454, in rmtree
  File "/usr/lib/python3.4/shutil.py", line 452, in rmtree
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmp7wek4xhy'

Putting print statements in the TemporaryDirectory class shows what's happening (confirming Guido's theory from https://groups.google.com/forum/#!topic/python-tulip/QXgWH32P2uM): As the program exits, the TemporaryDirectory object is finalized. This actually rmtree's the directory. After that, the generator is finalized, which raises a GeneratorExit inside of it. That exception causes the with statement to call __exit__ on the already-finalized TemporaryDirectory, which tries to rmtree again and throws the FileNotFoundError. The main downside of this bug is just garbage on stderr. I suppose in exceptionally unlikely circumstances, a new temp dir by the same name could be created between the two calls, and we might actually delete something we shouldn't. The simple fix would be to store a _was_cleaned flag or something on the object. Is there any black magic in finalizers or multithreading that demands something more complicated?
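The flag-based fix could look roughly like this subclass (a sketch of the idea using the _was_cleaned name from the report, not the actual patch):

```python
import os
import tempfile

class SafeTemporaryDirectory(tempfile.TemporaryDirectory):
    """TemporaryDirectory whose cleanup() is explicitly idempotent."""

    def __init__(self, *args, **kwargs):
        self._was_cleaned = False
        super().__init__(*args, **kwargs)

    def cleanup(self):
        if not self._was_cleaned:
            self._was_cleaned = True
            super().cleanup()

d = SafeTemporaryDirectory()
path = d.name
d.cleanup()
d.cleanup()  # second call is a no-op instead of raising FileNotFoundError
print(os.path.exists(path))  # False
```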
---------- components: Library (Lib) messages: 226979 nosy: oconnor663 priority: normal severity: normal status: open title: TemporaryDirectory attempts to clean up twice type: behavior versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 17 09:36:10 2014 From: report at bugs.python.org (Jack O'Connor) Date: Wed, 17 Sep 2014 07:36:10 +0000 Subject: [New-bugs-announce] [issue22428] KeyboardInterrupt inside a coroutine causes AttributeError Message-ID: <1410939370.36.0.183370883396.issue22428@psf.upfronthosting.co.za> New submission from Jack O'Connor: The following test script prints a KeyboardInterrupt traceback (expected), but also an AttributeError traceback (unexpected):

import asyncio

@asyncio.coroutine
def main():
    raise KeyboardInterrupt

asyncio.get_event_loop().run_until_complete(main())

Traceback (most recent call last):
  File "test.py", line 9, in
    asyncio.get_event_loop().run_until_complete(main())
  File "/usr/lib/python3.4/asyncio/base_events.py", line 203, in run_until_complete
    self.run_forever()
  File "/usr/lib/python3.4/asyncio/base_events.py", line 184, in run_forever
    self._run_once()
  File "/usr/lib/python3.4/asyncio/base_events.py", line 817, in _run_once
    handle._run()
  File "/usr/lib/python3.4/asyncio/events.py", line 39, in _run
    self._callback(*self._args)
  File "/usr/lib/python3.4/asyncio/tasks.py", line 321, in _step
    result = next(coro)
  File "/usr/lib/python3.4/asyncio/tasks.py", line 103, in coro
    res = func(*args, **kw)
  File "test.py", line 6, in main
    raise KeyboardInterrupt
KeyboardInterrupt

--- Logging error ---
Traceback (most recent call last):
--- Logging error ---
Traceback (most recent call last):
Exception ignored in: )>
Traceback (most recent call last):
  File "/usr/lib/python3.4/asyncio/futures.py", line 184, in __del__
  File "/usr/lib/python3.4/asyncio/base_events.py", line 725, in call_exception_handler
  File
"/usr/lib/python3.4/logging/__init__.py", line 1296, in error
  File "/usr/lib/python3.4/logging/__init__.py", line 1402, in _log
  File "/usr/lib/python3.4/logging/__init__.py", line 1412, in handle
  File "/usr/lib/python3.4/logging/__init__.py", line 1482, in callHandlers
  File "/usr/lib/python3.4/logging/__init__.py", line 846, in handle
  File "/usr/lib/python3.4/logging/__init__.py", line 977, in emit
  File "/usr/lib/python3.4/logging/__init__.py", line 899, in handleError
  File "/usr/lib/python3.4/traceback.py", line 169, in print_exception
  File "/usr/lib/python3.4/traceback.py", line 153, in _format_exception_iter
  File "/usr/lib/python3.4/traceback.py", line 18, in _format_list_iter
  File "/usr/lib/python3.4/traceback.py", line 65, in _extract_tb_or_stack_iter
  File "/usr/lib/python3.4/linecache.py", line 15, in getline
  File "/usr/lib/python3.4/linecache.py", line 41, in getlines
  File "/usr/lib/python3.4/linecache.py", line 126, in updatecache
  File "/usr/lib/python3.4/tokenize.py", line 437, in open
AttributeError: 'module' object has no attribute 'open'

The issue is that Task._step() calls self.set_exception() for both Exceptions and BaseExceptions, but it reraises BaseExceptions. That means that the BaseEventLoop never gets to call future.result() to clear the exception when it's a BaseException. Future.__del__ eventually tries to log this uncleared exception, but something about the circumstances of program exit (caused by the same exception) has already cleaned up the builtins needed for logging. Is that __del__ violating some best practices for finalizers by calling such a complicated function? Either way, I'm not sure this case should be considered an "unretrieved exception" at all. It's propagating outside the event loop, after all. Should Task._step() be setting it in the first place, if it's going to reraise it? Should it be set without _log_traceback=True somehow?
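For ordinary exceptions, retrieving the task's exception explicitly is what silences the "never retrieved" machinery; a small sketch in current asyncio syntax (an illustration, not taken from the report):

```python
import asyncio

async def main():
    raise RuntimeError('boom')

loop = asyncio.new_event_loop()
task = loop.create_task(main())
try:
    loop.run_until_complete(task)
except RuntimeError:
    pass
# Calling task.exception() marks the exception as retrieved, so
# Future.__del__ has nothing to log at interpreter shutdown.
print(type(task.exception()).__name__)  # RuntimeError
loop.close()
```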
---------- components: asyncio messages: 226985 nosy: gvanrossum, haypo, oconnor663, yselivanov priority: normal severity: normal status: open title: KeyboardInterrupt inside a coroutine causes AttributeError type: behavior versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 17 13:51:54 2014 From: report at bugs.python.org (STINNER Victor) Date: Wed, 17 Sep 2014 11:51:54 +0000 Subject: [New-bugs-announce] [issue22429] asyncio: pending call to loop.stop() if a coroutine raises a BaseException Message-ID: <1410954714.23.0.464914894828.issue22429@psf.upfronthosting.co.za> New submission from STINNER Victor: The attached script stops immediately, whereas I would expect the second call to run_forever() to keep running. If you execute the script, you will see: deque([]) The first call to run_forever() leaves a pending call to loop.stop() (to _raise_stop_error(), in fact). I don't know if this should be called a bug or if this surprising behaviour should be documented. See also issue #22428.
---------- components: asyncio files: pending_stop.py messages: 226994 nosy: gvanrossum, haypo, yselivanov priority: normal severity: normal status: open title: asyncio: pending call to loop.stop() if a coroutine raises a BaseException versions: Python 3.4, Python 3.5 Added file: http://bugs.python.org/file36637/pending_stop.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 17 15:33:36 2014 From: report at bugs.python.org (diff 812) Date: Wed, 17 Sep 2014 13:33:36 +0000 Subject: [New-bugs-announce] [issue22430] Build failure if configure flags --prefix or --exec-prefix is set Message-ID: <1410960816.78.0.977499217062.issue22430@psf.upfronthosting.co.za> New submission from diff 812: only ./configure --prefix=/usr is running ---------- components: Build files: configure.log messages: 226997 nosy: diff.812 priority: normal severity: normal status: open title: Build failure if configure flags --prefix or --exec-prefix is set versions: Python 2.7 Added file: http://bugs.python.org/file36638/configure.log _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 17 15:44:38 2014 From: report at bugs.python.org (Philipp Metzler) Date: Wed, 17 Sep 2014 13:44:38 +0000 Subject: [New-bugs-announce] [issue22431] Change format of test runner output Message-ID: <1410961478.22.0.708066565645.issue22431@psf.upfronthosting.co.za> New submission from Philipp Metzler: Can I change the test runner output format to main.tests.tests.TimeSlotTestCase.test_creation ... ERROR instead of test_creation (main.tests.tests.TimeSlotTestCase) ... ERROR so that I can easily just copy&paste that line and run the test again without having to remove the brackets () and having to copy the name of the test behind the class and add a .? 
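unittest has no built-in switch for this, but the result class is pluggable; a sketch of one way to get copy-pasteable dotted names (DottedNameResult is an invented name, not a stdlib class):

```python
import io
import unittest

class DottedNameResult(unittest.TextTestResult):
    def getDescription(self, test):
        # test.id() is the dotted path, e.g. pkg.tests.TimeSlotTestCase.test_creation
        return test.id()

class Demo(unittest.TestCase):
    def test_creation(self):
        self.assertTrue(True)

stream = io.StringIO()
runner = unittest.TextTestRunner(stream=stream,
                                 resultclass=DottedNameResult,
                                 verbosity=2)
runner.run(unittest.defaultTestLoader.loadTestsFromTestCase(Demo))
print(stream.getvalue().splitlines()[0])  # ends with "Demo.test_creation ... ok"
```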
---------- components: Tests messages: 227000 nosy: googol priority: normal severity: normal status: open title: Change format of test runner output type: enhancement versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 17 16:11:23 2014 From: report at bugs.python.org (Mark Wieczorek) Date: Wed, 17 Sep 2014 14:11:23 +0000 Subject: [New-bugs-announce] [issue22432] function crypt not working on OSX Message-ID: <1410963083.91.0.127109997194.issue22432@psf.upfronthosting.co.za> New submission from Mark Wieczorek: Hi, I just wanted to let you know of a bug related to the function "crypt" on OS X (10.9.4). I have tried the default OS X version of Python (2.7.5), the brew version (2.7.8), and the MacPorts version (2.7.7). In short, the command

python -c 'import crypt; print crypt.crypt("test12345", "$6$salt")'

produces the output $6n8sgZJBMh4U. The correct output should be $6$salt$omHbK1V1Alwa0VKXqLaW38vdS1uUhfx8GTj7XXCJcNUJxABwmYe8Vhpyt2tnnaAFC6UTI7PNk9sTtSGWGnYop. I have no idea what the problem is. ---------- assignee: ronaldoussoren components: Macintosh messages: 227002 nosy: lunokhod, ronaldoussoren priority: normal severity: normal status: open title: function crypt not working on OSX type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 17 17:57:56 2014 From: report at bugs.python.org (=?utf-8?b?0JTQtdC90LjRgSDQmtC+0YDQtdC90LXQstGB0LrQuNC5?=) Date: Wed, 17 Sep 2014 15:57:56 +0000 Subject: [New-bugs-announce] [issue22433] Argparse considers unknown optional arguments with spaces as a known positional argument Message-ID: <1410969476.15.0.703805216421.issue22433@psf.upfronthosting.co.za> New submission from Денис Кореневский: Argparse version 1.1 considers ANY unknown argument string containing ' ' (a space character) as a positional argument.
As a result, it can use such an unknown optional argument as the value of a known positional argument. Demonstration code:

import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--known-optional-arg", "-k", action="store_true")
parser.add_argument("known_positional", action="store", type=str)

parser.parse_known_args(["--known-optional-arg", "--unknown-optional-arg=with spaces", "known positional arg"])
parser.parse_known_args(["--known-optional-arg", "--unknown-optional-arg=without_spaces", "known positional arg"])

A bugfix is attached to the issue; it affects the ArgumentParser._parse_optional() method body. Sorry if there is a better way to report (or, possibly, fix) a bug than this place. It is my first Python bug report. Thanks. ---------- components: Extension Modules files: argparse.py.patch keywords: patch messages: 227007 nosy: DenKoren priority: normal severity: normal status: open title: Argparse considers unknown optional arguments with spaces as a known positional argument type: behavior versions: Python 2.7 Added file: http://bugs.python.org/file36641/argparse.py.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 17 18:09:48 2014 From: report at bugs.python.org (Serhiy Storchaka) Date: Wed, 17 Sep 2014 16:09:48 +0000 Subject: [New-bugs-announce] [issue22434] Use named constants internally in the re module Message-ID: <1410970188.53.0.105353417029.issue22434@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: The regular expression parser parses a pattern into a tree, marking nodes with string identifiers. The regular expression compiler converts this tree into a plain list of integers; the nodes' identifiers are transformed into sequential integers. The resulting list is not human readable. The proposed patch converts the string constants in the sre_constants module to named integer constants.
These constants don't need converting to integers, because they are already integers, and when printed they look human-friendly. Now the intermediate result of the regular expression compiler looks much more readable. Example:

>>> import re, sre_compile, sre_parse
>>> sre_compile._code(sre_parse.parse('[a-z_][a-z_0-9]+', re.I), re.I)

Before the patch:

[17, 4, 0, 2, 2147483647, 16, 7, 27, 97, 122, 19, 95, 0, 29, 16, 1, 2147483647, 16, 11, 10, 0, 67043328, 2147483648, 134217726, 0, 0, 0, 0, 0, 1, 1]

After the patch:

[INFO, 4, 0, 2, MAXREPEAT, IN_IGNORE, 7, RANGE, 97, 122, LITERAL, 95, FAILURE, REPEAT_ONE, 16, 1, MAXREPEAT, IN_IGNORE, 11, CHARSET, 0, 67043328, 2147483648, 134217726, 0, 0, 0, 0, FAILURE, SUCCESS, SUCCESS]

This patch also affects debugging output when a regular expression is compiled with re.DEBUG (identifiers are uppercased and MAXREPEAT is displayed instead of 2147483647 in repeat statements). Besides the debugging output, these changes are invisible to the ordinary user. They are needed only for developing and debugging the re module itself. The patch doesn't affect performance and has almost no effect on memory consumption.
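The underlying trick can be sketched in a few lines (a simplified illustration with invented names, not the actual sre_constants implementation):

```python
class NamedInt(int):
    """An int that prints as its symbolic name but behaves as a number."""

    def __new__(cls, value, name):
        self = super().__new__(cls, value)
        self.name = name
        return self

    def __repr__(self):
        return self.name

LITERAL = NamedInt(19, 'LITERAL')
FAILURE = NamedInt(0, 'FAILURE')

code = [LITERAL, 95, FAILURE]
print(code)           # [LITERAL, 95, FAILURE]
assert LITERAL == 19  # still a plain integer as far as the engine cares
```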
---------- components: Regular Expressions files: re_named_consts.patch keywords: patch messages: 227008 nosy: ezio.melotti, mrabarnett, pitrou, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Use named constants internally in the re module type: enhancement versions: Python 3.5 Added file: http://bugs.python.org/file36642/re_named_consts.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 18 02:25:22 2014 From: report at bugs.python.org (Martin Panter) Date: Thu, 18 Sep 2014 00:25:22 +0000 Subject: [New-bugs-announce] [issue22435] socketserver.TCPSocket leaks socket to garbage collector if server_bind() fails Message-ID: <1410999922.66.0.432841070426.issue22435@psf.upfronthosting.co.za> New submission from Martin Panter: The bind method may easily fail on Unix if there is no permission to bind to a privileged port:

>>> try: TCPServer(("", 80), ...)
... except Exception as err: err
...
PermissionError(13, 'Permission denied')
>>> gc.collect()
__main__:1: ResourceWarning: unclosed
0

This problem is inherited by HTTPServer and WSGIServer. My current workaround includes this code in a BaseServer fixup mixin, invoking server_close() if __init__() fails:

class Server(BaseServer, Context):
    def __init__(self, ...):
        try:
            super().__init__((host, port), RequestHandlerClass)
        except:
            # Workaround for socketserver.TCPServer leaking socket
            self.close()
            raise

    def close(self):
        return self.server_close()

---------- components: Library (Lib) messages: 227017 nosy: vadmium priority: normal severity: normal status: open title: socketserver.TCPSocket leaks socket to garbage collector if server_bind() fails type: resource usage versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 18 16:26:28 2014 From: report at bugs.python.org (R.
David Murray) Date: Thu, 18 Sep 2014 14:26:28 +0000 Subject: [New-bugs-announce] [issue22436] logging geteffectivelevel does not document its return value Message-ID: <1411050388.19.0.961403060223.issue22436@psf.upfronthosting.co.za> New submission from R. David Murray: https://docs.python.org/3/library/logging.html#logging.Logger.getEffectiveLevel This says the logging level is returned, but it doesn't mention that what is returned is an integer, nor does it link to whatever method is needed to convert the integer return value into the symbolic name that the user has been using elsewhere in interacting with logging. Indeed, the section that shows the mapping between names and numbers implies that the user of the library never needs to worry about the numbers unless they are creating a new level, but if they want to use getEffectiveLevel, this is not true. ---------- assignee: docs at python components: Documentation messages: 227047 nosy: docs at python, r.david.murray, vinay.sajip priority: normal severity: normal status: open title: logging geteffectivelevel does not document its return value type: behavior versions: Python 2.7, Python 3.4, Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 18 19:39:42 2014 From: report at bugs.python.org (Yury Selivanov) Date: Thu, 18 Sep 2014 17:39:42 +0000 Subject: [New-bugs-announce] [issue22437] re module: number of named groups is limited to 100 max Message-ID: <1411061982.07.0.646375842297.issue22437@psf.upfronthosting.co.za> New submission from Yury Selivanov: While writing a lexer for javascript language, I managed to hit the limit of named groups in one regexp, it's 100. The check is in sre_compile.py:compile() function, and there is even an XXX comment on this. Unfortunately, I'm not an expert in this module, so I'm not sure if this check can be lifted, or at least if the number can be bumped to 200 or 500 (why is 100 btw?) 
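The limit is easy to hit synthetically; a quick demonstration (on Python 3.4 and earlier, compiling fails at the 101st named group, while later versions relaxed the check, so both outcomes are handled):

```python
import re

# 101 distinct named groups -- one more than the hard-coded limit.
pattern = ''.join('(?P<g%d>a)' % i for i in range(101))
try:
    regex = re.compile(pattern)
    print('compiled, groups =', regex.groups)
except (AssertionError, re.error) as err:  # raised on Python <= 3.4
    print('rejected:', err)
```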
Please share your thoughts. ---------- components: Library (Lib), Regular Expressions messages: 227055 nosy: ezio.melotti, haypo, mrabarnett, pitrou, serhiy.storchaka, yselivanov priority: normal severity: normal status: open title: re module: number of named groups is limited to 100 max type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 18 23:21:12 2014 From: report at bugs.python.org (Alex Gaynor) Date: Thu, 18 Sep 2014 21:21:12 +0000 Subject: [New-bugs-announce] [issue22438] eventlet broke by python 2.7.x Message-ID: <1411075272.39.0.133162978572.issue22438@psf.upfronthosting.co.za> New submission from Alex Gaynor: https://github.com/eventlet/eventlet/issues/135 ---------- components: Library (Lib) messages: 227067 nosy: alex, benjamin.peterson, christian.heimes, dstufft, giampaolo.rodola, janssen, pitrou priority: normal severity: normal status: open title: eventlet broke by python 2.7.x versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 19 01:33:01 2014 From: report at bugs.python.org (Sworddragon) Date: Thu, 18 Sep 2014 23:33:01 +0000 Subject: [New-bugs-announce] [issue22439] subprocess.PIPE.stdin.flush() causes to hang while subprocess.PIPE.stdin.close() not Message-ID: <1411083181.34.0.264014648396.issue22439@psf.upfronthosting.co.za> New submission from Sworddragon: When sending something to the stdin of a process started with subprocess (for example diff), I have found that everything works fine if stdin is closed, but flushing stdin causes a hang (the same as if nothing had been done).
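The attached testcase is not reproduced here, but the effect can be illustrated with python itself standing in for diff: the child reads stdin until EOF, and flush() alone never delivers EOF, so only close() lets it finish.

```python
import subprocess
import sys

# Child that echoes everything it reads from stdin (a stand-in for diff).
child = [sys.executable, '-c',
         'import sys; sys.stdout.write(sys.stdin.read())']
proc = subprocess.Popen(child, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
proc.stdin.write(b'hello\n')
proc.stdin.flush()  # the data reaches the child, but the pipe stays open
proc.stdin.close()  # EOF: without this, the child's read() blocks forever
print(proc.stdout.read().decode(), end='')  # hello
proc.wait()
```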
---------- components: Library (Lib) files: test.py messages: 227076 nosy: Sworddragon priority: normal severity: normal status: open title: subprocess.PIPE.stdin.flush() causes to hang while subprocess.PIPE.stdin.close() not type: behavior versions: Python 3.4 Added file: http://bugs.python.org/file36655/test.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 19 09:53:10 2014 From: report at bugs.python.org (Senthil Kumaran) Date: Fri, 19 Sep 2014 07:53:10 +0000 Subject: [New-bugs-announce] [issue22440] Setting SSLContext object's check_hostname manually might accidentally skip hostname verification Message-ID: <1411113190.48.0.396840225305.issue22440@psf.upfronthosting.co.za> New submission from Senthil Kumaran: While working on issue22366, I found a tricky bit of code in: https://hg.python.org/cpython/file/ca0aa0d89273/Lib/http/client.py#l1295 https://hg.python.org/cpython/rev/1a945fb875bf/ The statement is if not self._context.check_hostname and self._check_hostname: The context object's check_hostname (created by ssl._create_stdlib_context() - note private ) is False by default and the statement holds good and acts only on self._check_hostname But if the context is constructed manually and the context object's check_hostname is set to True (with correct intentions), that statement will lead to skipping of matching hostname! Is my analysis right here? 
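The reported condition can be modelled in isolation to see the effect (a simplified sketch; the attribute names follow the report, not the real classes):

```python
# Simplified model of the guard in http.client: the manual hostname
# match after the handshake runs only when this returns True.
def does_manual_check(context_check_hostname, check_hostname_arg):
    return (not context_check_hostname) and bool(check_hostname_arg)

# Default stdlib context (check_hostname=False): the argument decides.
assert does_manual_check(False, True) is True
# Manually built context with check_hostname=True: the first clause is
# False, so http.client's own match is skipped, as the report describes.
assert does_manual_check(True, True) is False
```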
---------- messages: 227082 nosy: alex, christian.heimes, dstufft, orsenthil, pitrou priority: normal severity: normal status: open title: Setting SSLContext object's check_hostname manually might accidentally skip hostname verification versions: Python 3.4, Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 19 14:27:06 2014 From: report at bugs.python.org (Sworddragon) Date: Fri, 19 Sep 2014 12:27:06 +0000 Subject: [New-bugs-announce] [issue22441] Not all attributes of the console for a subprocess with creationflags=0 are inherited Message-ID: <1411129626.26.0.561604681349.issue22441@psf.upfronthosting.co.za> New submission from Sworddragon: The application apt-get on Linux scales its output depending on the size of the terminal, but I have noticed that there are differences between calling apt-get directly and calling it via a subprocess without shell or creationflags set (so creationflags should be 0). Per the documentation, subprocess.CREATE_NEW_CONSOLE is not set, so apt-get does inherit the console, and normally I would assume that this also covers all of its attributes. In the attachments is a testcase for this issue.
Also here are 2 screenshots to compare the results: Direct call: http://picload.org/image/crlrapg/normal.png Subprocess: http://picload.org/image/crlrapd/subprocess.png ---------- components: Library (Lib) files: test.py messages: 227090 nosy: Sworddragon priority: normal severity: normal status: open title: Not all attributes of the console for a subprocess with creationflags=0 are inherited type: behavior versions: Python 3.4 Added file: http://bugs.python.org/file36660/test.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 19 15:56:39 2014 From: report at bugs.python.org (juj) Date: Fri, 19 Sep 2014 13:56:39 +0000 Subject: [New-bugs-announce] [issue22442] subprocess.check_call hangs on large PIPEd data. Message-ID: <1411134999.3.0.739689982343.issue22442@psf.upfronthosting.co.za> New submission from juj: On Windows, write a.py:

import subprocess

def ccall(cmdline, stdout, stderr):
    proc = subprocess.Popen(['python', 'b.py'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    proc.communicate()
    if proc.returncode != 0:
        raise subprocess.CalledProcessError(proc.returncode, cmdline)
    return 0

# To fix subprocess.check_call, uncomment the following, which is functionally equivalent:
# subprocess.check_call = ccall

subprocess.check_call(['python', 'b.py'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
print 'Finished!'

Then write b.py:

import sys

str = 'aaa'
for i in range(0,16):
    str = str + str
for i in range(0,2):
    print >> sys.stderr, str
for i in range(0,2):
    print str

Finally, run 'python a.py'. The application will hang. Uncomment the specified line to fix the execution. This is a documented failure on the python subprocess page, but why not just fix it up directly in python itself? One can think that modifying stdout or stderr is not the intent for subprocess.check_call, but python certainly should not hang because of that.
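The workaround the report describes (drain both pipes with communicate() rather than letting check_call() wait() on a full pipe) can be sketched in Python 3; the inline child program below stands in for b.py and is made up for illustration:

```python
import subprocess
import sys

# Python 3 sketch of the ccall() workaround: communicate() reads stdout and
# stderr concurrently, so the child can never block on a full pipe buffer.
# The child floods both streams with more data than a pipe buffer holds.
child = [sys.executable, "-c",
         "import sys; data = 'a' * (1 << 18)\n"
         "sys.stdout.write(data); sys.stderr.write(data)"]

proc = subprocess.Popen(child, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()   # drains both streams, no deadlock
assert proc.returncode == 0
assert len(out) == len(err) == (1 << 18)
```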
---------- components: Library (Lib) messages: 227095 nosy: juj priority: normal severity: normal status: open title: subprocess.check_call hangs on large PIPEd data. versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 19 16:08:08 2014 From: report at bugs.python.org (Sworddragon) Date: Fri, 19 Sep 2014 14:08:08 +0000 Subject: [New-bugs-announce] [issue22443] read(1) blocks on unflushed output Message-ID: <1411135688.43.0.0648130638254.issue22443@psf.upfronthosting.co.za> New submission from Sworddragon: On reading the output of an application (for example "apt-get download firefox") that dynamically changes a line (possibly with the terminal control character \r) I have noticed that read(1) does not read the output until it has finished with a newline. This happens even with disabled buffering. In the attachments is a testcase for this problem. Also here are 2 screenshots to compare the results: Direct call: http://picload.org/image/crldgri/normal.png Subprocess: http://picload.org/image/crldgrw/subprocess.png ---------- components: Library (Lib) files: test.py messages: 227097 nosy: Sworddragon priority: normal severity: normal status: open title: read(1) blocks on unflushed output type: behavior versions: Python 3.4 Added file: http://bugs.python.org/file36661/test.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 19 18:56:55 2014 From: report at bugs.python.org (Alexander Belopolsky) Date: Fri, 19 Sep 2014 16:56:55 +0000 Subject: [New-bugs-announce] [issue22444] Floor divide should return int Message-ID: <1411145815.79.0.695099073811.issue22444@psf.upfronthosting.co.za> New submission from Alexander Belopolsky: PEP 3141 defines floor division as floor(x/y) and specifies that floor() should return int type. 
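In current Python 3 the asymmetry is easy to verify with a minimal check:

```python
import math

# PEP 3141: floor() on a float returns an int...
assert type(math.floor(7.0 / 2.0)) is int

# ...but floor division of two floats still produces a float.
assert type(7.0 // 2.0) is float
assert 7.0 // 2.0 == 3.0
```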
Builtin float type has been made part of the PEP 3141 numerical tower, but floor division of two floats still results in a float.

See also:

* #1656 - Make math.{floor,ceil}(float) return ints per PEP 3141
* #1623 - Implement PEP-3141 for Decimal
* https://mail.python.org/pipermail/python-ideas/2014-September/029392.html

---------- components: Interpreter Core messages: 227107 nosy: belopolsky priority: normal severity: normal stage: needs patch status: open title: Floor divide should return int type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 19 21:00:50 2014 From: report at bugs.python.org (Sebastian Berg) Date: Fri, 19 Sep 2014 19:00:50 +0000 Subject: [New-bugs-announce] [issue22445] Memoryviews require more strict contiguous checks than necessary Message-ID: <1411153250.02.0.529032306838.issue22445@psf.upfronthosting.co.za> New submission from Sebastian Berg: In NumPy we decided some time ago that if you have a multi-dimensional buffer, shaped for example 1x10, then this buffer should be considered both C- and F-contiguous. Currently, some buffers which can be used validly in a contiguous fashion are rejected. CPython does not currently support this, possibly creating a minor nuisance if/once we change it fully in NumPy; see for example https://github.com/numpy/numpy/issues/5085 I have attached a patch which should (sorry, I did not test this at all yet) relax the checks as much as possible. I think this is right, but we caused some subtle breakage in user code (mostly Cython code) when we first tried changing it, and while NumPy arrays may be more prominently C/F-contiguous, compatibility issues with libraries checking for contiguity explicitly and then requesting a strided buffer are very possible. If someone could give me a hint about adding tests, that would be great.
Also I would like to add a small note to the PEP in any case regarding this subtlety, in the hope that more code will take care with such subtleties.

---------- components: Library (Lib) files: relaxed-strides-checking.patch keywords: patch messages: 227113 nosy: seberg priority: normal severity: normal status: open title: Memoryviews require more strict contiguous checks than necessary type: enhancement Added file: http://bugs.python.org/file36663/relaxed-strides-checking.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 19 22:30:20 2014 From: report at bugs.python.org (Ram Rachum) Date: Fri, 19 Sep 2014 20:30:20 +0000 Subject: [New-bugs-announce] [issue22446] Shortening code in abc.py Message-ID: <1411158620.58.0.416699245015.issue22446@psf.upfronthosting.co.za> New submission from Ram Rachum: Can't this code:

class Sequence(Sized, Iterable, Container):
    # ...
    def __contains__(self, value):
        for v in self:
            if v == value:
                return True
        return False

Be shortened into this:

class Sequence(Sized, Iterable, Container):
    # ...
    def __contains__(self, value):
        return any(v == value for v in self)

Which can even fit on one line with a lambda:

class Sequence(Sized, Iterable, Container):
    # ...
    __contains__ = lambda self, value: any(v == value for v in self)

---------- components: Library (Lib) messages: 227117 nosy: cool-RR priority: normal severity: normal status: open title: Shortening code in abc.py versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 19 23:06:21 2014 From: report at bugs.python.org (Ed Sesek) Date: Fri, 19 Sep 2014 21:06:21 +0000 Subject: [New-bugs-announce] [issue22447] logging.config.fileConfig attempts to write to file even when none configured Message-ID: <1411160781.56.0.728660343296.issue22447@psf.upfronthosting.co.za> New submission from Ed Sesek: See the attached config file. logging.config.fileConfig() is attempting to write to the file specified in the file_handler section even though that handler is not configured for use in this config. If it's going to write to the file, it should only do so if the file is configured to be used. In the case where it cannot write to the file it throws an exception with a blank message. ---------- components: Library (Lib) files: pixcli_template.ini messages: 227122 nosy: esesek priority: normal severity: normal status: open title: logging.config.fileConfig attempts to write to file even when none configured type: behavior versions: Python 2.7 Added file: http://bugs.python.org/file36664/pixcli_template.ini _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Sep 20 02:14:00 2014 From: report at bugs.python.org (Joshua Moore-Oliva) Date: Sat, 20 Sep 2014 00:14:00 +0000 Subject: [New-bugs-announce] [issue22448] call_at/call_later with Timer cancellation can result in (practically) unbounded memory usage. Message-ID: <1411172040.41.0.974691283228.issue22448@psf.upfronthosting.co.za> New submission from Joshua Moore-Oliva: The core issue stems from the implementation of Timer cancellation.
(which features like asyncio.wait_for build upon). BaseEventLoop stores scheduled events in an array-backed heapq named _scheduled. Once an event has been scheduled with call_at, cancelling the event only marks it as cancelled; it does not remove it from the array-backed heap. It is only removed once the cancelled event is the next scheduled event for the loop. In a system where many events are run (and then cancelled) that may have long timeout periods, and there always exists at least one event that is scheduled for an earlier time, memory use is practically unbounded. The attached program wait_for.py demonstrates a trivial example where memory use is practically unbounded for an hour of time. This is the case even though the program only ever has two "uncancelled" events and two coroutines at any given time in its execution. This could be fixed in a variety of ways:

a) Timer cancellation could result in the object being removed from the heap like in the sched module. This would be at least O(N) where N is the number of scheduled events.

b) Timer cancellation could trigger a callback that tracks the number of cancelled events in the _scheduled list. Once this number exceeds a threshold ( 50% ? ), the list could be cleared of all cancelled events and then be re-heapified.

c) A balanced tree structure could be used to give the scheduled events O(log N) time complexity (the current module is O(log N) for heappop anyway).

Given python's lack of a balanced tree structure in the standard library, I assume option c) is a non-starter. I would prefer option b) over option a) as, when there are a lot of scheduled events in the system (upwards of 50,000 - 100,000 in some of my use cases), the amortized complexity for cancelling an event trends towards O(1) (N/2 cancellations are handled by a single O(N) event) at the cost of slightly more, but bounded relative to the amount of events, memory.
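Option b) can be sketched with a plain heapq; every name below is invented for illustration and this is not the asyncio code:

```python
import heapq
import itertools

# Sketch of option (b): count cancellations and rebuild the heap once half
# of the stored entries are dead. Names are hypothetical, not asyncio's.
class TimerHeap:
    def __init__(self):
        self._heap = []                  # entries: [when, tiebreak, cancelled]
        self._cancelled = 0
        self._counter = itertools.count()

    def schedule(self, when):
        entry = [when, next(self._counter), False]
        heapq.heappush(self._heap, entry)
        return entry

    def cancel(self, entry):
        if not entry[2]:
            entry[2] = True
            self._cancelled += 1
            # amortised O(1): one O(n) sweep pays for up to n/2 cancellations
            if self._cancelled * 2 >= len(self._heap):
                self._heap = [e for e in self._heap if not e[2]]
                heapq.heapify(self._heap)
                self._cancelled = 0

h = TimerHeap()
live = h.schedule(0.0)                   # one early, never-cancelled timer
for when in range(1, 100001):
    h.cancel(h.schedule(float(when)))    # schedule and immediately cancel

# Memory stays bounded even though 100000 later timers were cancelled.
assert len(h._heap) <= 3
assert live in h._heap
```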
I would be willing to take a shot at implementing this patch with the most agreeable option. Please let me know if that would be appreciated, or if someone else would rather tackle this issue. (First time bug report for python, not sure of the politics/protocols involved). Disclaimer: I am by no means an asyncio expert; my understanding of the code base is based on my reading of it while debugging this memory leak. ---------- components: asyncio files: wait_for.py messages: 227136 nosy: chatgris, gvanrossum, haypo, yselivanov priority: normal severity: normal status: open title: call_at/call_later with Timer cancellation can result in (practically) unbounded memory usage. type: resource usage versions: Python 3.4 Added file: http://bugs.python.org/file36666/wait_for.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Sep 20 12:35:39 2014 From: report at bugs.python.org (Christian Heimes) Date: Sat, 20 Sep 2014 10:35:39 +0000 Subject: [New-bugs-announce] [issue22449] SSLContext.load_verify_locations behavior on Windows and OSX Message-ID: <1411209339.34.0.91149685596.issue22449@psf.upfronthosting.co.za> New submission from Christian Heimes: The behavior of SSLContext.load_verify_locations is rather inconsistent across platforms: On most POSIX platforms (Linux, BSD, non-Apple builds of OpenSSL) it loads certificates from predefined locations. The locations are defined during compile time and usually differ between vendors and platforms. My WiP "Improve TLS/SSL support" PEP lists all common locations and the packages that offer the certs. On these platforms SSL_CERT_DIR and SSL_CERT_FILE overwrite the location. On Windows SSL_CERT_DIR and SSL_CERT_FILE are never taken into account by SSLContext.load_verify_locations because it doesn't call SSLContext.set_default_verify_paths(). The attached patch is a semi-fix for the problem.
With the patch, certs from SSL_CERT_DIR and SSL_CERT_FILE are only *added* to the trusted root CA certs. The certs from Windows' cert stores 'CA' and 'ROOT' are still loaded. On OSX with Apple's custom build of OpenSSL, SSL_CERT_DIR and SSL_CERT_FILE take effect. But there is a twist! In case a root CA cert is not found, Apple's Trust Evaluation Agent (TEA) kicks in and looks up certs from Apple's keychain. It's almost the same situation as on Windows but more magical. In order to disable TEA one has to set the env var OPENSSL_X509_TEA_DISABLE=1 *before* the first cert is validated. After that the env var has no effect as the value is cached. Hynek has documented it in his blog: https://hynek.me/articles/apple-openssl-verification-surprises/ ---------- components: Extension Modules, Library (Lib) files: win32_load_SSL_CERT_env.patch keywords: patch messages: 227150 nosy: alex, christian.heimes, dstufft, giampaolo.rodola, hynek, janssen, ncoghlan, pitrou priority: normal severity: normal stage: needs patch status: open title: SSLContext.load_verify_locations behavior on Windows and OSX type: behavior versions: Python 2.7, Python 3.4, Python 3.5 Added file: http://bugs.python.org/file36668/win32_load_SSL_CERT_env.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 21 01:50:32 2014 From: report at bugs.python.org (Raymond Hettinger) Date: Sat, 20 Sep 2014 23:50:32 +0000 Subject: [New-bugs-announce] [issue22450] urllib doesn't put Accept: */* in the headers Message-ID: <1411257032.38.0.526589346468.issue22450@psf.upfronthosting.co.za> New submission from Raymond Hettinger: The use of urllib for REST APIs is impaired in the absence of an "Accept: */*" header such as that added automatically by the requests package or by the CURL command-line tool.
# Example that gets an incorrect result due to the missing header

import urllib
print urllib.urlopen('http://graph.facebook.com/raymondh').headers['Content-Type']

# Equivalent call using CURL

$ curl -v http://graph.facebook.com/raymondh
...
* Connected to graph.facebook.com (31.13.75.1) port 80 (#0)
> GET /raymondh HTTP/1.1
> User-Agent: curl/7.30.0
> Host: graph.facebook.com
> Accept: */*
>

---------- files: accept.diff keywords: patch messages: 227194 nosy: rhettinger priority: normal severity: normal stage: patch review status: open title: urllib doesn't put Accept: */* in the headers type: behavior versions: Python 2.7 Added file: http://bugs.python.org/file36673/accept.diff _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 21 03:04:35 2014 From: report at bugs.python.org (Joshua Landau) Date: Sun, 21 Sep 2014 01:04:35 +0000 Subject: [New-bugs-announce] [issue22451] filtertuple, filterstring and filterunicode don't have optimization for PyBool_Type Message-ID: <1411261475.31.0.855934223682.issue22451@psf.upfronthosting.co.za> New submission from Joshua Landau: All code referred to is from bltinmodule.c, Python 2.7.8: https://github.com/python/cpython/blob/2.7/Python/bltinmodule.c

filter implements an optimization for PyBool_Type, making it equivalent to Py_None:

# Line 303
if (func == (PyObject *)&PyBool_Type || func == Py_None)

The specializations for tuples, byte strings and unicode don't have this:

# Lines 2776, 2827, 2956, 2976
if (func == Py_None)

This is a damper against recommending `filter(bool, ...)`. --- Python 3 of course does not have these specializations, so has no bug.
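The bool/None equivalence the fast path relies on is easy to check at the Python level (Python 3 shown, where the two predicates select exactly the same elements):

```python
# filter(bool, ...) and filter(None, ...) keep exactly the truthy elements.
data = [0, 1, '', 'x', None, [], [1]]
assert list(filter(bool, data)) == list(filter(None, data)) == [1, 'x', [1]]
```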
---------- components: Library (Lib) messages: 227199 nosy: Joshua.Landau priority: normal severity: normal status: open title: filtertuple, filterstring and filterunicode don't have optimization for PyBool_Type type: performance versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 21 13:12:08 2014 From: report at bugs.python.org (Simon Zack) Date: Sun, 21 Sep 2014 11:12:08 +0000 Subject: [New-bugs-announce] [issue22452] addTypeEqualityFunc is not used in assertListEqual Message-ID: <1411297928.75.0.716793613355.issue22452@psf.upfronthosting.co.za> New submission from Simon Zack: Functions added by addTypeEqualityFunc are not used for comparing list elements in assertListEqual; they are only used in assertEqual. It would be nice to have assertListEqual use functions added by addTypeEqualityFunc for comparisons of list elements. I think this provides more flexibility, and we get nicely formatted error messages for nested list compares for free.
---------- components: Library (Lib) messages: 227210 nosy: simonzack priority: normal severity: normal status: open title: addTypeEqualityFunc is not used in assertListEqual _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 21 18:26:08 2014 From: report at bugs.python.org (Chris Colbert) Date: Sun, 21 Sep 2014 16:26:08 +0000 Subject: [New-bugs-announce] [issue22453] PyObject_REPR macro causes refcount leak Message-ID: <1411316768.43.0.141557856004.issue22453@psf.upfronthosting.co.za> New submission from Chris Colbert: This is how the macro is defined in object.h:

2.7:

/* Helper for passing objects to printf and the like */
#define PyObject_REPR(obj) PyString_AS_STRING(PyObject_Repr(obj))

3.4:

/* Helper for passing objects to printf and the like */
#define PyObject_REPR(obj) _PyUnicode_AsString(PyObject_Repr(obj))

PyObject_Repr returns a new reference, which is not released by the macro. This macro only seems to be used internally for error reporting in compile.c, so it's unlikely to be causing any pressing issues for the interpreter, but it may be biting some extension modules.
---------- components: Extension Modules, Interpreter Core messages: 227219 nosy: Chris.Colbert priority: normal severity: normal status: open title: PyObject_REPR macro causes refcount leak type: behavior versions: Python 2.7, Python 3.1, Python 3.2, Python 3.3, Python 3.4, Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 21 21:35:49 2014 From: report at bugs.python.org (Sworddragon) Date: Sun, 21 Sep 2014 19:35:49 +0000 Subject: [New-bugs-announce] [issue22454] Adding the opposite function of shlex.split() Message-ID: <1411328149.26.0.00522128245951.issue22454@psf.upfronthosting.co.za> New submission from Sworddragon: There is currently shlex.split() that is for example useful to split a command string and pass it to subprocess.Popen with shell=False. But I'm missing a function that does the opposite: Building the command string from a list that could for example then be used in subprocess.Popen with shell=True. ---------- components: Library (Lib) messages: 227228 nosy: Sworddragon priority: normal severity: normal status: open title: Adding the opposite function of shlex.split() type: enhancement versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 22 00:09:35 2014 From: report at bugs.python.org (Buck Golemon) Date: Sun, 21 Sep 2014 22:09:35 +0000 Subject: [New-bugs-announce] [issue22455] idna/punycode give wrong results on narrow builds Message-ID: <1411337375.42.0.97875155095.issue22455@psf.upfronthosting.co.za> New submission from Buck Golemon: I have "fixed" the issue in my branch here: https://github.com/bukzor/cpython/commit/013e689731ba32319f05a62a602f01dd7d7f2e83 I don't propose it as a patch, but as a proof of concept and point of discussion. If there's no chance of shipping a fix in 2.7.9, feel free to close. 
---------- messages: 227240 nosy: bukzor priority: normal severity: normal status: open title: idna/punycode give wrong results on narrow builds versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 22 00:54:27 2014 From: report at bugs.python.org (Arfrever Frehtes Taifersar Arahesis) Date: Sun, 21 Sep 2014 22:54:27 +0000 Subject: [New-bugs-announce] [issue22456] __base__ undocumented Message-ID: <1411340067.19.0.0232279983568.issue22456@psf.upfronthosting.co.za> New submission from Arfrever Frehtes Taifersar Arahesis: __bases__ is documented, but __base__ is not.

$ grep -r __base__ Doc
$ grep -r __bases__ Doc
Doc/c-api/object.rst:are different objects, *B*'s :attr:`~class.__bases__` attribute is searched in
Doc/c-api/object.rst:a depth-first fashion for *A* --- the presence of the :attr:`~class.__bases__`
Doc/extending/newtypes.rst: in its :attr:`~class.__bases__`, or else it will not be able to call your type's
Doc/library/email.headerregistry.rst: class's ``__bases__`` list.
Doc/library/functions.rst: tuple itemizes the base classes and becomes the :attr:`~class.__bases__`
Doc/library/stdtypes.rst:.. attribute:: class.__bases__
Doc/reference/datamodel.rst: single: __bases__ (class attribute)
Doc/reference/datamodel.rst: dictionary containing the class's namespace; :attr:`~class.__bases__` is a
Doc/whatsnew/2.3.rst: removed: you can now assign to the :attr:`__name__` and :attr:`__bases__`
Doc/whatsnew/2.3.rst: assigned to :attr:`__bases__` along the lines of those relating to assigning to

---------- assignee: docs at python components: Documentation keywords: easy messages: 227241 nosy: Arfrever, docs at python priority: normal severity: normal status: open title: __base__ undocumented type: enhancement versions: Python 2.7, Python 3.4, Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 22 05:47:54 2014 From: report at bugs.python.org (Robert Collins) Date: Mon, 22 Sep 2014 03:47:54 +0000 Subject: [New-bugs-announce] [issue22457] load_tests not invoked in root __init__.py when start=package root Message-ID: <1411357674.43.0.0204872118136.issue22457@psf.upfronthosting.co.za> New submission from Robert Collins: python -m unittest discover -t . foo where foo is a package will not trigger load_tests in foo/__init__.py. To reproduce:

mkdir -p demo/tests
cd demo
cat > tests/__init__.py <<EOF
import sys
import os

def load_tests(loader, tests, pattern):
    print("HI WE ARE LOADING!")
    this_dir = os.path.dirname(__file__)
    tests.addTest(loader.discover(start_dir=this_dir, pattern=pattern))
    return tests
EOF
python -m unittest discover -t . tests

---------- messages: 227250 nosy: rbcollins priority: normal severity: normal status: open title: load_tests not invoked in root __init__.py when start=package root _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 22 09:10:30 2014 From: report at bugs.python.org (Stefan Behnel) Date: Mon, 22 Sep 2014 07:10:30 +0000 Subject: [New-bugs-announce] [issue22458] Add fractions benchmark Message-ID: <1411369830.75.0.216203072207.issue22458@psf.upfronthosting.co.za> New submission from Stefan Behnel: Fractions are great for all sorts of exact computations (including money/currency calculations), but are quite slow due to the need for normalisation at instantiation time. I adapted the existing telco benchmark to use Fraction instead of Decimal to make this problem more visible. One change I made was to take the data reading out of the measured loop. I/O is not part of what the benchmark should measure. Please consider adding it to the benchmark suite.
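The exactness that motivates benchmarking Fraction for money-style code is cheap to illustrate:

```python
from fractions import Fraction

# Repeated tenths stay exact with Fraction, while binary floats drift.
assert sum(Fraction(1, 10) for _ in range(10)) == 1
assert sum(0.1 for _ in range(10)) != 1.0
```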
---------- components: Benchmarks files: telco_fractions.py messages: 227252 nosy: scoder priority: normal severity: normal status: open title: Add fractions benchmark type: enhancement Added file: http://bugs.python.org/file36684/telco_fractions.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 22 10:32:10 2014 From: report at bugs.python.org (SebKL) Date: Mon, 22 Sep 2014 08:32:10 +0000 Subject: [New-bugs-announce] [issue22459] str.split() documentation: wrong example Message-ID: <1411374730.07.0.0715795458179.issue22459@psf.upfronthosting.co.za> New submission from SebKL: The following example is wrong: https://docs.python.org/3.4/library/stdtypes.html?highlight=split#str.split

>>> '1,2,3'.split(',', maxsplit=1)
['1', '2 3']

It is actually returning (note the missing ,):

>>> '1,2,3'.split(',', maxsplit=1)
['1', '2,3']

---------- assignee: docs at python components: Documentation messages: 227257 nosy: SebKL, docs at python priority: normal severity: normal status: open title: str.split() documentation: wrong example type: enhancement versions: Python 3.4, Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 22 12:28:55 2014 From: report at bugs.python.org (bagrat lazaryan) Date: Mon, 22 Sep 2014 10:28:55 +0000 Subject: [New-bugs-announce] [issue22460] idle editor: replace all in selection Message-ID: <1411381735.15.0.967867369025.issue22460@psf.upfronthosting.co.za> New submission from bagrat lazaryan: say, for renaming a variable in a block of code, or in a function, or renaming a method name in a class, etc. nothing fancy here, a button in the replace dialog will do. i think the proposed functionality is needed much more often than the currently implemented replacing within the whole file.
---------- components: IDLE messages: 227260 nosy: bagratte priority: normal severity: normal status: open title: idle editor: replace all in selection type: enhancement versions: Python 2.7, Python 3.4, Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 22 15:14:17 2014 From: report at bugs.python.org (Larry Hastings) Date: Mon, 22 Sep 2014 13:14:17 +0000 Subject: [New-bugs-announce] [issue22461] Test failure: Lib/test/test_pydoc.py line 851, "topic?key=def" Message-ID: <1411391657.7.0.939727555746.issue22461@psf.upfronthosting.co.za> New submission from Larry Hastings: I get a test failure in the regression test suite. This appears to be the important bit:

Traceback (most recent call last):
  File "/tmp/Python-3.4.2rc1/Lib/test/test_pydoc.py", line 851, in test_url_requests
    self.assertEqual(result, title, text)
AssertionError: 'Pydoc: Error - topic?key=def' != 'Pydoc: KEYWORD def'
- Pydoc: Error - topic?key=def
+ Pydoc: KEYWORD def

I can ship 3.4.2rc1 like this, but I'd really like this fixed before 3.4.2 final. Does anybody own pydoc? There's no "expert" listed on the Python Experts page. (Adding you, Georg, because you're the DE.)
---------- messages: 227267 nosy: georg.brandl, larry priority: deferred blocker severity: normal stage: needs patch status: open title: Test failure: Lib/test/test_pydoc.py line 851, "topic?key=def" type: behavior versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 22 17:51:23 2014 From: report at bugs.python.org (Mark Shannon) Date: Mon, 22 Sep 2014 15:51:23 +0000 Subject: [New-bugs-announce] [issue22462] Modules/pyexpat.c violates PEP 384 Message-ID: <1411401083.5.0.108064936526.issue22462@psf.upfronthosting.co.za> New submission from Mark Shannon: Modules/pyexpat.c includes some archaic code to create temporary frames so that, in the event of an exception being raised, expat appears in the traceback. The way this is implemented is a problem for three reasons:

1. It violates PEP 384.
2. It is incorrect, see http://bugs.python.org/issue6359.
3. It is inefficient, as a frame is generated for each call, regardless of whether an exception is raised or not.

The attached patch fixes these issues.
---------- components: Library (Lib) files: expat.patch keywords: patch messages: 227278 nosy: Mark.Shannon priority: normal severity: normal status: open title: Modules/pyexpat.c violates PEP 384 versions: Python 3.5 Added file: http://bugs.python.org/file36686/expat.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 22 19:32:49 2014 From: report at bugs.python.org (=?utf-8?q?Julien_=C3=89LIE?=) Date: Mon, 22 Sep 2014 17:32:49 +0000 Subject: [New-bugs-announce] [issue22463] Warnings when building on AIX Message-ID: <1411407169.47.0.841951681699.issue22463@psf.upfronthosting.co.za> New submission from Julien ÉLIE: Building Python 2.7.8 on AIX 7.1 gives the following warnings:

Parser/pgen.c:282:9: warning: variable 'i' set but not used [-Wunused-but-set-variable]
Include/objimpl.h:164:66: warning: right-hand operand of comma expression has no effect [-Wunused-value]
/home/iulius/autobuild/src/Python-2.7.8/Modules/cPickle.c:4495:8: warning: assuming signed overflow does not occur when assuming that (X - c) > X is always false [-Wstrict-overflow]
/home/iulius/autobuild/src/Python-2.7.8/Modules/cPickle.c:202:8: warning: assuming signed overflow does not occur when assuming that (X - c) > X is always false [-Wstrict-overflow]
/home/iulius/autobuild/src/Python-2.7.8/Modules/readline.c:777:1: warning: 'on_completion_display_matches_hook' defined but not used [-Wunused-function]
./pyconfig.h:1182:0: warning: "_POSIX_C_SOURCE" redefined
./pyconfig.h:1204:0: warning: "_XOPEN_SOURCE" redefined
/home/iulius/autobuild/src/Python-2.7.8/Modules/tkappinit.c:29:15: warning: variable 'main_window' set but not used [-Wunused-but-set-variable]
/home/iulius/autobuild/src/Python-2.7.8/Modules/_ctypes/ctypes.h:456:13: warning: 'capsule_destructor_CTYPES_CAPSULE_WCHAR_T' defined but not used [-Wunused-function]
/home/iulius/autobuild/src/Python-2.7.8/Modules/_ctypes/cfield.c:50:29: warning: variable 'length' set but not used [-Wunused-but-set-variable]
/home/iulius/autobuild/src/Python-2.7.8/Modules/_ctypes/ctypes.h:456:13: warning: 'capsule_destructor_CTYPES_CAPSULE_WCHAR_T' defined but not used [-Wunused-function]

---------- components: Build messages: 227283 nosy: jelie priority: normal severity: normal status: open title: Warnings when building on AIX type: compile error versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 22 19:37:14 2014 From: report at bugs.python.org (Stefan Behnel) Date: Mon, 22 Sep 2014 17:37:14 +0000 Subject: [New-bugs-announce] [issue22464] Speed up fractions implementation Message-ID: <1411407434.75.0.285038516488.issue22464@psf.upfronthosting.co.za> New submission from Stefan Behnel: Fractions are an excellent way to do exact money calculations and largely beat Decimal in terms of simplicity, accuracy and safety. Clearly not in terms of speed, though. The current implementation does some heavy type checking and dispatching in __new__() and a simplistic gcd based normalisation.
Here is a profiling run from the benchmark proposed in issue 22458 (which matches more or less the results with my own code):

6969671 function calls (6969278 primitive calls) in 4.835 seconds

Ordered by: internal time

 ncalls   tottime  percall  cumtime  percall  filename:lineno(function)
 519644     1.324    0.000    2.928    0.000  fractions.py:73(__new__)
 519632     0.654    0.000    0.654    0.000  fractions.py:17(gcd)
 319744     0.637    0.000    2.694    0.000  fractions.py:400(_add)
1039260     0.507    0.000    0.950    0.000  abc.py:178(__instancecheck__)
      4     0.459    0.115    4.821    1.205  bm_telco_fractions.py:38(run)
1039300     0.443    0.000    0.443    0.000  _weakrefset.py:70(__contains__)
 519616     0.301    0.000    4.329    0.000  fractions.py:373(forward)
 199872     0.272    0.000    1.335    0.000  fractions.py:416(_mul)
1598720     0.117    0.000    0.117    0.000  fractions.py:278(denominator)
 959232     0.074    0.000    0.074    0.000  fractions.py:274(numerator)

The instantiation itself takes twice as much time as the gcd calculations, and both dominate the overall runtime of the benchmark by about 60%. Improving the instantiation time would thus bring a substantial benefit.
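One direction the numbers suggest can be sketched as follows; the helper name is invented, it assumes both inputs are plain ints, and it pokes Fraction's private _numerator/_denominator slots, so it is illustrative only, not a proposed API:

```python
import math
from fractions import Fraction

# Sketch: bypass __new__'s type dispatch for known-int inputs, and use the
# C-implemented math.gcd (Python 3.5+) instead of the pure-Python gcd seen
# in the profile. The helper name is hypothetical.
def fraction_from_ints(n, d):
    g = math.gcd(n, d)          # math.gcd ignores signs
    if d < 0:
        g = -g                  # keep the denominator positive
    f = Fraction.__new__(Fraction)
    f._numerator = n // g
    f._denominator = d // g
    return f

assert fraction_from_ints(6, 4) == Fraction(3, 2)
assert fraction_from_ints(6, -4) == Fraction(-3, 2)
```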
---------- components: Library (Lib) messages: 227284 nosy: scoder priority: normal severity: normal status: open title: Speed up fractions implementation versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 22 21:53:33 2014 From: report at bugs.python.org (Pau Amma) Date: Mon, 22 Sep 2014 19:53:33 +0000 Subject: [New-bugs-announce] [issue22465] Number agreement error in section 3.2 of web docs Message-ID: <1411415613.69.0.707690037263.issue22465@psf.upfronthosting.co.za> New submission from Pau Amma: In https://docs.python.org/2/reference/datamodel.html#the-standard-type-hierarchy, under "numbers.Real (float)", in sentence "the savings in processor and memory usage that are usually the reason for using these is dwarfed by the overhead of using objects in Python" either "is dwarfed" should be "are dwarfed" (preferred) or "are usually" should be "is usually" ---------- assignee: docs at python components: Documentation messages: 227304 nosy: docs at python, pauamma priority: normal severity: normal status: open title: Number agreement error in section 3.2 of web docs type: enhancement versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 23 03:47:55 2014 From: report at bugs.python.org (Khalid) Date: Tue, 23 Sep 2014 01:47:55 +0000 Subject: [New-bugs-announce] [issue22466] problem with installing python 2.7.8 Message-ID: <1411436875.43.0.487980102228.issue22466@psf.upfronthosting.co.za> New submission from Khalid: when I'm installing python 2.7.8 I get error "there is a problem with this windows installer package. a DLL required for this install to complete could not be run". 
I'm using windows 8.1 64bit ---------- components: Installation files: Capture.JPG messages: 227319 nosy: elctr0 priority: normal severity: normal status: open title: problem with installing python 2.7.8 type: compile error versions: Python 2.7 Added file: http://bugs.python.org/file36693/Capture.JPG _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 23 07:20:15 2014 From: report at bugs.python.org (DS6) Date: Tue, 23 Sep 2014 05:20:15 +0000 Subject: [New-bugs-announce] [issue22467] Lib/http/server.py, inconsistent header casing Message-ID: <1411449615.3.0.851782925938.issue22467@psf.upfronthosting.co.za> New submission from DS6: Inconsistent casing, such as "Content-type" vs "Content-Type", "Content-Length" vs "Content-length", while technically not breaking any RFC or other HTTP-related rules (headers are case-insensitive, after all), can occasionally cause problems when attempting to retrieve already-set headers from http.server.BaseHTTPRequestHandler._headers_buffer (in my situation specifically, trying to retrieve the Content-Type header in the sendfile method in an extended BaseHTTPRequestHandler class). This happens a lot in the file and I wouldn't be surprised if the problem were to crop up in other places as well. I'm a new user of Python, so despite having searched for an answer to this problem, if there's a case-insensitive way to obtain items from a list and I'm just daft, please feel free to point me in the right direction, though I feel that the casing should be corrected regardless for consistency's and optimization's sake. (Aside: I would try to publish a patch along with this issue report with the casing issues fixed, but I'm not too knowledgeable about versioning and stuff and would have no idea where to start.)
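Until the casing in Lib/http/server.py is made consistent, a case-insensitive scan over buffered header lines is a workable interim approach. A minimal sketch (the `get_buffered_header` helper and the sample buffer below are hypothetical, not part of http.server; note the real `_headers_buffer` holds encoded bytes ending in `\r\n`, so a real lookup would decode first):

```python
def get_buffered_header(buffer, name):
    """Find a header in a list of 'Name: value' lines, ignoring case."""
    prefix = name.lower() + ':'
    for line in buffer:
        if line.lower().startswith(prefix):
            return line.split(':', 1)[1].strip()
    return None

# A buffer with the kind of mixed casing the report describes:
headers = ['Content-type: text/html', 'Content-Length: 42']
print(get_buffered_header(headers, 'Content-Type'))   # finds 'Content-type'
print(get_buffered_header(headers, 'content-length')) # finds 'Content-Length'
```

Since HTTP header field names are case-insensitive on the wire, lowering both sides before comparing is safe regardless of which spelling was buffered.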
---------- components: Library (Lib) messages: 227324 nosy: DS6 priority: normal severity: normal status: open title: Lib/http/server.py, inconsistent header casing versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 23 10:49:52 2014 From: report at bugs.python.org (Bart Olsthoorn) Date: Tue, 23 Sep 2014 08:49:52 +0000 Subject: [New-bugs-announce] [issue22468] Tarfile using fstat on GZip file object Message-ID: <1411462192.65.0.458428981621.issue22468@psf.upfronthosting.co.za> New submission from Bart Olsthoorn: CPython tarfile `gettarinfo` method uses fstat to determine the size of a file (using its fileobject). When that file object is actually created with Gzip.open (so a GZipfile), it will get the compressed size of the file. The addfile method will then continue to read the uncompressed data of the gzipped file, but will read too few bytes, resulting in a tar of incomplete files. I suggest checking the file object class before using fstat to determine the size, and raise a warning if it's a gzip file. To clarify, this only happens when adding a GZip file object to tar. I know that it's not a really common scenario, and the problem is really that GZip file size can only properly be determined by uncompressing and reading it entirely, but I think it's nice to not fail without warning. 
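The underlying mismatch is easy to demonstrate in isolation: `os.fstat` on the gzip file object's descriptor reports the on-disk (compressed) size, while `read()` returns the uncompressed data. A self-contained sketch (file names and payload are illustrative):

```python
import gzip
import os
import tempfile

payload = b'x' * 100000  # highly compressible, so the sizes differ a lot

with tempfile.TemporaryDirectory() as tmpdir:
    path = os.path.join(tmpdir, 'data.txt.gz')
    with gzip.open(path, 'wb') as f:
        f.write(payload)

    with gzip.open(path, 'rb') as f:
        # fstat sees the underlying .gz file, not the decompressed stream.
        compressed_size = os.fstat(f.fileno()).st_size
        uncompressed_size = len(f.read())

# The fstat-based size is what gettarinfo() would record -- far too small:
assert compressed_size < uncompressed_size
```

This is exactly why adding such a file object to a tar with the fstat-derived size truncates the member: the archive writer stops reading after `compressed_size` bytes of uncompressed data.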
So this is an example that is failing (imports added for completeness):

```
import gzip
import io
import tarfile

c = io.BytesIO()
with tarfile.open(mode='w', fileobj=c) as tar:
    for textfile in ['1.txt.gz', '2.txt.gz']:
        with gzip.open(textfile) as f:
            tarinfo = tar.gettarinfo(fileobj=f)
            tar.addfile(tarinfo=tarinfo, fileobj=f)
data = c.getvalue()
return data
```

Instead this reads the proper filesize and writes the files to a tar:

```
import gzip
import io
import tarfile

c = io.BytesIO()
with tarfile.open(mode='w', fileobj=c) as tar:
    for textfile in ['1.txt.gz', '2.txt.gz']:
        with gzip.open(textfile) as f:
            buff = f.read()
            tarinfo = tarfile.TarInfo(name=f.name)
            tarinfo.size = len(buff)
            tar.addfile(tarinfo=tarinfo, fileobj=io.BytesIO(buff))
data = c.getvalue()
return data
```

---------- messages: 227328 nosy: bartolsthoorn priority: normal severity: normal status: open title: Tarfile using fstat on GZip file object type: behavior versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 23 14:22:22 2014 From: report at bugs.python.org (Serhiy Storchaka) Date: Tue, 23 Sep 2014 12:22:22 +0000 Subject: [New-bugs-announce] [issue22469] Allow the "backslashreplace" error handler support decoding Message-ID: <1411474942.45.0.419383976358.issue22469@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Proposed patch allows the "backslashreplace" error handler to be used not only in encoding, but in decoding and translating.
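For reference, the behaviour this patch proposes is what eventually shipped in Python 3.5, where the handler works for decoding symmetrically to encoding:

```python
# Decoding with "backslashreplace" (Python 3.5+): undecodable bytes
# become backslash escapes instead of raising UnicodeDecodeError.
decoded = b'caf\xe9'.decode('ascii', errors='backslashreplace')
assert decoded == 'caf\\xe9'

# The encoding direction has supported this handler all along:
encoded = 'caf\xe9'.encode('ascii', errors='backslashreplace')
assert encoded == b'caf\\xe9'
```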
---------- components: Extension Modules files: backslashreplace_decode.patch keywords: patch messages: 227349 nosy: doerwalter, lemburg, ncoghlan, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Allow the "backslashreplace" error handler support decoding type: enhancement versions: Python 3.5 Added file: http://bugs.python.org/file36696/backslashreplace_decode.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 23 19:35:51 2014 From: report at bugs.python.org (Serhiy Storchaka) Date: Tue, 23 Sep 2014 17:35:51 +0000 Subject: [New-bugs-announce] [issue22470] Possible integer overflow in error handlers Message-ID: <1411493751.43.0.689535434801.issue22470@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: There are potential integer overflows in error handlers. Here is a simple patch which fixes them. ---------- assignee: serhiy.storchaka components: Extension Modules files: codecs_error_hadlers_overflow.patch keywords: patch messages: 227370 nosy: serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Possible integer overflow in error handlers type: behavior versions: Python 2.7, Python 3.4, Python 3.5 Added file: http://bugs.python.org/file36701/codecs_error_hadlers_overflow.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 23 21:39:40 2014 From: report at bugs.python.org (Todd Thomas) Date: Tue, 23 Sep 2014 19:39:40 +0000 Subject: [New-bugs-announce] [issue22471] Python build problems via Homebrew on Mac OS X when GNU core/find utils are default Message-ID: <1411501180.24.0.340452667993.issue22471@psf.upfronthosting.co.za> New submission from Todd Thomas: Installing Python via Homebrew on Mac OS X has build issues if the GNU core/find utils are set as defaults on the system.
OS X is very common, on it Homebrew is very common, via Homebrew GNU utilities are among the first installations; EG: http://goo.gl/OodjHI GNU core/find utils are likely the most common tools used on POSIX systems but Mac wants to keep rolling with UNIX tools. The Makefile is flexible. If it discovers the rm program in: /usr/local/opt/coreutils/libexec/gnubin/rm (where Homebrew would install it) the build 'could' break. Testing is ad hoc but seen by many and confirmed as "likely" by ned_deily; he adds: "that particular problem is simple to fix: just change the Makefile to /usr/bin/rm." This is the work-around for now. Props to ned_deily; this is an old/annoying problem, now solved. ---------- assignee: ronaldoussoren components: Macintosh messages: 227379 nosy: ronaldoussoren, todd_dsm priority: normal severity: normal status: open title: Python build problems via Homebrew on Mac OS X when GNU core/find utils are default type: compile error versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 23 22:34:45 2014 From: report at bugs.python.org (R. David Murray) Date: Tue, 23 Sep 2014 20:34:45 +0000 Subject: [New-bugs-announce] [issue22472] OSErrors should use str and not repr on paths Message-ID: <1411504485.58.0.497867466453.issue22472@psf.upfronthosting.co.za> New submission from R. David Murray:

>>> open(r'c:\bad\path')
Traceback (most recent call last):
  File "", line 1, in 
FileNotFoundError: [Errno 2] No such file or directory: 'c:\\bad\\path'

---------- components: Windows messages: 227389 nosy: r.david.murray priority: normal severity: normal stage: needs patch status: open title: OSErrors should use str and not repr on paths versions: Python 3.4, Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 24 01:46:15 2014 From: report at bugs.python.org (R.
David Murray) Date: Tue, 23 Sep 2014 23:46:15 +0000 Subject: [New-bugs-announce] [issue22473] The gloss on asyncio "future with run_forever" example is confusing Message-ID: <1411515975.42.0.282827755318.issue22473@psf.upfronthosting.co.za> New submission from R. David Murray: In https://docs.python.org/3/library/asyncio-task.html#example-future-with-run-forever we have the sentence "In this example, the future is responsible to display the result and to stop the loop." We could tune up the English by rewriting it: "In this example, the future is responsible for displaying the result and stopping the loop." But that isn't quite true. It is the callback associated with the future that is displaying the result and stopping the loop. So, perhaps the correct gloss is: "In this example, the got_result callback is responsible for displaying the result and stopping the loop." ---------- assignee: docs at python components: Documentation messages: 227398 nosy: docs at python, r.david.murray priority: normal severity: normal status: open title: The gloss on asyncio "future with run_forever" example is confusing type: behavior versions: Python 3.4, Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 24 02:02:23 2014 From: report at bugs.python.org (R. David Murray) Date: Wed, 24 Sep 2014 00:02:23 +0000 Subject: [New-bugs-announce] [issue22474] No explanation of how a task gets destroyed in asyncio 'task' documentation Message-ID: <1411516943.46.0.304192335953.issue22474@psf.upfronthosting.co.za> New submission from R. David Murray: In https://docs.python.org/3/library/asyncio-task.html#task, there is a note about a warning being logged if a pending task is destroyed. The section does not explain or link to an explanation of how a task might get destroyed. Nor does it define pending, but that seems reasonably clear from context (ie: the future has not completed).
The example linked to does not show how the pending task got destroyed, it only shows an example of the resulting logging, with not enough information to really understand what the final line of the error message is reporting (is kill_me an asyncio API, or the name of the task, or the future it is wrapping?) ---------- assignee: docs at python components: Documentation messages: 227400 nosy: docs at python, r.david.murray priority: normal severity: normal status: open title: No explanation of how a task gets destroyed in asyncio 'task' documentation type: behavior versions: Python 3.4, Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 24 02:51:49 2014 From: report at bugs.python.org (R. David Murray) Date: Wed, 24 Sep 2014 00:51:49 +0000 Subject: [New-bugs-announce] [issue22475] asyncio task get_stack documentation seems to contradict itself Message-ID: <1411519909.22.0.512722932689.issue22475@psf.upfronthosting.co.za> New submission from R. David Murray: The get_stack method docs say: "If the coroutine is active, returns the stack where it was suspended." I presume this means something more like "if the coroutine is not done...", since in English you can't be both active *and* suspended. Or does it mean "last suspended"? Then it talks about limit and how stacks limit returns newest frames while with tracebacks it is oldest frames...but the last sentence says that for suspended coroutines only one frame is returned. So there is a definite lack of clarity here: can we get multiple frames if the coroutine is not suspended (in which case that first sentence seems likely to be the 'last suspended' interpretation), or for non-tracebacks can we only ever get one frame, in which case why talk about limit returning the newest frames for stacks? 
---------- assignee: docs at python components: Documentation messages: 227401 nosy: docs at python, r.david.murray priority: normal severity: normal status: open title: asyncio task get_stack documentation seems to contradict itself type: behavior versions: Python 3.4, Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 24 03:34:04 2014 From: report at bugs.python.org (R. David Murray) Date: Wed, 24 Sep 2014 01:34:04 +0000 Subject: [New-bugs-announce] [issue22476] asyncio task chapter confusion about 'task', 'future', and 'schedule' Message-ID: <1411522444.59.0.775206973129.issue22476@psf.upfronthosting.co.za> New submission from R. David Murray: Sorry for creating all these doc issues, but I'm reading these docs for the first time, and I figure it is good to capture my confusion now in the hopes that they can be clarified for other people's first readthroughs. In the task chapter, we have Futures introduced. (As an aside, it is interesting that the link to the concurrent.futures.Future docs start by saying "you should never instantiate this class directly", but the asyncio examples show direct instantiation of a future, but this difference is not mentioned in the asyncio docs when discussing the differences between them). We then see an example that appears to include scheduling a future via asyncio.async, because there is a specific mention of how we can attach a callback after it is scheduled but before the event loop is started. Then tasks are introduced with the text: Schedule the execution of a coroutine: wrap it in a future. A task is a subclass of Future. Now, having read a bit more elsewhere in the docs, I see that a Future is scheduled by its creation, and the call to async on the future in the previous example is a NOOP. But that is *not* the implication of the text (see below as well). So, there are several points of dissonance here. 
One is that the implication is we pass Task a coroutine and it *wraps* it in a Future. But a Task *is* a Future. So do we have a Future using another Future to wrap a coroutine, or what? Another point of dissonance is that this phrasing implies that the *act* of wrapping the coroutine in a Future is what schedules it, yet when we introduced Future we apparently had to schedule it explicitly. So I think there needs to be a mention of scheduling in the Future section. But this then brings up the question of what exactly it is that Task does that differs from what a regular Future does, a question that is only partially answered by the documentation, because what 'scheduled' means for a regular Future isn't spelled out in the Future section. Which I think means that what we really need is an overview document that puts all these concepts together into a conceptual framework, so that the documentation of the individual pieces makes sense. As a followon point, the gloss on example of parallel execution of tasks says "A task is automatically scheduled for execution when it is created. The event loop stops when all tasks are done." This reads very strangely to me. The first sentence seems to be pointing out the difference between this example and the previous one with Future where we had to explicitly schedule it (by calling async which creates a task...). But it seems that that isn't true...so why is that sentence there? The second sentence presumably refers to the fact that run_until_complete runs the event loop until all scheduled tasks are complete..because they are wrapped in 'wait'. But wait is not cross referenced or mentioned in the gloss, so I had to think about it carefully and look up wait to conclude that that was what it meant. Of course, I could be more confused than necessary because I started reading with the 'task' chapter, but I don't see an overview chapter in the doc tree. 
---------- assignee: docs at python components: Documentation messages: 227405 nosy: docs at python, r.david.murray priority: normal severity: normal status: open title: asyncio task chapter confusion about 'task', 'future', and 'schedule' type: behavior versions: Python 3.4, Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 24 09:25:02 2014 From: report at bugs.python.org (Brian Gladman) Date: Wed, 24 Sep 2014 07:25:02 +0000 Subject: [New-bugs-announce] [issue22477] GCD in Fractions Message-ID: <1411543502.18.0.966419857361.issue22477@psf.upfronthosting.co.za> New submission from Brian Gladman: There is a discussion of this issue on comp.lang.python. The function known as 'the greatest common divisor' has a number of well defined mathematical properties for both positive and negative integers (see, for example, Elementary Number Theory by Kenneth Rosen). In particular gcd(a, b) = gcd(|a|, |b|). But the Python version of this function in the fractions module doesn't conform to these properties since it returns a negative value when its second parameter is negative. This behaviour is documented but I think it is undesirable to provide a function that has the well known and widely understood name 'gcd', but one that doesn't match the behaviour normally associated with this function. I hence believe that consideration should be given to changing the behaviour of the Python greatest common divisor function to match the mathematical properties expected of this function. If necessary a local function in the fractions module could maintain the current behaviour.
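The sign behaviour described here is easy to reproduce with the 2.7/3.4 implementation (reproduced below; `fractions.gcd` was later deprecated in favour of `math.gcd`, added in 3.5, which is always non-negative):

```python
import math

def gcd(a, b):
    # The fractions.gcd of the era: plain Euclid; because Python's %
    # takes the sign of the divisor, the result takes the sign of b.
    while b:
        a, b = b, a % b
    return a

assert gcd(-4, 6) == 2    # a negative first argument is fine...
assert gcd(4, -6) == -2   # ...but a negative second argument leaks through
assert math.gcd(4, -6) == 2  # math.gcd matches the textbook definition
```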
---------- components: Library (Lib) messages: 227410 nosy: brg at gladman.plus.com priority: normal severity: normal status: open title: GCD in Fractions type: behavior versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 24 09:49:29 2014 From: report at bugs.python.org (karl) Date: Wed, 24 Sep 2014 07:49:29 +0000 Subject: [New-bugs-announce] [issue22478] tests for urllib2net are in bad shapes Message-ID: <1411544969.4.0.411539289792.issue22478@psf.upfronthosting.co.za> New submission from karl:

? ./python.exe -V
Python 3.4.2rc1+
? hg tip
changeset: 92532:6dcc96fa3970
tag: tip
parent: 92530:ad45c2707006
parent: 92531:8eb4eec8626c
user: Benjamin Peterson
date: Mon Sep 22 22:44:21 2014 -0400
summary: merge 3.4 (#22459)

When working on issue #5550, I realized that some tests are currently failing. Here is the log of running:

? ./python.exe -m unittest -v Lib/test/test_urllib2net.py
test_close (Lib.test.test_urllib2net.CloseSocketTest) ... ok
test_custom_headers (Lib.test.test_urllib2net.OtherNetworkTests) ... FAIL
test_file (Lib.test.test_urllib2net.OtherNetworkTests) ...
test_ftp (Lib.test.test_urllib2net.OtherNetworkTests) ... skipped "Resource 'ftp://gatekeeper.research.compaq.com/pub/DEC/SRC/research-reports/00README-Legal-Rules-Regs' is not available"
test_redirect_url_withfrag (Lib.test.test_urllib2net.OtherNetworkTests) ... ok
test_sites_no_connection_close (Lib.test.test_urllib2net.OtherNetworkTests) ... ok
test_urlwithfrag (Lib.test.test_urllib2net.OtherNetworkTests) ... ok
test_ftp_basic (Lib.test.test_urllib2net.TimeoutTest) ... ok
test_ftp_default_timeout (Lib.test.test_urllib2net.TimeoutTest) ... ok
test_ftp_no_timeout (Lib.test.test_urllib2net.TimeoutTest) ... ok
test_ftp_timeout (Lib.test.test_urllib2net.TimeoutTest) ... ok
test_http_basic (Lib.test.test_urllib2net.TimeoutTest) ... ok
test_http_default_timeout (Lib.test.test_urllib2net.TimeoutTest) ... ok
test_http_no_timeout (Lib.test.test_urllib2net.TimeoutTest) ... ok
test_http_timeout (Lib.test.test_urllib2net.TimeoutTest) ... ok

======================================================================
ERROR: test_file (Lib.test.test_urllib2net.OtherNetworkTests) (url='file:/Users/karl/code/cpython/%40test_61795_tmp')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/karl/code/cpython/Lib/test/test_urllib2net.py", line 243, in _test_urls
    f = urlopen(url, req, TIMEOUT)
  File "/Users/karl/code/cpython/Lib/test/test_urllib2net.py", line 33, in wrapped
    return _retry_thrice(func, exc, *args, **kwargs)
  File "/Users/karl/code/cpython/Lib/test/test_urllib2net.py", line 23, in _retry_thrice
    return func(*args, **kwargs)
  File "/Users/karl/code/cpython/Lib/urllib/request.py", line 447, in open
    req = Request(fullurl, data)
  File "/Users/karl/code/cpython/Lib/urllib/request.py", line 267, in __init__
    origin_req_host = request_host(self)
  File "/Users/karl/code/cpython/Lib/urllib/request.py", line 250, in request_host
    host = _cut_port_re.sub("", host, 1)
TypeError: expected string or buffer

======================================================================
ERROR: test_file (Lib.test.test_urllib2net.OtherNetworkTests) (url=('file:///nonsensename/etc/passwd', None, ))
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/karl/code/cpython/Lib/test/test_urllib2net.py", line 243, in _test_urls
    f = urlopen(url, req, TIMEOUT)
  File "/Users/karl/code/cpython/Lib/test/test_urllib2net.py", line 33, in wrapped
    return _retry_thrice(func, exc, *args, **kwargs)
  File "/Users/karl/code/cpython/Lib/test/test_urllib2net.py", line 23, in _retry_thrice
    return func(*args, **kwargs)
  File "/Users/karl/code/cpython/Lib/urllib/request.py", line 447, in open
    req = Request(fullurl, data)
  File "/Users/karl/code/cpython/Lib/urllib/request.py", line 267, in __init__
    origin_req_host = request_host(self)
  File "/Users/karl/code/cpython/Lib/urllib/request.py", line 250, in request_host
    host = _cut_port_re.sub("", host, 1)
TypeError: expected string or buffer

======================================================================
FAIL: test_custom_headers (Lib.test.test_urllib2net.OtherNetworkTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/karl/code/cpython/Lib/test/test_urllib2net.py", line 186, in test_custom_headers
    self.assertEqual(request.get_header('User-agent'), 'Test-Agent')
AssertionError: 'Python-urllib/3.4' != 'Test-Agent'
- Python-urllib/3.4
+ Test-Agent

----------------------------------------------------------------------
Ran 16 tests in 124.879s

FAILED (failures=1, errors=2, skipped=1)

---------- components: Tests messages: 227417 nosy: karlcow priority: normal severity: normal status: open title: tests for urllib2net are in bad shapes versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 24 11:21:10 2014 From: report at bugs.python.org (Bekket McClane) Date: Wed, 24 Sep 2014 09:21:10 +0000 Subject: [New-bugs-announce] [issue22479] strange behavior of importing random module Message-ID: <1411550470.24.0.951916107217.issue22479@psf.upfronthosting.co.za> New submission from Bekket McClane: When I import the random module using "import random" in the command line, it worked fine. But if I import it in a file, the program will throw a strange error, but sometimes it just exits right away after the import statement with no error. Moreover, if I execute the python command line again and "import random" after that, the command line python exits right away too (with exit code 0).
Only after I restarted the whole terminal program did the strange behavior go away ---------- components: Library (Lib) files: Screenshot from 2014-09-24 17:04:58.png messages: 227428 nosy: Bekket.McClane priority: normal severity: normal status: open title: strange behavior of importing random module type: crash versions: Python 2.7 Added file: http://bugs.python.org/file36705/Screenshot from 2014-09-24 17:04:58.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 24 13:22:12 2014 From: report at bugs.python.org (chrysn) Date: Wed, 24 Sep 2014 11:22:12 +0000 Subject: [New-bugs-announce] [issue22480] SystemExit out of run_until_complete causes AttributeError when using python3 -m Message-ID: <1411557732.26.0.928658035687.issue22480@psf.upfronthosting.co.za> New submission from chrysn: the attached test.py snippet, which runs an asyncio main loop to the completion of a coroutine raising SystemExit, runs cleanly when invoked using `python3 test.py`, but shows a logging error from the Task.__del__ method when invoked using `python3 -m test`. the error message (attached as test.err) indicates that the builtins module has already been emptied by the time the Task's destructor is run. i could reproduce the problem with an easier test case without asyncio (destructoretest.py), but then again, there the issue is slightly more obvious (one could argue a "don't do that, then"), and it occurs no matter how the program is run. i'm leaving this initially assigned to asyncio, because (to the best of my knowledge) test.py does not do bad things by itself, and the behavior is inconsistent only there.
---------- components: asyncio messages: 227440 nosy: chrysn, gvanrossum, haypo, yselivanov priority: normal severity: normal status: open title: SystemExit out of run_until_complete causes AttributeError when using python3 -m versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 24 16:20:48 2014 From: report at bugs.python.org (Владимир Тырин) Date: Wed, 24 Sep 2014 14:20:48 +0000 Subject: [New-bugs-announce] [issue22481] Lists within tuples mutability issue Message-ID: <1411568448.91.0.544050885168.issue22481@psf.upfronthosting.co.za> New submission from Владимир Тырин: This behavior seems to be very strange.

>>> l = [1, 2, 3]
>>> t = ('a', l)
>>> t
('a', [1, 2, 3])
>>> t[1] += [4]
Traceback (most recent call last):
  File "", line 1, in 
TypeError: 'tuple' object does not support item assignment
>>> t
('a', [1, 2, 3, 4])

---------- messages: 227451 nosy: Владимир.Тырин priority: normal severity: normal status: open title: Lists within tuples mutability issue type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 24 17:59:18 2014 From: report at bugs.python.org (Dom Zippilli) Date: Wed, 24 Sep 2014 15:59:18 +0000 Subject: [New-bugs-announce] [issue22482] logging: fileConfig doesn't support formatter styles Message-ID: <1411574358.31.0.567150146561.issue22482@psf.upfronthosting.co.za> New submission from Dom Zippilli: In the logging module's config.py, see the _create_formatters(cp) method used by the fileConfig() method.
Note that it pulls "format" and "datefmt" and submits these in the formatter constructor:

    f = c(fs, dfs)

However, the Formatter constructor has a third argument for formatting style:

    def __init__(self, fmt=None, datefmt=None, style='%')

Since the argument is not passed, ConfigParser-format logging configs must use %-style logging format masks. We'd prefer to use curlies. Note that the code for the dictionary configurator does this correctly:

    fmt = config.get('format', None)
    dfmt = config.get('datefmt', None)
    style = config.get('style', '%')
    result = logging.Formatter(fmt, dfmt, style)

---------- messages: 227460 nosy: domzippilli priority: normal severity: normal status: open title: logging: fileConfig doesn't support formatter styles type: enhancement versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 24 18:12:48 2014 From: report at bugs.python.org (Stefan Krah) Date: Wed, 24 Sep 2014 16:12:48 +0000 Subject: [New-bugs-announce] [issue22483] Copyright infringement on PyPI Message-ID: <1411575168.32.0.637271338488.issue22483@psf.upfronthosting.co.za> New submission from Stefan Krah: The following URL contains copyrighted verbatim text from bytereef.org: https://pypi.python.org/pypi/m3-cdecimal I'm not surprised, since the ongoing Walmartization of Open Source has little regard for authors.
---------- messages: 227461 nosy: skrah priority: normal severity: normal status: open title: Copyright infringement on PyPI _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 24 20:15:08 2014 From: report at bugs.python.org (Berker Peksag) Date: Wed, 24 Sep 2014 18:15:08 +0000 Subject: [New-bugs-announce] [issue22484] Build doc archives for RC versions Message-ID: <1411582508.1.0.277862654649.issue22484@psf.upfronthosting.co.za> New submission from Berker Peksag: The attached patch partly reverts https://hg.python.org/cpython/rev/48033d90c61d#l2.126 since it breaks building doc archives for RC versions: - https://mail.python.org/pipermail/docs/2014-September/020211.html - https://mail.python.org/pipermail/docs/2014-September/020214.html - https://mail.python.org/pipermail/docs/2014-September/020215.html See https://hg.python.org/cpython/rev/ecfb6f8a7bcf for the original changeset. ---------- assignee: docs at python components: Documentation files: autobuild.diff keywords: patch messages: 227479 nosy: benjamin.peterson, berker.peksag, docs at python priority: high severity: normal stage: patch review status: open title: Build doc archives for RC versions type: behavior versions: Python 2.7 Added file: http://bugs.python.org/file36711/autobuild.diff _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 24 20:18:21 2014 From: report at bugs.python.org (Carol Willing) Date: Wed, 24 Sep 2014 18:18:21 +0000 Subject: [New-bugs-announce] [issue22485] Documentation download links (3.4.2 Message-ID: <1411582701.15.0.026465273673.issue22485@psf.upfronthosting.co.za> New submission from Carol Willing: As reported by a couple of users on the python-docs mailing list: Python 3.4.2rc1 docs are giving a 404 Not Found when clicking on the links to download (https://docs.python.org/3.4/download.html). 
Python 3.5.0a0 links download docs correctly (https://docs.python.org/3.5/download.html). If I option-click on a Mac, I can download Python 3.4.2rc1 docs; however, the direct links 404. ---------- assignee: docs at python components: Documentation messages: 227480 nosy: docs at python, willingc priority: normal severity: normal status: open title: Documentation download links (3.4.2 type: behavior versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 24 22:40:35 2014 From: report at bugs.python.org (Stefan Behnel) Date: Wed, 24 Sep 2014 20:40:35 +0000 Subject: [New-bugs-announce] [issue22486] Speed up fractions.gcd() Message-ID: <1411591235.17.0.168367966743.issue22486@psf.upfronthosting.co.za> New submission from Stefan Behnel: fractions.gcd() is required for normalising numerator and denominator of the Fraction data type. Some speed improvements were applied to Fraction in issue 22464, now the gcd() function takes up about half of the instantiation time in the benchmark in issue 22458, which makes it quite a heavy part of the overall Fraction computation time. The current implementation is

    def gcd(a, b):
        while b:
            a, b = b, a%b
        return a

Reimplementing it in C would provide for much faster calculations. Here is a Cython version that simply drops the calculation loop into C as soon as the numbers are small enough to fit into a C long long int:

    def _gcd(a, b):
        # Try doing all computation in C space.  If the numbers are too large
        # at the beginning, retry until they are small enough.
        cdef long long ai, bi
        while b:
            try:
                ai, bi = a, b
            except OverflowError:
                pass
            else:
                # switch to C loop
                while bi:
                    ai, bi = bi, ai%bi
                return ai
            a, b = b, a%b
        return a

It's substantially faster already because the values will either be small enough right from the start or quickly become so after a few iterations with Python objects.
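As a plain-Python illustration (mine, not part of the patch), the Euclidean loop being optimised here is the one that normalises every Fraction; the sample numbers are my own:

```python
from fractions import Fraction

def gcd(a, b):
    # Euclid's algorithm, as in the current fractions.gcd()
    while b:
        a, b = b, a % b
    return a

print(gcd(1071, 462))       # 21
print(Fraction(1071, 462))  # 51/22, i.e. both terms divided by 21
```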
Further improvements should be possible with a dedicated PyLong implementation based on Lehmer's GCD algorithm: https://en.wikipedia.org/wiki/Lehmer_GCD_algorithm ---------- components: Library (Lib) messages: 227487 nosy: scoder priority: normal severity: normal status: open title: Speed up fractions.gcd() type: performance versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 24 23:37:14 2014 From: report at bugs.python.org (Ryan McCampbell) Date: Wed, 24 Sep 2014 21:37:14 +0000 Subject: [New-bugs-announce] [issue22487] ABC register doesn't check abstract methods Message-ID: <1411594634.83.0.710173363253.issue22487@psf.upfronthosting.co.za> New submission from Ryan McCampbell: Is there a reason register() doesn't check for abstract methods, like subclassing does? Would it fail for some builtin classes? It seems that this would be a better guarantee that, say, something really is iterable when you check isinstance(o, collections.Iterable), since someone could have called collections.Iterable.register(o.__class__) without adding an __iter__ method to their class.
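The concern can be demonstrated directly (my sketch, using collections.abc, where these ABCs live in Python 3.3+):

```python
from collections.abc import Iterable

class NotActuallyIterable:
    pass

# register() records the class as a virtual subclass without any checks:
Iterable.register(NotActuallyIterable)

obj = NotActuallyIterable()
print(isinstance(obj, Iterable))  # True
print(hasattr(obj, "__iter__"))   # False: the isinstance check over-promises
```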
---------- messages: 227493 nosy: larry, orsenthil priority: normal severity: normal status: open title: 3.4 rc2 docs download link broken _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 25 03:04:56 2014 From: report at bugs.python.org (Robert Collins) Date: Thu, 25 Sep 2014 01:04:56 +0000 Subject: [New-bugs-announce] [issue22489] .gitignore file Message-ID: <1411607096.09.0.0618020522514.issue22489@psf.upfronthosting.co.za> New submission from Robert Collins: The .gitignore file was missing some build products on windows. The attached patch makes the tree be clean after doing a debug build. ---------- files: windows-git-ignore.diff keywords: patch messages: 227498 nosy: rbcollins priority: normal severity: normal status: open title: .gitignore file Added file: http://bugs.python.org/file36715/windows-git-ignore.diff _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 25 07:30:45 2014 From: report at bugs.python.org (Tim Smith) Date: Thu, 25 Sep 2014 05:30:45 +0000 Subject: [New-bugs-announce] [issue22490] Using realpath for __PYVENV_LAUNCHER__ makes Homebrew installs fragile Message-ID: <1411623045.52.0.589017779939.issue22490@psf.upfronthosting.co.za> New submission from Tim Smith: Homebrew, the OS X package manager, distributes python3 as a framework build. We like to be able to control the shebang that gets written to scripts installed with pip. [1] The path we prefer for invoking the python3 interpreter is like /usr/local/opt/python3/bin/python3.4, which is symlinked to the framework stub launcher at /usr/local/Cellar/python3/3.4.1_1/Frameworks/Python.framework/Versions/3.4/bin/python3.4. For Python 2.x, we discovered that assigning "/usr/local/opt/python/bin/python2.7" to sys.executable in sitecustomize.py resulted in correct shebangs for scripts installed by pip. 
The same approach doesn't work with Python 3. A very helpful conversation with Vinay Sajip [2] led us to consider how the python3 stub launcher sets __PYVENV_LAUNCHER__, which distlib uses in preference to sys.executable to discover the intended interpreter when pip writes shebangs. Roughly, __PYVENV_LAUNCHER__ is set from argv[0], so it mimics the invocation of the stub, except that symlinks in the directory component of the path to the identified interpreter are resolved to a "real" path. For us, this means that __PYVENV_LAUNCHER__ (and therefore the shebangs of installed scripts) always points to the Cellar path, not the preferred opt path, even when python is invoked via the opt path. Avoiding this symlink resolution would allow us to control pip's shebang (which sets the shebangs of all pip-installed scripts) by controlling the way we invoke python3 when we use ensurepip during installation. Building python3 with the attached diff removes the symlink resolution. [1] This is important to Homebrew because packages are physically installed to versioned prefixes, like /usr/local/Cellar/python3/3.4.1_1/. References to these real paths are fragile and break when the version number changes or when the revision number ("_1") changes, which can happen when e.g. openssl is released and Python needs to be recompiled against the new library. To avoid this breakage, Homebrew maintains a version-independent symlink to each package, like /usr/local/opt/python3, which points to the .../Cellar/python3/3.4.1_1/ location.
[2] https://github.com/pypa/pip/issues/2031 ---------- assignee: ronaldoussoren components: Macintosh files: dont-realpath-venv-dirname.diff keywords: patch messages: 227505 nosy: ned.deily, ronaldoussoren, tdsmith, vinay.sajip priority: normal severity: normal status: open title: Using realpath for __PYVENV_LAUNCHER__ makes Homebrew installs fragile type: behavior versions: Python 3.4, Python 3.5 Added file: http://bugs.python.org/file36718/dont-realpath-venv-dirname.diff _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 25 08:56:27 2014 From: report at bugs.python.org (Serhiy Storchaka) Date: Thu, 25 Sep 2014 06:56:27 +0000 Subject: [New-bugs-announce] [issue22491] Support Unicode line boundaries in regular expression Message-ID: <1411628187.04.0.177569242065.issue22491@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Currently regular expressions support only '\n' as a line boundary. To meet Unicode standard requirement RL1.6 [1] all Unicode line separators should be supported: '\n', '\r', '\v', '\f', '\x85', '\u2028', '\u2029' and the two-character '\r\n'. Also it is recommended that '.' in "dotall" mode matches '\r\n'. It is also strongly recommended to support the '\R' pattern which matches all line separators (equivalent to '(?:\r\n|(?!\r\n)[\n\v\f\r\x85\u2028\u2029])').
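For comparison (my own aside, not part of the report), str.splitlines() already recognizes the full Unicode set of boundaries, while re's MULTILINE mode only understands '\n':

```python
import re

text = '\r\n\n\r'

# str.splitlines() sees three line breaks: '\r\n', '\n' and '\r'
print(text.splitlines())  # ['', '', '']

# re.MULTILINE treats only '\n' as a line boundary
print([m.start() for m in re.finditer('$', text, re.M)])  # [1, 2, 4]
```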
>>> [m.start() for m in re.finditer('$', '\r\n\n\r', re.M)]
[1, 2, 4]  # should be [0, 2, 3, 4]
>>> [m.start() for m in re.finditer('^', '\r\n\n\r', re.M)]
[0, 2, 3]  # should be [0, 2, 3, 4]
>>> [m.group() for m in re.finditer('.', '\r\n\n\r', re.M|re.S)]
['\r', '\n', '\n', '\r']  # should be ['\r\n', '\n', '\r']
>>> [m.group() for m in re.finditer(r'\R', '\r\n\n\r')]
[]  # should be ['\r\n', '\n', '\r']

[1] http://www.unicode.org/reports/tr18/#RL1.6 ---------- components: Extension Modules, Regular Expressions messages: 227508 nosy: ezio.melotti, mrabarnett, pitrou, serhiy.storchaka priority: normal severity: normal stage: needs patch status: open title: Support Unicode line boundaries in regular expression type: enhancement versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 25 09:02:28 2014 From: report at bugs.python.org (Georg Brandl) Date: Thu, 25 Sep 2014 07:02:28 +0000 Subject: [New-bugs-announce] [issue22492] small addition to print() docs: no binary streams. Message-ID: <1411628548.86.0.942026388969.issue22492@psf.upfronthosting.co.za> New submission from Georg Brandl: This is implicit in the "converts arguments to strings", but people might reasonably expect that print(x, file=y) is the same as y.write(x) for strings and bytes. This paragraph makes it clear. ---------- files: print_binary.diff keywords: patch messages: 227509 nosy: georg.brandl priority: normal severity: normal status: open title: small addition to print() docs: no binary streams.
Added file: http://bugs.python.org/file36719/print_binary.diff _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 25 12:29:00 2014 From: report at bugs.python.org (Serhiy Storchaka) Date: Thu, 25 Sep 2014 10:29:00 +0000 Subject: [New-bugs-announce] [issue22493] Deprecate the use of flags not at the start of regular expression Message-ID: <1411640940.81.0.804351058131.issue22493@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: The meaning of inline flags not at the start of a regular expression is ambiguous. The current re implementation, and regex in V0 mode, enlarge the scope to the whole expression. In V1 mode in regex they affect only the rest of the expression. I propose to deprecate (and then forbid in 3.7) the use of inline flags not at the start of a regular expression. This will help to change the meaning of inline flags in the middle of the expression in future (in 3.8 or later). ---------- components: Library (Lib), Regular Expressions files: re_deprecate_nonstart_flags.patch keywords: patch messages: 227520 nosy: ezio.melotti, mrabarnett, pitrou, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Deprecate the use of flags not at the start of regular expression type: enhancement versions: Python 3.5 Added file: http://bugs.python.org/file36721/re_deprecate_nonstart_flags.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 25 12:42:32 2014 From: report at bugs.python.org (Sean Dague) Date: Thu, 25 Sep 2014 10:42:32 +0000 Subject: [New-bugs-announce] [issue22494] default logging time string is not localized Message-ID: <1411641752.05.0.384907346066.issue22494@psf.upfronthosting.co.za> New submission from Sean Dague: The default time string is not localized using locale-specific formatting; the seconds/milliseconds separator is instead hardcoded to a ','.
https://hg.python.org/cpython/file/c87e00a6258d/Lib/logging/__init__.py#l483 demonstrates this. Instead I think we should set that to the value of:

    locale.localeconv()['decimal_point']

While this is clearly a very minor issue, I stare at enough logging output data that falls back to default formats (due to testing environments) that I would love for this to be locale aware. ---------- components: Library (Lib) messages: 227521 nosy: sdague priority: normal severity: normal status: open title: default logging time string is not localized type: behavior versions: Python 2.7, Python 3.1, Python 3.2, Python 3.3, Python 3.4, Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 25 14:27:44 2014 From: report at bugs.python.org (Wolfgang Maier) Date: Thu, 25 Sep 2014 12:27:44 +0000 Subject: [New-bugs-announce] [issue22495] merge large parts of test_binop.py and test_fractions.py Message-ID: <1411648064.39.0.863179207441.issue22495@psf.upfronthosting.co.za> New submission from Wolfgang Maier: test_binop.py says that it tests binary operators on subtypes of built-in types, but in fact largely focuses on testing its own class Rat, which simply inherits from object and is, essentially, just a simple implementation of fractions.Fraction. Instead of doing mostly redundant tests here and there it might be better to merge this part (up to line 305) of test_binop.py into test_fractions.py, then maybe add tests of subtypes of built-in types other than just object to test_binop.py. This requires quite a bit of work though for a relatively minor improvement, so do you think it's worth the effort?
---------- components: Tests messages: 227530 nosy: wolma priority: normal severity: normal status: open title: merge large parts of test_binop.py and test_fractions.py type: enhancement versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 25 17:48:51 2014 From: report at bugs.python.org (Mathieu Dupuy) Date: Thu, 25 Sep 2014 15:48:51 +0000 Subject: [New-bugs-announce] [issue22496] urllib2 fails against IIS (urllib2 can't parse 401 reply www-authenticate headers) Message-ID: <1411660131.7.0.581562777837.issue22496@psf.upfronthosting.co.za> New submission from Mathieu Dupuy: When connecting to an IIS server, it replies that:

Unauthorized
Server: Microsoft-IIS/7.5
WWW-Authenticate: Digest qop="auth",algorithm=MD5-sess,nonce="+Upgraded+v1fe2ba746797cfd974e85f9f6dbdd6e514ec45becd2d8cf0112c764c676ad4a00f98517bb166e467dcad4b942254bd9b71d447e3529c509d2",charset=utf-8,realm="Digest"
WWW-Authenticate: Negotiate
WWW-Authenticate: NTLM
X-Powered-By: ASP.NET
Date: Thu, 25 Sep 2014 15:11:03 GMT
Connection: close
Content-Length: 0

which blew python 2.7 urllib2 like this:

  File "tut2.py", line 23, in
    response = opener.open('https://exca010.encara.local.ads/ews/Services.wsdl')
  File "/usr/lib64/python2.7/urllib2.py", line 410, in open
    response = meth(req, response)
  File "/usr/lib64/python2.7/urllib2.py", line 524, in http_response
    'http', request, response, code, msg, hdrs)
  File "/usr/lib64/python2.7/urllib2.py", line 442, in error
    result = self._call_chain(*args)
  File "/usr/lib64/python2.7/urllib2.py", line 382, in _call_chain
    result = func(*args)
  File "/usr/lib64/python2.7/urllib2.py", line 1090, in http_error_401
    host, req, headers)
  File "/usr/lib64/python2.7/urllib2.py", line 973, in http_error_auth_reqed
    return self.retry_http_digest_auth(req, authreq)
  File "/usr/lib64/python2.7/urllib2.py", line 977, in retry_http_digest_auth
    chal = parse_keqv_list(parse_http_list(challenge))
  File "/usr/lib64/python2.7/urllib2.py", line 1259, in parse_keqv_list
    k, v = elt.split('=', 1)
ValueError: need more than 1 value to unpack

urllib2 seems to assume that every www-authenticate header value will be a list of equal-sign-separated tuples. On python3, the error is different and triggers this http://bugs.python.org/issue2202 (which is soon-to-be-fixed) ---------- components: Library (Lib) messages: 227543 nosy: deronnax priority: normal severity: normal status: open title: urllib2 fails against IIS (urllib2 can't parse 401 reply www-authenticate headers) type: crash versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 25 18:36:49 2014 From: report at bugs.python.org (Katherine Dykes) Date: Thu, 25 Sep 2014 16:36:49 +0000 Subject: [New-bugs-announce] [issue22497] msiexec not creating msvcr90.dll with python -2.7.6.msi Message-ID: <1411663009.27.0.792160557257.issue22497@psf.upfronthosting.co.za> New submission from Katherine Dykes: This is a new issue meant to resurrect Issue 5459. When Python 2.7.x (and 2.6.x before that) are installed for all users, then 'msvcr90.dll' is not created in the installation directory. It is if you install for a single user. However, many Windows users install Python for all users by default. This causes problems later when trying to install software that combines any sort of C or Fortran code.
---------- components: Windows messages: 227550 nosy: dykesk priority: normal severity: normal status: open title: msiexec not creating msvcr90.dll with python -2.7.6.msi versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 25 19:26:56 2014 From: report at bugs.python.org (James Paget) Date: Thu, 25 Sep 2014 17:26:56 +0000 Subject: [New-bugs-announce] [issue22498] frozenset allows modification via -= operator Message-ID: <1411666016.57.0.993512040715.issue22498@psf.upfronthosting.co.za> New submission from James Paget: The operator -= modifies a frozenset (this should not be possible), instead of signaling a TypeError. Contrast with the += operator.

>>> f=frozenset([1,2])
>>> f
frozenset([1, 2])
>>> f -= frozenset([1])
>>> f
frozenset([2])
>>> f -= frozenset([2])
>>> f
frozenset([])
>>> f += frozenset([2])
Traceback (most recent call last):
  File "", line 1, in
TypeError: unsupported operand type(s) for +=: 'frozenset' and 'frozenset'
>>>

---------- components: Interpreter Core messages: 227557 nosy: James.Paget priority: normal severity: normal status: open title: frozenset allows modification via -= operator type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 26 03:24:30 2014 From: report at bugs.python.org (Nikolaus Rath) Date: Fri, 26 Sep 2014 01:24:30 +0000 Subject: [New-bugs-announce] [issue22499] [SSL: BAD_WRITE_RETRY] bad write retry in _ssl.c:1636 Message-ID: <1411694670.5.0.645076106925.issue22499@psf.upfronthosting.co.za> New submission from Nikolaus Rath: I received a bugreport due to a crash when calling SSLObject.send(). The traceback ends with: [...]
  File "/usr/local/lib/python3.4/dist-packages/dugong-3.2-py3.4.egg/dugong/__init__.py", line 584, in _co_send
    len_ = self._sock.send(buf)
  File "/usr/lib/python3.4/ssl.py", line 679, in send
    v = self._sslobj.write(data)
ssl.SSLError: [SSL: BAD_WRITE_RETRY] bad write retry (_ssl.c:1636)

At first I thought that this is an exception that my application should catch and handle. However, when trying to figure out what exactly BAD_WRITE_RETRY means I get the impression that the fault is actually in Python's _ssl.c. The only places where this error is returned by OpenSSL are ssl/s2_pkt.c:480 and ssl/s3_pkt.c:1179, and in each case the problem seems to be with the caller supplying an invalid buffer after an initial write request failed to complete due to non-blocking IO. This does not seem to be something that could be caused by whatever Python code, so I think there is a problem in _ssl.c. ---------- components: Library (Lib) messages: 227582 nosy: alex, christian.heimes, dstufft, giampaolo.rodola, janssen, nikratio, pitrou priority: normal severity: normal status: open title: [SSL: BAD_WRITE_RETRY] bad write retry in _ssl.c:1636 type: crash versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 26 04:13:11 2014 From: report at bugs.python.org (Tristan Fisher) Date: Fri, 26 Sep 2014 02:13:11 +0000 Subject: [New-bugs-announce] [issue22500] Argparse always stores True for positional arguments Message-ID: <1411697591.54.0.644462520859.issue22500@psf.upfronthosting.co.za> New submission from Tristan Fisher: It's my understanding that giving the action="store_true" to an argument in argparse defaults to False. When using non-double-dashed/positional arguments, the argument resorts to True (even if explicitly marked default=False).
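A quick reproduction (my own sketch, not the attached file; note that very recent argparse releases may reject a zero-argument positional at add_argument time instead of silently storing True):

```python
import argparse

parser = argparse.ArgumentParser()
try:
    parser.add_argument("meow", action="store_true", default=False)
    # nothing is passed on the command line, yet the value comes back True
    result = parser.parse_args([]).meow
except (ValueError, SystemExit):
    # newer argparse versions refuse nargs=0 positionals outright
    result = "rejected"

print(result)
```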
I've attached a minimal example, but, for clarity, the relevant line is as such: parser.add_argument("meow", action="store_true", default=False) I realize that this might strike some as an odd usage, and I always have the option of using "--meow," but I found it odd that a positional argument is always True, even if not specified in sys.argv. ---------- components: Library (Lib) files: argparse_always_true.py messages: 227584 nosy: Tristan.Fisher priority: normal severity: normal status: open title: Argparse always stores True for positional arguments type: behavior versions: Python 3.4 Added file: http://bugs.python.org/file36727/argparse_always_true.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 26 09:52:31 2014 From: report at bugs.python.org (Stefan Behnel) Date: Fri, 26 Sep 2014 07:52:31 +0000 Subject: [New-bugs-announce] [issue22501] Optimise PyLong division by 1 or -1 Message-ID: <1411717951.94.0.775815371352.issue22501@psf.upfronthosting.co.za> New submission from Stefan Behnel: The attached patch adds fast paths for PyLong division by 1 and -1, as well as dividing 0 by something. This was found helpful for fractions normalisation, as the GCD that is divided by can often be |1|, but firing up the whole division machinery for this eats a lot of CPU cycles for nothing. There are currently two test failures in test_long.py because dividing a huge number by 1 or -1 no longer raises an OverflowError. This is a behavioural change, but I find it acceptable. If others agree, I'll fix the tests and submit a new patch. 
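For context, my reading (not stated in the patch itself) is that the OverflowError comes from true division, which coerces the result to a float, while floor division by 1 is exact:

```python
big = 2 ** 2000

# Floor division by 1 is exact integer arithmetic and always works:
print(big // 1 == big)  # True

# True division converts the result to a float and overflows for huge ints:
try:
    big / 1
    overflowed = False
except OverflowError:
    overflowed = True
print(overflowed)  # True
```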
---------- components: Interpreter Core files: div_by_1_fast_path.patch keywords: patch messages: 227590 nosy: mark.dickinson, pitrou, scoder, serhiy.storchaka priority: normal severity: normal status: open title: Optimise PyLong division by 1 or -1 type: performance versions: Python 3.5 Added file: http://bugs.python.org/file36729/div_by_1_fast_path.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Sep 27 10:40:39 2014 From: report at bugs.python.org (Xavier de Gaye) Date: Sat, 27 Sep 2014 08:40:39 +0000 Subject: [New-bugs-announce] [issue22502] after continue in pdb stops in signal.py Message-ID: <1411807239.7.0.969118354208.issue22502@psf.upfronthosting.co.za> New submission from Xavier de Gaye: With the following script:

    import time

    def foo():
        import pdb; pdb.set_trace()
        while 1:
            time.sleep(.5)

    foo()

Hitting ^C after continue gives:

    $ ./python foo.py
    > foo.py(5)foo()
    -> while 1:
    (Pdb) continue
    ^C
    Program interrupted. (Use 'cont' to resume).
    --Call--
    > Lib/signal.py(51)signal()
    -> @_wraps(_signal.signal)
    (Pdb)

This is fixed with the following change:

    diff --git a/Lib/pdb.py b/Lib/pdb.py
    --- a/Lib/pdb.py
    +++ b/Lib/pdb.py
    @@ -186,9 +186,9 @@
                 raise KeyboardInterrupt
             self.message("\nProgram interrupted. (Use 'cont' to resume).")
             self.set_step()
    -        self.set_trace(frame)
             # restore previous signal handler
             signal.signal(signal.SIGINT, self._previous_sigint_handler)
    +        self.set_trace(frame)

         def reset(self):
             bdb.Bdb.reset(self)

---------- components: Library (Lib) messages: 227666 nosy: georg.brandl, xdegaye priority: normal severity: normal status: open title: after continue in pdb stops in signal.py type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Sep 27 14:41:37 2014 From: report at bugs.python.org (Andreas Schwab) Date: Sat, 27 Sep 2014 12:41:37 +0000 Subject: [New-bugs-announce] [issue22503] Signal stack overflow in faulthandler_user Message-ID: <1411821697.8.0.131937255622.issue22503@psf.upfronthosting.co.za> New submission from Andreas Schwab: test_register_chain fails on aarch64 due to signal stack overflow, when re-raising the signal in faulthandler_user. The problem is that the signal stack can only handle a single signal frame, but faulthandler_user adds a second one. _Py_Faulthandler_Init should allocate twice the amount of stack to cater for the two signal frames.
======================================================================
FAIL: test_register_chain (test.test_faulthandler.FaultHandlerTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/abuild/rpmbuild/BUILD/Python-3.4.1/Lib/test/test_faulthandler.py", line 592, in test_register_chain
    self.check_register(chain=True)
  File "/home/abuild/rpmbuild/BUILD/Python-3.4.1/Lib/test/test_faulthandler.py", line 576, in check_register
    self.assertEqual(exitcode, 0)
AssertionError: -11 != 0
----------------------------------------------------------------------

---------- components: Extension Modules messages: 227667 nosy: schwab priority: normal severity: normal status: open title: Signal stack overflow in faulthandler_user type: resource usage versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Sep 27 16:28:23 2014 From: report at bugs.python.org (Ram Rachum) Date: Sat, 27 Sep 2014 14:28:23 +0000 Subject: [New-bugs-announce] [issue22504] Add ordering between `Enum` objects Message-ID: <1411828103.4.0.73202336425.issue22504@psf.upfronthosting.co.za> New submission from Ram Rachum: I suggest making Enum members orderable, according to their order in the enum type. Currently trying to order them raises an exception:

>>> import enum
>>> class Number(enum.Enum):
...     one = 1
...     two = 2
...     three = 3
>>> sorted((Number.one, Number.two))
Traceback (most recent call last):
  File "", line 1, in
    sorted((Number.one, Number.two))
TypeError: unorderable types: Number() < Number()

If there's agreement from core developers that this is a good feature to add, I'll write a patch.
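As a present-day workaround (my sketch, not a proposed design): IntEnum members already order like their integer values, and a plain Enum can be sorted with an explicit key:

```python
import enum

class Number(enum.IntEnum):
    one = 1
    two = 2
    three = 3

# IntEnum members compare like ints, so sorted() just works:
print(sorted([Number.three, Number.one, Number.two]))

class Color(enum.Enum):
    red = 1
    green = 2

# A plain Enum can still be sorted via a key on the member value:
print(sorted(Color, key=lambda m: m.value))
```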
---------- components: Library (Lib) messages: 227678 nosy: cool-RR priority: normal severity: normal status: open title: Add ordering between `Enum` objects type: enhancement versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Sep 27 16:40:46 2014 From: report at bugs.python.org (Ram Rachum) Date: Sat, 27 Sep 2014 14:40:46 +0000 Subject: [New-bugs-announce] [issue22505] Expose an Enum object's serial number Message-ID: <1411828846.19.0.205599343734.issue22505@psf.upfronthosting.co.za> New submission from Ram Rachum: I'd like Enum objects to expose their serial numbers. Currently it seems the only way to get this is `MyEnum._member_names_.index(my_enum.name)`, which is not cool because it's cumbersome and involves private variables. Perhaps we can use `int(my_enum) == 7`? Or `my_enum.number == 7`? I'll be happy to make a patch if there's agreement about this. ---------- components: Library (Lib) messages: 227679 nosy: cool-RR priority: normal severity: normal status: open title: Expose an Enum object's serial number type: enhancement versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Sep 27 16:57:25 2014 From: report at bugs.python.org (Ram Rachum) Date: Sat, 27 Sep 2014 14:57:25 +0000 Subject: [New-bugs-announce] [issue22506] `dir` on Enum subclass doesn't expose parent class attributes Message-ID: <1411829845.48.0.985714520453.issue22506@psf.upfronthosting.co.za> New submission from Ram Rachum: Calling `dir` on an enum subclass shows only the contents of that class, not its parent classes. In normal classes, you can do this:

Python 3.4.0 (v3.4.0:04f714765c13, Mar 16 2014, 19:25:23) [MSC v.1600 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> class A:
...     x = lambda self: 7
...
>>> class B(A): pass
...
>>> assert 'x' in dir(B)

But in enum subclasses, it fails:

>>> import enum
>>> class A(enum.Enum):
...     x = lambda self: 7
...
>>> class B(A):
...     pass
...
>>> assert 'x' in dir(B)
Traceback (most recent call last):
  File "", line 1, in
AssertionError
>>>

Looks like the `__dir__` implementation needs to be tweaked. ---------- components: Library (Lib) messages: 227681 nosy: cool-RR priority: normal severity: normal status: open title: `dir` on Enum subclass doesn't expose parent class attributes type: behavior versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From barry at python.org Sat Sep 27 16:58:04 2014 From: barry at python.org (Barry Warsaw) Date: Sat, 27 Sep 2014 10:58:04 -0400 Subject: [New-bugs-announce] [issue22505] Expose an Enum object's serial number In-Reply-To: <1411828846.19.0.205599343734.issue22505@psf.upfronthosting.co.za> References: <1411828846.19.0.205599343734.issue22505@psf.upfronthosting.co.za> Message-ID: <20140927105804.101eec40@anarchist.wooz.org> On Sep 27, 2014, at 02:40 PM, Ram Rachum wrote: >I'd like Enum objects to expose their serial numbers. Can you please provide some motivating use cases? From report at bugs.python.org Sat Sep 27 20:23:51 2014 From: report at bugs.python.org (Maries Ionel Cristian) Date: Sat, 27 Sep 2014 18:23:51 +0000 Subject: [New-bugs-announce] [issue22507] PyType_IsSubtype doesn't call __subclasscheck__ Message-ID: <1411842231.97.0.146200630679.issue22507@psf.upfronthosting.co.za> New submission from Maries Ionel Cristian: It appears it just does a reference check: https://hg.python.org/cpython/file/3.4/Objects/typeobject.c#l1300 It appears it's the same in 2.7: https://hg.python.org/cpython/file/2.7/Objects/typeobject.c#l1161 But this is not the intended behaviour right?
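To illustrate the Python-level half of this (my own example): issubclass() does consult a metaclass __subclasscheck__ hook, whereas C code calling PyType_IsSubtype directly only walks tp_mro and cannot be overridden from Python:

```python
class AnythingGoesMeta(type):
    def __subclasscheck__(cls, subclass):
        return True  # claim that everything is a subclass

class Base(metaclass=AnythingGoesMeta):
    pass

class Unrelated:
    pass

# The Python-level check goes through the metaclass hook:
print(issubclass(Unrelated, Base))  # True
# C code using PyType_IsSubtype() would answer False here,
# since Unrelated is not in Base's MRO.
```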
---------- components: Interpreter Core messages: 227706 nosy: ionel.mc priority: normal severity: normal status: open title: PyType_IsSubtype doesn't call __subclasscheck__ versions: Python 2.7, Python 3.4, Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Sep 27 23:05:22 2014 From: report at bugs.python.org (R. David Murray) Date: Sat, 27 Sep 2014 21:05:22 +0000 Subject: [New-bugs-announce] [issue22508] Remove __version__ string from email Message-ID: <1411851922.47.0.151418243457.issue22508@psf.upfronthosting.co.za> New submission from R. David Murray: There is no longer a concept of a separate 'email' release from the stdlib release. The __version__ string didn't get updated in either 3.3 or 3.4 (my fault). I propose that we simply delete the __version__ variable from __init__.py (patch attached). Any objections? ---------- components: email files: remove_email_version.patch keywords: patch messages: 227731 nosy: barry, r.david.murray priority: normal severity: normal status: open title: Remove __version__ string from email versions: Python 3.5 Added file: http://bugs.python.org/file36747/remove_email_version.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 28 03:34:46 2014 From: report at bugs.python.org (Prof Oak) Date: Sun, 28 Sep 2014 01:34:46 +0000 Subject: [New-bugs-announce] [issue22509] Website incorrect link Message-ID: <1411868086.99.0.320325997195.issue22509@psf.upfronthosting.co.za> New submission from Prof Oak: On the python.org website, if you follow the link to PSF on the top, then click on the media tab (not any of the items in the dropdown menu), it takes you to this web page: https://www.python.org/inner/ It looks to be a sample page.
---------- assignee: docs at python components: Documentation messages: 227745 nosy: Prof.Oak, docs at python priority: normal severity: normal status: open title: Website incorrect link type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 28 15:43:11 2014 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 28 Sep 2014 13:43:11 +0000 Subject: [New-bugs-announce] [issue22510] Faster bypass re cache when DEBUG is passed Message-ID: <1411911791.72.0.231619321435.issue22510@psf.upfronthosting.co.za> New submission from Serhiy Storchaka: Here is a patch which gets rid of a small performance regression introduced by the issue20426 patch. There is no need to check flags before the cache lookup, because patterns with the DEBUG flag are never cached.

$ ./python -m timeit -s "import re" -- "re.match('', '')"
Before patch: 9.08 usec per loop
After patch: 8 usec per loop

---------- components: Library (Lib), Regular Expressions files: re_debug_cache_faster.patch keywords: patch messages: 227758 nosy: ezio.melotti, mrabarnett, pitrou, serhiy.storchaka priority: normal severity: normal stage: patch review status: open title: Faster bypass re cache when DEBUG is passed type: enhancement versions: Python 3.5 Added file: http://bugs.python.org/file36749/re_debug_cache_faster.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 28 17:26:49 2014 From: report at bugs.python.org (Mohammed Mustafa Al-Habshi) Date: Sun, 28 Sep 2014 15:26:49 +0000 Subject: [New-bugs-announce] [issue22511] Assignment Operators behavior within a user-defined function and arguments being passed by reference or value Message-ID: <1411918009.83.0.649870728612.issue22511@psf.upfronthosting.co.za> New submission from Mohammed Mustafa Al-Habshi: Hello everyone, I was trying to understand the behavior of passing arguments
in a function, to work out how arguments can be passed by value. Though it is a technical matter, the behavior of the assignment operator += when used with a list is closer to the behavior of the list object's "append" method than to a normal assignment. This is confusing when teaching Python language concepts, especially since += with numerical data types behaves like a normal assignment and the parameters are then passed by value. The issue is really about data type mutability, and I believe the augmented assignment operator should be semantically more compatible with normal assignment.

---- inline code example -----

def pass_(x):  # x is list type
    print "Within the function"
    print " x was ", x
    #x = x + [50]   # here x is passed by value
    x += [50]       # here x is passed by reference.
    #x.append(50)   # here x is passed by reference.
    print " x then is ", x
    return

x = [12, 32, 12]
pass_(x)
print "\n x out of the function is ", x

---------- messages: 227761 nosy: alhabshi3k priority: normal severity: normal status: open title: Assignment Operators behavior within a user-defined function and arguments being passed by reference or value type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 28 23:26:41 2014 From: report at bugs.python.org (Francis MB) Date: Sun, 28 Sep 2014 21:26:41 +0000 Subject: [New-bugs-announce] [issue22512] 'test_distutils.test_bdist_rpm' causes creation of directory '.rpmdb' on home dir Message-ID: <1411939601.45.0.729692387398.issue22512@psf.upfronthosting.co.za> New submission from Francis MB: Running the test suite or 'test_distutils' triggers the creation of the directory '.rpmdb'.
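Returning to the += report above: the distinction can be pinned down by watching object identity. This is a sketch in Python 3 (the function name is illustrative only): `x += y` on a list calls `list.__iadd__` and mutates the object in place, while `x = x + y` binds the name to a brand-new list.

```python
def augmented_vs_plain():
    a = [1, 2]
    a_id = id(a)
    a += [3]              # list.__iadd__: extends the same object
    assert id(a) == a_id  # identity unchanged -> the caller sees the change

    b = [1, 2]
    b_id = id(b)
    b = b + [3]           # list.__add__: allocates a new list
    assert id(b) != b_id  # name rebound -> the caller's list is untouched

    return a, b

print(augmented_vs_plain())  # → ([1, 2, 3], [1, 2, 3])
```

For an int, `n += 1` also rebinds, since ints are immutable; that is why `+=` looks like "pass by value" for numbers but "pass by reference" for lists.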
I noticed that because somehow that directory was badly formed and I got errors while running the test suite:

error: db5 error(-30969) from dbenv->open: BDB0091 DB_VERSION_MISMATCH: Database environment version mismatch
error: cannot open Packages index using db5 - (-30969)
error: cannot open Packages database in /home/ci/.rpmdb
error: db5 error(-30969) from dbenv->open: BDB0091 DB_VERSION_MISMATCH: Database environment version mismatch
error: cannot open Packages index using db5 - (-30969)
error: cannot open Packages database in /home/ci/.rpmdb

After moving that directory and running the suite again, the directory reappeared (but that time, and since then, no errors occurred). It seems that 'test_distutils.test_bdist_rpm' triggers that behavior. This seems to be due to 'rpm' being configured that way [1]. In my case:

$ rpm -v --showrc | grep '.rpmdb'
-14: _dbpath %(bash -c 'echo ~/.rpmdb')

Here is a patch that confines the creation of this directory to the temporary test directory. Regards, francis ---- [1] https://bugs.launchpad.net/rpm/+bug/1069350 ---------- components: Distutils, Tests files: confine_hidden_rpmdb_dir_creation.patch keywords: patch messages: 227777 nosy: dstufft, eric.araujo, francismb priority: normal severity: normal status: open title: 'test_distutils.test_bdist_rpm' causes creation of directory '.rpmdb' on home dir type: behavior versions: Python 3.5 Added file: http://bugs.python.org/file36750/confine_hidden_rpmdb_dir_creation.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 29 19:19:31 2014 From: report at bugs.python.org (Ethan Furman) Date: Mon, 29 Sep 2014 17:19:31 +0000 Subject: [New-bugs-announce] [issue22513] grp.struct_group is not hashable Message-ID: <1412011171.66.0.600285567527.issue22513@psf.upfronthosting.co.za> New submission from Ethan Furman: First, the behavior for pwd.struct_passwd: -----------------------------------------

-->
pwd.getpwuid(1000)
pwd.struct_passwd(pw_name='ethan', pw_passwd='x', pw_uid=1000, pw_gid=1000, pw_gecos='Ethan Furman,,,', pw_dir='/home/ethan', pw_shell='/bin/bash')
--> set(pwd.getpwuid(1000))
set(['/bin/bash', 1000, 'Ethan Furman,,,', '/home/ethan', 'ethan', 'x'])
--> set([pwd.getpwuid(1000)])
set([pwd.struct_passwd(pw_name='ethan', pw_passwd='x', pw_uid=1000, pw_gid=1000, pw_gecos='Ethan Furman,,,', pw_dir='/home/ethan', pw_shell='/bin/bash')])

Now, the behavior for grp.struct_group: --------------------------------------

--> grp.getgrgid(1000)
grp.struct_group(gr_name='ethan', gr_passwd='x', gr_gid=1000, gr_mem=[])
--> set(grp.getgrgid(1000))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'list'
--> set([grp.getgrgid(1000)])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'list'

At the very least the error message is wrong (it's not a list), and at the most grp.struct_group should be hashable -- i.e. we should be able to have a set of groups. ---------- messages: 227811 nosy: ethan.furman priority: normal severity: normal status: open title: grp.struct_group is not hashable type: behavior versions: Python 2.7, Python 3.4, Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 29 21:22:10 2014 From: report at bugs.python.org (Friedrich Spee von Langenfeld) Date: Mon, 29 Sep 2014 19:22:10 +0000 Subject: [New-bugs-announce] [issue22514] incomplete programming example on python.org Message-ID: <1412018530.64.0.603164389398.issue22514@psf.upfronthosting.co.za> New submission from Friedrich Spee von Langenfeld: When I open www.python.org, there are some examples to demonstrate the "look and feel" of Python. I've tested an example (example number 1).
Online, the following is shown:

# Python 3: Fibonacci series up to n
>>> def fib(n):
>>>     a, b = 0, 1
>>>     while a < n:
>>>         print(a, end=' ')
>>>         a, b = b, a+b
>>>     print()
>>> fib(1000)
0 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610

But then I have tested the (following) code with Python 3.1:

>>> def fib(n):
        a, b = 0, 1
        while a < n:
            print(a, end=" ")
            a, b = b, a+b
        print()
>>> fib(1000)
0 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987

As you can see, the last number (987) wasn't shown online. Perhaps this behavior is browser-dependent. I've used Mozilla Firefox 32.0.3. I can't estimate the priority of this issue, because I can't imagine how many people are using or analysing the examples. Can you reproduce my findings? ---------- messages: 227818 nosy: Friedrich.Spee.von.Langenfeld priority: normal severity: normal status: open title: incomplete programming example on python.org _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 29 21:33:21 2014 From: report at bugs.python.org (Ram Rachum) Date: Mon, 29 Sep 2014 19:33:21 +0000 Subject: [New-bugs-announce] [issue22515] Implement partial order on Counter Message-ID: <1412019201.2.0.639073460867.issue22515@psf.upfronthosting.co.za> New submission from Ram Rachum: I suggest implementing `Counter.__lt__` which will be a partial order, similarly to `set.__lt__`. That is, one counter will be considered smaller-or-equal to another if for any item in the first counter, the second counter has an equal or bigger amount of that item. ---------- components: Library (Lib) messages: 227819 nosy: cool-RR priority: normal severity: normal status: open title: Implement partial order on Counter versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 29 22:36:47 2014 From: report at bugs.python.org (J.
Morton) Date: Mon, 29 Sep 2014 20:36:47 +0000 Subject: [New-bugs-announce] [issue22516] Windows Installer won't - even when using "just for me" option Message-ID: <1412023007.96.0.578659583634.issue22516@psf.upfronthosting.co.za> New submission from J. Morton: Could not install 3.4.1 on Windows 7 Enterprise SP1 using the .MSI installer, even when using the "just for me" option (our IM department has not given us the necessary rights to run the .MSI installer even in this mode). Please consider providing 3.4.1 (and all future releases) in a non-".MSI" file so that "admin" rights are not needed to do the install (a simple ZIP file? something similar to www.portableapps.com?). Or as a source "tarball" for Windows machines, ideally one that is independent of compiler vendor (compilable using gcc, etc. instead of MSVC). ---------- components: Installation messages: 227832 nosy: NaCl, tim.golden priority: normal severity: normal status: open title: Windows Installer won't - even when using "just for me" option versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 29 22:59:02 2014 From: report at bugs.python.org (paul) Date: Mon, 29 Sep 2014 20:59:02 +0000 Subject: [New-bugs-announce] [issue22517] BufferedRWpair doesn't clear weakrefs Message-ID: <1412024342.18.0.174047226004.issue22517@psf.upfronthosting.co.za> New submission from paul:

# static void
# bufferedrwpair_dealloc(rwpair *self)
# {
#     _PyObject_GC_UNTRACK(self);
#     Py_CLEAR(self->reader);
#     Py_CLEAR(self->writer);
#     Py_CLEAR(self->dict);
#     Py_TYPE(self)->tp_free((PyObject *) self);
# }
#
# Weakrefs to this object contain a stale pointer after the BufferedRWPair is freed.
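The expected behaviour, for contrast, can be shown at the Python level (a sketch; run on an interpreter where this bug is fixed): once the pair is deallocated, any weak reference to it must be cleared and resolve to None instead of dangling.

```python
import io
import weakref

# BufferedRWPair wraps a reader stream and a writer stream
pair = io.BufferedRWPair(io.BytesIO(), io.BytesIO())
ref = weakref.ref(pair)
assert ref() is pair  # resolves while the object is alive

del pair              # drops the last strong reference
assert ref() is None  # a correct dealloc clears the weakref list first
```

The conventional fix for such a bug in CPython types is to clear the type's weakref list (PyObject_ClearWeakRefs) in the deallocator before freeing the object.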
---------- files: poc_brwpair_weakref.py messages: 227835 nosy: pkt priority: normal severity: normal status: open title: BufferedRWpair doesn't clear weakrefs type: crash versions: Python 3.4 Added file: http://bugs.python.org/file36753/poc_brwpair_weakref.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 29 23:01:25 2014 From: report at bugs.python.org (paul) Date: Mon, 29 Sep 2014 21:01:25 +0000 Subject: [New-bugs-announce] [issue22518] integer overflow in encoding unicode Message-ID: <1412024485.39.0.908219343503.issue22518@psf.upfronthosting.co.za> New submission from paul:

# static PyObject *
# unicode_encode_ucs1(PyObject *unicode,
#                     const char *errors,
#                     unsigned int limit)
# {
# ...
#     while (pos < size) {
# ...
#         case 4: /* xmlcharrefreplace */
#             /* determine replacement size */
#             for (i = collstart, repsize = 0; i < collend; ++i) {
#                 Py_UCS4 ch = PyUnicode_READ(kind, data, i);
# ...
#                 else if (ch < 100000)
# 1                   repsize += 2+5+1;
# ...
#             }
# 2           requiredsize = respos+repsize+(size-collend);
#             if (requiredsize > ressize) {
# ...
#                 if (_PyBytes_Resize(&res, requiredsize))
# ...
#             }
#             /* generate replacement */
#             for (i = collstart; i < collend; ++i) {
# 3               str += sprintf(str, "&#%d;", PyUnicode_READ(kind, data, i));
#             }
#
# 1. ch=0xffff<100000, so repsize = (number of unicode chars in string)*8
#    = 2^29*2^3 = 2^32 == 0 (mod 2^32)
# 2. respos==0, collend==0, so requiredsize=repsize==0, so the destination buffer
#    isn't resized
# 3. overwrite

---------- files: poc_encode_latin1.py messages: 227837 nosy: pkt priority: normal severity: normal status: open title: integer overflow in encoding unicode type: crash versions: Python 3.4 Added file: http://bugs.python.org/file36754/poc_encode_latin1.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 29 23:03:05 2014 From: report at bugs.python.org (paul) Date: Mon, 29 Sep 2014 21:03:05 +0000 Subject: [New-bugs-announce] [issue22519] integer overflow in computing byte's object representation Message-ID: <1412024585.59.0.0929794410107.issue22519@psf.upfronthosting.co.za> New submission from paul:

# PyBytes_Repr(PyObject *obj, int smartquotes)
# {
#     PyBytesObject* op = (PyBytesObject*) obj;
# 1   Py_ssize_t i, length = Py_SIZE(op);
#     size_t newsize, squotes, dquotes;
# ...
#     /* Compute size of output string */
#     newsize = 3; /* b'' */
#     s = (unsigned char*)op->ob_sval;
#     for (i = 0; i < length; i++) {
# ...
#         default:
#             if (s[i] < ' ' || s[i] >= 0x7f)
# 2               newsize += 4; /* \xHH */
#             else
#                 newsize++;
#         }
#     }
# ...
# 3   if (newsize > (PY_SSIZE_T_MAX - sizeof(PyUnicodeObject) - 1)) {
#         PyErr_SetString(PyExc_OverflowError,
#                         "bytes object is too large to make repr");
#         return NULL;
#     }
# 4   v = PyUnicode_New(newsize, 127);
# ...
#     *p++ = 'b', *p++ = quote;
#     for (i = 0; i < length; i++) {
# ...
# 5       *p++ = c;
#     }
#     *p++ = quote;
# 6   assert(_PyUnicode_CheckConsistency(v, 1));
#     return v;
# }
#
# 1. length=2^30+1=1073741825
# 2. newsize=length*4+3=7 (overflow)
# 3. check is ineffective, because newsize=7
# 4. allocated buffer is too small
# 5. buffer overwrite
# 6. this assert will likely fail, since there is a good chance the allocated
#    buffer is just before the huge one, so the huge one will overwrite itself.
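The size arithmetic in the two overflow reports above can be replayed in a few lines; this is just the same computation reduced mod 2**32 (simulating 32-bit size arithmetic, an assumption about the affected builds), not the C code itself:

```python
MASK32 = 2**32 - 1  # simulate 32-bit unsigned wraparound

# issue22518: 2**29 characters, each replaced by "&#NNNNN;" (2+5+1 = 8 bytes)
repsize = (2**29 * 8) & MASK32
print(repsize)  # → 0, so the output buffer is never resized

# issue22519: 2**30 + 1 non-printable bytes, 4 chars (\xHH) each, plus 3 for b''
length = 2**30 + 1
newsize = (length * 4 + 3) & MASK32
print(newsize)  # → 7, so the overflow check passes and a tiny buffer is allocated
```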
---------- files: poc_repr_bytes.py messages: 227838 nosy: pkt priority: normal severity: normal status: open title: integer overflow in computing byte's object representation type: crash versions: Python 3.4 Added file: http://bugs.python.org/file36755/poc_repr_bytes.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 29 23:04:19 2014 From: report at bugs.python.org (paul) Date: Mon, 29 Sep 2014 21:04:19 +0000 Subject: [New-bugs-announce] [issue22520] integer overflow in computing unicode's object representation Message-ID: <1412024659.14.0.152968090659.issue22520@psf.upfronthosting.co.za> New submission from paul:

# unicode_repr(PyObject *unicode)
# {
# ...
# 1   isize = PyUnicode_GET_LENGTH(unicode);
#     idata = PyUnicode_DATA(unicode);
#
#     /* Compute length of output, quote characters, and
#        maximum character */
#     osize = 0;
# ...
#     for (i = 0; i < isize; i++) {
#         Py_UCS4 ch = PyUnicode_READ(ikind, idata, i);
#         switch (ch) {
# ...
#         default:
#             /* Fast-path ASCII */
#             if (ch < ' ' || ch == 0x7f)
# 2               osize += 4; /* \xHH */
# ...
#         }
#     }
#
# ...
# 3   repr = PyUnicode_New(osize, max);
# ...
#     for (i = 0, o = 1; i < isize; i++) {
#         Py_UCS4 ch = PyUnicode_READ(ikind, idata, i);
# ...
#         else {
# 4           PyUnicode_WRITE(okind, odata, o++, ch);
#         }
#     }
#     }
#     /* Closing quote already added at the beginning */
# 5   assert(_PyUnicode_CheckConsistency(repr, 1));
#     return repr;
# }
#
# 1. isize=2^30+1
# 2. osize=isize*4=4
# 3. allocated buffer is too small
# 4. heap overflow
# 5. this assert will likely fail, since there is a good chance the allocated
#    buffer is just before the huge one, so the huge one will overwrite itself.
---------- files: poc_repr_unicode.py messages: 227839 nosy: pkt priority: normal severity: normal status: open title: integer overflow in computing unicode's object representation type: crash versions: Python 3.4 Added file: http://bugs.python.org/file36756/poc_repr_unicode.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 30 13:15:10 2014 From: report at bugs.python.org (STINNER Victor) Date: Tue, 30 Sep 2014 11:15:10 +0000 Subject: [New-bugs-announce] [issue22521] ctypes compilation fails on FreeBSD: Undefined symbol "ffi_call_win32" Message-ID: <1412075710.94.0.470127563443.issue22521@psf.upfronthosting.co.za> New submission from STINNER Victor: On buildbots FreeBSD 6.4 and 7.2, the compilation of the ctypes module fails because the function "ffi_call_win32" is missing. I don't understand why a "win32" function would be needed on FreeBSD!? http://buildbot.python.org/all/builders/x86%20FreeBSD%207.2%203.x/builds/5618/steps/test/logs/stdio http://buildbot.python.org/all/builders/x86%20FreeBSD%206.4%203.x/builds/5060/steps/compile/logs/stdio *** WARNING: renaming "_ctypes" since importing it failed: build/lib.freebsd-6.4-RELEASE-i386-3.5-pydebug/_ctypes.so: Undefined symbol "ffi_call_win32" ---------- messages: 227878 nosy: haypo, koobs priority: normal severity: normal status: open title: ctypes compilation fails on FreeBSD: Undefined symbol "ffi_call_win32" type: compile error versions: Python 3.4, Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 30 13:28:59 2014 From: report at bugs.python.org (Claudiu Popa) Date: Tue, 30 Sep 2014 11:28:59 +0000 Subject: [New-bugs-announce] [issue22522] sys.excepthook doesn't receive the traceback when called from code.InteractiveInterpreter Message-ID: <1412076539.24.0.326910714196.issue22522@psf.upfronthosting.co.za> New submission from Claudiu 
Popa: It seems that sys.excepthook doesn't receive the traceback when an error occurs during a code.InteractiveInterpreter run. The problem is here: https://hg.python.org/cpython/file/5ade1061fa3d/Lib/code.py#l168. last_tb was previously set to None right before. The attached patch passes sys.last_traceback to sys.excepthook. ---------- components: Library (Lib) files: code_excepthook_traceback.patch keywords: patch messages: 227881 nosy: Claudiu.Popa, r.david.murray priority: normal severity: normal status: open title: sys.excepthook doesn't receive the traceback when called from code.InteractiveInterpreter type: behavior versions: Python 3.5 Added file: http://bugs.python.org/file36759/code_excepthook_traceback.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 30 14:50:56 2014 From: report at bugs.python.org (Matthias Klose) Date: Tue, 30 Sep 2014 12:50:56 +0000 Subject: [New-bugs-announce] [issue22523] [regression] Lib/ssl.py still references _ssl.sslwrap Message-ID: <1412081456.12.0.536748620506.issue22523@psf.upfronthosting.co.za> New submission from Matthias Klose: the backport in issue #21308 caused this regression. _ssl.sslwrap is still referenced in some files. 
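Going back to the excepthook report (issue22522), the contract being relied on can be sketched as follows; the hook name and the manual call are illustrative only, since the interpreter normally invokes sys.excepthook itself for uncaught exceptions:

```python
import sys

captured = []

def recording_hook(exc_type, exc_value, exc_tb):
    # A conforming caller hands the hook all three pieces, traceback included
    captured.append((exc_type, exc_value, exc_tb))

sys.excepthook = recording_hook
try:
    1 / 0
except ZeroDivisionError:
    # Simulate what the interpreter (or InteractiveInterpreter) should do
    sys.excepthook(*sys.exc_info())

exc_type, exc_value, exc_tb = captured[0]
assert exc_type is ZeroDivisionError
assert exc_tb is not None  # the report says this traceback was being dropped
```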
---------- components: Library (Lib) messages: 227896 nosy: alex, benjamin.peterson, christian.heimes, doko, dstufft, giampaolo.rodola, janssen, pitrou priority: release blocker severity: normal status: open title: [regression] Lib/ssl.py still references _ssl.sslwrap versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 30 16:42:42 2014 From: report at bugs.python.org (Ben Hoyt) Date: Tue, 30 Sep 2014 14:42:42 +0000 Subject: [New-bugs-announce] [issue22524] PEP 471 implementation: os.scandir() directory scanning function Message-ID: <1412088162.24.0.84630531931.issue22524@psf.upfronthosting.co.za> New submission from Ben Hoyt: Opening this to track the implementation of PEP 471: os.scandir() [1]. This supersedes Issue #11406 (and possibly others). The implementation is most of the way there, but not yet done as a CPython 3.5 patch. Before I have a proper patch, it's available on GitHub [2] -- see posixmodule_scandir*.c, test/test_scandir.py, and os.rst. [1] http://legacy.python.org/dev/peps/pep-0471/ [2] https://github.com/benhoyt/scandir ---------- components: Library (Lib) messages: 227933 nosy: benhoyt, haypo, mmarkk, pitrou, tim.golden priority: normal severity: normal status: open title: PEP 471 implementation: os.scandir() directory scanning function type: enhancement versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 30 16:57:19 2014 From: report at bugs.python.org (Behdad Esfahbod) Date: Tue, 30 Sep 2014 14:57:19 +0000 Subject: [New-bugs-announce] [issue22525] ast.literal_eval() doesn't do what the documentation says Message-ID: <1412089039.94.0.626035759284.issue22525@psf.upfronthosting.co.za> New submission from Behdad Esfahbod: The documentation says: """ Safely evaluate an expression node or a string containing a Python expression.
The string or node provided may only consist of the following Python literal structures: strings, bytes, numbers, tuples, lists, dicts, sets, booleans, and None. This can be used for safely evaluating strings containing Python expressions from untrusted sources without the need to parse the values oneself. """ This made me believe that this is a useful replacement for eval() that is safe. However, it fails to make it clear that it parses **one literal**, NOT an expression. I.e. it can't handle "2*2". Weirdly enough, at least with my Python 3.2.3, it does handle "2+2" with no problem. This seriously limits the usefulness of this function. Is there really no equivalent that parses simple expressions of literals? ---------- messages: 227941 nosy: Behdad.Esfahbod priority: normal severity: normal status: open title: ast.literal_eval() doesn't do what the documentation says _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 30 18:10:51 2014 From: report at bugs.python.org (Jakub Mateusz Kowalski) Date: Tue, 30 Sep 2014 16:10:51 +0000 Subject: [New-bugs-announce] [issue22526] file iteration crashes for huge lines (2GiB+) Message-ID: <1412093451.75.0.0583026923325.issue22526@psf.upfronthosting.co.za> New submission from Jakub Mateusz Kowalski: File /tmp/2147483648zeros is 2^31 (2GiB) zero-bytes ('\0'). The readline method works fine:

>>> fh = open('/tmp/2147483648zeros', 'rb')
>>> line = fh.readline()
>>> len(line)
2147483648

However, when I try to iterate over the file:

>>> fh = open('/tmp/2147483648zeros', 'rb')
>>> for line in fh:
...     print len(line)

SystemError
File /tmp/2147483647zeros is 2^31 - 1 (< 2GiB) zero-bytes. >>> fh = open('/tmp/2147483647zeros', 'rb') >>> for line in fh: ... print len(line) 2147483647 I guess the variable used for size is of 32bit signed type. I am using Python 2.7.3 (default, Feb 27 2014, 19:58:35) with IPython 0.12.1 on Ubuntu 12.04.5 LTS. ---------- components: IO messages: 227949 nosy: Jakub.Mateusz.Kowalski priority: normal severity: normal status: open title: file iteration crashes for huge lines (2GiB+) type: crash versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 30 21:01:41 2014 From: report at bugs.python.org (Antoine Pitrou) Date: Tue, 30 Sep 2014 19:01:41 +0000 Subject: [New-bugs-announce] [issue22527] Documentation warnings Message-ID: <1412103701.09.0.111850399137.issue22527@psf.upfronthosting.co.za> New submission from Antoine Pitrou: I get the following warnings when building the doc: $ make html sphinx-build -b html -d build/doctrees -D latex_paper_size= . build/html Running Sphinx v1.2.1 loading pickled environment... done building [html]: targets for 65 source files that are out of date updating environment: 1 added, 66 changed, 0 removed reading sources... 
[100%] whatsnew/changelog
/home/antoine/cpython/default/Doc/library/compileall.rst:23: WARNING: Malformed option description u'directory ...', should look like "-opt args", "--opt args" or "/opt args"
/home/antoine/cpython/default/Doc/library/compileall.rst:23: WARNING: Malformed option description u'file ...', should look like "-opt args", "--opt args" or "/opt args"
/home/antoine/cpython/default/Doc/library/json.rst:593: WARNING: Malformed option description u'infile', should look like "-opt args", "--opt args" or "/opt args"
/home/antoine/cpython/default/Doc/library/json.rst:611: WARNING: Malformed option description u'outfile', should look like "-opt args", "--opt args" or "/opt args"
/home/antoine/cpython/default/Doc/tutorial/appendix.rst:69: WARNING: duplicate label tut-startup, other instance in /home/antoine/cpython/default/Doc/tutorial/interpreter.rst
/home/antoine/cpython/default/Doc/tutorial/appendix.rst:16: WARNING: duplicate label tut-error, other instance in /home/antoine/cpython/default/Doc/tutorial/interpreter.rst
/home/antoine/cpython/default/Doc/tutorial/appendix.rst:102: WARNING: duplicate label tut-customize, other instance in /home/antoine/cpython/default/Doc/tutorial/interpreter.rst
/home/antoine/cpython/default/Doc/tutorial/appendix.rst:38: WARNING: duplicate label tut-scripts, other instance in /home/antoine/cpython/default/Doc/tutorial/interpreter.rst
/home/antoine/cpython/default/Doc/using/cmdline.rst:167: WARNING: Malformed option description u'-?', should look like "-opt args", "--opt args" or "/opt args"

---------- assignee: docs at python components: Documentation messages: 227978 nosy: docs at python, pitrou priority: normal severity: normal status: open title: Documentation warnings type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 30 21:45:40 2014 From: report at bugs.python.org (Friedrich Spee von Langenfeld)
Date: Tue, 30 Sep 2014 19:45:40 +0000 Subject: [New-bugs-announce] [issue22528] Missing hint to source code Message-ID: <1412106340.24.0.431957287455.issue22528@psf.upfronthosting.co.za> New submission from Friedrich Spee von Langenfeld: Nearly every module entry in the documentation has a headline with the pattern , followed (in the second line) by . In the entry concerning pdb (https://docs.python.org/3/library/pdb.html), there is no hint where the source code is located. This is especially annoying, because the user of the Python Debugger is explicitly invited to extend the module's capabilities ("The debugger is extensible -- it is actually defined as the class Pdb. This is currently undocumented but easily understood by reading the source."). A link to the source code should be added as the second line. The same thing should be done for symtable, compileall and perhaps some other modules, which I haven't checked yet. ---------- assignee: docs at python components: Documentation messages: 227989 nosy: Friedrich.Spee.von.Langenfeld, docs at python priority: normal severity: normal status: open title: Missing hint to source code _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 30 21:53:28 2014 From: report at bugs.python.org (Friedrich Spee von Langenfeld) Date: Tue, 30 Sep 2014 19:53:28 +0000 Subject: [New-bugs-announce] [issue22529] Why copyright only 1990-2013 and not 2014 Message-ID: <1412106808.33.0.959231434867.issue22529@psf.upfronthosting.co.za> New submission from Friedrich Spee von Langenfeld: In the legal statements (https://www.python.org/about/legal) you can read the following sentence: "[...] the contents of this website are copyright © 1990-2013, Python Software Foundation, [...]". Why is the year 2014 not covered? The message "Copyright © 1990-2013" is shown at the bottom of all pages which I've read so far in the Issue Tracker.
Only on the main website (www.python.org), the website with the Legal Statements mentioned before, and perhaps some other sites does the copyright notice include the year 2014. ---------- messages: 227990 nosy: Friedrich.Spee.von.Langenfeld priority: normal severity: normal status: open title: Why copyright only 1990-2013 and not 2014 _______________________________________ Python tracker _______________________________________