From python-checkins at python.org Wed Apr 1 00:03:41 2009 From: python-checkins at python.org (georg.brandl) Date: Wed, 1 Apr 2009 00:03:41 +0200 (CEST) Subject: [Python-checkins] r70905 - python/trunk/Doc/distutils/apiref.rst Message-ID: <20090331220341.373321E43C6@bag.python.org> Author: georg.brandl Date: Wed Apr 1 00:03:40 2009 New Revision: 70905 Log: #5563: more documentation for bdist_msi. Modified: python/trunk/Doc/distutils/apiref.rst Modified: python/trunk/Doc/distutils/apiref.rst ============================================================================== --- python/trunk/Doc/distutils/apiref.rst (original) +++ python/trunk/Doc/distutils/apiref.rst Wed Apr 1 00:03:40 2009 @@ -1758,8 +1758,16 @@ .. module:: distutils.command.bdist_msi :synopsis: Build a binary distribution as a Windows MSI file +.. class:: bdist_msi(Command) -.. % todo + Builds a `Windows Installer`_ (.msi) binary package. + + .. _Windows Installer: http://msdn.microsoft.com/en-us/library/cc185688(VS.85).aspx + + In most cases, the ``bdist_msi`` installer is a better choice than the + ``bdist_wininst`` installer, because it provides better support for + Win64 platforms, allows administrators to perform non-interactive + installations, and allows installation through group policies. :mod:`distutils.command.bdist_rpm` --- Build a binary distribution as a Redhat RPM and SRPM From python-checkins at python.org Wed Apr 1 00:11:54 2009 From: python-checkins at python.org (georg.brandl) Date: Wed, 1 Apr 2009 00:11:54 +0200 (CEST) Subject: [Python-checkins] r70906 - in python/trunk/Lib: sgmllib.py test/test_sgmllib.py Message-ID: <20090331221154.1AC2E1E406D@bag.python.org> Author: georg.brandl Date: Wed Apr 1 00:11:53 2009 New Revision: 70906 Log: #1651995: fix _convert_ref for non-ASCII characters. Modified: python/trunk/Lib/sgmllib.py python/trunk/Lib/test/test_sgmllib.py Modified: python/trunk/Lib/sgmllib.py ============================================================================== --- python/trunk/Lib/sgmllib.py (original) +++ python/trunk/Lib/sgmllib.py Wed Apr 1 00:11:53 2009 @@ -396,7 +396,7 @@ n = int(name) except ValueError: return - if not 0 <= n <= 255: + if not 0 <= n <= 127: return return self.convert_codepoint(n) Modified: python/trunk/Lib/test/test_sgmllib.py ============================================================================== --- python/trunk/Lib/test/test_sgmllib.py (original) +++ python/trunk/Lib/test/test_sgmllib.py Wed Apr 1 00:11:53 2009 @@ -373,6 +373,15 @@ if len(data) != CHUNK: break + def test_only_decode_ascii(self): + # SF bug #1651995, make sure non-ascii character references are not decoded + s = '' + self.check_events(s, [ + ('starttag', 'signs', + [('exclamation', '!'), ('copyright', '©'), + ('quoteleft', '‘')]), + ]) + # XXX These tests have been disabled by prefixing their names with # an underscore. The first two exercise outstanding bugs in the # sgmllib module, and the third exhibits questionable behavior From python-checkins at python.org Wed Apr 1 00:18:20 2009 From: python-checkins at python.org (georg.brandl) Date: Wed, 1 Apr 2009 00:18:20 +0200 (CEST) Subject: [Python-checkins] r70907 - python/trunk/Doc/library/urllib.rst Message-ID: <20090331221820.06EA61E4033@bag.python.org> Author: georg.brandl Date: Wed Apr 1 00:18:19 2009 New Revision: 70907 Log: #3427: document correct return type for urlopen().info(). 
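For an HTTP URL, the object returned by urlopen().info() is concretely an httplib.HTTPMessage (a mimetools.Message subclass), as the corrected docs below state. A minimal 2.x sketch, with a placeholder URL:

    import urllib

    f = urllib.urlopen('http://www.example.com/')
    headers = f.info()    # an httplib.HTTPMessage instance
    print headers.gettype()                     # e.g. 'text/html'
    print headers.getheader('Content-Length')   # None if the header is absent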
Modified: python/trunk/Doc/library/urllib.rst Modified: python/trunk/Doc/library/urllib.rst ============================================================================== --- python/trunk/Doc/library/urllib.rst (original) +++ python/trunk/Doc/library/urllib.rst Wed Apr 1 00:18:19 2009 @@ -49,7 +49,7 @@ .. index:: module: mimetools The :meth:`info` method returns an instance of the class - :class:`mimetools.Message` containing meta-information associated with the + :class:`httplib.HTTPMessage` containing meta-information associated with the URL. When the method is HTTP, these headers are those returned by the server at the head of the retrieved HTML page (including Content-Length and Content-Type). When the method is FTP, a Content-Length header will be From buildbot at python.org Wed Apr 1 00:20:15 2009 From: buildbot at python.org (buildbot at python.org) Date: Tue, 31 Mar 2009 22:20:15 +0000 Subject: [Python-checkins] buildbot failure in x86 XP-4 3.x Message-ID: <20090331222015.9A6AE1E4033@bag.python.org> The Buildbot has detected a new failure of x86 XP-4 3.x. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP-4%203.x/builds/407 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: bolen-windows Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: georg.brandl,hirokazu.yamamoto,jeremy.hylton,jesse.noller,kristjan.jonsson,r.david.murray,raymond.hettinger BUILD FAILED: failed test Excerpt from the test logfile: 6 tests failed: test_distutils test_importlib test_memoryio test_posix test_urllib2net test_wait4 ====================================================================== ERROR: test_reg_class (distutils.tests.test_msvc9compiler.msvc9compilerTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.x.bolen-windows\build\lib\distutils\tests\test_msvc9compiler.py", line 51, in test_reg_class import _winreg ImportError: No module named _winreg ====================================================================== FAIL: test_package (importlib.test.source.test_abc_loader.PyLoaderTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.x.bolen-windows\build\lib\importlib\test\source\test_abc_loader.py", line 149, in test_package __loader__=mock) File "E:\cygwin\home\db3l\buildarea\3.x.bolen-windows\build\lib\importlib\test\source\test_abc_loader.py", line 126, in eq_attrs self.assertEqual(getattr(ob, attr), val) AssertionError: ['/path/to//__init__'] != ['/path/to/'] ====================================================================== FAIL: test_package (importlib.test.source.test_abc_loader.PyPycLoaderTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.x.bolen-windows\build\lib\importlib\test\source\test_abc_loader.py", line 264, in test_package mock, name = super().test_package() File "E:\cygwin\home\db3l\buildarea\3.x.bolen-windows\build\lib\importlib\test\source\test_abc_loader.py", line 149, in test_package __loader__=mock) File "E:\cygwin\home\db3l\buildarea\3.x.bolen-windows\build\lib\importlib\test\source\test_abc_loader.py", line 126, in eq_attrs self.assertEqual(getattr(ob, attr), val) AssertionError: ['/path/to//__init__'] != ['/path/to/'] ====================================================================== FAIL: 
test_issue5265 (test.test_memoryio.PyStringIOTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.x.bolen-windows\build\lib\test\test_memoryio.py", line 516, in test_issue5265 self.assertEqual(memio.read(5), "a\nb\n") AssertionError: 'a\n\nb\n' != 'a\nb\n' ====================================================================== FAIL: test_newline_none (test.test_memoryio.PyStringIOTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.x.bolen-windows\build\lib\test\test_memoryio.py", line 461, in test_newline_none self.assertEqual(list(memio), ["a\n", "b\n", "c\n", "d"]) AssertionError: ['a\n', 'b\n', '\n', 'c\n', 'd'] != ['a\n', 'b\n', 'c\n', 'd'] ====================================================================== FAIL: test_newlines_property (test.test_memoryio.PyStringIOTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.x.bolen-windows\build\lib\test\test_memoryio.py", line 431, in test_newlines_property self.assertEqual(memio.newlines, "\n") AssertionError: '\r\n' != '\n' Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.x.bolen-windows\build\lib\test\test_posix.py", line 6, in import posix ImportError: No module named posix Traceback (most recent call last): File "../lib/test/regrtest.py", line 612, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "E:\cygwin\home\db3l\buildarea\3.x.bolen-windows\build\lib\test\test_posix.py", line 8, in raise unittest.SkipTest("posix is not available") NameError: name 'unittest' is not defined ====================================================================== ERROR: test_http_basic (test.test_urllib2net.TimeoutTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.x.bolen-windows\build\lib\test\test_urllib2net.py", line 198, in test_http_basic self.assertTrue(u.fp.fp.raw._sock.gettimeout() is None) AttributeError: '_io.BufferedReader' object has no attribute 'fp' ====================================================================== ERROR: test_http_default_timeout (test.test_urllib2net.TimeoutTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.x.bolen-windows\build\lib\test\test_urllib2net.py", line 207, in test_http_default_timeout self.assertEqual(u.fp.fp.raw._sock.gettimeout(), 60) AttributeError: '_io.BufferedReader' object has no attribute 'fp' ====================================================================== ERROR: test_http_no_timeout (test.test_urllib2net.TimeoutTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.x.bolen-windows\build\lib\test\test_urllib2net.py", line 216, in test_http_no_timeout self.assertTrue(u.fp.fp.raw._sock.gettimeout() is None) AttributeError: '_io.BufferedReader' object has no attribute 'fp' ====================================================================== ERROR: test_http_timeout (test.test_urllib2net.TimeoutTest) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"E:\cygwin\home\db3l\buildarea\3.x.bolen-windows\build\lib\test\test_urllib2net.py", line 220, in test_http_timeout self.assertEqual(u.fp.fp.raw._sock.gettimeout(), 120) AttributeError: '_io.BufferedReader' object has no attribute 'fp' Traceback (most recent call last): File "E:\cygwin\home\db3l\buildarea\3.x.bolen-windows\build\lib\test\test_wait4.py", line 10, in os.fork AttributeError: 'module' object has no attribute 'fork' Traceback (most recent call last): File "../lib/test/regrtest.py", line 612, in runtest_inner the_package = __import__(abstest, globals(), locals(), []) File "E:\cygwin\home\db3l\buildarea\3.x.bolen-windows\build\lib\test\test_wait4.py", line 12, in raise unittest.SkipTest("os.fork not defined -- skipping test_wait4") NameError: name 'unittest' is not defined sincerely, -The Buildbot From python-checkins at python.org Wed Apr 1 00:20:36 2009 From: python-checkins at python.org (jesse.noller) Date: Wed, 1 Apr 2009 00:20:36 +0200 (CEST) Subject: [Python-checkins] r70908 - in python/trunk: Misc/ACKS Misc/NEWS Modules/_multiprocessing/win32_functions.c Python/pythonrun.c Message-ID: <20090331222036.34D921E4033@bag.python.org> Author: jesse.noller Date: Wed Apr 1 00:20:35 2009 New Revision: 70908 Log: Issue 5619: Pass MS CRT debug flags into subprocesses Modified: python/trunk/Misc/ACKS python/trunk/Misc/NEWS python/trunk/Modules/_multiprocessing/win32_functions.c python/trunk/Python/pythonrun.c Modified: python/trunk/Misc/ACKS ============================================================================== --- python/trunk/Misc/ACKS (original) +++ python/trunk/Misc/ACKS Wed Apr 1 00:20:35 2009 @@ -685,6 +685,7 @@ Nathan Sullivan Mark Summerfield Hisao Suzuki +Andrew Svetlov Kalle Svensson Paul Swartz Thenault Sylvain Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Wed Apr 1 00:20:35 2009 @@ -199,6 +199,9 @@ Library ------- +- Issue #5619: Multiprocessing children disobey the debug flag and causes + popups on windows buildbots. Patch applied to work around this issue. + - Issue #5632: Thread.ident was None for the main thread and threads not created with the threading module. 
Modified: python/trunk/Modules/_multiprocessing/win32_functions.c ============================================================================== --- python/trunk/Modules/_multiprocessing/win32_functions.c (original) +++ python/trunk/Modules/_multiprocessing/win32_functions.c Wed Apr 1 00:20:35 2009 @@ -130,6 +130,12 @@ if (!PyArg_ParseTuple(args, "I", &uExitCode)) return NULL; + #if defined(Py_DEBUG) + SetErrorMode(SEM_FAILCRITICALERRORS|SEM_NOALIGNMENTFAULTEXCEPT|SEM_NOGPFAULTERRORBOX|SEM_NOOPENFILEERRORBOX); + _CrtSetReportMode(_CRT_ASSERT, _CRTDBG_MODE_DEBUG); + #endif + + ExitProcess(uExitCode); return NULL; Modified: python/trunk/Python/pythonrun.c ============================================================================== --- python/trunk/Python/pythonrun.c (original) +++ python/trunk/Python/pythonrun.c Wed Apr 1 00:20:35 2009 @@ -1634,6 +1634,8 @@ Py_FatalError(const char *msg) { fprintf(stderr, "Fatal Python error: %s\n", msg); + fflush(stderr); /* it helps in Windows debug build */ + #ifdef MS_WINDOWS { size_t len = strlen(msg); From python-checkins at python.org Wed Apr 1 00:25:20 2009 From: python-checkins at python.org (jesse.noller) Date: Wed, 1 Apr 2009 00:25:20 +0200 (CEST) Subject: [Python-checkins] r70909 - in python/branches/release26-maint: Misc/ACKS Misc/NEWS Modules/_multiprocessing/win32_functions.c Python/pythonrun.c Message-ID: <20090331222520.6DCDF1E406D@bag.python.org> Author: jesse.noller Date: Wed Apr 1 00:25:20 2009 New Revision: 70909 Log: Merged revisions 70908 via svnmerge from svn+ssh://pythondev at svn.python.org/python/trunk ........ r70908 | jesse.noller | 2009-03-31 17:20:35 -0500 (Tue, 31 Mar 2009) | 1 line Issue 5619: Pass MS CRT debug flags into subprocesses ........ Modified: python/branches/release26-maint/ (props changed) python/branches/release26-maint/Misc/ACKS python/branches/release26-maint/Misc/NEWS python/branches/release26-maint/Modules/_multiprocessing/win32_functions.c python/branches/release26-maint/Python/pythonrun.c Modified: python/branches/release26-maint/Misc/ACKS ============================================================================== --- python/branches/release26-maint/Misc/ACKS (original) +++ python/branches/release26-maint/Misc/ACKS Wed Apr 1 00:25:20 2009 @@ -674,6 +674,7 @@ Nathan Sullivan Mark Summerfield Hisao Suzuki +Andrew Svetlov Kalle Svensson Paul Swartz Thenault Sylvain Modified: python/branches/release26-maint/Misc/NEWS ============================================================================== --- python/branches/release26-maint/Misc/NEWS (original) +++ python/branches/release26-maint/Misc/NEWS Wed Apr 1 00:25:20 2009 @@ -92,6 +92,9 @@ Library ------- +- Issue #5619: Multiprocessing children disobey the debug flag and causes + popups on windows buildbots. Patch applied to work around this issue. + - Issue #5632: Thread.ident was None for the main thread and threads not created with the threading module. 
Modified: python/branches/release26-maint/Modules/_multiprocessing/win32_functions.c ============================================================================== --- python/branches/release26-maint/Modules/_multiprocessing/win32_functions.c (original) +++ python/branches/release26-maint/Modules/_multiprocessing/win32_functions.c Wed Apr 1 00:25:20 2009 @@ -130,6 +130,12 @@ if (!PyArg_ParseTuple(args, "I", &uExitCode)) return NULL; + #if defined(Py_DEBUG) + SetErrorMode(SEM_FAILCRITICALERRORS|SEM_NOALIGNMENTFAULTEXCEPT|SEM_NOGPFAULTERRORBOX|SEM_NOOPENFILEERRORBOX); + _CrtSetReportMode(_CRT_ASSERT, _CRTDBG_MODE_DEBUG); + #endif + + ExitProcess(uExitCode); return NULL; Modified: python/branches/release26-maint/Python/pythonrun.c ============================================================================== --- python/branches/release26-maint/Python/pythonrun.c (original) +++ python/branches/release26-maint/Python/pythonrun.c Wed Apr 1 00:25:20 2009 @@ -1631,6 +1631,8 @@ Py_FatalError(const char *msg) { fprintf(stderr, "Fatal Python error: %s\n", msg); + fflush(stderr); /* it helps in Windows debug build */ + #ifdef MS_WINDOWS { size_t len = strlen(msg); From python-checkins at python.org Wed Apr 1 00:27:24 2009 From: python-checkins at python.org (tarek.ziade) Date: Wed, 1 Apr 2009 00:27:24 +0200 (CEST) Subject: [Python-checkins] r70910 - in python/trunk: Doc/distutils/setupscript.rst Lib/distutils/command/build_ext.py Lib/distutils/extension.py Lib/distutils/tests/test_build_ext.py Misc/NEWS Message-ID: <20090331222724.3862A1E406D@bag.python.org> Author: tarek.ziade Date: Wed Apr 1 00:27:23 2009 New Revision: 70910 Log: #5583 Added optional Extensions in Distutils Modified: python/trunk/Doc/distutils/setupscript.rst python/trunk/Lib/distutils/command/build_ext.py python/trunk/Lib/distutils/extension.py python/trunk/Lib/distutils/tests/test_build_ext.py python/trunk/Misc/NEWS Modified: python/trunk/Doc/distutils/setupscript.rst ============================================================================== --- python/trunk/Doc/distutils/setupscript.rst (original) +++ python/trunk/Doc/distutils/setupscript.rst Wed Apr 1 00:27:23 2009 @@ -334,6 +334,10 @@ There are still some other options which can be used to handle special cases. +The :option:`optional` option is a boolean; if it is true, that specifies that +a build failure in the extension should not abort the build process, but simply +not install the failing extension. + The :option:`extra_objects` option is a list of object files to be passed to the linker. These files must not have extensions, as the default extension for the compiler is used. 
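A minimal setup.py sketch using the new ``optional`` flag just documented (the project and file names here are hypothetical):

    from distutils.core import setup, Extension

    setup(name='example',
          version='1.0',
          ext_modules=[
              # if this fails to build, distutils warns and moves on
              # instead of aborting the whole build
              Extension('example._speedups', ['src/_speedups.c'],
                        optional=True),
          ])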
Modified: python/trunk/Lib/distutils/command/build_ext.py ============================================================================== --- python/trunk/Lib/distutils/command/build_ext.py (original) +++ python/trunk/Lib/distutils/command/build_ext.py Wed Apr 1 00:27:23 2009 @@ -476,7 +476,13 @@ self.check_extensions_list(self.extensions) for ext in self.extensions: - self.build_extension(ext) + try: + self.build_extension(ext) + except (CCompilerError, DistutilsError), e: + if not ext.optional: + raise + self.warn('building extension "%s" failed: %s' % + (ext.name, e)) def build_extension(self, ext): sources = ext.sources Modified: python/trunk/Lib/distutils/extension.py ============================================================================== --- python/trunk/Lib/distutils/extension.py (original) +++ python/trunk/Lib/distutils/extension.py Wed Apr 1 00:27:23 2009 @@ -83,6 +83,9 @@ language : string extension language (i.e. "c", "c++", "objc"). Will be detected from the source extensions if not provided. + optional : boolean + specifies that a build failure in the extension should not abort the + build process, but simply not install the failing extension. """ # When adding arguments to this constructor, be sure to update @@ -101,6 +104,7 @@ swig_opts = None, depends=None, language=None, + optional=None, **kw # To catch unknown keywords ): assert type(name) is StringType, "'name' must be a string" @@ -123,6 +127,7 @@ self.swig_opts = swig_opts or [] self.depends = depends or [] self.language = language + self.optional = optional # If there are unknown keyword options, warn about them if len(kw): Modified: python/trunk/Lib/distutils/tests/test_build_ext.py ============================================================================== --- python/trunk/Lib/distutils/tests/test_build_ext.py (original) +++ python/trunk/Lib/distutils/tests/test_build_ext.py Wed Apr 1 00:27:23 2009 @@ -8,6 +8,8 @@ from distutils.command.build_ext import build_ext from distutils import sysconfig from distutils.tests import support +from distutils.extension import Extension +from distutils.errors import UnknownFileError import unittest from test import test_support @@ -20,7 +22,9 @@ srcdir = sysconfig.get_config_var('srcdir') return os.path.join(srcdir, 'Modules', 'xxmodule.c') -class BuildExtTestCase(support.TempdirManager, unittest.TestCase): +class BuildExtTestCase(support.TempdirManager, + support.LoggingSilencer, + unittest.TestCase): def setUp(self): # Create a simple test environment # Note that we're making changes to sys.path @@ -142,6 +146,22 @@ self.assert_(lib in cmd.library_dirs) self.assert_(incl in cmd.include_dirs) + def test_optional_extension(self): + + # this extension will fail, but let's ignore this failure + # with the optional argument. 
+ modules = [Extension('foo', ['xxx'], optional=False)] + dist = Distribution({'name': 'xx', 'ext_modules': modules}) + cmd = build_ext(dist) + cmd.ensure_finalized() + self.assertRaises(UnknownFileError, cmd.run) # should raise an error + + modules = [Extension('foo', ['xxx'], optional=True)] + dist = Distribution({'name': 'xx', 'ext_modules': modules}) + cmd = build_ext(dist) + cmd.ensure_finalized() + cmd.run() # should pass + def test_suite(): src = _get_source_filename() if not os.path.exists(src): Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Wed Apr 1 00:27:23 2009 @@ -199,6 +199,9 @@ Library ------- +- Issue #5583: Added optional Extensions in Distutils. Initial patch by Georg + Brandl. + - Issue #5619: Multiprocessing children disobey the debug flag and causes popups on windows buildbots. Patch applied to work around this issue. From python-checkins at python.org Wed Apr 1 00:29:11 2009 From: python-checkins at python.org (tarek.ziade) Date: Wed, 1 Apr 2009 00:29:11 +0200 (CEST) Subject: [Python-checkins] r70911 - python/branches/release26-maint Message-ID: <20090331222911.298701E406D@bag.python.org> Author: tarek.ziade Date: Wed Apr 1 00:29:10 2009 New Revision: 70911 Log: Blocked revisions 70910 via svnmerge ........ r70910 | tarek.ziade | 2009-03-31 17:27:23 -0500 (Tue, 31 Mar 2009) | 1 line #5583 Added optional Extensions in Distutils ........ Modified: python/branches/release26-maint/ (props changed) From python-checkins at python.org Wed Apr 1 00:35:46 2009 From: python-checkins at python.org (georg.brandl) Date: Wed, 1 Apr 2009 00:35:46 +0200 (CEST) Subject: [Python-checkins] r70912 - python/trunk/Misc/gdbinit Message-ID: <20090331223546.D58851E406D@bag.python.org> Author: georg.brandl Date: Wed Apr 1 00:35:46 2009 New Revision: 70912 Log: #5617: add a handy function to print a unicode string to gdbinit. Modified: python/trunk/Misc/gdbinit Modified: python/trunk/Misc/gdbinit ============================================================================== --- python/trunk/Misc/gdbinit (original) +++ python/trunk/Misc/gdbinit Wed Apr 1 00:35:46 2009 @@ -138,3 +138,16 @@ end select-frame 0 end + +# generally useful macro to print a Unicode string +def pu + set $uni = $arg0 + set $i = 0 + while (*$uni && $i++<100) + if (*$uni < 0x80) + print *(char*)$uni++ + else + print /x *(short*)$uni++ + end + end +end From python-checkins at python.org Wed Apr 1 00:36:44 2009 From: python-checkins at python.org (jesse.noller) Date: Wed, 1 Apr 2009 00:36:44 +0200 (CEST) Subject: [Python-checkins] r70913 - in python/branches/py3k: Misc/ACKS Misc/NEWS Modules/_multiprocessing/win32_functions.c Python/pythonrun.c Message-ID: <20090331223644.DDCEA1E406D@bag.python.org> Author: jesse.noller Date: Wed Apr 1 00:36:44 2009 New Revision: 70913 Log: Merged revisions 70908 via svnmerge from svn+ssh://pythondev at svn.python.org/python/trunk ........ r70908 | jesse.noller | 2009-03-31 17:20:35 -0500 (Tue, 31 Mar 2009) | 1 line Issue 5619: Pass MS CRT debug flags into subprocesses ........ 
Modified: python/branches/py3k/ (props changed) python/branches/py3k/Misc/ACKS python/branches/py3k/Misc/NEWS python/branches/py3k/Modules/_multiprocessing/win32_functions.c python/branches/py3k/Python/pythonrun.c Modified: python/branches/py3k/Misc/ACKS ============================================================================== --- python/branches/py3k/Misc/ACKS (original) +++ python/branches/py3k/Misc/ACKS Wed Apr 1 00:36:44 2009 @@ -689,6 +689,7 @@ Nathan Sullivan Mark Summerfield Hisao Suzuki +Andrew Svetlov Kalle Svensson Andrew Svetlov Paul Swartz Modified: python/branches/py3k/Misc/NEWS ============================================================================== --- python/branches/py3k/Misc/NEWS (original) +++ python/branches/py3k/Misc/NEWS Wed Apr 1 00:36:44 2009 @@ -53,6 +53,9 @@ Library ------- +- Issue #5619: Multiprocessing children disobey the debug flag and causes + popups on windows buildbots. Patch applied to work around this issue. + - Issue #5400: Added patch for multiprocessing on netbsd compilation/support - Issue #5387: Fixed mmap.move crash by integer overflow. Modified: python/branches/py3k/Modules/_multiprocessing/win32_functions.c ============================================================================== --- python/branches/py3k/Modules/_multiprocessing/win32_functions.c (original) +++ python/branches/py3k/Modules/_multiprocessing/win32_functions.c Wed Apr 1 00:36:44 2009 @@ -130,6 +130,12 @@ if (!PyArg_ParseTuple(args, "I", &uExitCode)) return NULL; + #if defined(Py_DEBUG) + SetErrorMode(SEM_FAILCRITICALERRORS|SEM_NOALIGNMENTFAULTEXCEPT|SEM_NOGPFAULTERRORBOX|SEM_NOOPENFILEERRORBOX); + _CrtSetReportMode(_CRT_ASSERT, _CRTDBG_MODE_DEBUG); + #endif + + ExitProcess(uExitCode); return NULL; Modified: python/branches/py3k/Python/pythonrun.c ============================================================================== --- python/branches/py3k/Python/pythonrun.c (original) +++ python/branches/py3k/Python/pythonrun.c Wed Apr 1 00:36:44 2009 @@ -2006,6 +2006,7 @@ Py_FatalError(const char *msg) { fprintf(stderr, "Fatal Python error: %s\n", msg); + fflush(stderr); /* it helps in Windows debug build */ if (PyErr_Occurred()) { PyErr_Print(); } From python-checkins at python.org Wed Apr 1 00:37:55 2009 From: python-checkins at python.org (tarek.ziade) Date: Wed, 1 Apr 2009 00:37:55 +0200 (CEST) Subject: [Python-checkins] r70914 - in python/branches/py3k: Doc/distutils/setupscript.rst Lib/distutils/command/build_ext.py Lib/distutils/extension.py Lib/distutils/tests/test_build_ext.py Misc/NEWS Message-ID: <20090331223755.A49531E4133@bag.python.org> Author: tarek.ziade Date: Wed Apr 1 00:37:55 2009 New Revision: 70914 Log: Merged revisions 70910 via svnmerge from svn+ssh://pythondev at svn.python.org/python/trunk ........ r70910 | tarek.ziade | 2009-03-31 17:27:23 -0500 (Tue, 31 Mar 2009) | 1 line #5583 Added optional Extensions in Distutils ........ 
Modified: python/branches/py3k/ (props changed) python/branches/py3k/Doc/distutils/setupscript.rst python/branches/py3k/Lib/distutils/command/build_ext.py python/branches/py3k/Lib/distutils/extension.py python/branches/py3k/Lib/distutils/tests/test_build_ext.py python/branches/py3k/Misc/NEWS Modified: python/branches/py3k/Doc/distutils/setupscript.rst ============================================================================== --- python/branches/py3k/Doc/distutils/setupscript.rst (original) +++ python/branches/py3k/Doc/distutils/setupscript.rst Wed Apr 1 00:37:55 2009 @@ -334,6 +334,10 @@ There are still some other options which can be used to handle special cases. +The :option:`optional` option is a boolean; if it is true, that specifies that +a build failure in the extension should not abort the build process, but simply +not install the failing extension. + The :option:`extra_objects` option is a list of object files to be passed to the linker. These files must not have extensions, as the default extension for the compiler is used. Modified: python/branches/py3k/Lib/distutils/command/build_ext.py ============================================================================== --- python/branches/py3k/Lib/distutils/command/build_ext.py (original) +++ python/branches/py3k/Lib/distutils/command/build_ext.py Wed Apr 1 00:37:55 2009 @@ -455,7 +455,13 @@ self.check_extensions_list(self.extensions) for ext in self.extensions: - self.build_extension(ext) + try: + self.build_extension(ext) + except (CCompilerError, DistutilsError) as e: + if not ext.optional: + raise + self.warn('building extension "%s" failed: %s' % + (ext.name, e)) def build_extension(self, ext): sources = ext.sources Modified: python/branches/py3k/Lib/distutils/extension.py ============================================================================== --- python/branches/py3k/Lib/distutils/extension.py (original) +++ python/branches/py3k/Lib/distutils/extension.py Wed Apr 1 00:37:55 2009 @@ -82,6 +82,9 @@ language : string extension language (i.e. "c", "c++", "objc"). Will be detected from the source extensions if not provided. + optional : boolean + specifies that a build failure in the extension should not abort the + build process, but simply not install the failing extension. 
""" # When adding arguments to this constructor, be sure to update @@ -100,6 +103,7 @@ swig_opts = None, depends=None, language=None, + optional=None, **kw # To catch unknown keywords ): assert isinstance(name, str), "'name' must be a string" @@ -122,6 +126,7 @@ self.swig_opts = swig_opts or [] self.depends = depends or [] self.language = language + self.optional = optional # If there are unknown keyword options, warn about them if len(kw): Modified: python/branches/py3k/Lib/distutils/tests/test_build_ext.py ============================================================================== --- python/branches/py3k/Lib/distutils/tests/test_build_ext.py (original) +++ python/branches/py3k/Lib/distutils/tests/test_build_ext.py Wed Apr 1 00:37:55 2009 @@ -8,6 +8,9 @@ from distutils.command.build_ext import build_ext from distutils import sysconfig from distutils.tests.support import TempdirManager +from distutils.tests.support import LoggingSilencer +from distutils.extension import Extension +from distutils.errors import UnknownFileError import unittest from test import support @@ -20,7 +23,9 @@ srcdir = sysconfig.get_config_var('srcdir') return os.path.join(srcdir, 'Modules', 'xxmodule.c') -class BuildExtTestCase(TempdirManager, unittest.TestCase): +class BuildExtTestCase(TempdirManager, + LoggingSilencer, + unittest.TestCase): def setUp(self): # Create a simple test environment # Note that we're making changes to sys.path @@ -141,6 +146,22 @@ self.assert_(lib in cmd.library_dirs) self.assert_(incl in cmd.include_dirs) + def test_optional_extension(self): + + # this extension will fail, but let's ignore this failure + # with the optional argument. + modules = [Extension('foo', ['xxx'], optional=False)] + dist = Distribution({'name': 'xx', 'ext_modules': modules}) + cmd = build_ext(dist) + cmd.ensure_finalized() + self.assertRaises(UnknownFileError, cmd.run) # should raise an error + + modules = [Extension('foo', ['xxx'], optional=True)] + dist = Distribution({'name': 'xx', 'ext_modules': modules}) + cmd = build_ext(dist) + cmd.ensure_finalized() + cmd.run() # should pass + def test_suite(): src = _get_source_filename() if not os.path.exists(src): Modified: python/branches/py3k/Misc/NEWS ============================================================================== --- python/branches/py3k/Misc/NEWS (original) +++ python/branches/py3k/Misc/NEWS Wed Apr 1 00:37:55 2009 @@ -282,6 +282,9 @@ Library ------- +- Issue #5583: Added optional Extensions in Distutils. Initial patch by Georg + Brandl. + - Issue #1222: locale.format() bug when the thousands separator is a space character. From python-checkins at python.org Wed Apr 1 00:40:16 2009 From: python-checkins at python.org (georg.brandl) Date: Wed, 1 Apr 2009 00:40:16 +0200 (CEST) Subject: [Python-checkins] r70915 - python/trunk/Doc/tutorial/datastructures.rst Message-ID: <20090331224016.7A06D1E4074@bag.python.org> Author: georg.brandl Date: Wed Apr 1 00:40:16 2009 New Revision: 70915 Log: #5018: remove confusing paragraph. Modified: python/trunk/Doc/tutorial/datastructures.rst Modified: python/trunk/Doc/tutorial/datastructures.rst ============================================================================== --- python/trunk/Doc/tutorial/datastructures.rst (original) +++ python/trunk/Doc/tutorial/datastructures.rst Wed Apr 1 00:40:16 2009 @@ -401,13 +401,11 @@ >>> x, y, z = t -This is called, appropriately enough, *sequence unpacking*. 
Sequence unpacking -requires the list of variables on the left to have the same number of elements -as the length of the sequence. Note that multiple assignment is really just a -combination of tuple packing and sequence unpacking! - -There is a small bit of asymmetry here: packing multiple values always creates -a tuple, and unpacking works for any sequence. +This is called, appropriately enough, *sequence unpacking* and works for any +sequence on the right-hand side. Sequence unpacking requires the list of +variables on the left to have the same number of elements as the length of the +sequence. Note that multiple assignment is really just a combination of tuple +packing and sequence unpacking! .. XXX Add a bit on the difference between tuples and lists. From python-checkins at python.org Wed Apr 1 00:42:05 2009 From: python-checkins at python.org (jesse.noller) Date: Wed, 1 Apr 2009 00:42:05 +0200 (CEST) Subject: [Python-checkins] r70916 - in python/branches/release30-maint: Misc/ACKS Misc/NEWS Modules/_multiprocessing/win32_functions.c Python/pythonrun.c Message-ID: <20090331224205.E76271E4074@bag.python.org> Author: jesse.noller Date: Wed Apr 1 00:42:05 2009 New Revision: 70916 Log: Merged revisions 70913 via svnmerge from svn+ssh://pythondev at svn.python.org/python/branches/py3k ................ r70913 | jesse.noller | 2009-03-31 17:36:44 -0500 (Tue, 31 Mar 2009) | 9 lines Merged revisions 70908 via svnmerge from svn+ssh://pythondev at svn.python.org/python/trunk ........ r70908 | jesse.noller | 2009-03-31 17:20:35 -0500 (Tue, 31 Mar 2009) | 1 line Issue 5619: Pass MS CRT debug flags into subprocesses ........ ................ Modified: python/branches/release30-maint/ (props changed) python/branches/release30-maint/Misc/ACKS python/branches/release30-maint/Misc/NEWS python/branches/release30-maint/Modules/_multiprocessing/win32_functions.c python/branches/release30-maint/Python/pythonrun.c Modified: python/branches/release30-maint/Misc/ACKS ============================================================================== --- python/branches/release30-maint/Misc/ACKS (original) +++ python/branches/release30-maint/Misc/ACKS Wed Apr 1 00:42:05 2009 @@ -674,6 +674,7 @@ Nathan Sullivan Mark Summerfield Hisao Suzuki +Andrew Svetlov Kalle Svensson Andrew Svetlov Paul Swartz Modified: python/branches/release30-maint/Misc/NEWS ============================================================================== --- python/branches/release30-maint/Misc/NEWS (original) +++ python/branches/release30-maint/Misc/NEWS Wed Apr 1 00:42:05 2009 @@ -30,6 +30,9 @@ Library ------- +- Issue #5619: Multiprocessing children disobey the debug flag and causes + popups on windows buildbots. Patch applied to work around this issue. + - Issue #5387: Fixed mmap.move crash by integer overflow. - Issue #5595: Fix UnboundedLocalError in ntpath.ismount(). 
Modified: python/branches/release30-maint/Modules/_multiprocessing/win32_functions.c ============================================================================== --- python/branches/release30-maint/Modules/_multiprocessing/win32_functions.c (original) +++ python/branches/release30-maint/Modules/_multiprocessing/win32_functions.c Wed Apr 1 00:42:05 2009 @@ -130,6 +130,12 @@ if (!PyArg_ParseTuple(args, "I", &uExitCode)) return NULL; + #if defined(Py_DEBUG) + SetErrorMode(SEM_FAILCRITICALERRORS|SEM_NOALIGNMENTFAULTEXCEPT|SEM_NOGPFAULTERRORBOX|SEM_NOOPENFILEERRORBOX); + _CrtSetReportMode(_CRT_ASSERT, _CRTDBG_MODE_DEBUG); + #endif + + ExitProcess(uExitCode); return NULL; Modified: python/branches/release30-maint/Python/pythonrun.c ============================================================================== --- python/branches/release30-maint/Python/pythonrun.c (original) +++ python/branches/release30-maint/Python/pythonrun.c Wed Apr 1 00:42:05 2009 @@ -1998,6 +1998,7 @@ Py_FatalError(const char *msg) { fprintf(stderr, "Fatal Python error: %s\n", msg); + fflush(stderr); /* it helps in Windows debug build */ if (PyErr_Occurred()) { PyErr_Print(); } From python-checkins at python.org Wed Apr 1 00:42:41 2009 From: python-checkins at python.org (martin.v.loewis) Date: Wed, 1 Apr 2009 00:42:41 +0200 (CEST) Subject: [Python-checkins] r70917 - python/branches/py3k/Tools/buildbot/test.bat Message-ID: <20090331224241.CD0771E4074@bag.python.org> Author: martin.v.loewis Date: Wed Apr 1 00:42:41 2009 New Revision: 70917 Log: Readd -n. Modified: python/branches/py3k/Tools/buildbot/test.bat Modified: python/branches/py3k/Tools/buildbot/test.bat ============================================================================== --- python/branches/py3k/Tools/buildbot/test.bat (original) +++ python/branches/py3k/Tools/buildbot/test.bat Wed Apr 1 00:42:41 2009 @@ -1,4 +1,4 @@ @rem Used by the buildbot "test" step. cd PCbuild -call rt.bat -d -q -uall -rw +call rt.bat -d -q -uall -rw -n From python-checkins at python.org Wed Apr 1 00:43:03 2009 From: python-checkins at python.org (raymond.hettinger) Date: Wed, 1 Apr 2009 00:43:03 +0200 (CEST) Subject: [Python-checkins] r70918 - python/trunk/Doc/library/collections.rst Message-ID: <20090331224303.75BC21E4074@bag.python.org> Author: raymond.hettinger Date: Wed Apr 1 00:43:03 2009 New Revision: 70918 Log: Improve examples for collections.deque() Modified: python/trunk/Doc/library/collections.rst Modified: python/trunk/Doc/library/collections.rst ============================================================================== --- python/trunk/Doc/library/collections.rst (original) +++ python/trunk/Doc/library/collections.rst Wed Apr 1 00:43:03 2009 @@ -463,6 +463,30 @@ This section shows various approaches to working with deques. 
+Bounded length deques provide functionality similar to the ``tail`` filter +in Unix:: + + def tail(filename, n=10): + 'Return the last n lines of a file' + return deque(open(filename), n) + +Another approach to using deques is to maintain a sequence of recently +added elements by appending to the right and popping to the left:: + + def moving_average(iterable, n=3): + # moving_average([40, 30, 50, 46, 39, 44]) --> 40.0 42.0 45.0 43.0 + # http://en.wikipedia.org/wiki/Moving_average + n = float(n) + it = iter(iterable) + d = deque(itertools.islice(it, n)) + s = sum(d) + if len(d) == n: + yield s / n + for elem in it: + s += elem - d.popleft() + d.append(elem) + yield s / n + The :meth:`rotate` method provides a way to implement :class:`deque` slicing and deletion. For example, a pure python implementation of ``del d[n]`` relies on the :meth:`rotate` method to position elements to be popped:: @@ -480,31 +504,6 @@ stack manipulations such as ``dup``, ``drop``, ``swap``, ``over``, ``pick``, ``rot``, and ``roll``. -Multi-pass data reduction algorithms can be succinctly expressed and efficiently -coded by extracting elements with multiple calls to :meth:`popleft`, applying -a reduction function, and calling :meth:`append` to add the result back to the -deque. - -For example, building a balanced binary tree of nested lists entails reducing -two adjacent nodes into one by grouping them in a list: - - >>> def maketree(iterable): - ... d = deque(iterable) - ... while len(d) > 1: - ... pair = [d.popleft(), d.popleft()] - ... d.append(pair) - ... return list(d) - ... - >>> print maketree('abcdefgh') - [[[['a', 'b'], ['c', 'd']], [['e', 'f'], ['g', 'h']]]] - -Bounded length deques provide functionality similar to the ``tail`` filter -in Unix:: - - def tail(filename, n=10): - 'Return the last n lines of a file' - return deque(open(filename), n) - :class:`defaultdict` objects ---------------------------- From python-checkins at python.org Wed Apr 1 00:43:15 2009 From: python-checkins at python.org (tarek.ziade) Date: Wed, 1 Apr 2009 00:43:15 +0200 (CEST) Subject: [Python-checkins] r70919 - python/branches/release30-maint Message-ID: <20090331224315.E99831E4074@bag.python.org> Author: tarek.ziade Date: Wed Apr 1 00:43:15 2009 New Revision: 70919 Log: Blocked revisions 70914 via svnmerge ................ r70914 | tarek.ziade | 2009-03-31 17:37:55 -0500 (Tue, 31 Mar 2009) | 9 lines Merged revisions 70910 via svnmerge from svn+ssh://pythondev at svn.python.org/python/trunk ........ r70910 | tarek.ziade | 2009-03-31 17:27:23 -0500 (Tue, 31 Mar 2009) | 1 line #5583 Added optional Extensions in Distutils ........ ................ 
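As a quick check of the moving_average() recipe added to the deque examples in r70918 above (given the function exactly as defined in that patch):

    >>> list(moving_average([40, 30, 50, 46, 39, 44]))
    [40.0, 42.0, 45.0, 43.0]

The first value averages 40, 30 and 50; each later value slides the three-element window one step to the right.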
Modified: python/branches/release30-maint/ (props changed) From python-checkins at python.org Wed Apr 1 00:44:10 2009 From: python-checkins at python.org (tarek.ziade) Date: Wed, 1 Apr 2009 00:44:10 +0200 (CEST) Subject: [Python-checkins] r70920 - python/trunk/Lib/distutils/command/build_ext.py Message-ID: <20090331224410.ED59D1E40C1@bag.python.org> Author: tarek.ziade Date: Wed Apr 1 00:44:10 2009 New Revision: 70920 Log: catching msvc9compiler error as well Modified: python/trunk/Lib/distutils/command/build_ext.py Modified: python/trunk/Lib/distutils/command/build_ext.py ============================================================================== --- python/trunk/Lib/distutils/command/build_ext.py (original) +++ python/trunk/Lib/distutils/command/build_ext.py Wed Apr 1 00:44:10 2009 @@ -478,7 +478,7 @@ for ext in self.extensions: try: self.build_extension(ext) - except (CCompilerError, DistutilsError), e: + except (CCompilerError, DistutilsError, CompileError), e: if not ext.optional: raise self.warn('building extension "%s" failed: %s' % From python-checkins at python.org Wed Apr 1 00:46:50 2009 From: python-checkins at python.org (georg.brandl) Date: Wed, 1 Apr 2009 00:46:50 +0200 (CEST) Subject: [Python-checkins] r70921 - python/branches/py3k/Tools/scripts/reindent-rst.py Message-ID: <20090331224650.8AE901E406D@bag.python.org> Author: georg.brandl Date: Wed Apr 1 00:46:50 2009 New Revision: 70921 Log: Run 2to3 over new script. Modified: python/branches/py3k/Tools/scripts/reindent-rst.py Modified: python/branches/py3k/Tools/scripts/reindent-rst.py ============================================================================== --- python/branches/py3k/Tools/scripts/reindent-rst.py (original) +++ python/branches/py3k/Tools/scripts/reindent-rst.py Wed Apr 1 00:46:50 2009 @@ -3,7 +3,7 @@ # Make a reST file compliant to our pre-commit hook. # Currently just remove trailing whitespace. -from __future__ import with_statement + import sys, re, shutil ws_re = re.compile(r'\s+(\r?\n)$') @@ -16,12 +16,12 @@ lines = f.readlines() new_lines = [ws_re.sub(r'\1', line) for line in lines] if new_lines != lines: - print 'Fixing %s...' % filename + print('Fixing %s...' 
% filename) shutil.copyfile(filename, filename + '.bak') with open(filename, 'wb') as f: f.writelines(new_lines) - except Exception, err: - print 'Cannot fix %s: %s' % (filename, err) + except Exception as err: + print('Cannot fix %s: %s' % (filename, err)) rv = 1 return rv From python-checkins at python.org Wed Apr 1 00:47:01 2009 From: python-checkins at python.org (tarek.ziade) Date: Wed, 1 Apr 2009 00:47:01 +0200 (CEST) Subject: [Python-checkins] r70922 - python/trunk/Lib/distutils/tests/test_build_ext.py Message-ID: <20090331224701.7E7381E406D@bag.python.org> Author: tarek.ziade Date: Wed Apr 1 00:47:01 2009 New Revision: 70922 Log: fixed the test for win32 CompileError Modified: python/trunk/Lib/distutils/tests/test_build_ext.py Modified: python/trunk/Lib/distutils/tests/test_build_ext.py ============================================================================== --- python/trunk/Lib/distutils/tests/test_build_ext.py (original) +++ python/trunk/Lib/distutils/tests/test_build_ext.py Wed Apr 1 00:47:01 2009 @@ -10,6 +10,7 @@ from distutils.tests import support from distutils.extension import Extension from distutils.errors import UnknownFileError +from distutils.errors import CompileError import unittest from test import test_support @@ -154,7 +155,8 @@ dist = Distribution({'name': 'xx', 'ext_modules': modules}) cmd = build_ext(dist) cmd.ensure_finalized() - self.assertRaises(UnknownFileError, cmd.run) # should raise an error + self.assertRaises((UnknownFileError, CompileError), + cmd.run) # should raise an error modules = [Extension('foo', ['xxx'], optional=True)] dist = Distribution({'name': 'xx', 'ext_modules': modules}) From python at rcn.com Wed Apr 1 00:47:01 2009 From: python at rcn.com (Raymond Hettinger) Date: Tue, 31 Mar 2009 15:47:01 -0700 Subject: [Python-checkins] r70915 -python/trunk/Doc/tutorial/datastructures.rst References: <20090331224016.7A06D1E4074@bag.python.org> Message-ID: <13BFBDAB0FF044C1868725499EEABE1F@RaymondLaptop1> > -This is called, appropriately enough, *sequence unpacking*. Sequence unpacking > -requires the list of variables on the left to have the same number of elements > -as the length of the sequence. Note that multiple assignment is really just a > -combination of tuple packing and sequence unpacking! > - > -There is a small bit of asymmetry here: packing multiple values always creates > -a tuple, and unpacking works for any sequence. > +This is called, appropriately enough, *sequence unpacking* and works for any > +sequence on the right-hand side. Sequence unpacking requires the list of > +variables on the left to have the same number of elements as the length of the > +sequence. Note that multiple assignment is really just a combination of tuple > +packing and sequence unpacking! A general comment on writing style in the docs: we ought to go easy on exclamation points and keep a more even-toned style. Raymond From python-checkins at python.org Wed Apr 1 00:48:36 2009 From: python-checkins at python.org (tarek.ziade) Date: Wed, 1 Apr 2009 00:48:36 +0200 (CEST) Subject: [Python-checkins] r70923 - python/branches/release26-maint Message-ID: <20090331224836.3DD941E406D@bag.python.org> Author: tarek.ziade Date: Wed Apr 1 00:48:36 2009 New Revision: 70923 Log: Blocked revisions 70920,70922 via svnmerge ........ r70920 | tarek.ziade | 2009-03-31 17:44:10 -0500 (Tue, 31 Mar 2009) | 1 line catching msvc9compiler error as well ........ 
r70922 | tarek.ziade | 2009-03-31 17:47:01 -0500 (Tue, 31 Mar 2009) | 1 line fixed the test for win32 CompileError ........ Modified: python/branches/release26-maint/ (props changed) From python-checkins at python.org Wed Apr 1 00:50:54 2009 From: python-checkins at python.org (tarek.ziade) Date: Wed, 1 Apr 2009 00:50:54 +0200 (CEST) Subject: [Python-checkins] r70924 - in python/branches/py3k: Lib/distutils/command/build_ext.py Lib/distutils/tests/test_build_ext.py Message-ID: <20090331225054.B8ADB1E406D@bag.python.org> Author: tarek.ziade Date: Wed Apr 1 00:50:54 2009 New Revision: 70924 Log: Merged revisions 70920,70922 via svnmerge from svn+ssh://pythondev at svn.python.org/python/trunk ........ r70920 | tarek.ziade | 2009-03-31 17:44:10 -0500 (Tue, 31 Mar 2009) | 1 line catching msvc9compiler error as well ........ r70922 | tarek.ziade | 2009-03-31 17:47:01 -0500 (Tue, 31 Mar 2009) | 1 line fixed the test for win32 CompileError ........ Modified: python/branches/py3k/ (props changed) python/branches/py3k/Lib/distutils/command/build_ext.py python/branches/py3k/Lib/distutils/tests/test_build_ext.py Modified: python/branches/py3k/Lib/distutils/command/build_ext.py ============================================================================== --- python/branches/py3k/Lib/distutils/command/build_ext.py (original) +++ python/branches/py3k/Lib/distutils/command/build_ext.py Wed Apr 1 00:50:54 2009 @@ -457,7 +457,7 @@ for ext in self.extensions: try: self.build_extension(ext) - except (CCompilerError, DistutilsError) as e: + except (CCompilerError, DistutilsError, CompileError) as e: if not ext.optional: raise self.warn('building extension "%s" failed: %s' % Modified: python/branches/py3k/Lib/distutils/tests/test_build_ext.py ============================================================================== --- python/branches/py3k/Lib/distutils/tests/test_build_ext.py (original) +++ python/branches/py3k/Lib/distutils/tests/test_build_ext.py Wed Apr 1 00:50:54 2009 @@ -11,6 +11,7 @@ from distutils.tests.support import LoggingSilencer from distutils.extension import Extension from distutils.errors import UnknownFileError +from distutils.errors import CompileError import unittest from test import support @@ -154,7 +155,8 @@ dist = Distribution({'name': 'xx', 'ext_modules': modules}) cmd = build_ext(dist) cmd.ensure_finalized() - self.assertRaises(UnknownFileError, cmd.run) # should raise an error + self.assertRaises((UnknownFileError, CompileError), + cmd.run) # should raise an error modules = [Extension('foo', ['xxx'], optional=True)] dist = Distribution({'name': 'xx', 'ext_modules': modules}) From python-checkins at python.org Wed Apr 1 00:52:48 2009 From: python-checkins at python.org (raymond.hettinger) Date: Wed, 1 Apr 2009 00:52:48 +0200 (CEST) Subject: [Python-checkins] r70925 - python/branches/py3k/Doc/library/collections.rst Message-ID: <20090331225248.F0AD31E406D@bag.python.org> Author: raymond.hettinger Date: Wed Apr 1 00:52:48 2009 New Revision: 70925 Log: Improve examples for collections.deque() Modified: python/branches/py3k/Doc/library/collections.rst Modified: python/branches/py3k/Doc/library/collections.rst ============================================================================== --- python/branches/py3k/Doc/library/collections.rst (original) +++ python/branches/py3k/Doc/library/collections.rst Wed Apr 1 00:52:48 2009 @@ -442,6 +442,29 @@ This section shows various approaches to working with deques. 
+Bounded length deques provide functionality similar to the ``tail`` filter +in Unix:: + + def tail(filename, n=10): + 'Return the last n lines of a file' + return deque(open(filename), n) + +Another approach to using deques is to maintain a sequence of recently +added elements by appending to the right and popping to the left:: + + def moving_average(iterable, n=3): + # moving_average([40, 30, 50, 46, 39, 44]) --> 40.0 42.0 45.0 43.0 + # http://en.wikipedia.org/wiki/Moving_average + it = iter(iterable) + d = deque(itertools.islice(it, n)) + s = sum(d) + if len(d) == n: + yield s / n + for elem in it: + s += elem - d.popleft() + d.append(elem) + yield s / n + The :meth:`rotate` method provides a way to implement :class:`deque` slicing and deletion. For example, a pure python implementation of ``del d[n]`` relies on the :meth:`rotate` method to position elements to be popped:: @@ -459,31 +482,6 @@ stack manipulations such as ``dup``, ``drop``, ``swap``, ``over``, ``pick``, ``rot``, and ``roll``. -Multi-pass data reduction algorithms can be succinctly expressed and efficiently -coded by extracting elements with multiple calls to :meth:`popleft`, applying -a reduction function, and calling :meth:`append` to add the result back to the -deque. - -For example, building a balanced binary tree of nested lists entails reducing -two adjacent nodes into one by grouping them in a list: - - >>> def maketree(iterable): - ... d = deque(iterable) - ... while len(d) > 1: - ... pair = [d.popleft(), d.popleft()] - ... d.append(pair) - ... return list(d) - ... - >>> print(maketree('abcdefgh')) - [[[['a', 'b'], ['c', 'd']], [['e', 'f'], ['g', 'h']]]] - -Bounded length deques provide functionality similar to the ``tail`` filter -in Unix:: - - def tail(filename, n=10): - 'Return the last n lines of a file' - return deque(open(filename), n) - :class:`defaultdict` objects ---------------------------- From python-checkins at python.org Wed Apr 1 00:52:58 2009 From: python-checkins at python.org (tarek.ziade) Date: Wed, 1 Apr 2009 00:52:58 +0200 (CEST) Subject: [Python-checkins] r70926 - python/branches/release30-maint Message-ID: <20090331225258.CC6951E406D@bag.python.org> Author: tarek.ziade Date: Wed Apr 1 00:52:58 2009 New Revision: 70926 Log: Blocked revisions 70924 via svnmerge ................ r70924 | tarek.ziade | 2009-03-31 17:50:54 -0500 (Tue, 31 Mar 2009) | 13 lines Merged revisions 70920,70922 via svnmerge from svn+ssh://pythondev at svn.python.org/python/trunk ........ r70920 | tarek.ziade | 2009-03-31 17:44:10 -0500 (Tue, 31 Mar 2009) | 1 line catching msvc9compiler error as well ........ r70922 | tarek.ziade | 2009-03-31 17:47:01 -0500 (Tue, 31 Mar 2009) | 1 line fixed the test for win32 CompileError ........ ................ Modified: python/branches/release30-maint/ (props changed) From python-checkins at python.org Wed Apr 1 01:01:27 2009 From: python-checkins at python.org (georg.brandl) Date: Wed, 1 Apr 2009 01:01:27 +0200 (CEST) Subject: [Python-checkins] r70927 - python/trunk/Doc/tutorial/datastructures.rst Message-ID: <20090331230127.403051E406D@bag.python.org> Author: georg.brandl Date: Wed Apr 1 01:01:27 2009 New Revision: 70927 Log: Dont shout to users. 
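The paragraph retouched below is the tutorial's sequence-unpacking discussion; for concreteness, the behaviour it describes:

    >>> t = 12345, 54321, 'hello!'   # tuple packing
    >>> x, y, z = t                  # sequence unpacking
    >>> x, y, z
    (12345, 54321, 'hello!')
    >>> a, b, c = 'foo'              # any sequence works on the right
    >>> a, b, c
    ('f', 'o', 'o')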
Modified: python/trunk/Doc/tutorial/datastructures.rst Modified: python/trunk/Doc/tutorial/datastructures.rst ============================================================================== --- python/trunk/Doc/tutorial/datastructures.rst (original) +++ python/trunk/Doc/tutorial/datastructures.rst Wed Apr 1 01:01:27 2009 @@ -405,7 +405,7 @@ sequence on the right-hand side. Sequence unpacking requires the list of variables on the left to have the same number of elements as the length of the sequence. Note that multiple assignment is really just a combination of tuple -packing and sequence unpacking! +packing and sequence unpacking. .. XXX Add a bit on the difference between tuples and lists. From python-checkins at python.org Wed Apr 1 01:11:33 2009 From: python-checkins at python.org (benjamin.peterson) Date: Wed, 1 Apr 2009 01:11:33 +0200 (CEST) Subject: [Python-checkins] r70928 - in python/branches/py3k: Lib/_pyio.py Lib/test/test_io.py Misc/NEWS Modules/_textio.c Message-ID: <20090331231133.2DCD01E43D0@bag.python.org> Author: benjamin.peterson Date: Wed Apr 1 01:11:32 2009 New Revision: 70928 Log: fix TextIOWrapper.read() when the buffer is not readable #5628 Modified: python/branches/py3k/Lib/_pyio.py python/branches/py3k/Lib/test/test_io.py python/branches/py3k/Misc/NEWS python/branches/py3k/Modules/_textio.c Modified: python/branches/py3k/Lib/_pyio.py ============================================================================== --- python/branches/py3k/Lib/_pyio.py (original) +++ python/branches/py3k/Lib/_pyio.py Wed Apr 1 01:11:32 2009 @@ -1696,6 +1696,7 @@ return cookie def read(self, n=None): + self._checkReadable() if n is None: n = -1 decoder = self._decoder or self._get_decoder() Modified: python/branches/py3k/Lib/test/test_io.py ============================================================================== --- python/branches/py3k/Lib/test/test_io.py (original) +++ python/branches/py3k/Lib/test/test_io.py Wed Apr 1 01:11:32 2009 @@ -1754,6 +1754,13 @@ self.assertEquals(f.read(), data * 2) self.assertEquals(buf.getvalue(), (data * 2).encode(encoding)) + def test_unreadable(self): + class UnReadable(self.BytesIO): + def readable(self): + return False + txt = self.TextIOWrapper(UnReadable()) + self.assertRaises(IOError, txt.read) + def test_read_one_by_one(self): txt = self.TextIOWrapper(self.BytesIO(b"AA\r\nBB")) reads = "" Modified: python/branches/py3k/Misc/NEWS ============================================================================== --- python/branches/py3k/Misc/NEWS (original) +++ python/branches/py3k/Misc/NEWS Wed Apr 1 01:11:32 2009 @@ -53,6 +53,8 @@ Library ------- +- Issue #5628: Fix io.TextIOWrapper.read() with a unreadable buffer. + - Issue #5619: Multiprocessing children disobey the debug flag and causes popups on windows buildbots. Patch applied to work around this issue. 
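The behaviour being fixed, in miniature (py3k; this mirrors the new test_unreadable test above):

    import io

    class UnReadable(io.BytesIO):
        def readable(self):
            return False

    txt = io.TextIOWrapper(UnReadable())
    try:
        txt.read()
    except IOError as e:
        print('read() raised:', e)   # the fix makes this a clean IOError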
Modified: python/branches/py3k/Modules/_textio.c ============================================================================== --- python/branches/py3k/Modules/_textio.c (original) +++ python/branches/py3k/Modules/_textio.c Wed Apr 1 01:11:32 2009 @@ -1348,6 +1348,11 @@ CHECK_CLOSED(self); + if (self->decoder == NULL) { + PyErr_SetString(PyExc_IOError, "not readable"); + return NULL; + } + if (_TextIOWrapper_writeflush(self) < 0) return NULL; From python-checkins at python.org Wed Apr 1 01:16:51 2009 From: python-checkins at python.org (r.david.murray) Date: Wed, 1 Apr 2009 01:16:51 +0200 (CEST) Subject: [Python-checkins] r70929 - in python/branches/py3k: Lib/test/regrtest.py Lib/test/support.py Lib/test/test_asynchat.py Lib/test/test_bz2.py Lib/test/test_crypt.py Lib/test/test_ctypes.py Lib/test/test_curses.py Lib/test/test_dbm.py Lib/test/test_fcntl.py Lib/test/test_fork1.py Lib/test/test_grp.py Lib/test/test_ioctl.py Lib/test/test_mmap.py Lib/test/test_multiprocessing.py Lib/test/test_nis.py Lib/test/test_ossaudiodev.py Lib/test/test_posix.py Lib/test/test_pwd.py Lib/test/test_resource.py Lib/test/test_sqlite.py Lib/test/test_startfile.py Lib/test/test_tcl.py Lib/test/test_tk.py Lib/test/test_ttk_guionly.py Lib/test/test_ttk_textonly.py Lib/test/test_winreg.py Lib/test/test_winsound.py Lib/test/test_xml_etree_c.py Lib/test/test_zlib.py Misc/NEWS Message-ID: <20090331231651.5E3C71E4070@bag.python.org> Author: r.david.murray Date: Wed Apr 1 01:16:50 2009 New Revision: 70929 Log: Blocked revisions 70734,70775,70856,70874,70876-70877 via svnmerge ........ r70734 | r.david.murray | 2009-03-30 15:04:00 -0400 (Mon, 30 Mar 2009) | 7 lines Add import_function method to test.test_support, and modify a number of tests that expect to be skipped if imports fail or functions don't exist to use import_function and import_module. The ultimate goal is to change regrtest to not skip automatically on ImportError. Checking in now to make sure the buldbots don't show any errors on platforms I can't direct test on. ........ r70775 | r.david.murray | 2009-03-30 19:05:48 -0400 (Mon, 30 Mar 2009) | 4 lines Change more tests to use import_module for the modules that should cause tests to be skipped. Also rename import_function to the more descriptive get_attribute and add a docstring. ........ r70856 | r.david.murray | 2009-03-31 14:32:17 -0400 (Tue, 31 Mar 2009) | 7 lines A few more test skips via import_module, and change import_module to return the error message produced by importlib, so that if an import in the package whose import is being wrapped is what failed the skip message will contain the name of that module instead of the name of the wrapped module. Also fixed formatting of some previous comments. ........ r70874 | r.david.murray | 2009-03-31 15:33:15 -0400 (Tue, 31 Mar 2009) | 5 lines Improve test_support.import_module docstring, remove deprecated flag from get_attribute since it isn't likely to do anything useful. ........ r70876 | r.david.murray | 2009-03-31 15:49:15 -0400 (Tue, 31 Mar 2009) | 4 lines Remove the regrtest check that turns any ImportError into a skipped test. Hopefully all modules whose imports legitimately result in a skipped test have been properly wrapped by the previous commits. ........ r70877 | r.david.murray | 2009-03-31 15:57:24 -0400 (Tue, 31 Mar 2009) | 2 lines Add NEWS entry for regrtest change. ........ 
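In use, the two helpers described above read like this (a sketch; the module and attribute names are just examples):

    import os
    from test import support

    bz2 = support.import_module('bz2')         # unittest.SkipTest if missing
    fork = support.get_attribute(os, 'fork')   # SkipTest if os has no fork

With the regrtest change, a bare ImportError no longer silently skips a test file, so skips have to be requested explicitly as above.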
Modified: python/branches/py3k/ (props changed) python/branches/py3k/Lib/test/regrtest.py python/branches/py3k/Lib/test/support.py python/branches/py3k/Lib/test/test_asynchat.py python/branches/py3k/Lib/test/test_bz2.py python/branches/py3k/Lib/test/test_crypt.py python/branches/py3k/Lib/test/test_ctypes.py python/branches/py3k/Lib/test/test_curses.py python/branches/py3k/Lib/test/test_dbm.py python/branches/py3k/Lib/test/test_fcntl.py python/branches/py3k/Lib/test/test_fork1.py python/branches/py3k/Lib/test/test_grp.py python/branches/py3k/Lib/test/test_ioctl.py python/branches/py3k/Lib/test/test_mmap.py python/branches/py3k/Lib/test/test_multiprocessing.py python/branches/py3k/Lib/test/test_nis.py python/branches/py3k/Lib/test/test_ossaudiodev.py python/branches/py3k/Lib/test/test_posix.py python/branches/py3k/Lib/test/test_pwd.py python/branches/py3k/Lib/test/test_resource.py python/branches/py3k/Lib/test/test_sqlite.py python/branches/py3k/Lib/test/test_startfile.py python/branches/py3k/Lib/test/test_tcl.py python/branches/py3k/Lib/test/test_tk.py python/branches/py3k/Lib/test/test_ttk_guionly.py python/branches/py3k/Lib/test/test_ttk_textonly.py python/branches/py3k/Lib/test/test_winreg.py python/branches/py3k/Lib/test/test_winsound.py python/branches/py3k/Lib/test/test_xml_etree_c.py python/branches/py3k/Lib/test/test_zlib.py python/branches/py3k/Misc/NEWS Modified: python/branches/py3k/Lib/test/regrtest.py ============================================================================== --- python/branches/py3k/Lib/test/regrtest.py (original) +++ python/branches/py3k/Lib/test/regrtest.py Wed Apr 1 01:16:50 2009 @@ -628,7 +628,7 @@ print(test, "skipped --", msg) sys.stdout.flush() return -2 - except (ImportError, unittest.SkipTest) as msg: + except unittest.SkipTest as msg: if not quiet: print(test, "skipped --", msg) sys.stdout.flush() Modified: python/branches/py3k/Lib/test/support.py ============================================================================== --- python/branches/py3k/Lib/test/support.py (original) +++ python/branches/py3k/Lib/test/support.py Wed Apr 1 01:16:50 2009 @@ -12,6 +12,7 @@ import shutil import warnings import unittest +import importlib __all__ = ["Error", "TestFailed", "ResourceDenied", "import_module", "verbose", "use_resources", "max_memuse", "record_original_stdout", @@ -24,7 +25,7 @@ "TransientResource", "transient_internet", "run_with_locale", "set_memlimit", "bigmemtest", "bigaddrspacetest", "BasicTestRunner", "run_unittest", "run_doctest", "threading_setup", "threading_cleanup", - "reap_children", "cpython_only", "check_impl_detail"] + "reap_children", "cpython_only", "check_impl_detail", "get_attribute"] class Error(Exception): """Base class for regression test exceptions.""" @@ -41,19 +42,32 @@ """ def import_module(name, deprecated=False): - """Import the module to be tested, raising SkipTest if it is not - available.""" + """Import and return the module to be tested, raising SkipTest if + it is not available. 
+ + If deprecated is True, any module or package deprecation messages + will be suppressed.""" with warnings.catch_warnings(): if deprecated: warnings.filterwarnings("ignore", ".+ (module|package)", DeprecationWarning) try: - module = __import__(name, level=0) - except ImportError: - raise unittest.SkipTest("No module named " + name) + module = importlib.import_module(name) + except ImportError as msg: + raise unittest.SkipTest(str(msg)) else: return module +def get_attribute(obj, name): + """Get an attribute, raising SkipTest if AttributeError is raised.""" + try: + attribute = getattr(obj, name) + except AttributeError: + raise unittest.SkipTest("module %s has no attribute %s" % ( + obj.__name__, name)) + else: + return attribute + verbose = 1 # Flag set to 0 by regrtest.py use_resources = None # Flag set to [] by regrtest.py max_memuse = 0 # Disable bigmem tests (they will still be run with Modified: python/branches/py3k/Lib/test/test_asynchat.py ============================================================================== --- python/branches/py3k/Lib/test/test_asynchat.py (original) +++ python/branches/py3k/Lib/test/test_asynchat.py Wed Apr 1 01:16:50 2009 @@ -1,10 +1,13 @@ -# test asynchat -- requires threading +# test asynchat + +from test import support + +# If this fails, the test will be skipped. +thread = support.import_module('_thread') -import _thread as thread # If this fails, we can't test this module import asyncore, asynchat, socket, threading, time import unittest import sys -from test import support HOST = support.HOST SERVER_QUIT = b'QUIT\n' Modified: python/branches/py3k/Lib/test/test_bz2.py ============================================================================== --- python/branches/py3k/Lib/test/test_bz2.py (original) +++ python/branches/py3k/Lib/test/test_bz2.py Wed Apr 1 01:16:50 2009 @@ -8,7 +8,9 @@ import subprocess import sys -import bz2 +# Skip tests if the bz2 module doesn't exist. +bz2 = support.import_module('bz2') + from bz2 import BZ2File, BZ2Compressor, BZ2Decompressor has_cmdline_bunzip2 = sys.platform not in ("win32", "os2emx") Modified: python/branches/py3k/Lib/test/test_crypt.py ============================================================================== --- python/branches/py3k/Lib/test/test_crypt.py (original) +++ python/branches/py3k/Lib/test/test_crypt.py Wed Apr 1 01:16:50 2009 @@ -1,6 +1,7 @@ from test import support import unittest -import crypt + +crypt = support.import_module('crypt') class CryptTestCase(unittest.TestCase): Modified: python/branches/py3k/Lib/test/test_ctypes.py ============================================================================== --- python/branches/py3k/Lib/test/test_ctypes.py (original) +++ python/branches/py3k/Lib/test/test_ctypes.py Wed Apr 1 01:16:50 2009 @@ -1,6 +1,10 @@ import unittest -from test.support import run_unittest +from test.support import run_unittest, import_module + +# Skip tests if _ctypes module was not built. +import_module('_ctypes') + import ctypes.test def test_main(): Modified: python/branches/py3k/Lib/test/test_curses.py ============================================================================== --- python/branches/py3k/Lib/test/test_curses.py (original) +++ python/branches/py3k/Lib/test/test_curses.py Wed Apr 1 01:16:50 2009 @@ -9,16 +9,19 @@ # Only called, not tested: getmouse(), ungetmouse() # -import curses, sys, tempfile, os -import curses.panel +import sys, tempfile, os # Optionally test curses module. 
This currently requires that the # 'curses' resource be given on the regrtest command line using the -u # option. If not available, nothing after this line will be executed. -from test.support import requires +from test.support import requires, import_module requires('curses') +# If either of these don't exist, skip the tests. +curses = import_module('curses') +curses.panel = import_module('curses.panel') + # XXX: if newterm was supported we could use it instead of initscr and not exit term = os.environ.get('TERM') if not term or term == 'unknown': Modified: python/branches/py3k/Lib/test/test_dbm.py ============================================================================== --- python/branches/py3k/Lib/test/test_dbm.py (original) +++ python/branches/py3k/Lib/test/test_dbm.py Wed Apr 1 01:16:50 2009 @@ -3,10 +3,12 @@ import os import unittest -import dbm import glob import test.support +# Skip tests if dbm module doesn't exist. +dbm = test.support.import_module('dbm') + _fname = test.support.TESTFN # Modified: python/branches/py3k/Lib/test/test_fcntl.py ============================================================================== --- python/branches/py3k/Lib/test/test_fcntl.py (original) +++ python/branches/py3k/Lib/test/test_fcntl.py Wed Apr 1 01:16:50 2009 @@ -3,12 +3,15 @@ OS/2+EMX doesn't support the file locking operations. """ -import fcntl import os import struct import sys import unittest -from test.support import verbose, TESTFN, unlink, run_unittest +from test.support import verbose, TESTFN, unlink, run_unittest, import_module + +# Skip test if no fcntl module. +fcntl = import_module('fcntl') + # TODO - Write tests for flock() and lockf(). Modified: python/branches/py3k/Lib/test/test_fork1.py ============================================================================== --- python/branches/py3k/Lib/test/test_fork1.py (original) +++ python/branches/py3k/Lib/test/test_fork1.py Wed Apr 1 01:16:50 2009 @@ -3,14 +3,12 @@ import os import time -import unittest from test.fork_wait import ForkWait -from test.support import run_unittest, reap_children +from test.support import run_unittest, reap_children, get_attribute + +# Skip test if fork does not exist.
+get_attribute(os, 'fork') -try: - os.fork -except AttributeError: - raise unittest.SkipTest("os.fork not defined -- skipping test_fork1") class ForkTest(ForkWait): def wait_impl(self, cpid): Modified: python/branches/py3k/Lib/test/test_grp.py ============================================================================== --- python/branches/py3k/Lib/test/test_grp.py (original) +++ python/branches/py3k/Lib/test/test_grp.py Wed Apr 1 01:16:50 2009 @@ -1,9 +1,10 @@ """Test script for the grp module.""" -import grp import unittest from test import support +grp = support.import_module('grp') + class GroupDatabaseTestCase(unittest.TestCase): def check_value(self, value): Modified: python/branches/py3k/Lib/test/test_ioctl.py ============================================================================== --- python/branches/py3k/Lib/test/test_ioctl.py (original) +++ python/branches/py3k/Lib/test/test_ioctl.py Wed Apr 1 01:16:50 2009 @@ -1,12 +1,9 @@ import unittest -from test.support import run_unittest +from test.support import run_unittest, import_module, get_attribute import os, struct -try: - import fcntl, termios -except ImportError: - raise unittest.SkipTest("No fcntl or termios module") -if not hasattr(termios,'TIOCGPGRP'): - raise unittest.SkipTest("termios module doesn't have TIOCGPGRP") +fcntl = import_module('fcntl') +termios = import_module('termios') +get_attribute(termios, 'TIOCGPGRP') #Can't run tests without this feature try: tty = open("/dev/tty", "r") Modified: python/branches/py3k/Lib/test/test_mmap.py ============================================================================== --- python/branches/py3k/Lib/test/test_mmap.py (original) +++ python/branches/py3k/Lib/test/test_mmap.py Wed Apr 1 01:16:50 2009 @@ -1,8 +1,10 @@ -from test.support import TESTFN, run_unittest -import mmap +from test.support import TESTFN, run_unittest, import_module import unittest import os, re, itertools +# Skip test if we can't import mmap. +mmap = import_module('mmap') + PAGESIZE = mmap.PAGESIZE class MmapTests(unittest.TestCase): Modified: python/branches/py3k/Lib/test/test_multiprocessing.py ============================================================================== --- python/branches/py3k/Lib/test/test_multiprocessing.py (original) +++ python/branches/py3k/Lib/test/test_multiprocessing.py Wed Apr 1 01:16:50 2009 @@ -17,20 +17,19 @@ import socket import random import logging +import test.support -# Work around broken sem_open implementations -try: - import multiprocessing.synchronize -except ImportError as e: - raise unittest.SkipTest(e) +# Skip tests if _multiprocessing wasn't built. +_multiprocessing = test.support.import_module('_multiprocessing') +# Skip tests if sem_open implementation is broken. +test.support.import_module('multiprocessing.synchronize') import multiprocessing.dummy import multiprocessing.connection import multiprocessing.managers import multiprocessing.heap import multiprocessing.pool -import _multiprocessing from multiprocessing import util Modified: python/branches/py3k/Lib/test/test_nis.py ============================================================================== --- python/branches/py3k/Lib/test/test_nis.py (original) +++ python/branches/py3k/Lib/test/test_nis.py Wed Apr 1 01:16:50 2009 @@ -1,6 +1,8 @@ from test import support import unittest -import nis + +# Skip test if nis module does not exist. 
+nis = support.import_module('nis') raise unittest.SkipTest("test_nis hangs on Solaris") Modified: python/branches/py3k/Lib/test/test_ossaudiodev.py ============================================================================== --- python/branches/py3k/Lib/test/test_ossaudiodev.py (original) +++ python/branches/py3k/Lib/test/test_ossaudiodev.py Wed Apr 1 01:16:50 2009 @@ -3,8 +3,9 @@ from test.support import findfile +ossaudiodev = support.import_module('ossaudiodev') + import errno -import ossaudiodev import sys import sunau import time Modified: python/branches/py3k/Lib/test/test_posix.py ============================================================================== --- python/branches/py3k/Lib/test/test_posix.py (original) +++ python/branches/py3k/Lib/test/test_posix.py Wed Apr 1 01:16:50 2009 @@ -2,17 +2,15 @@ from test import support -try: - import posix -except ImportError: - raise unittest.SkipTest("posix is not available") - import time import os import pwd import shutil import unittest import warnings + +posix = support.import_module('posix') + warnings.filterwarnings('ignore', '.* potential security risk .*', RuntimeWarning) Modified: python/branches/py3k/Lib/test/test_pwd.py ============================================================================== --- python/branches/py3k/Lib/test/test_pwd.py (original) +++ python/branches/py3k/Lib/test/test_pwd.py Wed Apr 1 01:16:50 2009 @@ -1,7 +1,7 @@ import unittest from test import support -import pwd +pwd = support.import_module('pwd') class PwdTest(unittest.TestCase): Modified: python/branches/py3k/Lib/test/test_resource.py ============================================================================== --- python/branches/py3k/Lib/test/test_resource.py (original) +++ python/branches/py3k/Lib/test/test_resource.py Wed Apr 1 01:16:50 2009 @@ -1,9 +1,9 @@ import unittest from test import support - -import resource import time +resource = support.import_module('resource') + # This test is checking a few specific problem spots with the resource module. 
class ResourceTest(unittest.TestCase): Modified: python/branches/py3k/Lib/test/test_sqlite.py ============================================================================== --- python/branches/py3k/Lib/test/test_sqlite.py (original) +++ python/branches/py3k/Lib/test/test_sqlite.py Wed Apr 1 01:16:50 2009 @@ -1,10 +1,9 @@ import unittest -from test.support import run_unittest +from test.support import run_unittest, import_module + +# Skip test if _sqlite3 module not installed +import_module('_sqlite3') -try: - import _sqlite3 -except ImportError: - raise unittest.SkipTest('no sqlite available') from sqlite3.test import (dbapi, types, userfunctions, factory, transactions, hooks, regression, dump) Modified: python/branches/py3k/Lib/test/test_startfile.py ============================================================================== --- python/branches/py3k/Lib/test/test_startfile.py (original) +++ python/branches/py3k/Lib/test/test_startfile.py Wed Apr 1 01:16:50 2009 @@ -9,9 +9,11 @@ import unittest from test import support +import os +from os import path + +startfile = support.get_attribute(os, 'startfile') -# use this form so that the test is skipped when startfile is not available: -from os import startfile, path class TestCase(unittest.TestCase): def test_nonexisting(self): Modified: python/branches/py3k/Lib/test/test_tcl.py ============================================================================== --- python/branches/py3k/Lib/test/test_tcl.py (original) +++ python/branches/py3k/Lib/test/test_tcl.py Wed Apr 1 01:16:50 2009 @@ -2,8 +2,11 @@ import unittest import os -import _tkinter from test import support + +# Skip this test if the _tkinter module wasn't built. +_tkinter = support.import_module('_tkinter') + from tkinter import Tcl from _tkinter import TclError Modified: python/branches/py3k/Lib/test/test_tk.py ============================================================================== --- python/branches/py3k/Lib/test/test_tk.py (original) +++ python/branches/py3k/Lib/test/test_tk.py Wed Apr 1 01:16:50 2009 @@ -3,6 +3,11 @@ from test import support import unittest +# Skip test if _tkinter wasn't built. +support.import_module('_tkinter') + +import tkinter + try: tkinter.Button() except tkinter.TclError as msg: Modified: python/branches/py3k/Lib/test/test_ttk_guionly.py ============================================================================== --- python/branches/py3k/Lib/test/test_ttk_guionly.py (original) +++ python/branches/py3k/Lib/test/test_ttk_guionly.py Wed Apr 1 01:16:50 2009 @@ -1,11 +1,15 @@ import os import sys -from tkinter import ttk -from tkinter.test import runtktests import unittest -from _tkinter import TclError from test import support +# Skip this test if _tkinter wasn't built. +support.import_module('_tkinter') + +from _tkinter import TclError +from tkinter import ttk +from tkinter.test import runtktests + try: ttk.Button() except TclError as msg: Modified: python/branches/py3k/Lib/test/test_ttk_textonly.py ============================================================================== --- python/branches/py3k/Lib/test/test_ttk_textonly.py (original) +++ python/branches/py3k/Lib/test/test_ttk_textonly.py Wed Apr 1 01:16:50 2009 @@ -1,6 +1,10 @@ import os import sys from test import support + +# Skip this test if _tkinter does not exist. 
+support.import_module('_tkinter') + from tkinter.test import runtktests def test_main(): Modified: python/branches/py3k/Lib/test/test_winreg.py ============================================================================== --- python/branches/py3k/Lib/test/test_winreg.py (original) +++ python/branches/py3k/Lib/test/test_winreg.py Wed Apr 1 01:16:50 2009 @@ -2,12 +2,15 @@ # Test the windows specific win32reg module. # Only win32reg functions not hit here: FlushKey, LoadKey and SaveKey -from winreg import * import os, sys import unittest - from test import support +# Do this first so test will be skipped if module doesn't exist +support.import_module('winreg') +# Now import everything +from winreg import * + test_key_name = "SOFTWARE\\Python Registry Test Key - Delete Me" test_data = [ Modified: python/branches/py3k/Lib/test/test_winsound.py ============================================================================== --- python/branches/py3k/Lib/test/test_winsound.py (original) +++ python/branches/py3k/Lib/test/test_winsound.py Wed Apr 1 01:16:50 2009 @@ -3,10 +3,12 @@ import unittest from test import support support.requires('audio') -import winsound, time +import time import os import subprocess +winsound = support.import_module('winsound') + class BeepTest(unittest.TestCase): # As with PlaySoundTest, incorporate the _have_soundcard() check Modified: python/branches/py3k/Lib/test/test_xml_etree_c.py ============================================================================== --- python/branches/py3k/Lib/test/test_xml_etree_c.py (original) +++ python/branches/py3k/Lib/test/test_xml_etree_c.py Wed Apr 1 01:16:50 2009 @@ -5,7 +5,7 @@ from test import support -from xml.etree import cElementTree as ET +ET = support.import_module('xml.etree.cElementTree') SAMPLE_XML = """ Modified: python/branches/py3k/Lib/test/test_zlib.py ============================================================================== --- python/branches/py3k/Lib/test/test_zlib.py (original) +++ python/branches/py3k/Lib/test/test_zlib.py Wed Apr 1 01:16:50 2009 @@ -1,9 +1,10 @@ import unittest from test import support -import zlib import binascii import random +zlib = support.import_module('zlib') + class ChecksumTestCase(unittest.TestCase): # checksum test cases Modified: python/branches/py3k/Misc/NEWS ============================================================================== --- python/branches/py3k/Misc/NEWS (original) +++ python/branches/py3k/Misc/NEWS Wed Apr 1 01:16:50 2009 @@ -725,6 +725,10 @@ Tests ----- +- regrtest no longer treats ImportError as equivalent to SkipTest. Imports + that should cause a test to be skipped are now done using import_module + from test support, which does the conversion. + - Issue #5083: New 'gui' resource for regrtest. From python-checkins at python.org Wed Apr 1 01:22:11 2009 From: python-checkins at python.org (georg.brandl) Date: Wed, 1 Apr 2009 01:22:11 +0200 (CEST) Subject: [Python-checkins] r70929 - svn:log Message-ID: <20090331232212.1F3121E4146@bag.python.org> Author: georg.brandl Revision: 70929 Property Name: svn:log Action: modified Property diff: --- old property value +++ new property value @@ -1,4 +1,4 @@ -Blocked revisions 70734,70775,70856,70874,70876-70877 via svnmerge +Merged revisions 70734,70775,70856,70874,70876-70877 via svnmerge ........ 
r70734 | r.david.murray | 2009-03-30 15:04:00 -0400 (Mon, 30 Mar 2009) | 7 lines From python-checkins at python.org Wed Apr 1 01:45:39 2009 From: python-checkins at python.org (r.david.murray) Date: Wed, 1 Apr 2009 01:45:39 +0200 (CEST) Subject: [Python-checkins] r70930 - in python/trunk/Lib/test: test_sqlite.py test_wait4.py Message-ID: <20090331234539.360F31E406D@bag.python.org> Author: r.david.murray Date: Wed Apr 1 01:45:39 2009 New Revision: 70930 Log: Fix Windows test skip error revealed by buildbot. Also a comment spelling correction in a previously fixed test. Modified: python/trunk/Lib/test/test_sqlite.py python/trunk/Lib/test/test_wait4.py Modified: python/trunk/Lib/test/test_sqlite.py ============================================================================== --- python/trunk/Lib/test/test_sqlite.py (original) +++ python/trunk/Lib/test/test_sqlite.py Wed Apr 1 01:45:39 2009 @@ -1,7 +1,7 @@ import unittest from test.test_support import run_unittest, import_module -#Skip test of _sqlite3 module not installed +# Skip test if _sqlite3 module was not built. import_module('_sqlite3') from sqlite3.test import (dbapi, types, userfunctions, py25tests, Modified: python/trunk/Lib/test/test_wait4.py ============================================================================== --- python/trunk/Lib/test/test_wait4.py (original) +++ python/trunk/Lib/test/test_wait4.py Wed Apr 1 01:45:39 2009 @@ -4,17 +4,12 @@ import os import time from test.fork_wait import ForkWait -from test.test_support import run_unittest, reap_children +from test.test_support import run_unittest, reap_children, get_attribute -try: - os.fork -except AttributeError: - raise unittest.SkipTest, "os.fork not defined -- skipping test_wait4" +# If either of these do not exist, skip this test. 
+get_attribute(os, 'fork') +get_attribute(os, 'wait4') -try: - os.wait4 -except AttributeError: - raise unittest.SkipTest, "os.wait4 not defined -- skipping test_wait4" class Wait4Test(ForkWait): def wait_impl(self, cpid): From python-checkins at python.org Wed Apr 1 01:46:48 2009 From: python-checkins at python.org (jack.diederich) Date: Wed, 1 Apr 2009 01:46:48 +0200 (CEST) Subject: [Python-checkins] r70931 - in python/trunk: Lib/test/test_functools.py Misc/ACKS Misc/NEWS Modules/_functoolsmodule.c Message-ID: <20090331234648.E4B091E406D@bag.python.org> Author: jack.diederich Date: Wed Apr 1 01:46:48 2009 New Revision: 70931 Log: #5228: add pickle support to functools.partial Modified: python/trunk/Lib/test/test_functools.py python/trunk/Misc/ACKS python/trunk/Misc/NEWS python/trunk/Modules/_functoolsmodule.c Modified: python/trunk/Lib/test/test_functools.py ============================================================================== --- python/trunk/Lib/test/test_functools.py (original) +++ python/trunk/Lib/test/test_functools.py Wed Apr 1 01:46:48 2009 @@ -2,6 +2,7 @@ import unittest from test import test_support from weakref import proxy +import pickle @staticmethod def PythonPartial(func, *args, **keywords): @@ -19,6 +20,10 @@ """capture all positional and keyword arguments""" return args, kw +def signature(part): + """ return the signature of a partial object """ + return (part.func, part.args, part.keywords, part.__dict__) + class TestPartial(unittest.TestCase): thetype = functools.partial @@ -140,6 +145,12 @@ join = self.thetype(''.join) self.assertEqual(join(data), '0123456789') + def test_pickle(self): + f = self.thetype(signature, 'asdf', bar=True) + f.add_something_to__dict__ = True + f_copy = pickle.loads(pickle.dumps(f)) + self.assertEqual(signature(f), signature(f_copy)) + class PartialSubclass(functools.partial): pass @@ -147,11 +158,13 @@ thetype = PartialSubclass - class TestPythonPartial(TestPartial): thetype = PythonPartial + # the python version isn't picklable + def test_pickle(self): pass + class TestUpdateWrapper(unittest.TestCase): def check_wrapper(self, wrapper, wrapped, Modified: python/trunk/Misc/ACKS ============================================================================== --- python/trunk/Misc/ACKS (original) +++ python/trunk/Misc/ACKS Wed Apr 1 01:46:48 2009 @@ -168,6 +168,7 @@ Raghuram Devarakonda Toby Dickenson Mark Dickinson +Jack Diederich Yves Dionne Daniel Dittmar Jaromir Dolecek Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Wed Apr 1 01:46:48 2009 @@ -710,6 +710,8 @@ - Issue #4396: The parser module now correctly validates the with statement. +- Issue #5228: functools.partial objects can now be pickled. + Tests ----- Modified: python/trunk/Modules/_functoolsmodule.c ============================================================================== --- python/trunk/Modules/_functoolsmodule.c (original) +++ python/trunk/Modules/_functoolsmodule.c Wed Apr 1 01:46:48 2009 @@ -274,6 +274,53 @@ {NULL} /* Sentinel */ }; +/* Pickle strategy: + __reduce__ by itself doesn't support getting kwargs in the unpickle + operation so we define a __setstate__ that replaces all the information + about the partial. If we only replaced part of it someone would use + it as a hook to do strange things.
+ */ + +PyObject * +partial_reduce(partialobject *pto, PyObject *unused) +{ + return Py_BuildValue("O(O)(OOOO)", Py_TYPE(pto), pto->fn, pto->fn, + pto->args, pto->kw, + pto->dict ? pto->dict : Py_None); +} + +PyObject * +partial_setstate(partialobject *pto, PyObject *args) +{ + PyObject *fn, *fnargs, *kw, *dict; + if (!PyArg_ParseTuple(args, "(OOOO):__setstate__", + &fn, &fnargs, &kw, &dict)) + return NULL; + Py_XDECREF(pto->fn); + Py_XDECREF(pto->args); + Py_XDECREF(pto->kw); + Py_XDECREF(pto->dict); + pto->fn = fn; + pto->args = fnargs; + pto->kw = kw; + if (dict != Py_None) { + pto->dict = dict; + Py_INCREF(dict); + } else { + pto->dict = NULL; + } + Py_INCREF(fn); + Py_INCREF(fnargs); + Py_INCREF(kw); + Py_RETURN_NONE; +} + +static PyMethodDef partial_methods[] = { + {"__reduce__", (PyCFunction)partial_reduce, METH_NOARGS}, + {"__setstate__", (PyCFunction)partial_setstate, METH_VARARGS}, + {NULL, NULL} /* sentinel */ +}; + static PyTypeObject partial_type = { PyVarObject_HEAD_INIT(NULL, 0) "functools.partial", /* tp_name */ @@ -304,7 +351,7 @@ offsetof(partialobject, weakreflist), /* tp_weaklistoffset */ 0, /* tp_iter */ 0, /* tp_iternext */ - 0, /* tp_methods */ + partial_methods, /* tp_methods */ partial_memberlist, /* tp_members */ partial_getsetlist, /* tp_getset */ 0, /* tp_base */ From python-checkins at python.org Wed Apr 1 01:50:31 2009 From: python-checkins at python.org (r.david.murray) Date: Wed, 1 Apr 2009 01:50:31 +0200 (CEST) Subject: [Python-checkins] r70932 - in python/branches/py3k: Lib/test/test_wait4.py Message-ID: <20090331235031.5A3751E406D@bag.python.org> Author: r.david.murray Date: Wed Apr 1 01:50:31 2009 New Revision: 70932 Log: Merged revisions 70930 via svnmerge from svn+ssh://pythondev at svn.python.org/python/trunk ........ r70930 | r.david.murray | 2009-03-31 19:45:39 -0400 (Tue, 31 Mar 2009) | 3 lines Fix Windows test skip error revealed by buildbot. Also a comment spelling correction in a previously fixed test. ........ Modified: python/branches/py3k/ (props changed) python/branches/py3k/Lib/test/test_wait4.py Modified: python/branches/py3k/Lib/test/test_wait4.py ============================================================================== --- python/branches/py3k/Lib/test/test_wait4.py (original) +++ python/branches/py3k/Lib/test/test_wait4.py Wed Apr 1 01:50:31 2009 @@ -4,17 +4,12 @@ import os import time from test.fork_wait import ForkWait -from test.support import run_unittest, reap_children +from test.support import run_unittest, reap_children, get_attribute -try: - os.fork -except AttributeError: - raise unittest.SkipTest("os.fork not defined -- skipping test_wait4") +# If either of these do not exist, skip this test. +get_attribute(os, 'fork') +get_attribute(os, 'wait4') -try: - os.wait4 -except AttributeError: - raise unittest.SkipTest("os.wait4 not defined -- skipping test_wait4") class Wait4Test(ForkWait): def wait_impl(self, cpid): From python-checkins at python.org Wed Apr 1 02:04:33 2009 From: python-checkins at python.org (georg.brandl) Date: Wed, 1 Apr 2009 02:04:33 +0200 (CEST) Subject: [Python-checkins] r70933 - in python/trunk: Lib/test/test_sys.py Misc/NEWS Message-ID: <20090401000433.A28E01E42F3@bag.python.org> Author: georg.brandl Date: Wed Apr 1 02:04:33 2009 New Revision: 70933 Log: Issue #5635: Fix running test_sys with tracing enabled. 
Modified: python/trunk/Lib/test/test_sys.py python/trunk/Misc/NEWS Modified: python/trunk/Lib/test/test_sys.py ============================================================================== --- python/trunk/Lib/test/test_sys.py (original) +++ python/trunk/Lib/test/test_sys.py Wed Apr 1 02:04:33 2009 @@ -221,6 +221,11 @@ sys.setdlopenflags(oldflags) def test_refcount(self): + # n here must be a global in order for this test to pass while + # tracing with a python function. Tracing calls PyFrame_FastToLocals + # which will add a copy of any locals to the frame object, causing + # the reference count to increase by 2 instead of 1. + global n self.assertRaises(TypeError, sys.getrefcount) c = sys.getrefcount(None) n = None Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Wed Apr 1 02:04:33 2009 @@ -1,4 +1,5 @@ -+++++++++++ Python News ++++++++++++ +Python News +++++++++++ (editors: check NEWS.help for information about editing NEWS using ReST.) @@ -715,6 +716,8 @@ Tests ----- +- Issue #5635: Fix running test_sys with tracing enabled. + - regrtest no longer treats ImportError as equivalent to SkipTest. Imports that should cause a test to be skipped are now done using import_module from test support, which does the conversion. From buildbot at python.org Wed Apr 1 02:12:51 2009 From: buildbot at python.org (buildbot at python.org) Date: Wed, 01 Apr 2009 00:12:51 +0000 Subject: [Python-checkins] buildbot failure in ppc Debian unstable trunk Message-ID: <20090401001251.C2EFA1E406D@bag.python.org> The Buildbot has detected a new failure of ppc Debian unstable trunk. Full details are available at: http://www.python.org/dev/buildbot/all/ppc%20Debian%20unstable%20trunk/builds/297 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: benjamin.peterson,georg.brandl,gregory.p.smith,hirokazu.yamamoto,josiah.carlson,r.david.murray,tarek.ziade BUILD FAILED: failed test Excerpt from the test logfile: 2 tests failed: test_asyncore test_multiprocessing ====================================================================== FAIL: test_readwrite (test.test_asyncore.HelperFunctionTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_asyncore.py", line 144, in test_readwrite self.assertEqual(tobj.read, True) AssertionError: False != True ====================================================================== FAIL: test_unhandled (test.test_asyncore.DispatcherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_asyncore.py", line 321, in test_unhandled self.assertEquals(lines, expected) AssertionError: First differing element 0: ====================================================================== FAIL: test_event (test.test_multiprocessing.WithThreadsTestEvent) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_multiprocessing.py", line 754, in test_event self.assertEqual(wait(0.0), None) AssertionError: False != None ====================================================================== FAIL: test_event 
(test.test_multiprocessing.WithManagerTestEvent) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/pybot/buildarea/trunk.klose-debian-ppc/build/Lib/test/test_multiprocessing.py", line 754, in test_event self.assertEqual(wait(0.0), None) AssertionError: False != None make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Wed Apr 1 02:38:32 2009 From: buildbot at python.org (buildbot at python.org) Date: Wed, 01 Apr 2009 00:38:32 +0000 Subject: [Python-checkins] buildbot failure in x86 XP-4 trunk Message-ID: <20090401003832.E4A7E1E4066@bag.python.org> The Buildbot has detected a new failure of x86 XP-4 trunk. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP-4%20trunk/builds/2005 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: bolen-windows Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: georg.brandl,gregory.p.smith,hirokazu.yamamoto,jesse.noller,josiah.carlson,r.david.murray,raymond.hettinger BUILD FAILED: failed failed slave lost sincerely, -The Buildbot From python-checkins at python.org Wed Apr 1 03:28:12 2009 From: python-checkins at python.org (josiah.carlson) Date: Wed, 1 Apr 2009 03:28:12 +0200 (CEST) Subject: [Python-checkins] r70934 - in python/trunk: Doc/library/asyncore.rst Lib/asyncore.py Lib/test/test_asyncore.py Message-ID: <20090401012812.075C71E4090@bag.python.org> Author: josiah.carlson Date: Wed Apr 1 03:28:11 2009 New Revision: 70934 Log: Fix for failing asyncore tests. Modified: python/trunk/Doc/library/asyncore.rst python/trunk/Lib/asyncore.py python/trunk/Lib/test/test_asyncore.py Modified: python/trunk/Doc/library/asyncore.rst ============================================================================== --- python/trunk/Doc/library/asyncore.rst (original) +++ python/trunk/Doc/library/asyncore.rst Wed Apr 1 03:28:11 2009 @@ -81,7 +81,8 @@ +----------------------+----------------------------------------+ | Event | Description | +======================+========================================+ - | ``handle_connect()`` | Implied by the first write event | + | ``handle_connect()`` | Implied by the first read or write | + | | event | +----------------------+----------------------------------------+ | ``handle_close()`` | Implied by a read event with no data | | | available | Modified: python/trunk/Lib/asyncore.py ============================================================================== --- python/trunk/Lib/asyncore.py (original) +++ python/trunk/Lib/asyncore.py Wed Apr 1 03:28:11 2009 @@ -401,7 +401,7 @@ sys.stderr.write('log: %s\n' % str(message)) def log_info(self, message, type='info'): - if __debug__ or type not in self.ignore_log_types: + if type not in self.ignore_log_types: print '%s: %s' % (type, message) def handle_read_event(self): Modified: python/trunk/Lib/test/test_asyncore.py ============================================================================== --- python/trunk/Lib/test/test_asyncore.py (original) +++ python/trunk/Lib/test/test_asyncore.py Wed Apr 1 03:28:11 2009 @@ -298,6 +298,7 @@ def test_unhandled(self): d = asyncore.dispatcher() + d.ignore_log_types = () # capture output of dispatcher.log_info() (to stdout via print) fp = StringIO() @@ -313,7 +314,7 @@ sys.stdout = stdout lines = fp.getvalue().splitlines() - expected = ['warning: unhandled exception', + expected = ['warning: unhandled incoming priority event', 'warning: unhandled read event', 'warning: unhandled 
write event', 'warning: unhandled connect event', From buildbot at python.org Wed Apr 1 03:33:06 2009 From: buildbot at python.org (buildbot at python.org) Date: Wed, 01 Apr 2009 01:33:06 +0000 Subject: [Python-checkins] buildbot failure in x86 XP-4 3.x Message-ID: <20090401013306.C56251E4090@bag.python.org> The Buildbot has detected a new failure of x86 XP-4 3.x. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP-4%203.x/builds/409 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: bolen-windows Build Reason: Build Source Stamp: [branch branches/py3k] HEAD Blamelist: benjamin.peterson,georg.brandl,jesse.noller,martin.v.loewis,r.david.murray,raymond.hettinger,tarek.ziade BUILD FAILED: failed compile sincerely, -The Buildbot From buildbot at python.org Wed Apr 1 03:35:44 2009 From: buildbot at python.org (buildbot at python.org) Date: Wed, 01 Apr 2009 01:35:44 +0000 Subject: [Python-checkins] buildbot failure in amd64 gentoo 3.0 Message-ID: <20090401013545.05CBB1E4070@bag.python.org> The Buildbot has detected a new failure of amd64 gentoo 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/amd64%20gentoo%203.0/builds/216 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-amd64 Build Reason: Build Source Stamp: [branch branches/release30-maint] HEAD Blamelist: jesse.noller,tarek.ziade BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_smtplib make: *** [buildbottest] Error 1 sincerely, -The Buildbot From buildbot at python.org Wed Apr 1 04:19:07 2009 From: buildbot at python.org (buildbot at python.org) Date: Wed, 01 Apr 2009 02:19:07 +0000 Subject: [Python-checkins] buildbot failure in x86 osx.5 3.0 Message-ID: <20090401021908.2487E1E405D@bag.python.org> The Buildbot has detected a new failure of x86 osx.5 3.0. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20osx.5%203.0/builds/237 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: heller-x86-osx5 Build Reason: Build Source Stamp: [branch branches/release30-maint] HEAD Blamelist: jesse.noller,tarek.ziade BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From python-checkins at python.org Wed Apr 1 04:35:57 2009 From: python-checkins at python.org (jeremy.hylton) Date: Wed, 1 Apr 2009 04:35:57 +0200 (CEST) Subject: [Python-checkins] r70935 - python/branches/py3k/Lib/http/client.py Message-ID: <20090401023557.278761E40E0@bag.python.org> Author: jeremy.hylton Date: Wed Apr 1 04:35:56 2009 New Revision: 70935 Log: An HTTPResponse is, by its nature, readable. Modified: python/branches/py3k/Lib/http/client.py Modified: python/branches/py3k/Lib/http/client.py ============================================================================== --- python/branches/py3k/Lib/http/client.py (original) +++ python/branches/py3k/Lib/http/client.py Wed Apr 1 04:35:56 2009 @@ -469,6 +469,9 @@ def flush(self): self.fp.flush() + def readable(self): + return True + # End of "raw stream" methods def isclosed(self): From buildbot at python.org Wed Apr 1 04:39:48 2009 From: buildbot at python.org (buildbot at python.org) Date: Wed, 01 Apr 2009 02:39:48 +0000 Subject: [Python-checkins] buildbot failure in sparc solaris10 gcc 3.0 Message-ID: <20090401023948.A5FF81E4090@bag.python.org> The Buildbot has detected a new failure of sparc solaris10 gcc 3.0. 
Full details are available at: http://www.python.org/dev/buildbot/all/sparc%20solaris10%20gcc%203.0/builds/239 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: loewis-sun Build Reason: Build Source Stamp: [branch branches/release30-maint] HEAD Blamelist: jesse.noller,tarek.ziade BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_posix ====================================================================== ERROR: test_getcwd_long_pathnames (test.test_posix.PosixTester) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home2/buildbot/slave/3.0.loewis-sun/build/Lib/test/test_posix.py", line 252, in test_getcwd_long_pathnames support.rmtree(base_path) File "/home2/buildbot/slave/3.0.loewis-sun/build/Lib/test/support.py", line 98, in rmtree shutil.rmtree(path) File "/home2/buildbot/slave/3.0.loewis-sun/build/Lib/shutil.py", line 225, in rmtree onerror(os.rmdir, path, sys.exc_info()) File "/home2/buildbot/slave/3.0.loewis-sun/build/Lib/shutil.py", line 223, in rmtree os.rmdir(path) OSError: [Errno 22] Invalid argument: '/home2/buildbot/slave/3.0.loewis-sun/build/@test.getcwd/@test.getcwd' sincerely, -The Buildbot From buildbot at python.org Wed Apr 1 04:46:20 2009 From: buildbot at python.org (buildbot at python.org) Date: Wed, 01 Apr 2009 02:46:20 +0000 Subject: [Python-checkins] buildbot failure in x86 XP-4 2.6 Message-ID: <20090401024620.6AE961E43D4@bag.python.org> The Buildbot has detected a new failure of x86 XP-4 2.6. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20XP-4%202.6/builds/167 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: bolen-windows Build Reason: Build Source Stamp: [branch branches/release26-maint] HEAD Blamelist: benjamin.peterson,jesse.noller,tarek.ziade BUILD FAILED: failed compile sincerely, -The Buildbot From python-checkins at python.org Wed Apr 1 05:21:43 2009 From: python-checkins at python.org (r.david.murray) Date: Wed, 1 Apr 2009 05:21:43 +0200 (CEST) Subject: [Python-checkins] r70936 - in python/trunk: Lib/locale.py Lib/test/test_locale.py Misc/ACKS Misc/NEWS Message-ID: <20090401032143.E38BC1E4515@bag.python.org> Author: r.david.murray Date: Wed Apr 1 05:21:43 2009 New Revision: 70936 Log: Fix issue 2522. locale.format now checks that it is passed exactly one pattern, which avoids mysterious errors where it had seemed to fail to do localization. Modified: python/trunk/Lib/locale.py python/trunk/Lib/test/test_locale.py python/trunk/Misc/ACKS python/trunk/Misc/NEWS Modified: python/trunk/Lib/locale.py ============================================================================== --- python/trunk/Lib/locale.py (original) +++ python/trunk/Lib/locale.py Wed Apr 1 05:21:43 2009 @@ -11,7 +11,11 @@ """ -import sys, encodings, encodings.aliases +import sys +import encodings +import encodings.aliases +import re +import operator import functools # Try importing the _locale module. @@ -166,6 +170,9 @@ amount -= 1 return s[lpos:rpos+1] +_percent_re = re.compile(r'%(?:\((?P.*?)\))?' + r'(?P[-#0-9 +*.hlL]*?)[eEfFgGdiouxXcrs%]') + def format(percent, value, grouping=False, monetary=False, *additional): """Returns the locale-aware substitution of a %? specifier (percent). 
@@ -173,9 +180,13 @@ additional is for format strings which contain one or more '*' modifiers.""" # this is only for one-percent-specifier strings and this should be checked - if percent[0] != '%': - raise ValueError("format() must be given exactly one %char " - "format specifier") + match = _percent_re.match(percent) + if not match or len(match.group())!= len(percent): + raise ValueError(("format() must be given exactly one %%char " + "format specifier, %s not valid") % repr(percent)) + return _format(percent, value, grouping, monetary, *additional) + +def _format(percent, value, grouping=False, monetary=False, *additional): if additional: formatted = percent % ((value,) + additional) else: @@ -199,10 +210,6 @@ formatted = _strip_padding(formatted, seps) return formatted -import re, operator -_percent_re = re.compile(r'%(?:\((?P.*?)\))?' - r'(?P[-#0-9 +*.hlL]*?)[eEfFgGdiouxXcrs%]') - def format_string(f, val, grouping=False): """Formats a string in the same way that the % formatting would use, but takes the current locale into account. Modified: python/trunk/Lib/test/test_locale.py ============================================================================== --- python/trunk/Lib/test/test_locale.py (original) +++ python/trunk/Lib/test/test_locale.py Wed Apr 1 05:21:43 2009 @@ -221,6 +221,19 @@ (self.sep, self.sep)) +class TestFormatPatternArg(unittest.TestCase): + # Test handling of pattern argument of format + + def test_onlyOnePattern(self): + # Issue 2522: accept exactly one % pattern, and no extra chars. + self.assertRaises(ValueError, locale.format, "%f\n", 'foo') + self.assertRaises(ValueError, locale.format, "%f\r", 'foo') + self.assertRaises(ValueError, locale.format, "%f\r\n", 'foo') + self.assertRaises(ValueError, locale.format, " %f", 'foo') + self.assertRaises(ValueError, locale.format, "%fg", 'foo') + self.assertRaises(ValueError, locale.format, "%^g", 'foo') + + class TestNumberFormatting(BaseLocalizedTest, EnUSNumberFormatting): # Test number formatting with a real English locale. @@ -351,6 +364,7 @@ def test_main(): tests = [ TestMiscellaneous, + TestFormatPatternArg, TestEnUSNumberFormatting, TestCNumberFormatting, TestFrFRNumberFormatting, Modified: python/trunk/Misc/ACKS ============================================================================== --- python/trunk/Misc/ACKS (original) +++ python/trunk/Misc/ACKS Wed Apr 1 05:21:43 2009 @@ -480,6 +480,7 @@ Chad Miller Jay T. Miller Roman Milner +Andrii V. Mishkovskyi Dustin J. Mitchell Dom Mitchell Doug Moen @@ -490,6 +491,7 @@ Sjoerd Mullender Sape Mullender Michael Muller +R. David Murray Piotr Meyer John Nagle Takahiro Nakayama Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Wed Apr 1 05:21:43 2009 @@ -200,6 +200,10 @@ Library ------- +- Issue #2522: locale.format now checks its first argument to ensure it has + been passed only one pattern, avoiding mysterious errors where it appeared + that it was failing to do localization. + - Issue #5583: Added optional Extensions in Distutils. Initial patch by Georg Brandl. 
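A quick illustration of the stricter argument checking, modeled on the new TestFormatPatternArg cases and assuming the default "C" locale:

    import locale

    print(locale.format("%.2f", 1234.5))    # exactly one specifier: fine
    try:
        locale.format("%.2f kg", 1234.5)    # extra text is now rejected
    except ValueError as exc:
        # "format() must be given exactly one %char format specifier, ..."
        print(exc)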
From python-checkins at python.org Wed Apr 1 05:35:21 2009 From: python-checkins at python.org (brett.cannon) Date: Wed, 1 Apr 2009 05:35:21 +0200 (CEST) Subject: [Python-checkins] r70937 - python/branches/py3k/Lib/importlib/_bootstrap.py Message-ID: <20090401033521.27CCD1E4090@bag.python.org> Author: brett.cannon Date: Wed Apr 1 05:35:20 2009 New Revision: 70937 Log: Rip out a useless method that the superclass implements properly. Modified: python/branches/py3k/Lib/importlib/_bootstrap.py Modified: python/branches/py3k/Lib/importlib/_bootstrap.py ============================================================================== --- python/branches/py3k/Lib/importlib/_bootstrap.py (original) +++ python/branches/py3k/Lib/importlib/_bootstrap.py Wed Apr 1 05:35:20 2009 @@ -475,25 +475,6 @@ # Not a property so that it is easy to override. return self._find_path(imp.PY_SOURCE) - @_check_name - def get_source(self, fullname): - """Return the source for the module as a string. - - Return None if the source is not available. Raise ImportError if the - laoder cannot handle the specified module. - - """ - source_path = self._source_path(name) - if source_path is None: - return None - import tokenize - with _closing(_io.FileIO(source_path, 'r')) as file: # Assuming bytes. - encoding, lines = tokenize.detect_encoding(file.readline) - # XXX Will fail when passed to compile() if the encoding is - # anything other than UTF-8. - return open(source_path, encoding=encoding).read() - - def get_data(self, path): """Return the data from path as raw bytes.""" return _io.FileIO(path, 'r').read() # Assuming bytes. From python-checkins at python.org Wed Apr 1 05:42:00 2009 From: python-checkins at python.org (r.david.murray) Date: Wed, 1 Apr 2009 05:42:00 +0200 (CEST) Subject: [Python-checkins] r70938 - in python/branches/py3k: Lib/locale.py Lib/test/test_locale.py Misc/ACKS Misc/NEWS Message-ID: <20090401034200.7A5331E4090@bag.python.org> Author: r.david.murray Date: Wed Apr 1 05:42:00 2009 New Revision: 70938 Log: Merged revisions 70936 via svnmerge from svn+ssh://pythondev at svn.python.org/python/trunk ........ r70936 | r.david.murray | 2009-03-31 23:21:43 -0400 (Tue, 31 Mar 2009) | 4 lines Fix issue 2522. locale.format now checks that it is passed exactly one pattern, which avoids mysterious errors where it had seemed to fail to do localization. ........ Modified: python/branches/py3k/ (props changed) python/branches/py3k/Lib/locale.py python/branches/py3k/Lib/test/test_locale.py python/branches/py3k/Misc/ACKS python/branches/py3k/Misc/NEWS Modified: python/branches/py3k/Lib/locale.py ============================================================================== --- python/branches/py3k/Lib/locale.py (original) +++ python/branches/py3k/Lib/locale.py Wed Apr 1 05:42:00 2009 @@ -11,7 +11,11 @@ """ -import sys, encodings, encodings.aliases +import sys +import encodings +import encodings.aliases +import re +import collections from builtins import str as _builtin_str import functools @@ -173,6 +177,9 @@ amount -= 1 return s[lpos:rpos+1] +_percent_re = re.compile(r'%(?:\((?P.*?)\))?' + r'(?P[-#0-9 +*.hlL]*?)[eEfFgGdiouxXcrs%]') + def format(percent, value, grouping=False, monetary=False, *additional): """Returns the locale-aware substitution of a %? specifier (percent). 
@@ -180,9 +187,13 @@ additional is for format strings which contain one or more '*' modifiers.""" # this is only for one-percent-specifier strings and this should be checked - if percent[0] != '%': - raise ValueError("format() must be given exactly one %char " - "format specifier") + match = _percent_re.match(percent) + if not match or len(match.group())!= len(percent): + raise ValueError(("format() must be given exactly one %%char " + "format specifier, %s not valid") % repr(percent)) + return _format(percent, value, grouping, monetary, *additional) + +def _format(percent, value, grouping=False, monetary=False, *additional): if additional: formatted = percent % ((value,) + additional) else: @@ -206,10 +217,6 @@ formatted = _strip_padding(formatted, seps) return formatted -import re, collections -_percent_re = re.compile(r'%(?:\((?P.*?)\))?' - r'(?P[-#0-9 +*.hlL]*?)[eEfFgGdiouxXcrs%]') - def format_string(f, val, grouping=False): """Formats a string in the same way that the % formatting would use, but takes the current locale into account. Modified: python/branches/py3k/Lib/test/test_locale.py ============================================================================== --- python/branches/py3k/Lib/test/test_locale.py (original) +++ python/branches/py3k/Lib/test/test_locale.py Wed Apr 1 05:42:00 2009 @@ -219,6 +219,19 @@ (self.sep, self.sep)) +class TestFormatPatternArg(unittest.TestCase): + # Test handling of pattern argument of format + + def test_onlyOnePattern(self): + # Issue 2522: accept exactly one % pattern, and no extra chars. + self.assertRaises(ValueError, locale.format, "%f\n", 'foo') + self.assertRaises(ValueError, locale.format, "%f\r", 'foo') + self.assertRaises(ValueError, locale.format, "%f\r\n", 'foo') + self.assertRaises(ValueError, locale.format, " %f", 'foo') + self.assertRaises(ValueError, locale.format, "%fg", 'foo') + self.assertRaises(ValueError, locale.format, "%^g", 'foo') + + class TestNumberFormatting(BaseLocalizedTest, EnUSNumberFormatting): # Test number formatting with a real English locale. @@ -314,6 +327,7 @@ def test_main(): tests = [ TestMiscellaneous, + TestFormatPatternArg, TestEnUSNumberFormatting, TestCNumberFormatting, TestFrFRNumberFormatting, Modified: python/branches/py3k/Misc/ACKS ============================================================================== --- python/branches/py3k/Misc/ACKS (original) +++ python/branches/py3k/Misc/ACKS Wed Apr 1 05:42:00 2009 @@ -482,6 +482,7 @@ Chad Miller Jay T. Miller Roman Milner +Andrii V. Mishkovskyi Dustin J. Mitchell Dom Mitchell Doug Moen @@ -492,6 +493,7 @@ Sjoerd Mullender Sape Mullender Michael Muller +R. David Murray Piotr Meyer John Nagle Takahiro Nakayama Modified: python/branches/py3k/Misc/NEWS ============================================================================== --- python/branches/py3k/Misc/NEWS (original) +++ python/branches/py3k/Misc/NEWS Wed Apr 1 05:42:00 2009 @@ -284,6 +284,10 @@ Library ------- +- Issue #2522: locale.format now checks its first argument to ensure it has + been passed only one pattern, avoiding mysterious errors where it appeared + that it was failing to do localization. + - Issue #5583: Added optional Extensions in Distutils. Initial patch by Georg Brandl. 
From python-checkins at python.org Wed Apr 1 05:45:50 2009 From: python-checkins at python.org (jesse.noller) Date: Wed, 1 Apr 2009 05:45:50 +0200 (CEST) Subject: [Python-checkins] r70939 - in python/trunk: Doc/library/multiprocessing.rst Lib/multiprocessing/synchronize.py Lib/test/test_multiprocessing.py Message-ID: <20090401034550.A76981E4090@bag.python.org> Author: jesse.noller Date: Wed Apr 1 05:45:50 2009 New Revision: 70939 Log: Fix multiprocessing.event to match the new threading.Event API Modified: python/trunk/Doc/library/multiprocessing.rst python/trunk/Lib/multiprocessing/synchronize.py python/trunk/Lib/test/test_multiprocessing.py Modified: python/trunk/Doc/library/multiprocessing.rst ============================================================================== --- python/trunk/Doc/library/multiprocessing.rst (original) +++ python/trunk/Doc/library/multiprocessing.rst Wed Apr 1 05:45:50 2009 @@ -836,6 +836,12 @@ .. class:: Event() A clone of :class:`threading.Event`. + This method returns the state of the internal semaphore on exit, so it + will always return ``True`` except if a timeout is given and the operation + times out. + + .. versionchanged:: 2.7 + Previously, the method always returned ``None``. .. class:: Lock() Modified: python/trunk/Lib/multiprocessing/synchronize.py ============================================================================== --- python/trunk/Lib/multiprocessing/synchronize.py (original) +++ python/trunk/Lib/multiprocessing/synchronize.py Wed Apr 1 05:45:50 2009 @@ -301,5 +301,10 @@ self._flag.release() else: self._cond.wait(timeout) + + if self._flag.acquire(False): + self._flag.release() + return True + return False finally: self._cond.release() Modified: python/trunk/Lib/test/test_multiprocessing.py ============================================================================== --- python/trunk/Lib/test/test_multiprocessing.py (original) +++ python/trunk/Lib/test/test_multiprocessing.py Wed Apr 1 05:45:50 2009 @@ -749,20 +749,22 @@ # Removed temporaily, due to API shear, this does not # work with threading._Event objects. is_set == isSet - #self.assertEqual(event.is_set(), False) + self.assertEqual(event.is_set(), False) - self.assertEqual(wait(0.0), None) + # Removed, threading.Event.wait() will return the value of the __flag + # instead of None. API Shear with the semaphore backed mp.Event + self.assertEqual(wait(0.0), False) self.assertTimingAlmostEqual(wait.elapsed, 0.0) - self.assertEqual(wait(TIMEOUT1), None) + self.assertEqual(wait(TIMEOUT1), False) self.assertTimingAlmostEqual(wait.elapsed, TIMEOUT1) event.set() # See note above on the API differences - # self.assertEqual(event.is_set(), True) - self.assertEqual(wait(), None) + self.assertEqual(event.is_set(), True) + self.assertEqual(wait(), True) self.assertTimingAlmostEqual(wait.elapsed, 0.0) - self.assertEqual(wait(TIMEOUT1), None) + self.assertEqual(wait(TIMEOUT1), True) self.assertTimingAlmostEqual(wait.elapsed, 0.0) # self.assertEqual(event.is_set(), True) @@ -771,7 +773,7 @@ #self.assertEqual(event.is_set(), False) self.Process(target=self._test_event, args=(event,)).start() - self.assertEqual(wait(), None) + self.assertEqual(wait(), True) # # From buildbot at python.org Wed Apr 1 05:57:26 2009 From: buildbot at python.org (buildbot at python.org) Date: Wed, 01 Apr 2009 03:57:26 +0000 Subject: [Python-checkins] buildbot failure in ppc Debian unstable 2.6 Message-ID: <20090401035727.0D1411E4090@bag.python.org> The Buildbot has detected a new failure of ppc Debian unstable 2.6. 
Full details are available at: http://www.python.org/dev/buildbot/all/ppc%20Debian%20unstable%202.6/builds/202 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: klose-debian-ppc Build Reason: Build Source Stamp: [branch branches/release26-maint] HEAD Blamelist: jesse.noller,tarek.ziade BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_bsddb3 make: *** [buildbottest] Error 1 sincerely, -The Buildbot From python-checkins at python.org Wed Apr 1 06:21:14 2009 From: python-checkins at python.org (georg.brandl) Date: Wed, 1 Apr 2009 06:21:14 +0200 (CEST) Subject: [Python-checkins] r70940 - in python/trunk: Lib/SimpleXMLRPCServer.py Misc/NEWS Message-ID: <20090401042114.AD7571E43A1@bag.python.org> Author: georg.brandl Date: Wed Apr 1 06:21:14 2009 New Revision: 70940 Log: The SimpleXMLRPCServer's CGI handler now runs like a pony. Modified: python/trunk/Lib/SimpleXMLRPCServer.py python/trunk/Misc/NEWS Modified: python/trunk/Lib/SimpleXMLRPCServer.py ============================================================================== --- python/trunk/Lib/SimpleXMLRPCServer.py (original) +++ python/trunk/Lib/SimpleXMLRPCServer.py Wed Apr 1 06:21:14 2009 @@ -598,8 +598,12 @@ self.handle_get() else: # POST data is normally available through stdin + try: + length = int(os.environ.get('CONTENT_LENGTH', None)) + except ValueError: + length = -1 if request_text is None: - request_text = sys.stdin.read() + request_text = sys.stdin.read(length) self.handle_xmlrpc(request_text) Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Wed Apr 1 06:21:14 2009 @@ -200,6 +200,8 @@ Library ------- +- Actually make the SimpleXMLRPCServer CGI handler work. + - Issue #2522: locale.format now checks its first argument to ensure it has been passed only one pattern, avoiding mysterious errors where it appeared that it was failing to do localization. From python-checkins at python.org Wed Apr 1 06:27:09 2009 From: python-checkins at python.org (jack.diederich) Date: Wed, 1 Apr 2009 06:27:09 +0200 (CEST) Subject: [Python-checkins] r70941 - in python/branches/py3k: Lib/test/test_functools.py Misc/ACKS Misc/NEWS Modules/_functoolsmodule.c Message-ID: <20090401042709.C623F1E406D@bag.python.org> Author: jack.diederich Date: Wed Apr 1 06:27:09 2009 New Revision: 70941 Log: Merged revisions 70931 via svnmerge from svn+ssh://pythondev at svn.python.org/python/trunk ........ r70931 | jack.diederich | 2009-03-31 19:46:48 -0400 (Tue, 31 Mar 2009) | 1 line #5228: add pickle support to functools.partial ........ 
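Before the merged diff below, a hedged sketch of what this pickle support allows once __reduce__ and __setstate__ exist on partial objects; the names and output are illustrative only:

    import pickle
    from functools import partial

    p = partial(int, base=2)
    p.note = "kept across the round trip"   # stored in p.__dict__
    q = pickle.loads(pickle.dumps(p))
    print(q("1010"), q.note)                # 10 kept across the round trip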
Modified: python/branches/py3k/ (props changed) python/branches/py3k/Lib/test/test_functools.py python/branches/py3k/Misc/ACKS python/branches/py3k/Misc/NEWS python/branches/py3k/Modules/_functoolsmodule.c Modified: python/branches/py3k/Lib/test/test_functools.py ============================================================================== --- python/branches/py3k/Lib/test/test_functools.py (original) +++ python/branches/py3k/Lib/test/test_functools.py Wed Apr 1 06:27:09 2009 @@ -2,6 +2,7 @@ import unittest from test import support from weakref import proxy +import pickle @staticmethod def PythonPartial(func, *args, **keywords): @@ -19,6 +20,9 @@ """capture all positional and keyword arguments""" return args, kw +def signature(part): + """return the signature of a partial object""" + return (part.func, part.args, part.keywords, part.__dict__) class TestPartial(unittest.TestCase): @@ -141,6 +145,12 @@ join = self.thetype(''.join) self.assertEqual(join(data), '0123456789') + def test_pickle(self): + f = self.thetype(signature, 'asdf', bar=True) + f.add_something_to__dict__ = True + f_copy = pickle.loads(pickle.dumps(f)) + self.assertEqual(signature(f), signature(f_copy)) + class PartialSubclass(functools.partial): pass @@ -148,11 +158,13 @@ thetype = PartialSubclass - class TestPythonPartial(TestPartial): thetype = PythonPartial + # the Python version isn't picklable + def test_pickle(self): pass + class TestUpdateWrapper(unittest.TestCase): def check_wrapper(self, wrapper, wrapped, Modified: python/branches/py3k/Misc/ACKS ============================================================================== --- python/branches/py3k/Misc/ACKS (original) +++ python/branches/py3k/Misc/ACKS Wed Apr 1 06:27:09 2009 @@ -167,6 +167,7 @@ Raghuram Devarakonda Toby Dickenson Mark Dickinson +Jack Diederich Humberto Diogenes Yves Dionne Daniel Dittmar Modified: python/branches/py3k/Misc/NEWS ============================================================================== --- python/branches/py3k/Misc/NEWS (original) +++ python/branches/py3k/Misc/NEWS Wed Apr 1 06:27:09 2009 @@ -726,6 +726,8 @@ buffer. +- Issue #5228: functools.partial objects can now be pickled. + Tests ----- Modified: python/branches/py3k/Modules/_functoolsmodule.c ============================================================================== --- python/branches/py3k/Modules/_functoolsmodule.c (original) +++ python/branches/py3k/Modules/_functoolsmodule.c Wed Apr 1 06:27:09 2009 @@ -196,6 +196,53 @@ {NULL} /* Sentinel */ }; +/* Pickle strategy: + __reduce__ by itself doesn't support getting kwargs in the unpickle + operation, so we define a __setstate__ that replaces all the information + about the partial. If we only replaced part of it, someone could use + it as a hook to do strange things. + */ + +PyObject * +partial_reduce(partialobject *pto, PyObject *unused) +{ + return Py_BuildValue("O(O)(OOOO)", Py_TYPE(pto), pto->fn, pto->fn, + pto->args, pto->kw, + pto->dict ?
pto->dict : Py_None); +} + +PyObject * +partial_setstate(partialobject *pto, PyObject *args) +{ + PyObject *fn, *fnargs, *kw, *dict; + if (!PyArg_ParseTuple(args, "(OOOO):__setstate__", + &fn, &fnargs, &kw, &dict)) + return NULL; + Py_XDECREF(pto->fn); + Py_XDECREF(pto->args); + Py_XDECREF(pto->kw); + Py_XDECREF(pto->dict); + pto->fn = fn; + pto->args = fnargs; + pto->kw = kw; + if (dict != Py_None) { + pto->dict = dict; + Py_INCREF(dict); + } else { + pto->dict = NULL; + } + Py_INCREF(fn); + Py_INCREF(fnargs); + Py_INCREF(kw); + Py_RETURN_NONE; +} + +static PyMethodDef partial_methods[] = { + {"__reduce__", (PyCFunction)partial_reduce, METH_NOARGS}, + {"__setstate__", (PyCFunction)partial_setstate, METH_VARARGS}, + {NULL, NULL} /* sentinel */ +}; + static PyTypeObject partial_type = { PyVarObject_HEAD_INIT(NULL, 0) "functools.partial", /* tp_name */ @@ -226,7 +273,7 @@ offsetof(partialobject, weakreflist), /* tp_weaklistoffset */ 0, /* tp_iter */ 0, /* tp_iternext */ - 0, /* tp_methods */ + partial_methods, /* tp_methods */ partial_memberlist, /* tp_members */ partial_getsetlist, /* tp_getset */ 0, /* tp_base */ From python-checkins at python.org Wed Apr 1 06:27:47 2009 From: python-checkins at python.org (georg.brandl) Date: Wed, 1 Apr 2009 06:27:47 +0200 (CEST) Subject: [Python-checkins] r70942 - in python/branches/py3k: Lib/xmlrpc/server.py Message-ID: <20090401042747.AEAD91E406D at bag.python.org> Author: georg.brandl Date: Wed Apr 1 06:27:47 2009 New Revision: 70942 Log: Merged revisions 70940 via svnmerge ........ r70940 | georg.brandl | 2009-03-31 23:21:14 -0500 (Di, 31 Mär 2009) | 2 lines The SimpleXMLRPCServer's CGI handler now runs like a pony. ........ Modified: python/branches/py3k/ (props changed) python/branches/py3k/Lib/xmlrpc/server.py Modified: python/branches/py3k/Lib/xmlrpc/server.py ============================================================================== --- python/branches/py3k/Lib/xmlrpc/server.py (original) +++ python/branches/py3k/Lib/xmlrpc/server.py Wed Apr 1 06:27:47 2009 @@ -588,8 +588,12 @@ self.handle_get() else: # POST data is normally available through stdin + try: + length = int(os.environ.get('CONTENT_LENGTH', None)) + except ValueError: + length = -1 if request_text is None: - request_text = sys.stdin.read() + request_text = sys.stdin.read(length) self.handle_xmlrpc(request_text) From python-checkins at python.org Wed Apr 1 06:28:33 2009 From: python-checkins at python.org (georg.brandl) Date: Wed, 1 Apr 2009 06:28:33 +0200 (CEST) Subject: [Python-checkins] r70943 - in python/branches/py3k: Lib/distutils/tests/test_msvc9compiler.py Lib/urllib/request.py Misc/NEWS Tools/msi/msilib.py Message-ID: <20090401042833.D7ACD1E406D at bag.python.org> Author: georg.brandl Date: Wed Apr 1 06:28:33 2009 New Revision: 70943 Log: Merged revisions 70940 via svnmerge ........ r70940 | georg.brandl | 2009-03-31 23:21:14 -0500 (Di, 31 Mär 2009) | 2 lines The SimpleXMLRPCServer's CGI handler now runs like a pony. ........
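For context, the r70940 change merged in r70942 above guards the CGI handler's stdin read with the body size the server advertises. The pattern on its own, as a rough sketch (hypothetical helper name, not part of either diff):

    import os
    import sys

    def read_post_body():
        # CGI servers put the body size in CONTENT_LENGTH; note that
        # int(None) raises TypeError, not ValueError, when it is unset.
        try:
            length = int(os.environ.get('CONTENT_LENGTH', None))
        except (TypeError, ValueError):
            length = -1                  # unknown size: read to EOF
        return sys.stdin.read(length)

Catching only ValueError, as the merged diff does, still fails when the variable is missing entirely; r70954 later in this batch adds the TypeError case.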
Modified: python/branches/py3k/Lib/distutils/tests/test_msvc9compiler.py python/branches/py3k/Lib/urllib/request.py python/branches/py3k/Misc/NEWS python/branches/py3k/Tools/msi/msilib.py Modified: python/branches/py3k/Lib/distutils/tests/test_msvc9compiler.py ============================================================================== --- python/branches/py3k/Lib/distutils/tests/test_msvc9compiler.py (original) +++ python/branches/py3k/Lib/distutils/tests/test_msvc9compiler.py Wed Apr 1 06:28:33 2009 @@ -48,8 +48,8 @@ v = Reg.get_value(path, "lfitalic") self.assert_(v in (0, 1)) - import _winreg - HKCU = _winreg.HKEY_CURRENT_USER + import winreg + HKCU = winreg.HKEY_CURRENT_USER keys = Reg.read_keys(HKCU, 'xxxx') self.assertEquals(keys, None) Modified: python/branches/py3k/Lib/urllib/request.py ============================================================================== --- python/branches/py3k/Lib/urllib/request.py (original) +++ python/branches/py3k/Lib/urllib/request.py Wed Apr 1 06:28:33 2009 @@ -2160,18 +2160,18 @@ """ proxies = {} try: - import _winreg + import winreg except ImportError: # Std module, so should be around - but you never know! return proxies try: - internetSettings = _winreg.OpenKey(_winreg.HKEY_CURRENT_USER, + internetSettings = winreg.OpenKey(winreg.HKEY_CURRENT_USER, r'Software\Microsoft\Windows\CurrentVersion\Internet Settings') - proxyEnable = _winreg.QueryValueEx(internetSettings, + proxyEnable = winreg.QueryValueEx(internetSettings, 'ProxyEnable')[0] if proxyEnable: # Returned as Unicode but problems if not converted to ASCII - proxyServer = str(_winreg.QueryValueEx(internetSettings, + proxyServer = str(winreg.QueryValueEx(internetSettings, 'ProxyServer')[0]) if '=' in proxyServer: # Per-protocol settings @@ -2208,17 +2208,17 @@ def proxy_bypass_registry(host): try: - import _winreg + import winreg import re except ImportError: # Std modules, so should be around - but you never know! return 0 try: - internetSettings = _winreg.OpenKey(_winreg.HKEY_CURRENT_USER, + internetSettings = winreg.OpenKey(winreg.HKEY_CURRENT_USER, r'Software\Microsoft\Windows\CurrentVersion\Internet Settings') - proxyEnable = _winreg.QueryValueEx(internetSettings, + proxyEnable = winreg.QueryValueEx(internetSettings, 'ProxyEnable')[0] - proxyOverride = str(_winreg.QueryValueEx(internetSettings, + proxyOverride = str(winreg.QueryValueEx(internetSettings, 'ProxyOverride')[0]) # ^^^^ Returned as Unicode but problems if not converted to ASCII except WindowsError: Modified: python/branches/py3k/Misc/NEWS ============================================================================== --- python/branches/py3k/Misc/NEWS (original) +++ python/branches/py3k/Misc/NEWS Wed Apr 1 06:28:33 2009 @@ -53,6 +53,8 @@ Library ------- +- Issue #5624: Fix the _winreg module name still used in several modules. + - Issue #5628: Fix io.TextIOWrapper.read() with an unreadable buffer.
- Issue #5619: Multiprocessing children disobey the debug flag and causes Modified: python/branches/py3k/Tools/msi/msilib.py ============================================================================== --- python/branches/py3k/Tools/msi/msilib.py (original) +++ python/branches/py3k/Tools/msi/msilib.py Wed Apr 1 06:28:33 2009 @@ -5,7 +5,7 @@ import win32com.client import pythoncom, pywintypes from win32com.client import constants -import re, string, os, sets, glob, subprocess, sys, _winreg, struct +import re, string, os, sets, glob, subprocess, sys, winreg, struct try: basestring @@ -387,9 +387,9 @@ (r"Software\Microsoft\Win32SDK\Directories", "Install Dir"), ]: try: - key = _winreg.OpenKey(_winreg.HKEY_LOCAL_MACHINE, k) - dir = _winreg.QueryValueEx(key, v)[0] - _winreg.CloseKey(key) + key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, k) + dir = winreg.QueryValueEx(key, v)[0] + winreg.CloseKey(key) except (WindowsError, IndexError): continue cabarc = os.path.join(dir, r"Bin", "cabarc.exe") From python-checkins at python.org Wed Apr 1 06:32:39 2009 From: python-checkins at python.org (georg.brandl) Date: Wed, 1 Apr 2009 06:32:39 +0200 (CEST) Subject: [Python-checkins] r70944 - python/trunk/Lib/distutils/command/__init__.py Message-ID: <20090401043239.83D761E406D at bag.python.org> Author: georg.brandl Date: Wed Apr 1 06:32:39 2009 New Revision: 70944 Log: #5631: add upload to list of possible commands, which is presented in --help-commands. Modified: python/trunk/Lib/distutils/command/__init__.py Modified: python/trunk/Lib/distutils/command/__init__.py ============================================================================== --- python/trunk/Lib/distutils/command/__init__.py (original) +++ python/trunk/Lib/distutils/command/__init__.py Wed Apr 1 06:32:39 2009 @@ -22,6 +22,8 @@ 'bdist_dumb', 'bdist_rpm', 'bdist_wininst', + 'upload', + # These two are reserved for future use: #'bdist_sdux', #'bdist_pkgtool', From python-checkins at python.org Wed Apr 1 07:08:41 2009 From: python-checkins at python.org (brett.cannon) Date: Wed, 1 Apr 2009 07:08:41 +0200 (CEST) Subject: [Python-checkins] r70945 - in python/branches/py3k: Grammar/Grammar Include/code.h Include/compile.h Include/parsetok.h Include/pythonrun.h Lib/__future__.py Lib/test/test_flufl.py Parser/parser.c Parser/parsetok.c Parser/tokenizer.c Python/future.c Python/graminit.c Python/pythonrun.c Message-ID: <20090401050841.D5B051E40F1 at bag.python.org> Author: brett.cannon Date: Wed Apr 1 07:08:41 2009 New Revision: 70945 Log: The BDFL has retired! Long live the FLUFL (Friendly Language Uncle For Life)!
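Concretely, the patch below makes sessions like this legal on py3k (a deliberately silly sketch, assuming a build with this revision):

    from __future__ import barry_as_FLUFL

    # Under the FLUFL, '<>' is the sole inequality spelling for this
    # module; writing 2 != 3 here would raise SyntaxError instead.
    if 2 <> 3:
        print("inequality, the friendly way")

The future import is per module, so code elsewhere keeps the usual ``!=`` spelling.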
Added: python/branches/py3k/Lib/test/test_flufl.py Modified: python/branches/py3k/Grammar/Grammar python/branches/py3k/Include/code.h python/branches/py3k/Include/compile.h python/branches/py3k/Include/parsetok.h python/branches/py3k/Include/pythonrun.h python/branches/py3k/Lib/__future__.py python/branches/py3k/Parser/parser.c python/branches/py3k/Parser/parsetok.c python/branches/py3k/Parser/tokenizer.c python/branches/py3k/Python/future.c python/branches/py3k/Python/graminit.c python/branches/py3k/Python/pythonrun.c Modified: python/branches/py3k/Grammar/Grammar ============================================================================== --- python/branches/py3k/Grammar/Grammar (original) +++ python/branches/py3k/Grammar/Grammar Wed Apr 1 07:08:41 2009 @@ -87,7 +87,7 @@ and_test: not_test ('and' not_test)* not_test: 'not' not_test | comparison comparison: star_expr (comp_op star_expr)* -comp_op: '<'|'>'|'=='|'>='|'<='|'!='|'in'|'not' 'in'|'is'|'is' 'not' +comp_op: '<'|'>'|'=='|'>='|'<='|'<>'|'!='|'in'|'not' 'in'|'is'|'is' 'not' star_expr: ['*'] expr expr: xor_expr ('|' xor_expr)* xor_expr: and_expr ('^' and_expr)* Modified: python/branches/py3k/Include/code.h ============================================================================== --- python/branches/py3k/Include/code.h (original) +++ python/branches/py3k/Include/code.h Wed Apr 1 07:08:41 2009 @@ -52,10 +52,12 @@ #define CO_FUTURE_UNICODE_LITERALS 0x20000 #endif +#define CO_FUTURE_BARRY_AS_BDFL 0x40000 + /* This should be defined if a future statement modifies the syntax. For example, when a keyword is added. */ -/* #define PY_PARSER_REQUIRES_FUTURE_KEYWORD */ +#define PY_PARSER_REQUIRES_FUTURE_KEYWORD #define CO_MAXBLOCKS 20 /* Max static block nesting within a function */ Modified: python/branches/py3k/Include/compile.h ============================================================================== --- python/branches/py3k/Include/compile.h (original) +++ python/branches/py3k/Include/compile.h Wed Apr 1 07:08:41 2009 @@ -26,6 +26,7 @@ #define FUTURE_WITH_STATEMENT "with_statement" #define FUTURE_PRINT_FUNCTION "print_function" #define FUTURE_UNICODE_LITERALS "unicode_literals" +#define FUTURE_BARRY_AS_BDFL "barry_as_FLUFL" struct _mod; /* Declare the existence of this type */ PyAPI_FUNC(PyCodeObject *) PyAST_Compile(struct _mod *, const char *, Modified: python/branches/py3k/Include/parsetok.h ============================================================================== --- python/branches/py3k/Include/parsetok.h (original) +++ python/branches/py3k/Include/parsetok.h Wed Apr 1 07:08:41 2009 @@ -30,6 +30,7 @@ #endif #define PyPARSE_IGNORE_COOKIE 0x0010 +#define PyPARSE_BARRY_AS_BDFL 0x0020 PyAPI_FUNC(node *) PyParser_ParseString(const char *, grammar *, int, perrdetail *); Modified: python/branches/py3k/Include/pythonrun.h ============================================================================== --- python/branches/py3k/Include/pythonrun.h (original) +++ python/branches/py3k/Include/pythonrun.h Wed Apr 1 07:08:41 2009 @@ -7,7 +7,7 @@ extern "C" { #endif -#define PyCF_MASK 0 +#define PyCF_MASK CO_FUTURE_BARRY_AS_BDFL #define PyCF_MASK_OBSOLETE 0 #define PyCF_SOURCE_IS_UTF8 0x0100 #define PyCF_DONT_IMPLY_DEDENT 0x0200 Modified: python/branches/py3k/Lib/__future__.py ============================================================================== --- python/branches/py3k/Lib/__future__.py (original) +++ python/branches/py3k/Lib/__future__.py Wed Apr 1 07:08:41 2009 @@ -70,6 +70,7 @@ CO_FUTURE_WITH_STATEMENT = 0x8000 # with 
statement CO_FUTURE_PRINT_FUNCTION = 0x10000 # print function CO_FUTURE_UNICODE_LITERALS = 0x20000 # unicode string literals +CO_FUTURE_BARRY_AS_BDFL = 0x40000 class _Feature: def __init__(self, optionalRelease, mandatoryRelease, compiler_flag): @@ -126,3 +127,7 @@ unicode_literals = _Feature((2, 6, 0, "alpha", 2), (3, 0, 0, "alpha", 0), CO_FUTURE_UNICODE_LITERALS) + +barry_as_FLUFL = _Feature((3, 1, 0, "alpha", 2), + (3, 9, 0, "alpha", 0), + CO_FUTURE_BARRY_AS_BDFL) Added: python/branches/py3k/Lib/test/test_flufl.py ============================================================================== --- (empty file) +++ python/branches/py3k/Lib/test/test_flufl.py Wed Apr 1 07:08:41 2009 @@ -0,0 +1,27 @@ +import __future__ +import unittest + +class FLUFLTests(unittest.TestCase): + + def test_barry_as_bdfl(self): + code = "from __future__ import barry_as_FLUFL; 2 {0} 3" + compile(code.format('<>'), '', 'exec', + __future__.CO_FUTURE_BARRY_AS_BDFL) + self.assertRaises(SyntaxError, compile, code.format('!='), + '', 'exec', + __future__.CO_FUTURE_BARRY_AS_BDFL) + + def test_guido_as_bdfl(self): + code = '2 {0} 3' + compile(code.format('!='), '', 'exec') + self.assertRaises(SyntaxError, compile, code.format('<>'), + '', 'exec') + + +def test_main(): + from test.support import run_unittest + run_unittest(FLUFLTests) + + +if __name__ == '__main__': + test_main() Modified: python/branches/py3k/Parser/parser.c ============================================================================== --- python/branches/py3k/Parser/parser.c (original) +++ python/branches/py3k/Parser/parser.c Wed Apr 1 07:08:41 2009 @@ -149,6 +149,7 @@ strcmp(l->lb_str, s) != 0) continue; #ifdef PY_PARSER_REQUIRES_FUTURE_KEYWORD +#if 0 /* Leaving this in as an example */ if (!(ps->p_flags & CO_FUTURE_WITH_STATEMENT)) { if (s[0] == 'w' && strcmp(s, "with") == 0) @@ -157,6 +158,7 @@ break; /* not a keyword yet */ } #endif +#endif D(printf("It's a keyword\n")); return n - i; } @@ -178,6 +180,7 @@ } #ifdef PY_PARSER_REQUIRES_FUTURE_KEYWORD +#if 0 /* Leaving this in as an example */ static void future_hack(parser_state *ps) @@ -218,6 +221,7 @@ } } } +#endif #endif /* future keyword */ int @@ -278,11 +282,13 @@ d->d_name, ps->p_stack.s_top->s_state)); #ifdef PY_PARSER_REQUIRES_FUTURE_KEYWORD +#if 0 if (d->d_name[0] == 'i' && strcmp(d->d_name, "import_stmt") == 0) future_hack(ps); #endif +#endif s_pop(&ps->p_stack); if (s_empty(&ps->p_stack)) { D(printf(" ACCEPT.\n")); @@ -296,10 +302,12 @@ if (s->s_accept) { #ifdef PY_PARSER_REQUIRES_FUTURE_KEYWORD +#if 0 if (d->d_name[0] == 'i' && strcmp(d->d_name, "import_stmt") == 0) future_hack(ps); #endif +#endif /* Pop this dfa and try again */ s_pop(&ps->p_stack); D(printf(" Pop ...\n")); Modified: python/branches/py3k/Parser/parsetok.c ============================================================================== --- python/branches/py3k/Parser/parsetok.c (original) +++ python/branches/py3k/Parser/parsetok.c Wed Apr 1 07:08:41 2009 @@ -100,6 +100,7 @@ } #ifdef PY_PARSER_REQUIRES_FUTURE_KEYWORD +#if 0 static char with_msg[] = "%s:%d: Warning: 'with' will become a reserved keyword in Python 2.6\n"; @@ -114,6 +115,7 @@ PySys_WriteStderr(msg, filename, lineno); } #endif +#endif /* Parse input coming from the given tokenizer structure. Return error code. 
*/ @@ -133,8 +135,8 @@ return NULL; } #ifdef PY_PARSER_REQUIRES_FUTURE_KEYWORD - if (*flags & PyPARSE_WITH_IS_KEYWORD) - ps->p_flags |= CO_FUTURE_WITH_STATEMENT; + if (*flags & PyPARSE_BARRY_AS_BDFL) + ps->p_flags |= CO_FUTURE_BARRY_AS_BDFL; #endif for (;;) { @@ -177,26 +179,20 @@ str[len] = '\0'; #ifdef PY_PARSER_REQUIRES_FUTURE_KEYWORD - /* This is only necessary to support the "as" warning, but - we don't want to warn about "as" in import statements. */ - if (type == NAME && - len == 6 && str[0] == 'i' && strcmp(str, "import") == 0) - handling_import = 1; - - /* Warn about with as NAME */ - if (type == NAME && - !(ps->p_flags & CO_FUTURE_WITH_STATEMENT)) { - if (len == 4 && str[0] == 'w' && strcmp(str, "with") == 0) - warn(with_msg, err_ret->filename, tok->lineno); - else if (!(handling_import || handling_with) && - len == 2 && str[0] == 'a' && - strcmp(str, "as") == 0) - warn(as_msg, err_ret->filename, tok->lineno); - } - else if (type == NAME && - (ps->p_flags & CO_FUTURE_WITH_STATEMENT) && - len == 4 && str[0] == 'w' && strcmp(str, "with") == 0) - handling_with = 1; + if (type == NOTEQUAL) { + if (!(ps->p_flags & CO_FUTURE_BARRY_AS_BDFL) && + strcmp(str, "!=")) { + err_ret->error = E_SYNTAX; + break; + } + else if ((ps->p_flags & CO_FUTURE_BARRY_AS_BDFL) && + strcmp(str, "<>")) { + err_ret->text = "with Barry as BDFL, use '<>' " + "instead of '!='"; + err_ret->error = E_SYNTAX; + break; + } + } #endif if (a >= tok->line_start) col_offset = a - tok->line_start; Modified: python/branches/py3k/Parser/tokenizer.c ============================================================================== --- python/branches/py3k/Parser/tokenizer.c (original) +++ python/branches/py3k/Parser/tokenizer.c Wed Apr 1 07:08:41 2009 @@ -1040,6 +1040,7 @@ break; case '<': switch (c2) { + case '>': return NOTEQUAL; case '=': return LESSEQUAL; case '<': return LEFTSHIFT; } Modified: python/branches/py3k/Python/future.c ============================================================================== --- python/branches/py3k/Python/future.c (original) +++ python/branches/py3k/Python/future.c Wed Apr 1 07:08:41 2009 @@ -39,6 +39,8 @@ continue; } else if (strcmp(feature, FUTURE_UNICODE_LITERALS) == 0) { continue; + } else if (strcmp(feature, FUTURE_BARRY_AS_BDFL) == 0) { + ff->ff_features |= CO_FUTURE_BARRY_AS_BDFL; } else if (strcmp(feature, "braces") == 0) { PyErr_SetString(PyExc_SyntaxError, "not a chance"); Modified: python/branches/py3k/Python/graminit.c ============================================================================== --- python/branches/py3k/Python/graminit.c (original) +++ python/branches/py3k/Python/graminit.c Wed Apr 1 07:08:41 2009 @@ -1129,16 +1129,17 @@ {1, arcs_52_0}, {2, arcs_52_1}, }; -static arc arcs_53_0[9] = { +static arc arcs_53_0[10] = { {118, 1}, {119, 1}, {120, 1}, {121, 1}, {122, 1}, {123, 1}, + {124, 1}, {95, 1}, {114, 2}, - {124, 3}, + {125, 3}, }; static arc arcs_53_1[1] = { {0, 1}, @@ -1151,7 +1152,7 @@ {0, 3}, }; static state states_53[4] = { - {9, arcs_53_0}, + {10, arcs_53_0}, {1, arcs_53_1}, {1, arcs_53_2}, {2, arcs_53_3}, @@ -1172,10 +1173,10 @@ {1, arcs_54_2}, }; static arc arcs_55_0[1] = { - {125, 1}, + {126, 1}, }; static arc arcs_55_1[2] = { - {126, 0}, + {127, 0}, {0, 1}, }; static state states_55[2] = { @@ -1183,10 +1184,10 @@ {2, arcs_55_1}, }; static arc arcs_56_0[1] = { - {127, 1}, + {128, 1}, }; static arc arcs_56_1[2] = { - {128, 0}, + {129, 0}, {0, 1}, }; static state states_56[2] = { @@ -1194,10 +1195,10 @@ {2, arcs_56_1}, }; static arc arcs_57_0[1] = { - 
{129, 1}, + {130, 1}, }; static arc arcs_57_1[2] = { - {130, 0}, + {131, 0}, {0, 1}, }; static state states_57[2] = { @@ -1205,11 +1206,11 @@ {2, arcs_57_1}, }; static arc arcs_58_0[1] = { - {131, 1}, + {132, 1}, }; static arc arcs_58_1[3] = { - {132, 0}, {133, 0}, + {134, 0}, {0, 1}, }; static state states_58[2] = { @@ -1217,11 +1218,11 @@ {3, arcs_58_1}, }; static arc arcs_59_0[1] = { - {134, 1}, + {135, 1}, }; static arc arcs_59_1[3] = { - {135, 0}, {136, 0}, + {137, 0}, {0, 1}, }; static state states_59[2] = { @@ -1229,13 +1230,13 @@ {3, arcs_59_1}, }; static arc arcs_60_0[1] = { - {137, 1}, + {138, 1}, }; static arc arcs_60_1[5] = { {31, 0}, - {138, 0}, {139, 0}, {140, 0}, + {141, 0}, {0, 1}, }; static state states_60[2] = { @@ -1243,13 +1244,13 @@ {5, arcs_60_1}, }; static arc arcs_61_0[4] = { - {135, 1}, {136, 1}, - {141, 1}, - {142, 2}, + {137, 1}, + {142, 1}, + {143, 2}, }; static arc arcs_61_1[1] = { - {137, 2}, + {138, 2}, }; static arc arcs_61_2[1] = { {0, 2}, @@ -1260,15 +1261,15 @@ {1, arcs_61_2}, }; static arc arcs_62_0[1] = { - {143, 1}, + {144, 1}, }; static arc arcs_62_1[3] = { - {144, 1}, + {145, 1}, {32, 2}, {0, 1}, }; static arc arcs_62_2[1] = { - {137, 3}, + {138, 3}, }; static arc arcs_62_3[1] = { {0, 3}, @@ -1281,44 +1282,44 @@ }; static arc arcs_63_0[10] = { {13, 1}, - {146, 2}, - {148, 3}, + {147, 2}, + {149, 3}, {21, 4}, - {151, 4}, - {152, 5}, + {152, 4}, + {153, 5}, {77, 4}, - {153, 4}, {154, 4}, {155, 4}, + {156, 4}, }; static arc arcs_63_1[3] = { {46, 6}, - {145, 6}, + {146, 6}, {15, 4}, }; static arc arcs_63_2[2] = { - {145, 7}, - {147, 4}, + {146, 7}, + {148, 4}, }; static arc arcs_63_3[2] = { - {149, 8}, - {150, 4}, + {150, 8}, + {151, 4}, }; static arc arcs_63_4[1] = { {0, 4}, }; static arc arcs_63_5[2] = { - {152, 5}, + {153, 5}, {0, 5}, }; static arc arcs_63_6[1] = { {15, 4}, }; static arc arcs_63_7[1] = { - {147, 4}, + {148, 4}, }; static arc arcs_63_8[1] = { - {150, 4}, + {151, 4}, }; static state states_63[9] = { {10, arcs_63_0}, @@ -1335,7 +1336,7 @@ {24, 1}, }; static arc arcs_64_1[3] = { - {156, 2}, + {157, 2}, {30, 3}, {0, 1}, }; @@ -1359,7 +1360,7 @@ }; static arc arcs_65_0[3] = { {13, 1}, - {146, 2}, + {147, 2}, {76, 3}, }; static arc arcs_65_1[2] = { @@ -1367,7 +1368,7 @@ {15, 5}, }; static arc arcs_65_2[1] = { - {157, 6}, + {158, 6}, }; static arc arcs_65_3[1] = { {21, 5}, @@ -1379,7 +1380,7 @@ {0, 5}, }; static arc arcs_65_6[1] = { - {147, 5}, + {148, 5}, }; static state states_65[7] = { {3, arcs_65_0}, @@ -1391,14 +1392,14 @@ {1, arcs_65_6}, }; static arc arcs_66_0[1] = { - {158, 1}, + {159, 1}, }; static arc arcs_66_1[2] = { {30, 2}, {0, 1}, }; static arc arcs_66_2[2] = { - {158, 1}, + {159, 1}, {0, 2}, }; static state states_66[3] = { @@ -1416,11 +1417,11 @@ }; static arc arcs_67_2[3] = { {24, 3}, - {159, 4}, + {160, 4}, {0, 2}, }; static arc arcs_67_3[2] = { - {159, 4}, + {160, 4}, {0, 3}, }; static arc arcs_67_4[1] = { @@ -1485,7 +1486,7 @@ }; static arc arcs_71_1[4] = { {25, 2}, - {156, 3}, + {157, 3}, {30, 4}, {0, 1}, }; @@ -1500,7 +1501,7 @@ {0, 4}, }; static arc arcs_71_5[3] = { - {156, 3}, + {157, 3}, {30, 7}, {0, 5}, }; @@ -1536,7 +1537,7 @@ {2, arcs_71_10}, }; static arc arcs_72_0[1] = { - {160, 1}, + {161, 1}, }; static arc arcs_72_1[1] = { {21, 2}, @@ -1572,7 +1573,7 @@ {1, arcs_72_7}, }; static arc arcs_73_0[3] = { - {161, 1}, + {162, 1}, {31, 2}, {32, 3}, }; @@ -1587,7 +1588,7 @@ {24, 6}, }; static arc arcs_73_4[4] = { - {161, 1}, + {162, 1}, {31, 2}, {32, 3}, {0, 4}, @@ -1600,7 +1601,7 @@ {0, 6}, }; static arc arcs_73_7[2] = 
{ - {161, 5}, + {162, 5}, {32, 3}, }; static state states_73[8] = { @@ -1617,7 +1618,7 @@ {24, 1}, }; static arc arcs_74_1[3] = { - {156, 2}, + {157, 2}, {29, 3}, {0, 1}, }; @@ -1634,8 +1635,8 @@ {1, arcs_74_3}, }; static arc arcs_75_0[2] = { - {156, 1}, - {163, 1}, + {157, 1}, + {164, 1}, }; static arc arcs_75_1[1] = { {0, 1}, @@ -1657,7 +1658,7 @@ {105, 4}, }; static arc arcs_76_4[2] = { - {162, 5}, + {163, 5}, {0, 4}, }; static arc arcs_76_5[1] = { @@ -1678,7 +1679,7 @@ {107, 2}, }; static arc arcs_77_2[2] = { - {162, 3}, + {163, 3}, {0, 2}, }; static arc arcs_77_3[1] = { @@ -1712,7 +1713,7 @@ {1, arcs_79_1}, }; static arc arcs_80_0[1] = { - {166, 1}, + {167, 1}, }; static arc arcs_80_1[2] = { {9, 2}, @@ -1728,11 +1729,11 @@ }; static dfa dfas[81] = { {256, "single_input", 0, 3, states_0, - "\004\050\060\200\000\000\000\050\370\044\034\144\011\040\004\000\200\041\224\017\101"}, + "\004\050\060\200\000\000\000\050\370\044\034\144\011\040\004\000\000\103\050\037\202"}, {257, "file_input", 0, 2, states_1, - "\204\050\060\200\000\000\000\050\370\044\034\144\011\040\004\000\200\041\224\017\101"}, + "\204\050\060\200\000\000\000\050\370\044\034\144\011\040\004\000\000\103\050\037\202"}, {258, "eval_input", 0, 3, states_2, - "\000\040\040\200\000\000\000\000\000\040\000\000\000\040\004\000\200\041\224\017\000"}, + "\000\040\040\200\000\000\000\000\000\040\000\000\000\040\004\000\000\103\050\037\000"}, {259, "decorator", 0, 7, states_3, "\000\010\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"}, {260, "decorators", 0, 2, states_4, @@ -1752,13 +1753,13 @@ {267, "vfpdef", 0, 2, states_11, "\000\000\040\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"}, {268, "stmt", 0, 2, states_12, - "\000\050\060\200\000\000\000\050\370\044\034\144\011\040\004\000\200\041\224\017\101"}, + "\000\050\060\200\000\000\000\050\370\044\034\144\011\040\004\000\000\103\050\037\202"}, {269, "simple_stmt", 0, 4, states_13, - "\000\040\040\200\000\000\000\050\370\044\034\000\000\040\004\000\200\041\224\017\100"}, + "\000\040\040\200\000\000\000\050\370\044\034\000\000\040\004\000\000\103\050\037\200"}, {270, "small_stmt", 0, 2, states_14, - "\000\040\040\200\000\000\000\050\370\044\034\000\000\040\004\000\200\041\224\017\100"}, + "\000\040\040\200\000\000\000\050\370\044\034\000\000\040\004\000\000\103\050\037\200"}, {271, "expr_stmt", 0, 6, states_15, - "\000\040\040\200\000\000\000\000\000\040\000\000\000\040\004\000\200\041\224\017\000"}, + "\000\040\040\200\000\000\000\000\000\040\000\000\000\040\004\000\000\103\050\037\000"}, {272, "augassign", 0, 2, states_16, "\000\000\000\000\000\200\377\007\000\000\000\000\000\000\000\000\000\000\000\000\000"}, {273, "del_stmt", 0, 3, states_17, @@ -1766,7 +1767,7 @@ {274, "pass_stmt", 0, 2, states_18, "\000\000\000\000\000\000\000\040\000\000\000\000\000\000\000\000\000\000\000\000\000"}, {275, "flow_stmt", 0, 2, states_19, - "\000\000\000\000\000\000\000\000\170\000\000\000\000\000\000\000\000\000\000\000\100"}, + "\000\000\000\000\000\000\000\000\170\000\000\000\000\000\000\000\000\000\000\000\200"}, {276, "break_stmt", 0, 2, states_20, "\000\000\000\000\000\000\000\000\010\000\000\000\000\000\000\000\000\000\000\000\000"}, {277, "continue_stmt", 0, 2, states_21, @@ -1774,7 +1775,7 @@ {278, "return_stmt", 0, 3, states_22, "\000\000\000\000\000\000\000\000\040\000\000\000\000\000\000\000\000\000\000\000\000"}, {279, "yield_stmt", 0, 2, states_23, - 
"\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\100"}, + "\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\200"}, {280, "raise_stmt", 0, 5, states_24, "\000\000\000\000\000\000\000\000\100\000\000\000\000\000\000\000\000\000\000\000\000"}, {281, "import_stmt", 0, 2, states_25, @@ -1800,7 +1801,7 @@ {291, "assert_stmt", 0, 5, states_35, "\000\000\000\000\000\000\000\000\000\000\020\000\000\000\000\000\000\000\000\000\000"}, {292, "compound_stmt", 0, 2, states_36, - "\000\010\020\000\000\000\000\000\000\000\000\144\011\000\000\000\000\000\000\000\001"}, + "\000\010\020\000\000\000\000\000\000\000\000\144\011\000\000\000\000\000\000\000\002"}, {293, "if_stmt", 0, 8, states_37, "\000\000\000\000\000\000\000\000\000\000\000\004\000\000\000\000\000\000\000\000\000"}, {294, "while_stmt", 0, 8, states_38, @@ -1816,67 +1817,67 @@ {299, "except_clause", 0, 5, states_43, "\000\000\000\000\000\000\000\000\000\000\000\000\100\000\000\000\000\000\000\000\000"}, {300, "suite", 0, 5, states_44, - "\004\040\040\200\000\000\000\050\370\044\034\000\000\040\004\000\200\041\224\017\100"}, + "\004\040\040\200\000\000\000\050\370\044\034\000\000\040\004\000\000\103\050\037\200"}, {301, "test", 0, 6, states_45, - "\000\040\040\200\000\000\000\000\000\040\000\000\000\040\004\000\200\041\224\017\000"}, + "\000\040\040\200\000\000\000\000\000\040\000\000\000\040\004\000\000\103\050\037\000"}, {302, "test_nocond", 0, 2, states_46, - "\000\040\040\200\000\000\000\000\000\040\000\000\000\040\004\000\200\041\224\017\000"}, + "\000\040\040\200\000\000\000\000\000\040\000\000\000\040\004\000\000\103\050\037\000"}, {303, "lambdef", 0, 5, states_47, "\000\000\000\000\000\000\000\000\000\000\000\000\000\040\000\000\000\000\000\000\000"}, {304, "lambdef_nocond", 0, 5, states_48, "\000\000\000\000\000\000\000\000\000\000\000\000\000\040\000\000\000\000\000\000\000"}, {305, "or_test", 0, 2, states_49, - "\000\040\040\200\000\000\000\000\000\040\000\000\000\000\004\000\200\041\224\017\000"}, + "\000\040\040\200\000\000\000\000\000\040\000\000\000\000\004\000\000\103\050\037\000"}, {306, "and_test", 0, 2, states_50, - "\000\040\040\200\000\000\000\000\000\040\000\000\000\000\004\000\200\041\224\017\000"}, + "\000\040\040\200\000\000\000\000\000\040\000\000\000\000\004\000\000\103\050\037\000"}, {307, "not_test", 0, 3, states_51, - "\000\040\040\200\000\000\000\000\000\040\000\000\000\000\004\000\200\041\224\017\000"}, + "\000\040\040\200\000\000\000\000\000\040\000\000\000\000\004\000\000\103\050\037\000"}, {308, "comparison", 0, 2, states_52, - "\000\040\040\200\000\000\000\000\000\040\000\000\000\000\000\000\200\041\224\017\000"}, + "\000\040\040\200\000\000\000\000\000\040\000\000\000\000\000\000\000\103\050\037\000"}, {309, "comp_op", 0, 4, states_53, - "\000\000\000\000\000\000\000\000\000\000\000\200\000\000\304\037\000\000\000\000\000"}, + "\000\000\000\000\000\000\000\000\000\000\000\200\000\000\304\077\000\000\000\000\000"}, {310, "star_expr", 0, 3, states_54, - "\000\040\040\200\000\000\000\000\000\040\000\000\000\000\000\000\200\041\224\017\000"}, + "\000\040\040\200\000\000\000\000\000\040\000\000\000\000\000\000\000\103\050\037\000"}, {311, "expr", 0, 2, states_55, - "\000\040\040\000\000\000\000\000\000\040\000\000\000\000\000\000\200\041\224\017\000"}, + "\000\040\040\000\000\000\000\000\000\040\000\000\000\000\000\000\000\103\050\037\000"}, {312, "xor_expr", 0, 2, states_56, - 
"\000\040\040\000\000\000\000\000\000\040\000\000\000\000\000\000\200\041\224\017\000"}, + "\000\040\040\000\000\000\000\000\000\040\000\000\000\000\000\000\000\103\050\037\000"}, {313, "and_expr", 0, 2, states_57, - "\000\040\040\000\000\000\000\000\000\040\000\000\000\000\000\000\200\041\224\017\000"}, + "\000\040\040\000\000\000\000\000\000\040\000\000\000\000\000\000\000\103\050\037\000"}, {314, "shift_expr", 0, 2, states_58, - "\000\040\040\000\000\000\000\000\000\040\000\000\000\000\000\000\200\041\224\017\000"}, + "\000\040\040\000\000\000\000\000\000\040\000\000\000\000\000\000\000\103\050\037\000"}, {315, "arith_expr", 0, 2, states_59, - "\000\040\040\000\000\000\000\000\000\040\000\000\000\000\000\000\200\041\224\017\000"}, + "\000\040\040\000\000\000\000\000\000\040\000\000\000\000\000\000\000\103\050\037\000"}, {316, "term", 0, 2, states_60, - "\000\040\040\000\000\000\000\000\000\040\000\000\000\000\000\000\200\041\224\017\000"}, + "\000\040\040\000\000\000\000\000\000\040\000\000\000\000\000\000\000\103\050\037\000"}, {317, "factor", 0, 3, states_61, - "\000\040\040\000\000\000\000\000\000\040\000\000\000\000\000\000\200\041\224\017\000"}, + "\000\040\040\000\000\000\000\000\000\040\000\000\000\000\000\000\000\103\050\037\000"}, {318, "power", 0, 4, states_62, - "\000\040\040\000\000\000\000\000\000\040\000\000\000\000\000\000\000\000\224\017\000"}, + "\000\040\040\000\000\000\000\000\000\040\000\000\000\000\000\000\000\000\050\037\000"}, {319, "atom", 0, 9, states_63, - "\000\040\040\000\000\000\000\000\000\040\000\000\000\000\000\000\000\000\224\017\000"}, + "\000\040\040\000\000\000\000\000\000\040\000\000\000\000\000\000\000\000\050\037\000"}, {320, "testlist_comp", 0, 5, states_64, - "\000\040\040\200\000\000\000\000\000\040\000\000\000\040\004\000\200\041\224\017\000"}, + "\000\040\040\200\000\000\000\000\000\040\000\000\000\040\004\000\000\103\050\037\000"}, {321, "trailer", 0, 7, states_65, - "\000\040\000\000\000\000\000\000\000\020\000\000\000\000\000\000\000\000\004\000\000"}, + "\000\040\000\000\000\000\000\000\000\020\000\000\000\000\000\000\000\000\010\000\000"}, {322, "subscriptlist", 0, 3, states_66, - "\000\040\040\202\000\000\000\000\000\040\000\000\000\040\004\000\200\041\224\017\000"}, + "\000\040\040\202\000\000\000\000\000\040\000\000\000\040\004\000\000\103\050\037\000"}, {323, "subscript", 0, 5, states_67, - "\000\040\040\202\000\000\000\000\000\040\000\000\000\040\004\000\200\041\224\017\000"}, + "\000\040\040\202\000\000\000\000\000\040\000\000\000\040\004\000\000\103\050\037\000"}, {324, "sliceop", 0, 3, states_68, "\000\000\000\002\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"}, {325, "exprlist", 0, 3, states_69, - "\000\040\040\200\000\000\000\000\000\040\000\000\000\000\000\000\200\041\224\017\000"}, + "\000\040\040\200\000\000\000\000\000\040\000\000\000\000\000\000\000\103\050\037\000"}, {326, "testlist", 0, 3, states_70, - "\000\040\040\200\000\000\000\000\000\040\000\000\000\040\004\000\200\041\224\017\000"}, + "\000\040\040\200\000\000\000\000\000\040\000\000\000\040\004\000\000\103\050\037\000"}, {327, "dictorsetmaker", 0, 11, states_71, - "\000\040\040\200\000\000\000\000\000\040\000\000\000\040\004\000\200\041\224\017\000"}, + "\000\040\040\200\000\000\000\000\000\040\000\000\000\040\004\000\000\103\050\037\000"}, {328, "classdef", 0, 8, states_72, - "\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\001"}, + 
"\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\002"}, {329, "arglist", 0, 8, states_73, - "\000\040\040\200\001\000\000\000\000\040\000\000\000\040\004\000\200\041\224\017\000"}, + "\000\040\040\200\001\000\000\000\000\040\000\000\000\040\004\000\000\103\050\037\000"}, {330, "argument", 0, 4, states_74, - "\000\040\040\200\000\000\000\000\000\040\000\000\000\040\004\000\200\041\224\017\000"}, + "\000\040\040\200\000\000\000\000\000\040\000\000\000\040\004\000\000\103\050\037\000"}, {331, "comp_iter", 0, 2, states_75, "\000\000\000\000\000\000\000\000\000\000\000\104\000\000\000\000\000\000\000\000\000"}, {332, "comp_for", 0, 6, states_76, @@ -1884,13 +1885,13 @@ {333, "comp_if", 0, 4, states_77, "\000\000\000\000\000\000\000\000\000\000\000\004\000\000\000\000\000\000\000\000\000"}, {334, "testlist1", 0, 2, states_78, - "\000\040\040\200\000\000\000\000\000\040\000\000\000\040\004\000\200\041\224\017\000"}, + "\000\040\040\200\000\000\000\000\000\040\000\000\000\040\004\000\000\103\050\037\000"}, {335, "encoding_decl", 0, 2, states_79, "\000\000\040\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000"}, {336, "yield_expr", 0, 3, states_80, - "\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\100"}, + "\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\000\200"}, }; -static label labels[167] = { +static label labels[168] = { {0, "EMPTY"}, {256, 0}, {4, 0}, @@ -2015,6 +2016,7 @@ {31, 0}, {30, 0}, {29, 0}, + {29, 0}, {1, "is"}, {312, 0}, {18, 0}, @@ -2062,6 +2064,6 @@ grammar _PyParser_Grammar = { 81, dfas, - {167, labels}, + {168, labels}, 256 }; Modified: python/branches/py3k/Python/pythonrun.c ============================================================================== --- python/branches/py3k/Python/pythonrun.c (original) +++ python/branches/py3k/Python/pythonrun.c Wed Apr 1 07:08:41 2009 @@ -1011,6 +1011,8 @@ parser_flags |= PyPARSE_DONT_IMPLY_DEDENT; if (flags->cf_flags & PyCF_IGNORE_COOKIE) parser_flags |= PyPARSE_IGNORE_COOKIE; + if (flags->cf_flags & CO_FUTURE_BARRY_AS_BDFL) + parser_flags |= PyPARSE_BARRY_AS_BDFL; return parser_flags; } From python-checkins at python.org Wed Apr 1 07:09:14 2009 From: python-checkins at python.org (brett.cannon) Date: Wed, 1 Apr 2009 07:09:14 +0200 (CEST) Subject: [Python-checkins] r70946 - peps/trunk/pep-0401.txt Message-ID: <20090401050914.9D1F71E40F1@bag.python.org> Author: brett.cannon Date: Wed Apr 1 07:09:14 2009 New Revision: 70946 Log: Add PEP 401: BDFL retirement. Added: peps/trunk/pep-0401.txt (contents, props changed) Added: peps/trunk/pep-0401.txt ============================================================================== --- (empty file) +++ peps/trunk/pep-0401.txt Wed Apr 1 07:09:14 2009 @@ -0,0 +1,118 @@ +PEP: 401 +Title: BDFL Retirement +Version: $Revision$ +Last-Modified: $Date: 2009-04-01 00:00:00 -0400 (Wed, 1 Apr 2009)$ +Author: Barry Warsaw, Brett Cannon +Status: Accepted +Type: Process +Content-Type: text/x-rst +Created: 01-Apr-2009 +Post-History: 01-Apr-2009 + + +Abstract +======== + +The BDFL, having shepherded Python development for 20 years, +officially announces his retirement, effective immediately. Following +a unanimous vote, his replacement is named. + + +Rationale +========= + +Guido wrote the original implementation of Python in 1989, and after +nearly 20 years of leading the community, has decided to step aside as +its Benevolent Dictator For Life. 
His official title is now +Benevolent Dictator Emeritus Vacationing Indefinitely from the +Language (BDEVIL). Guido leaves Python in the good hands of its new +leader and its vibrant community, in order to train for his lifelong +dream of climbing Mount Everest. + +After unanimous vote of the Python Steering Union (not to be confused +with the Python Secret Underground, which emphatically does not exist) +at the 2009 Python Conference (PyCon_ 2009), Guido's successor has been +chosen: Barry Warsaw, or as he is affectionately known, Uncle Barry. +Uncle Barry's official title is Friendly Language Uncle For Life (FLUFL). + +.. _PyCon: http://us.pycon.org/ + + +Official Acts of the FLUFL +========================== + +FLUFL Uncle Barry enacts the following decisions, in order to +demonstrate his intention to lead the community in the same +responsible and open manner as his predecessor, whose name escapes +him: + +* Recognized that the selection of ``Hg`` as the DVCS of choice was + clear proof of the onset of the BDEVIL's insanity, and reverting + this decision to switch to ``Bzr`` instead, the only true choice. + +* Recognized that the ``!=`` inequality operator in Python 3.0 was a + horrible, finger pain inducing mistake, the FLUFL reinstates the + ``<>`` diamond operator as the sole spelling. This change is + important enough to be implemented for, and released in Python + 3.1. To help transition to this feature, a new future statement, + ``from __future__ import barry_as_flufl`` has been added. + +* Recognized that the ``print`` function in Python 3.0 was a horrible, + pain-inducing mistake, the FLUFL reinstates the ``print`` + statement. This change is important enough to be implemented for, + and released in Python 3.0.2. + +* Recognized that the disappointing adoption curve of Python 3.0 + signals its abject failure, all work on Python 3.1 and subsequent + Python 3.x versions is hereby terminated. All features in Python + 3.0 shall be back ported to Python 2.7 which will be the official + and sole next release. The Python 3.0 string and bytes types will + be back ported to Python 2.6.2 for the convenience of developers. + +* Recognized that C is a 20th century language with almost universal + rejection by programmers under the age of 30, the CPython + implementation will terminate with the release of Python 2.6.2 and + 3.0.2. Thereafter, the reference implementation of Python will + target the Parrot [1]_ virtual machine. Alternative implementations + of Python (e.g. Jython [2]_ and IronPython [3]_) are officially + discouraged but tolerated. + +* Recognized that the Python Software Foundation [4]_ having fulfilled + its mission admirably, is hereby disbanded. They Python Steering + Union [5]_ (not to be confused with the Python Secret Underground, + which emphatically does not exist), is now the sole steward for all + of Python's intellectual property. All PSF funds are hereby + transferred to the PSU (not that PSU, the other PSU). + + +References +========== + +.. [1] http://www.parrot.org + +.. [2] http://www.jython.org + +.. [3] http://www.ironpython.com + +.. [4] http://www.python.org/psf + +.. [5] http://www.pythonlabs.com + + +Copyright +========= + +This document is the property of the Python Steering Union (not to be +confused with the Python Secret Underground, which emphatically does +not exist). We suppose it's okay for you to read this, but don't even +think about quoting, copying, modifying, or distributing it. + + +.. 
+ Local Variables: + mode: indented-text + indent-tabs-mode: nil + sentence-end-double-space: t + fill-column: 70 + coding: utf-8 + End: From python-checkins at python.org Wed Apr 1 07:17:23 2009 From: python-checkins at python.org (brett.cannon) Date: Wed, 1 Apr 2009 07:17:23 +0200 (CEST) Subject: [Python-checkins] r70947 - peps/trunk/pep-0401.txt Message-ID: <20090401051723.BA3211E40F1@bag.python.org> Author: brett.cannon Date: Wed Apr 1 07:17:13 2009 New Revision: 70947 Log: Fix a typo. Modified: peps/trunk/pep-0401.txt Modified: peps/trunk/pep-0401.txt ============================================================================== --- peps/trunk/pep-0401.txt (original) +++ peps/trunk/pep-0401.txt Wed Apr 1 07:17:13 2009 @@ -55,7 +55,7 @@ ``<>`` diamond operator as the sole spelling. This change is important enough to be implemented for, and released in Python 3.1. To help transition to this feature, a new future statement, - ``from __future__ import barry_as_flufl`` has been added. + ``from __future__ import barry_as_FLUFL`` has been added. * Recognized that the ``print`` function in Python 3.0 was a horrible, pain-inducing mistake, the FLUFL reinstates the ``print`` From eric at trueblade.com Wed Apr 1 10:55:40 2009 From: eric at trueblade.com (Eric Smith) Date: Wed, 01 Apr 2009 04:55:40 -0400 Subject: [Python-checkins] r70945 - in python/branches/py3k: Grammar/Grammar Include/code.h Include/compile.h Include/parsetok.h Include/pythonrun.h Lib/__future__.py Lib/test/test_flufl.py Parser/parser.c Parser/parsetok.c Parser/tokenizer.c Python/future.c Python/graminit.c Python/pythonrun.c In-Reply-To: <20090401050841.D5B051E40F1@bag.python.org> References: <20090401050841.D5B051E40F1@bag.python.org> Message-ID: <49D32C0C.10802@trueblade.com> brett.cannon wrote: > -comp_op: '<'|'>'|'=='|'>='|'<='|'!='|'in'|'not' 'in'|'is'|'is' 'not' > +comp_op: '<'|'>'|'=='|'>='|'<='|'<>'|'!='|'in'|'not' 'in'|'is'|'is' 'not' The PEP says that '<>' is the one true spelling, yet this leaves in '!='. I realize we want to have a transition period where both are accepted. I'll get busy on switching the standard library over so we can keep the transition period to an ample 12 hour duration. Eric. From solipsis at pitrou.net Wed Apr 1 13:06:35 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 1 Apr 2009 11:06:35 +0000 (UTC) Subject: [Python-checkins] =?utf-8?q?r70945_-_in_python/branches/py3k=3A_G?= =?utf-8?q?rammar/Grammar=09Include/code=2Eh_Include/compile=2Eh_In?= =?utf-8?q?clude/parsetok=2Eh=09Include/pythonrun=2EhLib/=5F=5Ffutu?= =?utf-8?q?re=5F=5F=2EpyLib/test/test=5Fflufl=2Epy=09Parser/parser?= =?utf-8?q?=2Ec_Parser/parsetok=2Ec_Parser/tokenizer=2Ec=09Python/f?= =?utf-8?q?uture=2Ec_Python/graminit=2Ec_Python/pythonrun=2Ec?= References: <20090401050841.D5B051E40F1@bag.python.org> Message-ID: writes: > > Author: brett.cannon > Date: Wed Apr 1 07:08:41 2009 > New Revision: 70945 > > Log: > The BDFL has retired! Long live the FLUFL (Friendly Language Uncle For Life)! Haven't you forgotten the part where Mark Shuttleworth becomes the new release manager (Benjamin Peterson having died after he ate a poisoned memoryview) and the Bazaar developers will make Python 3x faster real soon now? Regards Antoine. 
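The transition Eric describes above is enforced per compilation unit by a compiler flag, which can be checked directly (a minimal sketch mirroring the new Lib/test/test_flufl.py; the "<demo>" filename is arbitrary):

    import __future__

    compile("2 != 3", "<demo>", "exec")              # default: accepted
    try:
        compile("2 <> 3", "<demo>", "exec")          # default: SyntaxError
    except SyntaxError:
        pass
    flufl = __future__.CO_FUTURE_BARRY_AS_BDFL
    compile("2 <> 3", "<demo>", "exec", flufl)       # FLUFL mode: accepted

With the flag set, the reverse also holds: compiling "2 != 3" raises SyntaxError, so exactly one spelling is valid in any given module.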
From python-checkins at python.org Wed Apr 1 13:28:47 2009 From: python-checkins at python.org (kristjan.jonsson) Date: Wed, 1 Apr 2009 13:28:47 +0200 (CEST) Subject: [Python-checkins] r70948 - in python/branches/py3k/Lib/test: regrtest.py test_dbm_gnu.py test_dbm_ndbm.py test_posix.py test_pty.py test_syslog.py test_tk.py Message-ID: <20090401112847.7465A1E4010@bag.python.org> Author: kristjan.jonsson Date: Wed Apr 1 13:28:47 2009 New Revision: 70948 Log: Allow skipping of regression tests not supported on windows. This reduces noise in the regression test suite for py3k on Windows. Modified: python/branches/py3k/Lib/test/regrtest.py python/branches/py3k/Lib/test/test_dbm_gnu.py python/branches/py3k/Lib/test/test_dbm_ndbm.py python/branches/py3k/Lib/test/test_posix.py python/branches/py3k/Lib/test/test_pty.py python/branches/py3k/Lib/test/test_syslog.py python/branches/py3k/Lib/test/test_tk.py Modified: python/branches/py3k/Lib/test/regrtest.py ============================================================================== --- python/branches/py3k/Lib/test/regrtest.py (original) +++ python/branches/py3k/Lib/test/regrtest.py Wed Apr 1 13:28:47 2009 @@ -892,6 +892,7 @@ test_fork1 test_epoll test_dbm_gnu + test_dbm_ndbm test_grp test_ioctl test_largefile Modified: python/branches/py3k/Lib/test/test_dbm_gnu.py ============================================================================== --- python/branches/py3k/Lib/test/test_dbm_gnu.py (original) +++ python/branches/py3k/Lib/test/test_dbm_gnu.py Wed Apr 1 13:28:47 2009 @@ -1,4 +1,5 @@ -import dbm.gnu as gdbm +from test import support +gdbm = support.import_module("dbm.gnu") #skip if not supported import unittest import os from test.support import verbose, TESTFN, run_unittest, unlink Modified: python/branches/py3k/Lib/test/test_dbm_ndbm.py ============================================================================== --- python/branches/py3k/Lib/test/test_dbm_ndbm.py (original) +++ python/branches/py3k/Lib/test/test_dbm_ndbm.py Wed Apr 1 13:28:47 2009 @@ -1,4 +1,5 @@ from test import support +support.import_module("dbm.ndbm") #skip if not supported import unittest import os import random Modified: python/branches/py3k/Lib/test/test_posix.py ============================================================================== --- python/branches/py3k/Lib/test/test_posix.py (original) +++ python/branches/py3k/Lib/test/test_posix.py Wed Apr 1 13:28:47 2009 @@ -1,6 +1,7 @@ "Test posix functions" from test import support +posix = support.import_module('posix') #skip if not supported import time import os @@ -9,8 +10,6 @@ import unittest import warnings -posix = support.import_module('posix') - warnings.filterwarnings('ignore', '.* potential security risk .*', RuntimeWarning) Modified: python/branches/py3k/Lib/test/test_pty.py ============================================================================== --- python/branches/py3k/Lib/test/test_pty.py (original) +++ python/branches/py3k/Lib/test/test_pty.py Wed Apr 1 13:28:47 2009 @@ -1,6 +1,7 @@ +from test import support +pty = support.import_module("pty") #skip if not supported import errno import fcntl -import pty import os import sys import signal Modified: python/branches/py3k/Lib/test/test_syslog.py ============================================================================== --- python/branches/py3k/Lib/test/test_syslog.py (original) +++ python/branches/py3k/Lib/test/test_syslog.py Wed Apr 1 13:28:47 2009 @@ -1,7 +1,7 @@ -import syslog -import unittest from test import support +syslog = 
support.import_module("syslog") #skip if not supported +import unittest # XXX(nnorwitz): This test sucks. I don't know of a platform independent way # to verify that the messages were really logged. Modified: python/branches/py3k/Lib/test/test_tk.py ============================================================================== --- python/branches/py3k/Lib/test/test_tk.py (original) +++ python/branches/py3k/Lib/test/test_tk.py Wed Apr 1 13:28:47 2009 @@ -1,10 +1,11 @@ +from test import support +# Skip test if _tkinter wasn't built. +support.import_module('_tkinter') + import tkinter from tkinter.test import runtktests -from test import support import unittest -# Skip test if _tkinter wasn't built. -support.import_module('_tkinter') import tkinter From python-checkins at python.org Wed Apr 1 14:54:16 2009 From: python-checkins at python.org (david.goodger) Date: Wed, 1 Apr 2009 14:54:16 +0200 (CEST) Subject: [Python-checkins] r70949 - peps/trunk/pep-0401.txt Message-ID: <20090401125416.590A31E4018@bag.python.org> Author: david.goodger Date: Wed Apr 1 14:54:16 2009 New Revision: 70949 Log: typo fix Modified: peps/trunk/pep-0401.txt Modified: peps/trunk/pep-0401.txt ============================================================================== --- peps/trunk/pep-0401.txt (original) +++ peps/trunk/pep-0401.txt Wed Apr 1 14:54:16 2009 @@ -78,7 +78,7 @@ discouraged but tolerated. * Recognized that the Python Software Foundation [4]_ having fulfilled - its mission admirably, is hereby disbanded. They Python Steering + its mission admirably, is hereby disbanded. The Python Steering Union [5]_ (not to be confused with the Python Secret Underground, which emphatically does not exist), is now the sole steward for all of Python's intellectual property. All PSF funds are hereby From python-checkins at python.org Wed Apr 1 15:46:47 2009 From: python-checkins at python.org (ronald.oussoren) Date: Wed, 1 Apr 2009 15:46:47 +0200 (CEST) Subject: [Python-checkins] r70950 - python/branches/release26-maint/Mac/BuildScript/seticon.m Message-ID: <20090401134647.6BA1A1E4010@bag.python.org> Author: ronald.oussoren Date: Wed Apr 1 15:46:47 2009 New Revision: 70950 Log: Merge seticon.m file, for some reason 'svnmerge' didn't actually add this file to the repository (probably due me not being awake enough at the time of the initial merge) Added: python/branches/release26-maint/Mac/BuildScript/seticon.m Added: python/branches/release26-maint/Mac/BuildScript/seticon.m ============================================================================== --- (empty file) +++ python/branches/release26-maint/Mac/BuildScript/seticon.m Wed Apr 1 15:46:47 2009 @@ -0,0 +1,26 @@ +/* + * Simple tool for setting an icon on a file. 
+ */ +#import <Cocoa/Cocoa.h> +#include <stdio.h> + +int main(int argc, char** argv) +{ + if (argc != 3) { + fprintf(stderr, "Usage: seticon ICON TARGET"); + return 1; + } + + NSAutoreleasePool* pool = [[NSAutoreleasePool alloc] init]; + NSString* iconPath = [NSString stringWithUTF8String:argv[1]]; + NSString* filePath = [NSString stringWithUTF8String:argv[2]]; + + [NSApplication sharedApplication]; + + [[NSWorkspace sharedWorkspace] + setIcon: [[NSImage alloc] initWithContentsOfFile: iconPath] + forFile: filePath + options: 0]; + [pool release]; + return 0; +} From python-checkins at python.org Wed Apr 1 16:02:27 2009 From: python-checkins at python.org (georg.brandl) Date: Wed, 1 Apr 2009 16:02:27 +0200 (CEST) Subject: [Python-checkins] r70951 - python/trunk/Misc/ACKS Message-ID: <20090401140227.8EBD81E428B at bag.python.org> Author: georg.brandl Date: Wed Apr 1 16:02:27 2009 New Revision: 70951 Log: Add Maksim, who worked on several issues at the sprint. Modified: python/trunk/Misc/ACKS Modified: python/trunk/Misc/ACKS ============================================================================== --- python/trunk/Misc/ACKS (original) +++ python/trunk/Misc/ACKS Wed Apr 1 16:02:27 2009 @@ -391,6 +391,7 @@ Greg Kochanski Damon Kohler Joseph Koshy +Maksim Kozyarchuk Bob Kras Holger Krekel Michael Kremer From buildbot at python.org Wed Apr 1 16:26:54 2009 From: buildbot at python.org (buildbot at python.org) Date: Wed, 01 Apr 2009 14:26:54 +0000 Subject: [Python-checkins] buildbot failure in x86 osx.5 2.6 Message-ID: <20090401142654.AEB7B1E463E at bag.python.org> The Buildbot has detected a new failure of x86 osx.5 2.6. Full details are available at: http://www.python.org/dev/buildbot/all/x86%20osx.5%202.6/builds/210 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: heller-x86-osx5 Build Reason: Build Source Stamp: [branch branches/release26-maint] HEAD Blamelist: ronald.oussoren BUILD FAILED: failed test Excerpt from the test logfile: sincerely, -The Buildbot From python-checkins at python.org Wed Apr 1 16:59:59 2009 From: python-checkins at python.org (ronald.oussoren) Date: Wed, 1 Apr 2009 16:59:59 +0200 (CEST) Subject: [Python-checkins] r70952 - python/branches/py3k/Mac/BuildScript/build-installer.py Message-ID: <20090401145959.3C9141E4126 at bag.python.org> Author: ronald.oussoren Date: Wed Apr 1 16:59:59 2009 New Revision: 70952 Log: Fix typo in configure line that caused the build installer to not use the right LDFLAGS settings. Modified: python/branches/py3k/Mac/BuildScript/build-installer.py Modified: python/branches/py3k/Mac/BuildScript/build-installer.py ============================================================================== --- python/branches/py3k/Mac/BuildScript/build-installer.py (original) +++ python/branches/py3k/Mac/BuildScript/build-installer.py Wed Apr 1 16:59:59 2009 @@ -651,7 +651,7 @@ 'libraries', 'usr', 'local', 'lib') print "Running configure..."
runCommand("%s -C --enable-framework --enable-universalsdk=%s " - "--with-universal-archs=%s --with-computed-gotos" + "--with-universal-archs=%s --with-computed-gotos " "LDFLAGS='-g -L%s/libraries/usr/local/lib' " "OPT='-g -O3 -I%s/libraries/usr/local/include' 2>&1"%( shellQuote(os.path.join(SRCDIR, 'configure')), shellQuote(SDKPATH), From ncoghlan at gmail.com Wed Apr 1 16:59:58 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 02 Apr 2009 00:59:58 +1000 Subject: [Python-checkins] r70879 - in python/trunk: Lib/test/test_mmap.py Modules/mmapmodule.c In-Reply-To: References: <20090331201404.8FED71E4054 at bag.python.org> Message-ID: <49D3816E.9050504 at gmail.com> Jack diederich wrote: > FYI, PEP-8 > > Imports should usually be on separate lines, e.g.: > > Yes: import os > import sys > > No: import sys, os Of all the PEP 8 guidelines, I think that's the one that I deviate from most frequently. This is especially so for modules with names which are shorter than the import keyword itself (sys, os, re, time being the main offenders there). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From python-checkins at python.org Wed Apr 1 17:13:53 2009 From: python-checkins at python.org (hirokazu.yamamoto) Date: Wed, 1 Apr 2009 17:13:53 +0200 (CEST) Subject: [Python-checkins] r70953 - python/trunk/Modules/_multiprocessing/multiprocessing.h Message-ID: <20090401151353.288AE1E4070 at bag.python.org> Author: hirokazu.yamamoto Date: Wed Apr 1 17:13:52 2009 New Revision: 70953 Log: Fixed compile error on Windows. Modified: python/trunk/Modules/_multiprocessing/multiprocessing.h Modified: python/trunk/Modules/_multiprocessing/multiprocessing.h ============================================================================== --- python/trunk/Modules/_multiprocessing/multiprocessing.h (original) +++ python/trunk/Modules/_multiprocessing/multiprocessing.h Wed Apr 1 17:13:52 2009 @@ -16,6 +16,9 @@ # include <windows.h> # include <winsock2.h> # include <process.h> /* getpid() */ +# ifdef Py_DEBUG +# include <crtdbg.h> +# endif # define SEM_HANDLE HANDLE # define SEM_VALUE_MAX LONG_MAX #else From brett at python.org Wed Apr 1 17:15:54 2009 From: brett at python.org (Brett Cannon) Date: Wed, 1 Apr 2009 08:15:54 -0700 Subject: [Python-checkins] r70945 - in python/branches/py3k: Grammar/Grammar Include/code.h Include/compile.h Include/parsetok.h Include/pythonrun.h Lib/__future__.py Lib/test/test_flufl.py Parser/parser.c Parser/parsetok.c Parser/tokenizer.c Python/futur Message-ID: On Wed, Apr 1, 2009 at 01:55, Eric Smith wrote: > brett.cannon wrote: > >> -comp_op: '<'|'>'|'=='|'>='|'<='|'!='|'in'|'not' 'in'|'is'|'is' 'not' >> +comp_op: '<'|'>'|'=='|'>='|'<='|'<>'|'!='|'in'|'not' 'in'|'is'|'is' 'not' >> > > The PEP says that '<>' is the one true spelling, yet this leaves in '!='. > The __future__ statement takes care of enforcing only one of the operators. > > > I realize we want to have a transition period where both are accepted. I'll > get busy on switching the standard library over so we can keep the > transition period to an ample 12 hour duration. Well, the __future__ statement is permanent, so don't go too nuts. =) -Brett -------------- next part -------------- An HTML attachment was scrubbed...
URL: From python-checkins at python.org Wed Apr 1 17:23:43 2009 From: python-checkins at python.org (georg.brandl) Date: Wed, 1 Apr 2009 17:23:43 +0200 (CEST) Subject: [Python-checkins] r70954 - in python/trunk/Lib: SimpleXMLRPCServer.py test/test_xmlrpc.py Message-ID: <20090401152343.7EA1B1E41E4@bag.python.org> Author: georg.brandl Date: Wed Apr 1 17:23:43 2009 New Revision: 70954 Log: Fix test_xmlrpc and make the CGI handler work with no CONTENT_LENGTH. Modified: python/trunk/Lib/SimpleXMLRPCServer.py python/trunk/Lib/test/test_xmlrpc.py Modified: python/trunk/Lib/SimpleXMLRPCServer.py ============================================================================== --- python/trunk/Lib/SimpleXMLRPCServer.py (original) +++ python/trunk/Lib/SimpleXMLRPCServer.py Wed Apr 1 17:23:43 2009 @@ -600,7 +600,7 @@ # POST data is normally available through stdin try: length = int(os.environ.get('CONTENT_LENGTH', None)) - except ValueError: + except (TypeError, ValueError): length = -1 if request_text is None: request_text = sys.stdin.read(length) Modified: python/trunk/Lib/test/test_xmlrpc.py ============================================================================== --- python/trunk/Lib/test/test_xmlrpc.py (original) +++ python/trunk/Lib/test/test_xmlrpc.py Wed Apr 1 17:23:43 2009 @@ -629,7 +629,11 @@ sys.stdin = open("xmldata.txt", "r") sys.stdout = open(test_support.TESTFN, "w") - self.cgi.handle_request() + os.environ['CONTENT_LENGTH'] = str(len(data)) + try: + self.cgi.handle_request() + finally: + del os.environ['CONTENT_LENGTH'] sys.stdin.close() sys.stdout.close() From python-checkins at python.org Wed Apr 1 17:53:15 2009 From: python-checkins at python.org (georg.brandl) Date: Wed, 1 Apr 2009 17:53:15 +0200 (CEST) Subject: [Python-checkins] r70955 - in python/branches/py3k: Doc/library/csv.rst Lib/test/test_xmlrpc.py Lib/xmlrpc/server.py Message-ID: <20090401155315.6B9D21E4079@bag.python.org> Author: georg.brandl Date: Wed Apr 1 17:53:15 2009 New Revision: 70955 Log: #5636: fix next -> __next__ in csv reader docs. Modified: python/branches/py3k/ (props changed) python/branches/py3k/Doc/library/csv.rst python/branches/py3k/Lib/test/test_xmlrpc.py python/branches/py3k/Lib/xmlrpc/server.py Modified: python/branches/py3k/Doc/library/csv.rst ============================================================================== --- python/branches/py3k/Doc/library/csv.rst (original) +++ python/branches/py3k/Doc/library/csv.rst Wed Apr 1 17:53:15 2009 @@ -351,14 +351,13 @@ Reader objects (:class:`DictReader` instances and objects returned by the :func:`reader` function) have the following public methods: - -.. method:: csvreader.next() +.. method:: csvreader.__next__() Return the next row of the reader's iterable object as a list, parsed according - to the current dialect. + to the current dialect. Usually you should call this as ``next(reader)``. -Reader objects have the following public attributes: +Reader objects have the following public attributes: .. attribute:: csvreader.dialect @@ -371,10 +370,8 @@ number of records returned, as records can span multiple lines. - DictReader objects have the following public attribute: - .. 
attribute:: csvreader.fieldnames If not passed as a parameter when creating the object, this attribute is Modified: python/branches/py3k/Lib/test/test_xmlrpc.py ============================================================================== --- python/branches/py3k/Lib/test/test_xmlrpc.py (original) +++ python/branches/py3k/Lib/test/test_xmlrpc.py Wed Apr 1 17:53:15 2009 @@ -598,7 +598,11 @@ sys.stdin = open("xmldata.txt", "r") sys.stdout = open(support.TESTFN, "w") - self.cgi.handle_request() + os.environ['CONTENT_LENGTH'] = str(len(data)) + try: + self.cgi.handle_request() + finally: + del os.environ['CONTENT_LENGTH'] sys.stdin.close() sys.stdout.close() Modified: python/branches/py3k/Lib/xmlrpc/server.py ============================================================================== --- python/branches/py3k/Lib/xmlrpc/server.py (original) +++ python/branches/py3k/Lib/xmlrpc/server.py Wed Apr 1 17:53:15 2009 @@ -590,7 +590,7 @@ # POST data is normally available through stdin try: length = int(os.environ.get('CONTENT_LENGTH', None)) - except ValueError: + except (ValueError, TypeError): length = -1 if request_text is None: request_text = sys.stdin.read(length) From jnoller at gmail.com Wed Apr 1 17:53:28 2009 From: jnoller at gmail.com (Jesse Noller) Date: Wed, 1 Apr 2009 10:53:28 -0500 Subject: [Python-checkins] r70953 - python/trunk/Modules/_multiprocessing/multiprocessing.h In-Reply-To: <20090401151353.288AE1E4070@bag.python.org> References: <20090401151353.288AE1E4070@bag.python.org> Message-ID: <4222a8490904010853q860f951o5cbe3765acfa01b4@mail.gmail.com> Hi Hiro, I saw this checkin - talking to Martin this shouldn't be an issue - what version of VStudio and Windows are you using? Additionally, if this is an issue, this change needs to be merged to all of the active branches. -jesse On Wed, Apr 1, 2009 at 10:13 AM, hirokazu.yamamoto wrote: > Author: hirokazu.yamamoto > Date: Wed Apr ?1 17:13:52 2009 > New Revision: 70953 > > Log: > Fixed compile error on windows. > > Modified: > ? python/trunk/Modules/_multiprocessing/multiprocessing.h > > Modified: python/trunk/Modules/_multiprocessing/multiprocessing.h > ============================================================================== > --- python/trunk/Modules/_multiprocessing/multiprocessing.h ? ? (original) > +++ python/trunk/Modules/_multiprocessing/multiprocessing.h ? ? Wed Apr ?1 17:13:52 2009 > @@ -16,6 +16,9 @@ > ?# ?include > ?# ?include > ?# ?include ? ? ? ? ? ? ?/* getpid() */ > +# ?ifdef Py_DEBUG > +# ? ?include > +# ?endif > ?# ?define SEM_HANDLE HANDLE > ?# ?define SEM_VALUE_MAX LONG_MAX > ?#else > _______________________________________________ > Python-checkins mailing list > Python-checkins at python.org > http://mail.python.org/mailman/listinfo/python-checkins > From python-checkins at python.org Wed Apr 1 18:00:34 2009 From: python-checkins at python.org (brett.cannon) Date: Wed, 1 Apr 2009 18:00:34 +0200 (CEST) Subject: [Python-checkins] r70956 - in python/trunk: Lib/cgitb.py Misc/NEWS Message-ID: <20090401160034.CA2E51E4070@bag.python.org> Author: brett.cannon Date: Wed Apr 1 18:00:34 2009 New Revision: 70956 Log: The cgitb module had imports in its functions. This can cause deadlock with the import lock if called from within a thread that was triggered by an import. Partially fixes issue #1665206. 
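The deadlock the log describes needs nothing more exotic than a module whose import starts a thread that itself performs an import. A minimal sketch of the failure mode (the module and names are hypothetical; Python 2.x of this era serializes all imports behind a single process-wide import lock):

    # deadlocker.py: hypothetical demonstration module, do not ship
    import threading

    def handler():
        import tokenize              # function-level import, as old cgitb did

    t = threading.Thread(target=handler)
    t.start()
    t.join()                         # executed at import time: this thread
                                     # holds the import lock, handler() blocks
                                     # waiting for it, and join() never returns

Hoisting cgitb's imports to module level means they complete while cgitb itself is being imported, so a later call from a worker thread no longer needs to take the import lock.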
Modified: python/trunk/Lib/cgitb.py python/trunk/Misc/NEWS Modified: python/trunk/Lib/cgitb.py ============================================================================== --- python/trunk/Lib/cgitb.py (original) +++ python/trunk/Lib/cgitb.py Wed Apr 1 18:00:34 2009 @@ -19,13 +19,19 @@ for you, call cgitb.handler(). The optional argument to handler() is a 3-item tuple (etype, evalue, etb) just like the value of sys.exc_info(). The default handler displays output as HTML. -""" - -__author__ = 'Ka-Ping Yee' - -__version__ = '$Revision$' +""" +import inspect +import keyword +import linecache +import os +import pydoc import sys +import tempfile +import time +import tokenize +import traceback +import types def reset(): """Return a string that resets the CGI and browser to a known state.""" @@ -74,7 +80,6 @@ def scanvars(reader, frame, locals): """Scan one logical line of Python and look up values of variables used.""" - import tokenize, keyword vars, lasttoken, parent, prefix, value = [], None, None, '', __UNDEF__ for ttype, token, start, end, line in tokenize.generate_tokens(reader): if ttype == tokenize.NEWLINE: break @@ -96,8 +101,6 @@ def html((etype, evalue, etb), context=5): """Return a nice HTML document describing a given traceback.""" - import os, types, time, traceback, linecache, inspect, pydoc - if type(etype) is types.ClassType: etype = etype.__name__ pyver = 'Python ' + sys.version.split()[0] + ': ' + sys.executable @@ -173,7 +176,6 @@ value = pydoc.html.repr(getattr(evalue, name)) exception.append('\n
%s%s =\n%s' % (indent, name, value)) - import traceback return head + ''.join(frames) + ''.join(exception) + ''' @@ -188,8 +190,6 @@ def text((etype, evalue, etb), context=5): """Return a plain text document describing a given traceback.""" - import os, types, time, traceback, linecache, inspect, pydoc - if type(etype) is types.ClassType: etype = etype.__name__ pyver = 'Python ' + sys.version.split()[0] + ': ' + sys.executable @@ -245,7 +245,6 @@ value = pydoc.text.repr(getattr(evalue, name)) exception.append('\n%s%s = %s' % (" "*4, name, value)) - import traceback return head + ''.join(frames) + ''.join(exception) + ''' The above is a description of an error in a Python program. Here is @@ -278,7 +277,6 @@ try: doc = formatter(info, self.context) except: # just in case something goes wrong - import traceback doc = ''.join(traceback.format_exception(*info)) plain = True @@ -292,7 +290,6 @@ self.file.write('
<p>
A problem occurred in a Python script.\n') if self.logdir is not None: - import os, tempfile suffix = ['.txt', '.html'][self.format=="html"] (fd, path) = tempfile.mkstemp(suffix=suffix, dir=self.logdir) try: Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Wed Apr 1 18:00:34 2009 @@ -200,6 +200,10 @@ Library ------- +- Issue #1665206 (partially): Move imports in cgitb to the top of the module + instead of performing them in functions. Helps prevent import deadlocking in + threads. + - Actually make the SimpleXMLRPCServer CGI handler work. - Issue #2522: locale.format now checks its first argument to ensure it has From python-checkins at python.org Wed Apr 1 18:06:02 2009 From: python-checkins at python.org (brett.cannon) Date: Wed, 1 Apr 2009 18:06:02 +0200 (CEST) Subject: [Python-checkins] r70957 - in python/branches/py3k: Lib/cgitb.py Misc/NEWS Message-ID: <20090401160602.1344C1E4070@bag.python.org> Author: brett.cannon Date: Wed Apr 1 18:06:01 2009 New Revision: 70957 Log: Merged revisions 70956 via svnmerge from svn+ssh://pythondev at svn.python.org/python/trunk ........ r70956 | brett.cannon | 2009-04-01 09:00:34 -0700 (Wed, 01 Apr 2009) | 5 lines The cgitb module had imports in its functions. This can cause deadlock with the import lock if called from within a thread that was triggered by an import. Partially fixes issue #1665206. ........ Modified: python/branches/py3k/ (props changed) python/branches/py3k/Lib/cgitb.py python/branches/py3k/Misc/NEWS Modified: python/branches/py3k/Lib/cgitb.py ============================================================================== --- python/branches/py3k/Lib/cgitb.py (original) +++ python/branches/py3k/Lib/cgitb.py Wed Apr 1 18:06:01 2009 @@ -19,13 +19,19 @@ for you, call cgitb.handler(). The optional argument to handler() is a 3-item tuple (etype, evalue, etb) just like the value of sys.exc_info(). The default handler displays output as HTML. -""" - -__author__ = 'Ka-Ping Yee' - -__version__ = '$Revision$' +""" +import inspect +import keyword +import linecache +import os +import pydoc import sys +import tempfile +import time +import tokenize +import traceback +import types def reset(): """Return a string that resets the CGI and browser to a known state.""" @@ -74,7 +80,6 @@ def scanvars(reader, frame, locals): """Scan one logical line of Python and look up values of variables used.""" - import tokenize, keyword vars, lasttoken, parent, prefix, value = [], None, None, '', __UNDEF__ for ttype, token, start, end, line in tokenize.generate_tokens(reader): if ttype == tokenize.NEWLINE: break @@ -96,8 +101,6 @@ def html(einfo, context=5): """Return a nice HTML document describing a given traceback.""" - import os, time, traceback, linecache, inspect, pydoc - etype, evalue, etb = einfo if isinstance(etype, type): etype = etype.__name__ @@ -173,7 +176,6 @@ value = pydoc.html.repr(getattr(evalue, name)) exception.append('\n
%s%s =\n%s' % (indent, name, value)) - import traceback return head + ''.join(frames) + ''.join(exception) + ''' @@ -188,8 +190,6 @@ def text(einfo, context=5): """Return a plain text document describing a given traceback.""" - import os, time, traceback, linecache, inspect, pydoc - etype, evalue, etb = einfo if isinstance(etype, type): etype = etype.__name__ @@ -245,7 +245,6 @@ value = pydoc.text.repr(getattr(evalue, name)) exception.append('\n%s%s = %s' % (" "*4, name, value)) - import traceback return head + ''.join(frames) + ''.join(exception) + ''' The above is a description of an error in a Python program. Here is @@ -278,7 +277,6 @@ try: doc = formatter(info, self.context) except: # just in case something goes wrong - import traceback doc = ''.join(traceback.format_exception(*info)) plain = True @@ -292,7 +290,6 @@ self.file.write('
<p>
A problem occurred in a Python script.\n') if self.logdir is not None: - import os, tempfile suffix = ['.txt', '.html'][self.format=="html"] (fd, path) = tempfile.mkstemp(suffix=suffix, dir=self.logdir) try: Modified: python/branches/py3k/Misc/NEWS ============================================================================== --- python/branches/py3k/Misc/NEWS (original) +++ python/branches/py3k/Misc/NEWS Wed Apr 1 18:06:01 2009 @@ -286,6 +286,10 @@ Library ------- +- Issue #1665206 (partially): Move imports in cgitb to the top of the module + instead of performing them in functions. Helps prevent import deadlocking in + threads. + - Issue #2522: locale.format now checks its first argument to ensure it has been passed only one pattern, avoiding mysterious errors where it appeared that it was failing to do localization. From python-checkins at python.org Wed Apr 1 18:08:35 2009 From: python-checkins at python.org (kristjan.jonsson) Date: Wed, 1 Apr 2009 18:08:35 +0200 (CEST) Subject: [Python-checkins] r70958 - python/trunk/Modules/posixmodule.c Message-ID: <20090401160835.129D71E4070@bag.python.org> Author: kristjan.jonsson Date: Wed Apr 1 18:08:34 2009 New Revision: 70958 Log: http://bugs.python.org/issue5623 Dynamically discoverd the size of the ioinfo struct used by the crt for its file descriptors. This should work across all flavors of the CRT. Thanks to Amaury Forgeot d'Arc Needs porting to 3.1 Modified: python/trunk/Modules/posixmodule.c Modified: python/trunk/Modules/posixmodule.c ============================================================================== --- python/trunk/Modules/posixmodule.c (original) +++ python/trunk/Modules/posixmodule.c Wed Apr 1 18:08:34 2009 @@ -269,6 +269,7 @@ #include #endif #include "osdefs.h" +#include #include #include /* for ShellExecute() */ #define popen _popen @@ -364,41 +365,15 @@ * (all of this is to avoid globally modifying the CRT behaviour using * _set_invalid_parameter_handler() and _CrtSetReportMode()) */ -#if _MSC_VER >= 1500 /* VS 2008 */ -typedef struct { - intptr_t osfhnd; - char osfile; - char pipech; - int lockinitflag; - CRITICAL_SECTION lock; -#ifndef _SAFECRT_IMPL - char textmode : 7; - char unicode : 1; - char pipech2[2]; - __int64 startpos; - BOOL utf8translations; - char dbcsBuffer; - BOOL dbcsBufferUsed; -#endif /* _SAFECRT_IMPL */ - } ioinfo; -#elif _MSC_VER >= 1400 /* VS 2005 */ +/* The actual size of the structure is determined at runtime. + * Only the first items must be present. + */ typedef struct { intptr_t osfhnd; char osfile; - char pipech; - int lockinitflag; - CRITICAL_SECTION lock; -#ifndef _SAFECRT_IMPL - char textmode : 7; - char unicode : 1; - char pipech2[2]; - __int64 startpos; - BOOL utf8translations; -#endif /* _SAFECRT_IMPL */ - } ioinfo; -#endif +} my_ioinfo; -extern __declspec(dllimport) ioinfo * __pioinfo[]; +extern __declspec(dllimport) char * __pioinfo[]; #define IOINFO_L2E 5 #define IOINFO_ARRAY_ELTS (1 << IOINFO_L2E) #define IOINFO_ARRAYS 64 @@ -412,6 +387,19 @@ { const int i1 = fd >> IOINFO_L2E; const int i2 = fd & ((1 << IOINFO_L2E) - 1); + + static int sizeof_ioinfo = 0; + + /* Determine the actual size of the ioinfo structure, + * as used by the CRT loaded in memory + */ + if (sizeof_ioinfo == 0 && __pioinfo[0] != NULL) { + sizeof_ioinfo = _msize(__pioinfo[0]) / IOINFO_ARRAY_ELTS; + } + if (sizeof_ioinfo == 0) { + /* This should not happen... 
*/ + goto fail; + } /* See that it isn't a special CLEAR fileno */ if (fd != _NO_CONSOLE_FILENO) { @@ -420,10 +408,13 @@ */ if (0 <= i1 && i1 < IOINFO_ARRAYS && __pioinfo[i1] != NULL) { /* finally, check that the file is open */ - if (__pioinfo[i1][i2].osfile & FOPEN) + my_ioinfo* info = (my_ioinfo*)(__pioinfo[i1] + i2 * sizeof_ioinfo); + if (info->osfile & FOPEN) { return 1; + } } } + fail: errno = EBADF; return 0; } From python-checkins at python.org Wed Apr 1 18:39:21 2009 From: python-checkins at python.org (r.david.murray) Date: Wed, 1 Apr 2009 18:39:21 +0200 (CEST) Subject: [Python-checkins] r70959 - python/branches/py3k/Lib/test/test_tk.py Message-ID: <20090401163921.EF30A1E40A7@bag.python.org> Author: r.david.murray Date: Wed Apr 1 18:39:21 2009 New Revision: 70959 Log: Remove redundant import of tkinter. Modified: python/branches/py3k/Lib/test/test_tk.py Modified: python/branches/py3k/Lib/test/test_tk.py ============================================================================== --- python/branches/py3k/Lib/test/test_tk.py (original) +++ python/branches/py3k/Lib/test/test_tk.py Wed Apr 1 18:39:21 2009 @@ -6,9 +6,6 @@ from tkinter.test import runtktests import unittest - -import tkinter - try: tkinter.Button() except tkinter.TclError as msg: From python-checkins at python.org Wed Apr 1 18:42:19 2009 From: python-checkins at python.org (jesse.noller) Date: Wed, 1 Apr 2009 18:42:19 +0200 (CEST) Subject: [Python-checkins] r70960 - python/trunk/Doc/library/multiprocessing.rst Message-ID: <20090401164219.9FA9E1E4070@bag.python.org> Author: jesse.noller Date: Wed Apr 1 18:42:19 2009 New Revision: 70960 Log: Issue 3270: document Listener address restrictions on windows Modified: python/trunk/Doc/library/multiprocessing.rst Modified: python/trunk/Doc/library/multiprocessing.rst ============================================================================== --- python/trunk/Doc/library/multiprocessing.rst (original) +++ python/trunk/Doc/library/multiprocessing.rst Wed Apr 1 18:42:19 2009 @@ -1705,6 +1705,12 @@ *address* is the address to be used by the bound socket or named pipe of the listener object. + .. note:: + + If an address of '0.0.0.0' is used, the address will not be a connectable + end point on Windows. If you require a connectable end-point, + you should use '127.0.0.1'. + *family* is the type of socket (or named pipe) to use. This can be one of the strings ``'AF_INET'`` (for a TCP socket), ``'AF_UNIX'`` (for a Unix domain socket) or ``'AF_PIPE'`` (for a Windows named pipe). Of these only From python-checkins at python.org Wed Apr 1 18:44:24 2009 From: python-checkins at python.org (jesse.noller) Date: Wed, 1 Apr 2009 18:44:24 +0200 (CEST) Subject: [Python-checkins] r70961 - in python/branches/release26-maint: Doc/library/multiprocessing.rst Message-ID: <20090401164424.AB9121E4070@bag.python.org> Author: jesse.noller Date: Wed Apr 1 18:44:24 2009 New Revision: 70961 Log: Merged revisions 70960 via svnmerge from svn+ssh://pythondev at svn.python.org/python/trunk ........ r70960 | jesse.noller | 2009-04-01 11:42:19 -0500 (Wed, 01 Apr 2009) | 1 line Issue 3270: document Listener address restrictions on windows ........ 
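The restriction in the note above comes down to the address handed to Listener; a minimal sketch of the portable local pattern (the port number is arbitrary):

    from multiprocessing.connection import Listener, Client

    # Listener(('0.0.0.0', 6000)) would bind every interface, but on
    # Windows '0.0.0.0' is not an address a Client can connect back to;
    # '127.0.0.1' is both bindable and connectable for local use.
    address = ('127.0.0.1', 6000)        # arbitrary free port
    listener = Listener(address)
    client = Client(address)
    conn = listener.accept()
    conn.send('ping')
    print client.recv()                  # -> 'ping'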
Modified: python/branches/release26-maint/ (props changed) python/branches/release26-maint/Doc/library/multiprocessing.rst Modified: python/branches/release26-maint/Doc/library/multiprocessing.rst ============================================================================== --- python/branches/release26-maint/Doc/library/multiprocessing.rst (original) +++ python/branches/release26-maint/Doc/library/multiprocessing.rst Wed Apr 1 18:44:24 2009 @@ -1697,6 +1697,12 @@ *address* is the address to be used by the bound socket or named pipe of the listener object. + .. note:: + + If an address of '0.0.0.0' is used, the address will not be a connectable + end point on Windows. If you require a connectable end-point, + you should use '127.0.0.1'. + *family* is the type of socket (or named pipe) to use. This can be one of the strings ``'AF_INET'`` (for a TCP socket), ``'AF_UNIX'`` (for a Unix domain socket) or ``'AF_PIPE'`` (for a Windows named pipe). Of these only From ocean-city at m2.ccsnet.ne.jp Wed Apr 1 19:03:00 2009 From: ocean-city at m2.ccsnet.ne.jp (Hirokazu Yamamoto) Date: Thu, 02 Apr 2009 02:03:00 +0900 Subject: [Python-checkins] r70953 - python/trunk/Modules/_multiprocessing/multiprocessing.h In-Reply-To: <4222a8490904010853q860f951o5cbe3765acfa01b4@mail.gmail.com> References: <20090401151353.288AE1E4070@bag.python.org> <4222a8490904010853q860f951o5cbe3765acfa01b4@mail.gmail.com> Message-ID: <49D39E44.2050809@m2.ccsnet.ne.jp> Jesse Noller wrote: > Hi Hiro, I saw this checkin - talking to Martin this shouldn't be an > issue - what version of VStudio and Windows are you using? I'm using VC6, but I can see this compile error on buildbot too. http://www.python.org/dev/buildbot/trunk.stable/x86%20XP-4%20trunk/builds/2010/step-compile/0 > Additionally, if this is an issue, this change needs to be merged to > all of the active branches. Yes, I was waiting for buildbot result. And I had a feeling that some reaction I could get. (ex: this include should be done in win32_functions.c rather than header file...?) ;-) From python-checkins at python.org Wed Apr 1 19:07:16 2009 From: python-checkins at python.org (brett.cannon) Date: Wed, 1 Apr 2009 19:07:16 +0200 (CEST) Subject: [Python-checkins] r70962 - python/trunk/Misc/developers.txt Message-ID: <20090401170716.C3C041E4070@bag.python.org> Author: brett.cannon Date: Wed Apr 1 19:07:16 2009 New Revision: 70962 Log: Ron DuPlain was given commit privileges at PyCon 2009 to work on 3to2. Modified: python/trunk/Misc/developers.txt Modified: python/trunk/Misc/developers.txt ============================================================================== --- python/trunk/Misc/developers.txt (original) +++ python/trunk/Misc/developers.txt Wed Apr 1 19:07:16 2009 @@ -17,6 +17,8 @@ Permissions History ------------------- +- Ron DuPlain was given commit privileges at PyCon 2009 by BAC to work on 3to2. + - Several developers of alternative Python implementations where given access for test suite and library adaptions by MvL: Allison Randal (Parrot), Michael Foord (IronPython), From python-checkins at python.org Wed Apr 1 19:46:01 2009 From: python-checkins at python.org (georg.brandl) Date: Wed, 1 Apr 2009 19:46:01 +0200 (CEST) Subject: [Python-checkins] r70963 - python/trunk/Lib/glob.py Message-ID: <20090401174601.893F71E474A@bag.python.org> Author: georg.brandl Date: Wed Apr 1 19:46:01 2009 New Revision: 70963 Log: #5655: fix docstring oversight. 
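The oversight matters because glob() and iglob() differ precisely in their return type; a quick illustration (assumes the current directory contains at least one .py file):

    import glob

    eager = glob.glob('*.py')       # a list, built up front
    lazy = glob.iglob('*.py')       # a generator, evaluated on demand
    print isinstance(eager, list)   # True
    print iter(lazy) is lazy        # True: it is its own iterator
    print lazy.next()               # first match only (2.x spelling of next(lazy))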
Modified: python/trunk/Lib/glob.py Modified: python/trunk/Lib/glob.py ============================================================================== --- python/trunk/Lib/glob.py (original) +++ python/trunk/Lib/glob.py Wed Apr 1 19:46:01 2009 @@ -16,7 +16,7 @@ return list(iglob(pathname)) def iglob(pathname): - """Return a list of paths matching a pathname pattern. + """Return an iterator which yields the paths matching a pathname pattern. The pattern may contain simple shell-style wildcards a la fnmatch. From python-checkins at python.org Wed Apr 1 19:52:13 2009 From: python-checkins at python.org (brett.cannon) Date: Wed, 1 Apr 2009 19:52:13 +0200 (CEST) Subject: [Python-checkins] r70964 - python/trunk/Misc/developers.txt Message-ID: <20090401175213.C55581E437B@bag.python.org> Author: brett.cannon Date: Wed Apr 1 19:52:13 2009 New Revision: 70964 Log: Paul Kippes was given commit privileges to work on 3to2. Modified: python/trunk/Misc/developers.txt Modified: python/trunk/Misc/developers.txt ============================================================================== --- python/trunk/Misc/developers.txt (original) +++ python/trunk/Misc/developers.txt Wed Apr 1 19:52:13 2009 @@ -17,6 +17,8 @@ Permissions History ------------------- +- Paul Kippes was given commit privileges at PyCon 2009 by BAC to work on 3to2. + - Ron DuPlain was given commit privileges at PyCon 2009 by BAC to work on 3to2. - Several developers of alternative Python implementations where From python-checkins at python.org Wed Apr 1 20:03:59 2009 From: python-checkins at python.org (brett.cannon) Date: Wed, 1 Apr 2009 20:03:59 +0200 (CEST) Subject: [Python-checkins] r70965 - in python/trunk: Lib/test/test_warnings.py Misc/NEWS Python/_warnings.c Message-ID: <20090401180359.60DDD1E4070@bag.python.org> Author: brett.cannon Date: Wed Apr 1 20:03:59 2009 New Revision: 70965 Log: _warnings was importing itself to get an attribute. That's bad if warnings gets called in a thread that was spawned by an import itself. Last part to close #1665206. Modified: python/trunk/Lib/test/test_warnings.py python/trunk/Misc/NEWS python/trunk/Python/_warnings.c Modified: python/trunk/Lib/test/test_warnings.py ============================================================================== --- python/trunk/Lib/test/test_warnings.py (original) +++ python/trunk/Lib/test/test_warnings.py Wed Apr 1 20:03:59 2009 @@ -413,6 +413,41 @@ finally: self.module.onceregistry = original_registry + def test_default_action(self): + # Replacing or removing defaultaction should be okay. + message = UserWarning("defaultaction test") + original = self.module.defaultaction + try: + with original_warnings.catch_warnings(record=True, + module=self.module) as w: + self.module.resetwarnings() + registry = {} + self.module.warn_explicit(message, UserWarning, "", 42, + registry=registry) + self.assertEqual(w[-1].message, message) + self.assertEqual(len(w), 1) + self.assertEqual(len(registry), 1) + del w[:] + # Test removal. + del self.module.defaultaction + __warningregistry__ = {} + registry = {} + self.module.warn_explicit(message, UserWarning, "", 43, + registry=registry) + self.assertEqual(w[-1].message, message) + self.assertEqual(len(w), 1) + self.assertEqual(len(registry), 1) + del w[:] + # Test setting. 
+ self.module.defaultaction = "ignore" + __warningregistry__ = {} + registry = {} + self.module.warn_explicit(message, UserWarning, "", 44, + registry=registry) + self.assertEqual(len(w), 0) + finally: + self.module.defaultaction = original + def test_showwarning_missing(self): # Test that showwarning() missing is okay. text = 'del showwarning test' Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Wed Apr 1 20:03:59 2009 @@ -12,6 +12,8 @@ Core and Builtins ----------------- +- Issue #1665206: Remove the last eager import in _warnings.c and make it lazy. + - Issue #4865: On MacOSX /Library/Python/2.7/site-packages is added to the end sys.path, for compatibility with the system install of Python. Modified: python/trunk/Python/_warnings.c ============================================================================== --- python/trunk/Python/_warnings.c (original) +++ python/trunk/Python/_warnings.c Wed Apr 1 20:03:59 2009 @@ -2,7 +2,6 @@ #include "frameobject.h" #define MODULE_NAME "_warnings" -#define DEFAULT_ACTION_NAME "default_action" PyDoc_STRVAR(warnings__doc__, MODULE_NAME " provides basic warning filtering support.\n" @@ -12,6 +11,7 @@ get_warnings_attr() will reset these variables accordingly. */ static PyObject *_filters; /* List */ static PyObject *_once_registry; /* Dict */ +static PyObject *_default_action; /* String */ static int @@ -78,12 +78,31 @@ } +static PyObject * +get_default_action(void) +{ + PyObject *default_action; + + default_action = get_warnings_attr("defaultaction"); + if (default_action == NULL) { + if (PyErr_Occurred()) { + return NULL; + } + return _default_action; + } + + Py_DECREF(_default_action); + _default_action = default_action; + return default_action; +} + + /* The item is a borrowed reference. */ static const char * get_filter(PyObject *category, PyObject *text, Py_ssize_t lineno, PyObject *module, PyObject **item) { - PyObject *action, *m, *d; + PyObject *action; Py_ssize_t i; PyObject *warnings_filters; @@ -135,22 +154,17 @@ return PyString_AsString(action); } - m = PyImport_ImportModule(MODULE_NAME); - if (m == NULL) - return NULL; - d = PyModule_GetDict(m); - Py_DECREF(m); - if (d == NULL) - return NULL; - action = PyDict_GetItemString(d, DEFAULT_ACTION_NAME); - if (action != NULL) + action = get_default_action(); + if (action != NULL) { return PyString_AsString(action); + } PyErr_SetString(PyExc_ValueError, - MODULE_NAME "." 
DEFAULT_ACTION_NAME " not found"); + MODULE_NAME ".defaultaction not found"); return NULL; } + static int already_warned(PyObject *registry, PyObject *key, int should_set) { @@ -854,7 +868,7 @@ PyMODINIT_FUNC _PyWarnings_Init(void) { - PyObject *m, *default_action; + PyObject *m; m = Py_InitModule3(MODULE_NAME, warnings_functions, warnings__doc__); if (m == NULL) @@ -874,9 +888,9 @@ if (PyModule_AddObject(m, "once_registry", _once_registry) < 0) return; - default_action = PyString_InternFromString("default"); - if (default_action == NULL) + _default_action = PyString_FromString("default"); + if (_default_action == NULL) return; - if (PyModule_AddObject(m, DEFAULT_ACTION_NAME, default_action) < 0) + if (PyModule_AddObject(m, "default_action", _default_action) < 0) return; } From python-checkins at python.org Wed Apr 1 20:13:08 2009 From: python-checkins at python.org (brett.cannon) Date: Wed, 1 Apr 2009 20:13:08 +0200 (CEST) Subject: [Python-checkins] r70966 - in python/branches/py3k: Lib/test/test_warnings.py Misc/NEWS Python/_warnings.c Message-ID: <20090401181308.2F2D11E4070@bag.python.org> Author: brett.cannon Date: Wed Apr 1 20:13:07 2009 New Revision: 70966 Log: Merged revisions 70965 via svnmerge from svn+ssh://pythondev at svn.python.org/python/trunk ........ r70965 | brett.cannon | 2009-04-01 11:03:59 -0700 (Wed, 01 Apr 2009) | 5 lines _warnings was importing itself to get an attribute. That's bad if warnings gets called in a thread that was spawned by an import itself. Last part to close #1665206. ........ Modified: python/branches/py3k/ (props changed) python/branches/py3k/Lib/test/test_warnings.py python/branches/py3k/Misc/NEWS python/branches/py3k/Python/_warnings.c Modified: python/branches/py3k/Lib/test/test_warnings.py ============================================================================== --- python/branches/py3k/Lib/test/test_warnings.py (original) +++ python/branches/py3k/Lib/test/test_warnings.py Wed Apr 1 20:13:07 2009 @@ -423,6 +423,41 @@ finally: self.module.onceregistry = original_registry + def test_default_action(self): + # Replacing or removing defaultaction should be okay. + message = UserWarning("defaultaction test") + original = self.module.defaultaction + try: + with original_warnings.catch_warnings(record=True, + module=self.module) as w: + self.module.resetwarnings() + registry = {} + self.module.warn_explicit(message, UserWarning, "", 42, + registry=registry) + self.assertEqual(w[-1].message, message) + self.assertEqual(len(w), 1) + self.assertEqual(len(registry), 1) + del w[:] + # Test removal. + del self.module.defaultaction + __warningregistry__ = {} + registry = {} + self.module.warn_explicit(message, UserWarning, "", 43, + registry=registry) + self.assertEqual(w[-1].message, message) + self.assertEqual(len(w), 1) + self.assertEqual(len(registry), 1) + del w[:] + # Test setting. + self.module.defaultaction = "ignore" + __warningregistry__ = {} + registry = {} + self.module.warn_explicit(message, UserWarning, "", 44, + registry=registry) + self.assertEqual(len(w), 0) + finally: + self.module.defaultaction = original + def test_showwarning_missing(self): # Test that showwarning() missing is okay. 
text = 'del showwarning test' Modified: python/branches/py3k/Misc/NEWS ============================================================================== --- python/branches/py3k/Misc/NEWS (original) +++ python/branches/py3k/Misc/NEWS Wed Apr 1 20:13:07 2009 @@ -12,6 +12,8 @@ Core and Builtins ----------------- +- Issue #1665206: Remove the last eager import in _warnings.c and make it lazy. + - Fix a segfault when running test_exceptions with coverage, caused by insufficient checks in accessors of Exception.__context__. Modified: python/branches/py3k/Python/_warnings.c ============================================================================== --- python/branches/py3k/Python/_warnings.c (original) +++ python/branches/py3k/Python/_warnings.c Wed Apr 1 20:13:07 2009 @@ -2,7 +2,6 @@ #include "frameobject.h" #define MODULE_NAME "_warnings" -#define DEFAULT_ACTION_NAME "default_action" PyDoc_STRVAR(warnings__doc__, MODULE_NAME " provides basic warning filtering support.\n" @@ -12,6 +11,7 @@ get_warnings_attr() will reset these variables accordingly. */ static PyObject *_filters; /* List */ static PyObject *_once_registry; /* Dict */ +static PyObject *_default_action; /* String */ static int @@ -78,12 +78,31 @@ } +static PyObject * +get_default_action(void) +{ + PyObject *default_action; + + default_action = get_warnings_attr("defaultaction"); + if (default_action == NULL) { + if (PyErr_Occurred()) { + return NULL; + } + return _default_action; + } + + Py_DECREF(_default_action); + _default_action = default_action; + return default_action; +} + + /* The item is a borrowed reference. */ static const char * get_filter(PyObject *category, PyObject *text, Py_ssize_t lineno, PyObject *module, PyObject **item) { - PyObject *action, *m, *d; + PyObject *action; Py_ssize_t i; PyObject *warnings_filters; @@ -135,22 +154,17 @@ return _PyUnicode_AsString(action); } - m = PyImport_ImportModule(MODULE_NAME); - if (m == NULL) - return NULL; - d = PyModule_GetDict(m); - Py_DECREF(m); - if (d == NULL) - return NULL; - action = PyDict_GetItemString(d, DEFAULT_ACTION_NAME); - if (action != NULL) + action = get_default_action(); + if (action != NULL) { return _PyUnicode_AsString(action); + } PyErr_SetString(PyExc_ValueError, - MODULE_NAME "." DEFAULT_ACTION_NAME " not found"); + MODULE_NAME ".defaultaction not found"); return NULL; } + static int already_warned(PyObject *registry, PyObject *key, int should_set) { @@ -874,7 +888,7 @@ PyMODINIT_FUNC _PyWarnings_Init(void) { - PyObject *m, *default_action; + PyObject *m; m = PyModule_Create(&warningsmodule); if (m == NULL) @@ -894,10 +908,10 @@ if (PyModule_AddObject(m, "once_registry", _once_registry) < 0) return NULL; - default_action = PyUnicode_InternFromString("default"); - if (default_action == NULL) + _default_action = PyUnicode_FromString("default"); + if (_default_action == NULL) return NULL; - if (PyModule_AddObject(m, DEFAULT_ACTION_NAME, default_action) < 0) + if (PyModule_AddObject(m, "default_action", _default_action) < 0) return NULL; return m; } From python-checkins at python.org Wed Apr 1 20:25:03 2009 From: python-checkins at python.org (ron.duplain) Date: Wed, 1 Apr 2009 20:25:03 +0200 (CEST) Subject: [Python-checkins] r70967 - sandbox/trunk/refactor_pkg Message-ID: <20090401182503.763BC1E4070@bag.python.org> Author: ron.duplain Date: Wed Apr 1 20:25:03 2009 New Revision: 70967 Log: Creating a sandbox home for 3to2, refactor_pkg. 
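Before the sandbox listing, one note on the _warnings change merged just above (r70965/r70966): the fallback action is now a live attribute of the Python-level warnings module, looked up lazily instead of through a self-import. A sketch of the behaviour the new test pins down, best run in a fresh interpreter (note the asymmetry the diff leaves in place: the C module exports the string as default_action but reads overrides from defaultaction):

    import warnings

    warnings.resetwarnings()            # no filters match, so the default applies
    warnings.warn("visible")            # printed once: the action is 'default'
    warnings.defaultaction = "ignore"   # plain module attribute, read on demand
    warnings.warn("suppressed")         # nothing printed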
Added: sandbox/trunk/refactor_pkg/ From python-checkins at python.org Wed Apr 1 20:25:39 2009 From: python-checkins at python.org (michael.foord) Date: Wed, 1 Apr 2009 20:25:39 +0200 (CEST) Subject: [Python-checkins] r70968 - in python/trunk/Misc: README python-wing.wpr Message-ID: <20090401182539.31ECF1E4747@bag.python.org> Author: michael.foord Date: Wed Apr 1 20:25:38 2009 New Revision: 70968 Log: Adding Wing project file Added: python/trunk/Misc/python-wing.wpr (contents, props changed) Modified: python/trunk/Misc/ (props changed) python/trunk/Misc/README Modified: python/trunk/Misc/README ============================================================================== --- python/trunk/Misc/README (original) +++ python/trunk/Misc/README Wed Apr 1 20:25:38 2009 @@ -23,6 +23,7 @@ pymemcompat.h Memory interface compatibility file. python.man UNIX man page for the python interpreter python-mode.el Emacs mode for editing Python programs +python-wing.wpr Wing IDE project file README The file you're reading now README.valgrind Information for Valgrind users, see valgrind-python.supp RFD Request For Discussion about a Python newsgroup Added: python/trunk/Misc/python-wing.wpr ============================================================================== --- (empty file) +++ python/trunk/Misc/python-wing.wpr Wed Apr 1 20:25:38 2009 @@ -0,0 +1,13 @@ +#!wing +#!version=3.0 +################################################################## +# Wing IDE project file # +################################################################## +[project attributes] +proj.directory-list = [{'dirloc': loc('..'), + 'excludes': (), + 'filter': '*', + 'include_hidden': False, + 'recursive': True, + 'watch_for_changes': True}] +proj.file-type = 'shared' From buildbot at python.org Wed Apr 1 20:39:17 2009 From: buildbot at python.org (buildbot at python.org) Date: Wed, 01 Apr 2009 18:39:17 +0000 Subject: [Python-checkins] buildbot failure in amd64 gentoo trunk Message-ID: <20090401183917.9E0C61E4070@bag.python.org> The Buildbot has detected a new failure of amd64 gentoo trunk. Full details are available at: http://www.python.org/dev/buildbot/all/amd64%20gentoo%20trunk/builds/2057 Buildbot URL: http://www.python.org/dev/buildbot/all/ Buildslave for this Build: norwitz-amd64 Build Reason: Build Source Stamp: [branch trunk] HEAD Blamelist: brett.cannon BUILD FAILED: failed test Excerpt from the test logfile: 1 test failed: test_asyncore ====================================================================== FAIL: test_readwrite (test.test_asyncore.HelperFunctionTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/slave/py-build/trunk.norwitz-amd64/build/Lib/test/test_asyncore.py", line 144, in test_readwrite self.assertEqual(tobj.read, True) AssertionError: False != True make: *** [buildbottest] Error 1 sincerely, -The Buildbot From python-checkins at python.org Wed Apr 1 20:50:57 2009 From: python-checkins at python.org (raymond.hettinger) Date: Wed, 1 Apr 2009 20:50:57 +0200 (CEST) Subject: [Python-checkins] r70969 - in python/trunk: Lib/_abcoll.py Lib/test/test_collections.py Misc/NEWS Message-ID: <20090401185057.1DE2B1E4070@bag.python.org> Author: raymond.hettinger Date: Wed Apr 1 20:50:56 2009 New Revision: 70969 Log: Issue #5647: MutableSet.__iand__() no longer mutates self during iteration. 
Modified: python/trunk/Lib/_abcoll.py python/trunk/Lib/test/test_collections.py python/trunk/Misc/NEWS Modified: python/trunk/Lib/_abcoll.py ============================================================================== --- python/trunk/Lib/_abcoll.py (original) +++ python/trunk/Lib/_abcoll.py Wed Apr 1 20:50:56 2009 @@ -286,10 +286,9 @@ self.add(value) return self - def __iand__(self, c): - for value in self: - if value not in c: - self.discard(value) + def __iand__(self, it): + for value in (self - it): + self.discard(value) return self def __ixor__(self, it): Modified: python/trunk/Lib/test/test_collections.py ============================================================================== --- python/trunk/Lib/test/test_collections.py (original) +++ python/trunk/Lib/test/test_collections.py Wed Apr 1 20:50:56 2009 @@ -327,6 +327,25 @@ B.register(C) self.failUnless(issubclass(C, B)) +class WithSet(MutableSet): + + def __init__(self, it=()): + self.data = set(it) + + def __len__(self): + return len(self.data) + + def __iter__(self): + return iter(self.data) + + def __contains__(self, item): + return item in self.data + + def add(self, item): + self.data.add(item) + + def discard(self, item): + self.data.discard(item) class TestCollectionABCs(ABCTestCase): @@ -363,6 +382,12 @@ self.validate_abstract_methods(MutableSet, '__contains__', '__iter__', '__len__', 'add', 'discard') + def test_issue_5647(self): + # MutableSet.__iand__ mutated the set during iteration + s = WithSet('abcd') + s &= WithSet('cdef') # This used to fail + self.assertEqual(set(s), set('cd')) + def test_issue_4920(self): # MutableSet.pop() method did not work class MySet(collections.MutableSet): Modified: python/trunk/Misc/NEWS ============================================================================== --- python/trunk/Misc/NEWS (original) +++ python/trunk/Misc/NEWS Wed Apr 1 20:50:56 2009 @@ -206,6 +206,8 @@ instead of performing them in functions. Helps prevent import deadlocking in threads. +- Issue #5647: MutableSet.__iand__() no longer mutates self during iteration. + - Actually make the SimpleXMLRPCServer CGI handler work. - Issue #2522: locale.format now checks its first argument to ensure it has From python-checkins at python.org Wed Apr 1 20:55:57 2009 From: python-checkins at python.org (raymond.hettinger) Date: Wed, 1 Apr 2009 20:55:57 +0200 (CEST) Subject: [Python-checkins] r70970 - in python/branches/release26-maint/Lib: _abcoll.py test/test_collections.py Message-ID: <20090401185557.6A1D01E4070@bag.python.org> Author: raymond.hettinger Date: Wed Apr 1 20:55:57 2009 New Revision: 70970 Log: Issue #5647: MutableSet.__iand__() no longer mutates self during iteration. 
Modified: python/branches/release26-maint/Lib/_abcoll.py python/branches/release26-maint/Lib/test/test_collections.py Modified: python/branches/release26-maint/Lib/_abcoll.py ============================================================================== --- python/branches/release26-maint/Lib/_abcoll.py (original) +++ python/branches/release26-maint/Lib/_abcoll.py Wed Apr 1 20:55:57 2009 @@ -286,10 +286,9 @@ self.add(value) return self - def __iand__(self, c): - for value in self: - if value not in c: - self.discard(value) + def __iand__(self, it): + for value in (self - it): + self.discard(value) return self def __ixor__(self, it): Modified: python/branches/release26-maint/Lib/test/test_collections.py ============================================================================== --- python/branches/release26-maint/Lib/test/test_collections.py (original) +++ python/branches/release26-maint/Lib/test/test_collections.py Wed Apr 1 20:55:57 2009 @@ -311,6 +311,25 @@ B.register(C) self.failUnless(issubclass(C, B)) +class WithSet(MutableSet): + + def __init__(self, it=()): + self.data = set(it) + + def __len__(self): + return len(self.data) + + def __iter__(self): + return iter(self.data) + + def __contains__(self, item): + return item in self.data + + def add(self, item): + self.data.add(item) + + def discard(self, item): + self.data.discard(item) class TestCollectionABCs(ABCTestCase): @@ -347,6 +366,12 @@ self.validate_abstract_methods(MutableSet, '__contains__', '__iter__', '__len__', 'add', 'discard') + def test_issue_5647(self): + # MutableSet.__iand__ mutated the set during iteration + s = WithSet('abcd') + s &= WithSet('cdef') # This used to fail + self.assertEqual(set(s), set('cd')) + def test_issue_4920(self): # MutableSet.pop() method did not work class MySet(collections.MutableSet): From python-checkins at python.org Wed Apr 1 20:57:45 2009 From: python-checkins at python.org (raymond.hettinger) Date: Wed, 1 Apr 2009 20:57:45 +0200 (CEST) Subject: [Python-checkins] r70971 - python/branches/release26-maint/Misc/NEWS Message-ID: <20090401185745.A7BD71E40D9@bag.python.org> Author: raymond.hettinger Date: Wed Apr 1 20:57:45 2009 New Revision: 70971 Log: Add NEWS item. Modified: python/branches/release26-maint/Misc/NEWS Modified: python/branches/release26-maint/Misc/NEWS ============================================================================== --- python/branches/release26-maint/Misc/NEWS (original) +++ python/branches/release26-maint/Misc/NEWS Wed Apr 1 20:57:45 2009 @@ -92,6 +92,8 @@ Library ------- +- Issue #5647: MutableSet.__iand__() no longer mutates self during iteration. + - Issue #5619: Multiprocessing children disobey the debug flag and causes popups on windows buildbots. Patch applied to work around this issue. 
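To see what issue #5647 fixed: the old MutableSet.__iand__ called discard() while iterating over self, mutating the underlying set mid-loop. A minimal reproduction, mirroring the WithSet helper the commits add to the tests:

    from collections import MutableSet

    class WithSet(MutableSet):
        def __init__(self, it=()):
            self.data = set(it)
        def __len__(self):
            return len(self.data)
        def __iter__(self):
            return iter(self.data)
        def __contains__(self, item):
            return item in self.data
        def add(self, item):
            self.data.add(item)
        def discard(self, item):
            self.data.discard(item)

    s = WithSet('abcd')
    s &= WithSet('cdef')          # old code: RuntimeError: Set changed size
                                  # during iteration
    assert set(s) == set('cd')    # fixed code intersects correctly

The rewrite computes self - it first, so the loop runs over a snapshot rather than over the very set it is shrinking.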
From python-checkins at python.org Wed Apr 1 21:02:12 2009 From: python-checkins at python.org (paul.kippes) Date: Wed, 1 Apr 2009 21:02:12 +0200 (CEST) Subject: [Python-checkins] r70972 - in sandbox/trunk/refactor_pkg: 2to3 3to2 HACKING README TODO diff_from_r70785.diff example.py lib2to3 lib2to3/Grammar.txt lib2to3/PatternGrammar.txt lib2to3/__init__.py lib2to3/fixes lib2to3/fixes/__init__.py lib2to3/pgen2 lib2to3/pgen2/__init__.py lib2to3/tests lib2to3/tests/__init__.py lib2to3/tests/data lib2to3/tests/data/README lib2to3/tests/data/fixers lib2to3/tests/data/fixers/bad_order.py lib2to3/tests/data/fixers/myfixes lib2to3/tests/data/fixers/myfixes/__init__.py lib2to3/tests/data/fixers/myfixes/fix_explicit.py lib2to3/tests/data/fixers/myfixes/fix_first.py lib2to3/tests/data/fixers/myfixes/fix_last.py lib2to3/tests/data/fixers/myfixes/fix_parrot.py lib2to3/tests/data/fixers/myfixes/fix_preorder.py lib2to3/tests/data/fixers/no_fixer_cls.py lib2to3/tests/data/fixers/parrot_example.py lib2to3/tests/data/infinite_recursion.py lib2to3/tests/data/py2_test_grammar.py lib2to3/tests/data/py3_test_grammar.py lib2to3/tests/pytree_idempotency.py lib2to3/tests/support.py lib2to3/tests/test_all_fixers.py lib2to3/tests/test_fixers.py lib2to3/tests/test_parser.py lib2to3/tests/test_pytree.py lib2to3/tests/test_refactor.py lib2to3/tests/test_util.py refactor refactor/Grammar.txt refactor/PatternGrammar.txt refactor/__init__.py refactor/fixer_base.py refactor/fixer_util.py refactor/fixes refactor/fixes/__init__.py refactor/fixes/fixer_common.py refactor/fixes/from2 refactor/fixes/from2/__init__.py refactor/fixes/from2/fix_apply.py refactor/fixes/from2/fix_basestring.py refactor/fixes/from2/fix_buffer.py refactor/fixes/from2/fix_callable.py refactor/fixes/from2/fix_dict.py refactor/fixes/from2/fix_except.py refactor/fixes/from2/fix_exec.py refactor/fixes/from2/fix_execfile.py refactor/fixes/from2/fix_filter.py refactor/fixes/from2/fix_funcattrs.py refactor/fixes/from2/fix_future.py refactor/fixes/from2/fix_getcwdu.py refactor/fixes/from2/fix_has_key.py refactor/fixes/from2/fix_idioms.py refactor/fixes/from2/fix_import.py refactor/fixes/from2/fix_imports.py refactor/fixes/from2/fix_imports2.py refactor/fixes/from2/fix_input.py refactor/fixes/from2/fix_intern.py refactor/fixes/from2/fix_isinstance.py refactor/fixes/from2/fix_itertools.py refactor/fixes/from2/fix_itertools_imports.py refactor/fixes/from2/fix_long.py refactor/fixes/from2/fix_map.py refactor/fixes/from2/fix_metaclass.py refactor/fixes/from2/fix_methodattrs.py refactor/fixes/from2/fix_ne.py refactor/fixes/from2/fix_next.py refactor/fixes/from2/fix_nonzero.py refactor/fixes/from2/fix_numliterals.py refactor/fixes/from2/fix_paren.py refactor/fixes/from2/fix_print.py refactor/fixes/from2/fix_raise.py refactor/fixes/from2/fix_raw_input.py refactor/fixes/from2/fix_reduce.py refactor/fixes/from2/fix_renames.py refactor/fixes/from2/fix_repr.py refactor/fixes/from2/fix_set_literal.py refactor/fixes/from2/fix_standarderror.py refactor/fixes/from2/fix_sys_exc.py refactor/fixes/from2/fix_throw.py refactor/fixes/from2/fix_tuple_params.py refactor/fixes/from2/fix_types.py refactor/fixes/from2/fix_unicode.py refactor/fixes/from2/fix_urllib.py refactor/fixes/from2/fix_ws_comma.py re Message-ID: <20090401190212.7BEE31E4070@bag.python.org> Author: paul.kippes Date: Wed Apr 1 21:02:05 2009 New Revision: 70972 Log: Pycon 2009 sprint work containing 2to3 refactoring; based on r70785 of sandbox/2to3 work in progress... 
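For orientation before the long file listing: the fork wraps the same engine as stock lib2to3, which can also be driven programmatically. A sketch against the upstream API this work is based on (the fixer package name and input source are illustrative):

    from lib2to3.refactor import RefactoringTool, get_fixers_from_package

    fixers = get_fixers_from_package('lib2to3.fixes')   # all stock 2to3 fixers
    tool = RefactoringTool(fixers)
    source = "print 'hello'\napply(f, args)\n"          # note the trailing newline
    tree = tool.refactor_string(source, '<example>')
    print str(tree)   # -> print('hello')
                      #    f(*args)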
Added: sandbox/trunk/refactor_pkg/2to3 (contents, props changed) sandbox/trunk/refactor_pkg/3to2 (contents, props changed) sandbox/trunk/refactor_pkg/HACKING sandbox/trunk/refactor_pkg/README sandbox/trunk/refactor_pkg/TODO sandbox/trunk/refactor_pkg/diff_from_r70785.diff sandbox/trunk/refactor_pkg/example.py sandbox/trunk/refactor_pkg/lib2to3/ sandbox/trunk/refactor_pkg/lib2to3/Grammar.txt sandbox/trunk/refactor_pkg/lib2to3/PatternGrammar.txt sandbox/trunk/refactor_pkg/lib2to3/__init__.py sandbox/trunk/refactor_pkg/lib2to3/fixes/ sandbox/trunk/refactor_pkg/lib2to3/fixes/__init__.py sandbox/trunk/refactor_pkg/lib2to3/pgen2/ sandbox/trunk/refactor_pkg/lib2to3/pgen2/__init__.py sandbox/trunk/refactor_pkg/lib2to3/tests/ sandbox/trunk/refactor_pkg/lib2to3/tests/__init__.py sandbox/trunk/refactor_pkg/lib2to3/tests/data/ sandbox/trunk/refactor_pkg/lib2to3/tests/data/README sandbox/trunk/refactor_pkg/lib2to3/tests/data/fixers/ sandbox/trunk/refactor_pkg/lib2to3/tests/data/fixers/bad_order.py sandbox/trunk/refactor_pkg/lib2to3/tests/data/fixers/myfixes/ sandbox/trunk/refactor_pkg/lib2to3/tests/data/fixers/myfixes/__init__.py sandbox/trunk/refactor_pkg/lib2to3/tests/data/fixers/myfixes/fix_explicit.py sandbox/trunk/refactor_pkg/lib2to3/tests/data/fixers/myfixes/fix_first.py sandbox/trunk/refactor_pkg/lib2to3/tests/data/fixers/myfixes/fix_last.py sandbox/trunk/refactor_pkg/lib2to3/tests/data/fixers/myfixes/fix_parrot.py sandbox/trunk/refactor_pkg/lib2to3/tests/data/fixers/myfixes/fix_preorder.py sandbox/trunk/refactor_pkg/lib2to3/tests/data/fixers/no_fixer_cls.py sandbox/trunk/refactor_pkg/lib2to3/tests/data/fixers/parrot_example.py sandbox/trunk/refactor_pkg/lib2to3/tests/data/infinite_recursion.py sandbox/trunk/refactor_pkg/lib2to3/tests/data/py2_test_grammar.py sandbox/trunk/refactor_pkg/lib2to3/tests/data/py3_test_grammar.py sandbox/trunk/refactor_pkg/lib2to3/tests/pytree_idempotency.py (contents, props changed) sandbox/trunk/refactor_pkg/lib2to3/tests/support.py sandbox/trunk/refactor_pkg/lib2to3/tests/test_all_fixers.py sandbox/trunk/refactor_pkg/lib2to3/tests/test_fixers.py (contents, props changed) sandbox/trunk/refactor_pkg/lib2to3/tests/test_parser.py sandbox/trunk/refactor_pkg/lib2to3/tests/test_pytree.py (contents, props changed) sandbox/trunk/refactor_pkg/lib2to3/tests/test_refactor.py sandbox/trunk/refactor_pkg/lib2to3/tests/test_util.py sandbox/trunk/refactor_pkg/refactor/ sandbox/trunk/refactor_pkg/refactor/Grammar.txt sandbox/trunk/refactor_pkg/refactor/PatternGrammar.txt sandbox/trunk/refactor_pkg/refactor/__init__.py sandbox/trunk/refactor_pkg/refactor/fixer_base.py sandbox/trunk/refactor_pkg/refactor/fixer_util.py sandbox/trunk/refactor_pkg/refactor/fixes/ sandbox/trunk/refactor_pkg/refactor/fixes/__init__.py sandbox/trunk/refactor_pkg/refactor/fixes/fixer_common.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/ sandbox/trunk/refactor_pkg/refactor/fixes/from2/__init__.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_apply.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_basestring.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_buffer.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_callable.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_dict.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_except.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_exec.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_execfile.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_filter.py 
sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_funcattrs.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_future.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_getcwdu.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_has_key.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_idioms.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_import.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_imports.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_imports2.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_input.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_intern.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_isinstance.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_itertools.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_itertools_imports.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_long.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_map.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_metaclass.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_methodattrs.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_ne.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_next.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_nonzero.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_numliterals.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_paren.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_print.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_raise.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_raw_input.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_reduce.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_renames.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_repr.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_set_literal.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_standarderror.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_sys_exc.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_throw.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_tuple_params.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_types.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_unicode.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_urllib.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_ws_comma.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_xrange.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_xreadlines.py sandbox/trunk/refactor_pkg/refactor/fixes/from2/fix_zip.py sandbox/trunk/refactor_pkg/refactor/fixes/from3/ sandbox/trunk/refactor_pkg/refactor/fixes/from3/__init__.py sandbox/trunk/refactor_pkg/refactor/fixes/from3/fix_range.py sandbox/trunk/refactor_pkg/refactor/fixes/from3/fix_renames.py sandbox/trunk/refactor_pkg/refactor/main.py sandbox/trunk/refactor_pkg/refactor/patcomp.py sandbox/trunk/refactor_pkg/refactor/pgen2/ sandbox/trunk/refactor_pkg/refactor/pgen2/__init__.py sandbox/trunk/refactor_pkg/refactor/pgen2/conv.py sandbox/trunk/refactor_pkg/refactor/pgen2/driver.py sandbox/trunk/refactor_pkg/refactor/pgen2/grammar.py sandbox/trunk/refactor_pkg/refactor/pgen2/literals.py sandbox/trunk/refactor_pkg/refactor/pgen2/parse.py sandbox/trunk/refactor_pkg/refactor/pgen2/pgen.py sandbox/trunk/refactor_pkg/refactor/pgen2/token.py (contents, props changed) sandbox/trunk/refactor_pkg/refactor/pgen2/tokenize.py sandbox/trunk/refactor_pkg/refactor/pygram.py sandbox/trunk/refactor_pkg/refactor/pytree.py sandbox/trunk/refactor_pkg/refactor/refactor.py (contents, props 
changed) sandbox/trunk/refactor_pkg/refactor/tests/ sandbox/trunk/refactor_pkg/refactor/tests/__init__.py sandbox/trunk/refactor_pkg/refactor/tests/data/ sandbox/trunk/refactor_pkg/refactor/tests/data/README sandbox/trunk/refactor_pkg/refactor/tests/data/fixers/ sandbox/trunk/refactor_pkg/refactor/tests/data/fixers/bad_order.py sandbox/trunk/refactor_pkg/refactor/tests/data/fixers/myfixes/ sandbox/trunk/refactor_pkg/refactor/tests/data/fixers/myfixes/__init__.py sandbox/trunk/refactor_pkg/refactor/tests/data/fixers/myfixes/fix_explicit.py sandbox/trunk/refactor_pkg/refactor/tests/data/fixers/myfixes/fix_first.py sandbox/trunk/refactor_pkg/refactor/tests/data/fixers/myfixes/fix_last.py sandbox/trunk/refactor_pkg/refactor/tests/data/fixers/myfixes/fix_parrot.py sandbox/trunk/refactor_pkg/refactor/tests/data/fixers/myfixes/fix_preorder.py sandbox/trunk/refactor_pkg/refactor/tests/data/fixers/no_fixer_cls.py sandbox/trunk/refactor_pkg/refactor/tests/data/fixers/parrot_example.py sandbox/trunk/refactor_pkg/refactor/tests/data/infinite_recursion.py sandbox/trunk/refactor_pkg/refactor/tests/data/py2_test_grammar.py sandbox/trunk/refactor_pkg/refactor/tests/data/py3_test_grammar.py sandbox/trunk/refactor_pkg/refactor/tests/pytree_idempotency.py (contents, props changed) sandbox/trunk/refactor_pkg/refactor/tests/support.py sandbox/trunk/refactor_pkg/refactor/tests/test_all_fixers.py sandbox/trunk/refactor_pkg/refactor/tests/test_fixers.py (contents, props changed) sandbox/trunk/refactor_pkg/refactor/tests/test_parser.py sandbox/trunk/refactor_pkg/refactor/tests/test_pytree.py (contents, props changed) sandbox/trunk/refactor_pkg/refactor/tests/test_refactor.py sandbox/trunk/refactor_pkg/refactor/tests/test_util.py sandbox/trunk/refactor_pkg/scripts/ sandbox/trunk/refactor_pkg/scripts/benchmark.py sandbox/trunk/refactor_pkg/scripts/find_pattern.py (contents, props changed) sandbox/trunk/refactor_pkg/setup.py sandbox/trunk/refactor_pkg/test.py (contents, props changed) Added: sandbox/trunk/refactor_pkg/2to3 ============================================================================== --- (empty file) +++ sandbox/trunk/refactor_pkg/2to3 Wed Apr 1 21:02:05 2009 @@ -0,0 +1,6 @@ +#!/usr/bin/env python +import lib2to3 +import sys +import os + +sys.exit(lib2to3.main.main("refactor.fixes.from2")) Added: sandbox/trunk/refactor_pkg/3to2 ============================================================================== --- (empty file) +++ sandbox/trunk/refactor_pkg/3to2 Wed Apr 1 21:02:05 2009 @@ -0,0 +1,6 @@ +#!/usr/bin/env python +from refactor.main import main +import sys +import os + +sys.exit(main("refactor.fixes.from3")) Added: sandbox/trunk/refactor_pkg/HACKING ============================================================================== --- (empty file) +++ sandbox/trunk/refactor_pkg/HACKING Wed Apr 1 21:02:05 2009 @@ -0,0 +1,74 @@ +Running tests: + + * 2to3 and 3to2 will fail with Python 2.5, but should work w/ Python trunk + $ python test.py --help + + * test lib2to3 + $ python test.py --base lib2to3 + + * test refactor + $ python test.py + + +Tips/tricks/hints for writing new fixers: + + * Don't write your own PATTERN from scratch; that's what + scripts/find_pattern.py is for. + + e.g. + ./scripts/find_pattern.py + + This will give choices of tokens to parse. + Press enter to skip, any key to see the grammar. + + $ ./scripts/find_pattern.py "print('hello, world')" + "('hello, world')" + + "print('hello, world')" + . 
+    print_stmt< 'print' atom< '(' "'hello, world'" ')' > >
+
+  * If your fixer works by changing a node's children list or a leaf's value,
+    be sure to call the node/leaf's changed() method. This is to make sure
+    the main script will recognize that the tree has changed.
+
+
+Putting 2to3 to work somewhere else:
+
+  * By default, 2to3 uses a merger of Python 2.x and Python 3's grammars. If
+    you want to support a different grammar, just replace the Grammar.txt
+    file with Grammar/Grammar from your chosen Python version.
+
+  * The real heart of 2to3 is the concrete syntax tree parser in pgen2; this
+    chunk of the system is suitable for a wide range of applications that
+    require CST transformation. All that's required is to rip off the fixer
+    layer and replace it with something else that walks the tree. One
+    application would be a tool to check/enforce style guidelines; this
+    could leverage 90% of the existing infrastructure with primarily
+    cosmetic changes (e.g., fixes/fix_*.py -> styles/style_*.py).
+
+
+TODO
+
+  Simple:
+  #######
+
+  * Refactor common code out of fixes/fix_*.py into fixer_util (on-going).
+
+  * Document how to write fixers.
+
+
+  Complex:
+  ########
+
+  * Come up with a scheme to hide the details of suite indentation (some
+    kind of custom pytree node for suites, probably). This will
+    automatically reindent all code with spaces, tied into a refactor.py
+    flag that allows you to specify the indent level.
+
+  * Remove the need to explicitly assign a node's parent attribute. This
+    could go away with a magic children list.
+
+  * Import statements are complicated and a pain to handle, and there are
+    many fixers that manipulate them. It would be nice to have a little API
+    for manipulating imports in fixers.

Added: sandbox/trunk/refactor_pkg/README
==============================================================================
--- (empty file)
+++ sandbox/trunk/refactor_pkg/README	Wed Apr 1 21:02:05 2009
@@ -0,0 +1,260 @@
+Abstract
+========
+
+The refactor package -- 2to3 and back again -- is a fork of lib2to3.
+
+lib2to3:
+
+A refactoring tool for converting Python 2.x code to 3.0.
+
+This is a work in progress! Bugs should be reported to http://bugs.python.org/
+under the "2to3" category.
+
+
+General usage
+=============
+
+Run ``./2to3`` to convert stdin (``-``), files or directories given as
+arguments. By default, the tool outputs a unified diff-formatted patch on
+standard output and a "what was changed" summary on standard error, but the
+``-w`` option can be given to write back converted files, creating
+``.bak``-named backup files.
+
+2to3 must be run with at least Python 2.5. The intended path for migrating
+to Python 3.x is to first migrate to 2.6 (in order to take advantage of
+Python 2.6's runtime compatibility checks).
+
+
+Files
+=====
+
+README - this file
+lib2to3/refactor.py - main program; use this to convert files or directory trees
+test.py - runs all unittests for 2to3
+lib2to3/patcomp.py - pattern compiler
+lib2to3/pytree.py - parse tree nodes (not specific to Python, despite the name!)
+lib2to3/pygram.py - code specific to the Python grammar
+example.py - example input for play.py and fix_*.py
+find_pattern.py - script to help determine the PATTERN for a new fix
+lib2to3/Grammar.txt - Python grammar input (accepts 2.x and 3.x syntax)
+lib2to3/Grammar.pickle - pickled grammar tables (generated file, not in subversion)
+lib2to3/PatternGrammar.txt - grammar for the pattern language used by patcomp.py
+lib2to3/PatternGrammar.pickle - pickled pattern grammar tables (generated file)
+lib2to3/pgen2/ - Parser generator and driver ([1]_, [2]_)
+lib2to3/fixes/ - Individual transformations
+lib2to3/tests/ - Test files for pytree, fixers, grammar, etc
+
+
+Capabilities
+============
+
+A quick run-through of 2to3's current fixers:
+
+* **fix_apply** - convert apply() calls to real function calls.
+
+* **fix_callable** - converts callable(obj) into hasattr(obj, '__call__').
+
+* **fix_dict** - fix up dict.keys(), .values(), .items() and their iterator
+  versions.
+
+* **fix_except** - adjust "except" statements to Python 3 syntax (PEP 3110).
+
+* **fix_exec** - convert "exec" statements to exec() function calls.
+
+* **fix_execfile** - execfile(filename, ...) -> exec(open(filename).read())
+
+* **fix_filter** - changes filter(F, X) into list(filter(F, X)).
+
+* **fix_funcattrs** - fix function attribute names (f.func_x -> f.__x__).
+
+* **fix_has_key** - "d.has_key(x)" -> "x in d".
+
+* **fix_idioms** - convert type(x) == T to isinstance(x, T), "while 1:" to
+  "while True:", plus others. This fixer must be explicitly requested
+  with "-f idioms".
+
+* **fix_imports** - Fix (some) incompatible imports.
+
+* **fix_imports2** - Fix (some) incompatible imports that must run after
+  **fix_imports**.
+
+* **fix_input** - "input()" -> "eval(input())" (PEP 3111).
+
+* **fix_intern** - "intern(x)" -> "sys.intern(x)".
+
+* **fix_long** - remove all usage of explicit longs in favor of ints.
+
+* **fix_map** - generally changes map(F, ...) into list(map(F, ...)).
+
+* **fix_ne** - convert the "<>" operator to "!=".
+
+* **fix_next** - fixer for it.next() -> next(it) (PEP 3114).
+
+* **fix_nonzero** - convert __nonzero__() methods to __bool__() methods.
+
+* **fix_numliterals** - tweak certain numeric literals to be 3.0-compliant.
+
+* **fix_paren** - Add parentheses to places where they are needed in list
+  comprehensions and generator expressions.
+
+* **fix_print** - convert "print" statements to print() function calls.
+
+* **fix_raise** - convert "raise" statements to Python 3 syntax (PEP 3109).
+
+* **fix_raw_input** - "raw_input()" -> "input()" (PEP 3111).
+
+* **fix_repr** - swap backticks for repr() calls.
+
+* **fix_standarderror** - StandardError -> Exception.
+
+* **fix_sys_exc** - Converts "sys.exc_info", "sys.exc_type", and
+  "sys.exc_value" to sys.exc_info().
+
+* **fix_throw** - fix generator.throw() calls to be 3.0-compliant (PEP 3109).
+
+* **fix_tuple_params** - remove tuple parameters from function, method and
+  lambda declarations (PEP 3113).
+
+* **fix_unicode** - convert, e.g., u"..." to "...", unicode(x) to str(x), etc.
+
+* **fix_urllib** - Fix imports for urllib and urllib2.
+
+* **fix_xrange** - "xrange()" -> "range()".
+
+* **fix_xreadlines** - "for x in f.xreadlines():" -> "for x in f:". Also,
+  "g(f.xreadlines)" -> "g(f.__iter__)".
+
+* **fix_metaclass** - move __metaclass__ = M to class X(metaclass=M)
+
+
+Limitations
+===========
+
+General Limitations
+-------------------
+
+* In general, fixers that convert a function or method call will not detect
+  something like ::
+
+    a = apply
+    a(f, *args)
+
+  or ::
+
+    m = d.has_key
+    if m(5):
+        ...
+
+* Fixers that look for attribute references will not detect when getattr() or
+  setattr() is used to access those attributes.
+
+* The contents of eval() calls and "exec" statements will not be checked by
+  2to3.
+
+
+Caveats for Specific Fixers
+---------------------------
+
+fix_except
+''''''''''
+
+"except" statements like ::
+
+    except Exception, (a, b):
+        ...
+
+are not fixed up. The ability to treat exceptions as sequences is being
+removed in Python 3, so there is no straightforward, automatic way to
+adjust these statements.
+
+This is seen frequently when dealing with OSError.
+
+
+fix_filter
+''''''''''
+
+The transformation is not correct if the original code depended on
+filter(F, X) returning a string if X is a string (or a tuple if X is a
+tuple, etc). That would require type inference, which we don't do. Python
+2.6's Python 3 compatibility mode should be used to detect such cases.
+
+
+fix_has_key
+'''''''''''
+
+While the primary target of this fixer is dict.has_key(), the
+fixer will change any has_key() method call, regardless of what class it
+belongs to. Anyone using non-dictionary classes with has_key() methods is
+advised to pay close attention when using this fixer.
+
+
+fix_map
+'''''''
+
+The transformation is not correct if the original code depended on
+map(F, X, Y, ...) to go on until the longest argument is exhausted,
+substituting None for missing values -- like zip(), it now stops as
+soon as the shortest argument is exhausted.
+
+
+fix_raise
+'''''''''
+
+"raise E, V" will be incorrectly translated if V is an exception instance.
+The correct Python 3 idiom is ::
+
+    raise E from V
+
+but since we can't detect instance-hood by syntax alone and since any client
+code would have to be changed as well, we don't automate this.
+
+Another translation problem is this: ::
+
+    t = ((E, E2), E3)
+    raise t
+
+2to3 has no way of knowing that t is a tuple, and so this code will raise an
+exception at runtime since the ability to raise tuples is going away.
+
+
+Notes
+=====
+
+.. [1] I modified tokenize.py to yield a NL pseudo-token for backslash
+   continuations, so the original source can be reproduced exactly. The
+   modified version can be found at lib2to3/pgen2/tokenize.py.
+
+.. [2] I developed pgen2 while I was at Elemental Security. I modified
+   it while at Google to suit the needs of this refactoring tool.
+
+
+Development
+===========
+
+The HACKING file has a list of TODOs -- some simple, some complex -- that would
+make good introductions for anyone new to 2to3.
+
+
+Licensing
+=========
+
+The original pgen2 module is copyrighted by Elemental Security. All
+new code I wrote specifically for this tool is copyrighted by Google.
+New code by others is copyrighted by the respective authors. All code
+(whether by me or by others) is licensed to the PSF under a contributor
+agreement.
+
+--Guido van Rossum
+
+
+All code I wrote specifically for this tool before 9 April 2007 is
+copyrighted by me. All new code I wrote specifically for this tool after
+9 April 2007 is copyrighted by Google. Regardless, my contributions are
+licensed to the PSF under a contributor agreement.
+ +--Collin Winter + +All of my contributions are copyrighted to me and licensed to PSF under the +Python contributor agreement. + +--Benjamin Peterson Added: sandbox/trunk/refactor_pkg/TODO ============================================================================== --- (empty file) +++ sandbox/trunk/refactor_pkg/TODO Wed Apr 1 21:02:05 2009 @@ -0,0 +1,9 @@ +2.6: + byte lit. without b + unicode with u + from __future__ import print_statement + +2.5: + from __future__ import with + exceptions + print Added: sandbox/trunk/refactor_pkg/diff_from_r70785.diff ============================================================================== --- (empty file) +++ sandbox/trunk/refactor_pkg/diff_from_r70785.diff Wed Apr 1 21:02:05 2009 @@ -0,0 +1,52557 @@ +diff -r 531f2e948299 .hgignore +--- a/.hgignore Mon Mar 30 20:02:09 2009 -0500 ++++ b/.hgignore Wed Apr 01 13:59:47 2009 -0500 +@@ -4,10 +4,11 @@ + # * hg add + # note that `hg add *` will add files even if they match in this file. + +-syntax: glob +-*.pickle ++# syntax: glob + + syntax: regexp ++\.out$ ++\.pickle$ + \.log$ + ~$ + ^bin/* +@@ -28,7 +29,7 @@ + ^\.# + (^|/)RCS($|/) + ,v$ +-(^|/)\.svn($|/) ++# (^|/)\.svn($|/) + (^|/)\.bzr($|/) + \_darcs$ + (^|/)SCCS($|/) +diff -r 531f2e948299 .svn/entries +--- a/.svn/entries Mon Mar 30 20:02:09 2009 -0500 ++++ b/.svn/entries Wed Apr 01 13:59:47 2009 -0500 +@@ -1,7 +1,7 @@ + 9 + + dir +-70785 ++70822 + http://svn.python.org/projects/sandbox/trunk/2to3 + http://svn.python.org/projects + +diff -r 531f2e948299 2to3 +--- a/2to3 Mon Mar 30 20:02:09 2009 -0500 ++++ b/2to3 Wed Apr 01 13:59:47 2009 -0500 +@@ -1,6 +1,6 @@ + #!/usr/bin/env python +-from lib2to3.main import main ++import lib2to3 + import sys + import os + +-sys.exit(main("lib2to3.fixes")) ++sys.exit(lib2to3.main.main("refactor.fixes.from2")) +diff -r 531f2e948299 3to2 +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/3to2 Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,6 @@ ++#!/usr/bin/env python ++from refactor.main import main ++import sys ++import os ++ ++sys.exit(main("refactor.fixes.from3")) +diff -r 531f2e948299 HACKING +--- a/HACKING Mon Mar 30 20:02:09 2009 -0500 ++++ b/HACKING Wed Apr 01 13:59:47 2009 -0500 +@@ -1,7 +1,32 @@ ++Running tests: ++ ++ * 2to3 and 3to2 will fail with Python 2.5, but should work w/ Python trunk ++ $ python test.py --help ++ ++ * test lib2to3 ++ $ python test.py --base lib2to3 ++ ++ * test refactor ++ $ python test.py ++ ++ + Tips/tricks/hints for writing new fixers: + + * Don't write your own PATTERN from scratch; that's what + scripts/find_pattern.py is for. ++ ++ e.g. ++ ./scripts/find_pattern.py ++ ++ This will give choices of tokens to parse. ++ Press enter to skip, any key to see the grammar. ++ ++ $ ./scripts/find_pattern.py "print('hello, world')" ++ "('hello, world')" ++ ++ "print('hello, world')" ++ . ++ print_stmt< 'print' atom< '(' "'hello, world'" ')' > > + + * If your fixer works by changing a node's children list or a leaf's value, + be sure to call the node/leaf's changed() method. This to be sure the main +diff -r 531f2e948299 README +--- a/README Mon Mar 30 20:02:09 2009 -0500 ++++ b/README Wed Apr 01 13:59:47 2009 -0500 +@@ -1,6 +1,10 @@ + Abstract + ======== + ++The refactor package -- 2to3 and back again -- is a fork of lib2to3. ++ ++lib2to3: ++ + A refactoring tool for converting Python 2.x code to 3.0. + + This is a work in progress! 
Bugs should be reported to http://bugs.python.org/ +diff -r 531f2e948299 TODO +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/TODO Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,9 @@ ++2.6: ++ byte lit. without b ++ unicode with u ++ from __future__ import print_statement ++ ++2.5: ++ from __future__ import with ++ exceptions ++ print +diff -r 531f2e948299 lib2to3/.svn/entries +--- a/lib2to3/.svn/entries Mon Mar 30 20:02:09 2009 -0500 ++++ b/lib2to3/.svn/entries Wed Apr 01 13:59:47 2009 -0500 +@@ -1,7 +1,7 @@ + 9 + + dir +-70785 ++70822 + http://svn.python.org/projects/sandbox/trunk/2to3/lib2to3 + http://svn.python.org/projects + +diff -r 531f2e948299 lib2to3/Grammar2.7.0.alpha.0.pickle +Binary file lib2to3/Grammar2.7.0.alpha.0.pickle has changed +diff -r 531f2e948299 lib2to3/PatternGrammar2.7.0.alpha.0.pickle +Binary file lib2to3/PatternGrammar2.7.0.alpha.0.pickle has changed +diff -r 531f2e948299 lib2to3/__init__.py +--- a/lib2to3/__init__.py Mon Mar 30 20:02:09 2009 -0500 ++++ b/lib2to3/__init__.py Wed Apr 01 13:59:47 2009 -0500 +@@ -1,1 +1,1 @@ +-#empty ++from refactor import * +diff -r 531f2e948299 lib2to3/fixer_base.py +--- a/lib2to3/fixer_base.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,178 +0,0 @@ +-# Copyright 2006 Google, Inc. All Rights Reserved. +-# Licensed to PSF under a Contributor Agreement. +- +-"""Base class for fixers (optional, but recommended).""" +- +-# Python imports +-import logging +-import itertools +- +-# Local imports +-from .patcomp import PatternCompiler +-from . import pygram +-from .fixer_util import does_tree_import +- +-class BaseFix(object): +- +- """Optional base class for fixers. +- +- The subclass name must be FixFooBar where FooBar is the result of +- removing underscores and capitalizing the words of the fix name. +- For example, the class name for a fixer named 'has_key' should be +- FixHasKey. +- """ +- +- PATTERN = None # Most subclasses should override with a string literal +- pattern = None # Compiled pattern, set by compile_pattern() +- options = None # Options object passed to initializer +- filename = None # The filename (set by set_filename) +- logger = None # A logger (set by set_filename) +- numbers = itertools.count(1) # For new_name() +- used_names = set() # A set of all used NAMEs +- order = "post" # Does the fixer prefer pre- or post-order traversal +- explicit = False # Is this ignored by refactor.py -f all? +- run_order = 5 # Fixers will be sorted by run order before execution +- # Lower numbers will be run first. +- +- # Shortcut for access to Python grammar symbols +- syms = pygram.python_symbols +- +- def __init__(self, options, log): +- """Initializer. Subclass may override. +- +- Args: +- options: an dict containing the options passed to RefactoringTool +- that could be used to customize the fixer through the command line. +- log: a list to append warnings and other messages to. +- """ +- self.options = options +- self.log = log +- self.compile_pattern() +- +- def compile_pattern(self): +- """Compiles self.PATTERN into self.pattern. +- +- Subclass may override if it doesn't want to use +- self.{pattern,PATTERN} in .match(). +- """ +- if self.PATTERN is not None: +- self.pattern = PatternCompiler().compile_pattern(self.PATTERN) +- +- def set_filename(self, filename): +- """Set the filename, and a logger derived from it. +- +- The main refactoring tool should call this. 
+- """ +- self.filename = filename +- self.logger = logging.getLogger(filename) +- +- def match(self, node): +- """Returns match for a given parse tree node. +- +- Should return a true or false object (not necessarily a bool). +- It may return a non-empty dict of matching sub-nodes as +- returned by a matching pattern. +- +- Subclass may override. +- """ +- results = {"node": node} +- return self.pattern.match(node, results) and results +- +- def transform(self, node, results): +- """Returns the transformation for a given parse tree node. +- +- Args: +- node: the root of the parse tree that matched the fixer. +- results: a dict mapping symbolic names to part of the match. +- +- Returns: +- None, or a node that is a modified copy of the +- argument node. The node argument may also be modified in-place to +- effect the same change. +- +- Subclass *must* override. +- """ +- raise NotImplementedError() +- +- def new_name(self, template="xxx_todo_changeme"): +- """Return a string suitable for use as an identifier +- +- The new name is guaranteed not to conflict with other identifiers. +- """ +- name = template +- while name in self.used_names: +- name = template + str(self.numbers.next()) +- self.used_names.add(name) +- return name +- +- def log_message(self, message): +- if self.first_log: +- self.first_log = False +- self.log.append("### In file %s ###" % self.filename) +- self.log.append(message) +- +- def cannot_convert(self, node, reason=None): +- """Warn the user that a given chunk of code is not valid Python 3, +- but that it cannot be converted automatically. +- +- First argument is the top-level node for the code in question. +- Optional second argument is why it can't be converted. +- """ +- lineno = node.get_lineno() +- for_output = node.clone() +- for_output.set_prefix("") +- msg = "Line %d: could not convert: %s" +- self.log_message(msg % (lineno, for_output)) +- if reason: +- self.log_message(reason) +- +- def warning(self, node, reason): +- """Used for warning the user about possible uncertainty in the +- translation. +- +- First argument is the top-level node for the code in question. +- Optional second argument is why it can't be converted. +- """ +- lineno = node.get_lineno() +- self.log_message("Line %d: %s" % (lineno, reason)) +- +- def start_tree(self, tree, filename): +- """Some fixers need to maintain tree-wide state. +- This method is called once, at the start of tree fix-up. +- +- tree - the root node of the tree to be processed. +- filename - the name of the file the tree came from. +- """ +- self.used_names = tree.used_names +- self.set_filename(filename) +- self.numbers = itertools.count(1) +- self.first_log = True +- +- def finish_tree(self, tree, filename): +- """Some fixers need to maintain tree-wide state. +- This method is called once, at the conclusion of tree fix-up. +- +- tree - the root node of the tree to be processed. +- filename - the name of the file the tree came from. +- """ +- pass +- +- +-class ConditionalFix(BaseFix): +- """ Base class for fixers which not execute if an import is found. 
""" +- +- # This is the name of the import which, if found, will cause the test to be skipped +- skip_on = None +- +- def start_tree(self, *args): +- super(ConditionalFix, self).start_tree(*args) +- self._should_skip = None +- +- def should_skip(self, node): +- if self._should_skip is not None: +- return self._should_skip +- pkg = self.skip_on.split(".") +- name = pkg[-1] +- pkg = ".".join(pkg[:-1]) +- self._should_skip = does_tree_import(pkg, name, node) +- return self._should_skip +diff -r 531f2e948299 lib2to3/fixer_util.py +--- a/lib2to3/fixer_util.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,425 +0,0 @@ +-"""Utility functions, node construction macros, etc.""" +-# Author: Collin Winter +- +-# Local imports +-from .pgen2 import token +-from .pytree import Leaf, Node +-from .pygram import python_symbols as syms +-from . import patcomp +- +- +-########################################################### +-### Common node-construction "macros" +-########################################################### +- +-def KeywordArg(keyword, value): +- return Node(syms.argument, +- [keyword, Leaf(token.EQUAL, '='), value]) +- +-def LParen(): +- return Leaf(token.LPAR, "(") +- +-def RParen(): +- return Leaf(token.RPAR, ")") +- +-def Assign(target, source): +- """Build an assignment statement""" +- if not isinstance(target, list): +- target = [target] +- if not isinstance(source, list): +- source.set_prefix(" ") +- source = [source] +- +- return Node(syms.atom, +- target + [Leaf(token.EQUAL, "=", prefix=" ")] + source) +- +-def Name(name, prefix=None): +- """Return a NAME leaf""" +- return Leaf(token.NAME, name, prefix=prefix) +- +-def Attr(obj, attr): +- """A node tuple for obj.attr""" +- return [obj, Node(syms.trailer, [Dot(), attr])] +- +-def Comma(): +- """A comma leaf""" +- return Leaf(token.COMMA, ",") +- +-def Dot(): +- """A period (.) leaf""" +- return Leaf(token.DOT, ".") +- +-def ArgList(args, lparen=LParen(), rparen=RParen()): +- """A parenthesised argument list, used by Call()""" +- node = Node(syms.trailer, [lparen.clone(), rparen.clone()]) +- if args: +- node.insert_child(1, Node(syms.arglist, args)) +- return node +- +-def Call(func_name, args=None, prefix=None): +- """A function call""" +- node = Node(syms.power, [func_name, ArgList(args)]) +- if prefix is not None: +- node.set_prefix(prefix) +- return node +- +-def Newline(): +- """A newline literal""" +- return Leaf(token.NEWLINE, "\n") +- +-def BlankLine(): +- """A blank line""" +- return Leaf(token.NEWLINE, "") +- +-def Number(n, prefix=None): +- return Leaf(token.NUMBER, n, prefix=prefix) +- +-def Subscript(index_node): +- """A numeric or string subscript""" +- return Node(syms.trailer, [Leaf(token.LBRACE, '['), +- index_node, +- Leaf(token.RBRACE, ']')]) +- +-def String(string, prefix=None): +- """A string leaf""" +- return Leaf(token.STRING, string, prefix=prefix) +- +-def ListComp(xp, fp, it, test=None): +- """A list comprehension of the form [xp for fp in it if test]. +- +- If test is None, the "if test" part is omitted. 
+- """ +- xp.set_prefix("") +- fp.set_prefix(" ") +- it.set_prefix(" ") +- for_leaf = Leaf(token.NAME, "for") +- for_leaf.set_prefix(" ") +- in_leaf = Leaf(token.NAME, "in") +- in_leaf.set_prefix(" ") +- inner_args = [for_leaf, fp, in_leaf, it] +- if test: +- test.set_prefix(" ") +- if_leaf = Leaf(token.NAME, "if") +- if_leaf.set_prefix(" ") +- inner_args.append(Node(syms.comp_if, [if_leaf, test])) +- inner = Node(syms.listmaker, [xp, Node(syms.comp_for, inner_args)]) +- return Node(syms.atom, +- [Leaf(token.LBRACE, "["), +- inner, +- Leaf(token.RBRACE, "]")]) +- +-def FromImport(package_name, name_leafs): +- """ Return an import statement in the form: +- from package import name_leafs""" +- # XXX: May not handle dotted imports properly (eg, package_name='foo.bar') +- #assert package_name == '.' or '.' not in package_name, "FromImport has "\ +- # "not been tested with dotted package names -- use at your own "\ +- # "peril!" +- +- for leaf in name_leafs: +- # Pull the leaves out of their old tree +- leaf.remove() +- +- children = [Leaf(token.NAME, 'from'), +- Leaf(token.NAME, package_name, prefix=" "), +- Leaf(token.NAME, 'import', prefix=" "), +- Node(syms.import_as_names, name_leafs)] +- imp = Node(syms.import_from, children) +- return imp +- +- +-########################################################### +-### Determine whether a node represents a given literal +-########################################################### +- +-def is_tuple(node): +- """Does the node represent a tuple literal?""" +- if isinstance(node, Node) and node.children == [LParen(), RParen()]: +- return True +- return (isinstance(node, Node) +- and len(node.children) == 3 +- and isinstance(node.children[0], Leaf) +- and isinstance(node.children[1], Node) +- and isinstance(node.children[2], Leaf) +- and node.children[0].value == "(" +- and node.children[2].value == ")") +- +-def is_list(node): +- """Does the node represent a list literal?""" +- return (isinstance(node, Node) +- and len(node.children) > 1 +- and isinstance(node.children[0], Leaf) +- and isinstance(node.children[-1], Leaf) +- and node.children[0].value == "[" +- and node.children[-1].value == "]") +- +- +-########################################################### +-### Misc +-########################################################### +- +-def parenthesize(node): +- return Node(syms.atom, [LParen(), node, RParen()]) +- +- +-consuming_calls = set(["sorted", "list", "set", "any", "all", "tuple", "sum", +- "min", "max"]) +- +-def attr_chain(obj, attr): +- """Follow an attribute chain. +- +- If you have a chain of objects where a.foo -> b, b.foo-> c, etc, +- use this to iterate over all objects in the chain. Iteration is +- terminated by getattr(x, attr) is None. +- +- Args: +- obj: the starting object +- attr: the name of the chaining attribute +- +- Yields: +- Each successive object in the chain. +- """ +- next = getattr(obj, attr) +- while next: +- yield next +- next = getattr(next, attr) +- +-p0 = """for_stmt< 'for' any 'in' node=any ':' any* > +- | comp_for< 'for' any 'in' node=any any* > +- """ +-p1 = """ +-power< +- ( 'iter' | 'list' | 'tuple' | 'sorted' | 'set' | 'sum' | +- 'any' | 'all' | (any* trailer< '.' 
'join' >) ) +- trailer< '(' node=any ')' > +- any* +-> +-""" +-p2 = """ +-power< +- 'sorted' +- trailer< '(' arglist ')' > +- any* +-> +-""" +-pats_built = False +-def in_special_context(node): +- """ Returns true if node is in an environment where all that is required +- of it is being itterable (ie, it doesn't matter if it returns a list +- or an itterator). +- See test_map_nochange in test_fixers.py for some examples and tests. +- """ +- global p0, p1, p2, pats_built +- if not pats_built: +- p1 = patcomp.compile_pattern(p1) +- p0 = patcomp.compile_pattern(p0) +- p2 = patcomp.compile_pattern(p2) +- pats_built = True +- patterns = [p0, p1, p2] +- for pattern, parent in zip(patterns, attr_chain(node, "parent")): +- results = {} +- if pattern.match(parent, results) and results["node"] is node: +- return True +- return False +- +-def is_probably_builtin(node): +- """ +- Check that something isn't an attribute or function name etc. +- """ +- prev = node.prev_sibling +- if prev is not None and prev.type == token.DOT: +- # Attribute lookup. +- return False +- parent = node.parent +- if parent.type in (syms.funcdef, syms.classdef): +- return False +- if parent.type == syms.expr_stmt and parent.children[0] is node: +- # Assignment. +- return False +- if parent.type == syms.parameters or \ +- (parent.type == syms.typedargslist and ( +- (prev is not None and prev.type == token.COMMA) or +- parent.children[0] is node +- )): +- # The name of an argument. +- return False +- return True +- +-########################################################### +-### The following functions are to find bindings in a suite +-########################################################### +- +-def make_suite(node): +- if node.type == syms.suite: +- return node +- node = node.clone() +- parent, node.parent = node.parent, None +- suite = Node(syms.suite, [node]) +- suite.parent = parent +- return suite +- +-def find_root(node): +- """Find the top level namespace.""" +- # Scamper up to the top level namespace +- while node.type != syms.file_input: +- assert node.parent, "Tree is insane! root found before "\ +- "file_input node was found." +- node = node.parent +- return node +- +-def does_tree_import(package, name, node): +- """ Returns true if name is imported from package at the +- top level of the tree which node belongs to. +- To cover the case of an import like 'import foo', use +- None for the package and 'foo' for the name. """ +- binding = find_binding(name, find_root(node), package) +- return bool(binding) +- +-def is_import(node): +- """Returns true if the node is an import statement.""" +- return node.type in (syms.import_name, syms.import_from) +- +-def touch_import(package, name, node): +- """ Works like `does_tree_import` but adds an import statement +- if it was not imported. """ +- def is_import_stmt(node): +- return node.type == syms.simple_stmt and node.children and \ +- is_import(node.children[0]) +- +- root = find_root(node) +- +- if does_tree_import(package, name, root): +- return +- +- add_newline_before = False +- +- # figure out where to insert the new import. First try to find +- # the first import and then skip to the last one. +- insert_pos = offset = 0 +- for idx, node in enumerate(root.children): +- if not is_import_stmt(node): +- continue +- for offset, node2 in enumerate(root.children[idx:]): +- if not is_import_stmt(node2): +- break +- insert_pos = idx + offset +- break +- +- # if there are no imports where we can insert, find the docstring. 
+- # if that also fails, we stick to the beginning of the file +- if insert_pos == 0: +- for idx, node in enumerate(root.children): +- if node.type == syms.simple_stmt and node.children and \ +- node.children[0].type == token.STRING: +- insert_pos = idx + 1 +- add_newline_before +- break +- +- if package is None: +- import_ = Node(syms.import_name, [ +- Leaf(token.NAME, 'import'), +- Leaf(token.NAME, name, prefix=' ') +- ]) +- else: +- import_ = FromImport(package, [Leaf(token.NAME, name, prefix=' ')]) +- +- children = [import_, Newline()] +- if add_newline_before: +- children.insert(0, Newline()) +- root.insert_child(insert_pos, Node(syms.simple_stmt, children)) +- +- +-_def_syms = set([syms.classdef, syms.funcdef]) +-def find_binding(name, node, package=None): +- """ Returns the node which binds variable name, otherwise None. +- If optional argument package is supplied, only imports will +- be returned. +- See test cases for examples.""" +- for child in node.children: +- ret = None +- if child.type == syms.for_stmt: +- if _find(name, child.children[1]): +- return child +- n = find_binding(name, make_suite(child.children[-1]), package) +- if n: ret = n +- elif child.type in (syms.if_stmt, syms.while_stmt): +- n = find_binding(name, make_suite(child.children[-1]), package) +- if n: ret = n +- elif child.type == syms.try_stmt: +- n = find_binding(name, make_suite(child.children[2]), package) +- if n: +- ret = n +- else: +- for i, kid in enumerate(child.children[3:]): +- if kid.type == token.COLON and kid.value == ":": +- # i+3 is the colon, i+4 is the suite +- n = find_binding(name, make_suite(child.children[i+4]), package) +- if n: ret = n +- elif child.type in _def_syms and child.children[1].value == name: +- ret = child +- elif _is_import_binding(child, name, package): +- ret = child +- elif child.type == syms.simple_stmt: +- ret = find_binding(name, child, package) +- elif child.type == syms.expr_stmt: +- if _find(name, child.children[0]): +- ret = child +- +- if ret: +- if not package: +- return ret +- if is_import(ret): +- return ret +- return None +- +-_block_syms = set([syms.funcdef, syms.classdef, syms.trailer]) +-def _find(name, node): +- nodes = [node] +- while nodes: +- node = nodes.pop() +- if node.type > 256 and node.type not in _block_syms: +- nodes.extend(node.children) +- elif node.type == token.NAME and node.value == name: +- return node +- return None +- +-def _is_import_binding(node, name, package=None): +- """ Will reuturn node if node will import name, or node +- will import * from package. None is returned otherwise. +- See test cases for examples. """ +- +- if node.type == syms.import_name and not package: +- imp = node.children[1] +- if imp.type == syms.dotted_as_names: +- for child in imp.children: +- if child.type == syms.dotted_as_name: +- if child.children[2].value == name: +- return node +- elif child.type == token.NAME and child.value == name: +- return node +- elif imp.type == syms.dotted_as_name: +- last = imp.children[-1] +- if last.type == token.NAME and last.value == name: +- return node +- elif imp.type == token.NAME and imp.value == name: +- return node +- elif node.type == syms.import_from: +- # unicode(...) is used to make life easier here, because +- # from a.b import parses to ['import', ['a', '.', 'b'], ...] 
+-        if package and unicode(node.children[1]).strip() != package:
+-            return None
+-        n = node.children[3]
+-        if package and _find('as', n):
+-            # See test_from_import_as for explanation
+-            return None
+-        elif n.type == syms.import_as_names and _find(name, n):
+-            return node
+-        elif n.type == syms.import_as_name:
+-            child = n.children[2]
+-            if child.type == token.NAME and child.value == name:
+-                return node
+-        elif n.type == token.NAME and n.value == name:
+-            return node
+-        elif package and n.type == token.STAR:
+-            return node
+-    return None
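The fixer_util.py hunks above are mostly node-construction "macros", and they
can be exercised directly against a stock lib2to3 (as shipped with Python
2.6). A minimal sketch -- the spam/eggs/ham names are made up for
illustration::

    from lib2to3.fixer_util import Call, Comma, Name

    # Build the parse tree for the call "spam(eggs, ham)" by hand, the
    # same way fixers assemble replacement nodes in transform().
    call = Call(Name("spam"), [Name("eggs"), Comma(), Name("ham", prefix=" ")])
    print str(call)   # -> spam(eggs, ham)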
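Writing a whole fixer is not much more work: per the fixer_base.py hunk
above, a fixer is a BaseFix subclass with a PATTERN string and a transform()
method. A sketch against stock lib2to3 -- fix_eggs, FixEggs and the
eggs() -> spam() rename are hypothetical, but the shape mirrors the shipped
fixers::

    # fix_eggs.py -- rename every call of eggs(...) to spam(...).
    # BaseFix expects the class to be named Fix + CamelCased fix name.
    from lib2to3 import fixer_base
    from lib2to3.fixer_util import Name

    class FixEggs(fixer_base.BaseFix):
        # Match a call like eggs(...); name= binds the NAME leaf.
        PATTERN = "power< name='eggs' trailer< '(' [any] ')' > any* >"

        def transform(self, node, results):
            name = results["name"]
            # Reuse the old leaf's prefix so surrounding whitespace and
            # comments survive the rename.
            name.replace(Name("spam", prefix=name.get_prefix()))

Dropped into a fixer package, this rewrites eggs(1, 2) to spam(1, 2); no
explicit changed() call is needed here, since replace() already notifies the
parent node.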
/dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,9 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-K 12 +-svn:keywords +-V 2 +-Id +-END +diff -r 531f2e948299 lib2to3/fixes/.svn/prop-base/fix_idioms.py.svn-base +--- a/lib2to3/fixes/.svn/prop-base/fix_idioms.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,5 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-END +diff -r 531f2e948299 lib2to3/fixes/.svn/prop-base/fix_import.py.svn-base +--- a/lib2to3/fixes/.svn/prop-base/fix_import.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,5 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-END +diff -r 531f2e948299 lib2to3/fixes/.svn/prop-base/fix_imports.py.svn-base +--- a/lib2to3/fixes/.svn/prop-base/fix_imports.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,5 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-END +diff -r 531f2e948299 lib2to3/fixes/.svn/prop-base/fix_imports2.py.svn-base +--- a/lib2to3/fixes/.svn/prop-base/fix_imports2.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,5 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-END +diff -r 531f2e948299 lib2to3/fixes/.svn/prop-base/fix_input.py.svn-base +--- a/lib2to3/fixes/.svn/prop-base/fix_input.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,9 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-K 12 +-svn:keywords +-V 2 +-Id +-END +diff -r 531f2e948299 lib2to3/fixes/.svn/prop-base/fix_intern.py.svn-base +--- a/lib2to3/fixes/.svn/prop-base/fix_intern.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,5 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-END +diff -r 531f2e948299 lib2to3/fixes/.svn/prop-base/fix_itertools.py.svn-base +--- a/lib2to3/fixes/.svn/prop-base/fix_itertools.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,5 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-END +diff -r 531f2e948299 lib2to3/fixes/.svn/prop-base/fix_itertools_imports.py.svn-base +--- a/lib2to3/fixes/.svn/prop-base/fix_itertools_imports.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,5 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-END +diff -r 531f2e948299 lib2to3/fixes/.svn/prop-base/fix_long.py.svn-base +--- a/lib2to3/fixes/.svn/prop-base/fix_long.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,9 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-K 12 +-svn:keywords +-V 2 +-Id +-END +diff -r 531f2e948299 lib2to3/fixes/.svn/prop-base/fix_map.py.svn-base +--- a/lib2to3/fixes/.svn/prop-base/fix_map.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,9 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-K 12 +-svn:keywords +-V 2 +-Id +-END +diff -r 531f2e948299 lib2to3/fixes/.svn/prop-base/fix_metaclass.py.svn-base +--- a/lib2to3/fixes/.svn/prop-base/fix_metaclass.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,9 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-K 12 +-svn:keywords +-V 2 +-Id +-END +diff -r 531f2e948299 lib2to3/fixes/.svn/prop-base/fix_methodattrs.py.svn-base +--- a/lib2to3/fixes/.svn/prop-base/fix_methodattrs.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,9 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-K 
12 +-svn:keywords +-V 13 +-'Id Revision' +-END +diff -r 531f2e948299 lib2to3/fixes/.svn/prop-base/fix_ne.py.svn-base +--- a/lib2to3/fixes/.svn/prop-base/fix_ne.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,9 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-K 12 +-svn:keywords +-V 2 +-Id +-END +diff -r 531f2e948299 lib2to3/fixes/.svn/prop-base/fix_next.py.svn-base +--- a/lib2to3/fixes/.svn/prop-base/fix_next.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,9 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-K 12 +-svn:keywords +-V 2 +-Id +-END +diff -r 531f2e948299 lib2to3/fixes/.svn/prop-base/fix_nonzero.py.svn-base +--- a/lib2to3/fixes/.svn/prop-base/fix_nonzero.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,9 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-K 12 +-svn:keywords +-V 2 +-Id +-END +diff -r 531f2e948299 lib2to3/fixes/.svn/prop-base/fix_numliterals.py.svn-base +--- a/lib2to3/fixes/.svn/prop-base/fix_numliterals.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,5 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-END +diff -r 531f2e948299 lib2to3/fixes/.svn/prop-base/fix_paren.py.svn-base +--- a/lib2to3/fixes/.svn/prop-base/fix_paren.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,9 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-K 12 +-svn:keywords +-V 2 +-Id +-END +diff -r 531f2e948299 lib2to3/fixes/.svn/prop-base/fix_print.py.svn-base +--- a/lib2to3/fixes/.svn/prop-base/fix_print.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,9 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-K 12 +-svn:keywords +-V 2 +-Id +-END +diff -r 531f2e948299 lib2to3/fixes/.svn/prop-base/fix_raise.py.svn-base +--- a/lib2to3/fixes/.svn/prop-base/fix_raise.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,9 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-K 12 +-svn:keywords +-V 2 +-Id +-END +diff -r 531f2e948299 lib2to3/fixes/.svn/prop-base/fix_raw_input.py.svn-base +--- a/lib2to3/fixes/.svn/prop-base/fix_raw_input.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,9 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-K 12 +-svn:keywords +-V 2 +-Id +-END +diff -r 531f2e948299 lib2to3/fixes/.svn/prop-base/fix_renames.py.svn-base +--- a/lib2to3/fixes/.svn/prop-base/fix_renames.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,9 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-K 12 +-svn:keywords +-V 13 +-'Id Revision' +-END +diff -r 531f2e948299 lib2to3/fixes/.svn/prop-base/fix_repr.py.svn-base +--- a/lib2to3/fixes/.svn/prop-base/fix_repr.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,9 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-K 12 +-svn:keywords +-V 2 +-Id +-END +diff -r 531f2e948299 lib2to3/fixes/.svn/prop-base/fix_set_literal.py.svn-base +--- a/lib2to3/fixes/.svn/prop-base/fix_set_literal.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,9 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-K 12 +-svn:keywords +-V 2 +-Id +-END +diff -r 531f2e948299 lib2to3/fixes/.svn/prop-base/fix_standarderror.py.svn-base +--- a/lib2to3/fixes/.svn/prop-base/fix_standarderror.py.svn-base Mon Mar 30 20:02:09 2009 
-0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,5 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-END +diff -r 531f2e948299 lib2to3/fixes/.svn/prop-base/fix_sys_exc.py.svn-base +--- a/lib2to3/fixes/.svn/prop-base/fix_sys_exc.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,9 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-K 12 +-svn:keywords +-V 2 +-Id +-END +diff -r 531f2e948299 lib2to3/fixes/.svn/prop-base/fix_throw.py.svn-base +--- a/lib2to3/fixes/.svn/prop-base/fix_throw.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,9 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-K 12 +-svn:keywords +-V 2 +-Id +-END +diff -r 531f2e948299 lib2to3/fixes/.svn/prop-base/fix_tuple_params.py.svn-base +--- a/lib2to3/fixes/.svn/prop-base/fix_tuple_params.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,9 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-K 12 +-svn:keywords +-V 2 +-Id +-END +diff -r 531f2e948299 lib2to3/fixes/.svn/prop-base/fix_types.py.svn-base +--- a/lib2to3/fixes/.svn/prop-base/fix_types.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,5 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-END +diff -r 531f2e948299 lib2to3/fixes/.svn/prop-base/fix_unicode.py.svn-base +--- a/lib2to3/fixes/.svn/prop-base/fix_unicode.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,9 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-K 12 +-svn:keywords +-V 2 +-Id +-END +diff -r 531f2e948299 lib2to3/fixes/.svn/prop-base/fix_urllib.py.svn-base +--- a/lib2to3/fixes/.svn/prop-base/fix_urllib.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,5 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-END +diff -r 531f2e948299 lib2to3/fixes/.svn/prop-base/fix_ws_comma.py.svn-base +--- a/lib2to3/fixes/.svn/prop-base/fix_ws_comma.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,9 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-K 12 +-svn:keywords +-V 2 +-Id +-END +diff -r 531f2e948299 lib2to3/fixes/.svn/prop-base/fix_xrange.py.svn-base +--- a/lib2to3/fixes/.svn/prop-base/fix_xrange.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,9 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-K 12 +-svn:keywords +-V 2 +-Id +-END +diff -r 531f2e948299 lib2to3/fixes/.svn/prop-base/fix_xreadlines.py.svn-base +--- a/lib2to3/fixes/.svn/prop-base/fix_xreadlines.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,5 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-END +diff -r 531f2e948299 lib2to3/fixes/.svn/prop-base/fix_zip.py.svn-base +--- a/lib2to3/fixes/.svn/prop-base/fix_zip.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,5 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-END +diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/__init__.py.svn-base +--- a/lib2to3/fixes/.svn/text-base/__init__.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,1 +0,0 @@ +-# Dummy file to make this directory a package. 
+diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_apply.py.svn-base
[deletes the 58-line text-base copy of fix_apply.py: the fixer that converts
apply(func, v, k) into func(*v, **k), parenthesizing func where precedence
requires it]
+diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_basestring.py.svn-base
[deletes the 13-line text-base copy of fix_basestring.py: renames basestring
to str]
+diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_buffer.py.svn-base
[deletes the 21-line text-base copy of fix_buffer.py: explicit fixer that
changes buffer(...) into memoryview(...)]
+diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_callable.py.svn-base
[deletes the 31-line text-base copy of fix_callable.py: converts
callable(obj) into hasattr(obj, '__call__')]
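[For reference, the two conversions just named can be reproduced with
lib2to3's refactoring driver; a minimal sketch, assuming an interpreter that
still ships lib2to3 -- the input string is invented for illustration:

    from lib2to3.refactor import RefactoringTool

    # Run only the fixers summarized above over a source string.
    tool = RefactoringTool(["lib2to3.fixes.fix_apply",
                            "lib2to3.fixes.fix_callable"])
    src = "apply(f, args, kwds)\nprint callable(f)\n"
    print(tool.refactor_string(src, "<example>"))
    # -> f(*args, **kwds)
    # -> print hasattr(f, '__call__')
]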
+diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_dict.py.svn-base
[deletes the 99-line text-base copy of fix_dict.py: wraps d.keys()/d.items()/
d.values() in list() and d.iterkeys()/d.iteritems()/d.itervalues() in iter(),
dropping the wrapper in consuming contexts such as list(), sorted(), iter()
or for...in]
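[To make the docstring's rules concrete, a small Python 3 illustration with
invented sample data:

    d = {"eggs": 2, "spam": 1}
    # 2to3 output for "ks = d.keys()":
    ks = list(d.keys())          # wrapped, since keys() is now a view
    # 2to3 output for "for k in d.iterkeys():"
    for k in d.keys():           # the iter() is dropped in a for-loop context
        print(k)
]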
+diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_except.py.svn-base
[deletes the 92-line text-base copy of fix_except.py: rewrites "except E, T:"
as "except E as T:", introducing a temporary name plus an assignment when T
is not a plain name]
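[An illustration of the three cases the docstring lists (sample code, not
from the patch):

    try:
        {}["ham"]
    # Python 2 wrote "except KeyError, err:"; the fixer emits:
    except KeyError as err:      # plain-name target: a simple rename
        print(err)
    # For a tuple or list target, "except E, (a, b):" becomes
    #     except E as t:
    #         (a, b) = t.args
]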
+diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_exec.py.svn-base
[deletes the 39-line text-base copy of fix_exec.py: converts the exec
statement, e.g. "exec code in ns1, ns2", into the call exec(code, ns1, ns2)]
+diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_execfile.py.svn-base
[deletes the 51-line text-base copy of fix_execfile.py: converts
execfile(fn, ...) into exec(compile(open(fn).read(), fn, 'exec'), ...),
keeping the original file name in the compiled code]
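[A runnable sketch of both conversions (namespace and code string invented):

    ns = {}
    # Python 2 statement form: exec "x = 40 + 2" in ns
    exec("x = 40 + 2", ns)       # the call form fix_exec emits
    assert ns["x"] == 42
    # fix_execfile turns execfile("setup.py", ns) into
    #     exec(compile(open("setup.py").read(), "setup.py", 'exec'), ns)
    # so tracebacks keep pointing at the original file name.
]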
+diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_filter.py.svn-base
[deletes the 75-line text-base copy of fix_filter.py: changes filter(F, X)
into list(filter(F, X)), turns the lambda and filter(None, ...) forms into
list comprehensions, and skips consuming contexts such as iter(), list(),
sorted() or for...in]
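[The filter(None, ...) special case, shown as the list comprehension the
fixer emits (sample data invented):

    seq = [0, 1, "", "spam"]
    # Python 2: truthy = filter(None, seq)
    truthy = [_f for _f in seq if _f]
    assert truthy == [1, "spam"]
    # a general filter(F, X) is wrapped instead: list(filter(F, X))
]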
+diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_funcattrs.py.svn-base
[deletes the 19-line text-base copy of fix_funcattrs.py: renames f.func_x
attributes (func_name, func_defaults, func_code, func_doc, ...) to f.__x__]
+diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_future.py.svn-base
[deletes the 20-line text-base copy of fix_future.py: replaces
"from __future__ import foo" with a blank line; ordered to run last]
+diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_getcwdu.py.svn-base
[deletes the 18-line text-base copy of fix_getcwdu.py: changes os.getcwdu()
to os.getcwd()]
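[A small Python 3 check of the renamed spellings (sample function invented):

    import os
    def f(): pass
    # fix_funcattrs: f.func_name / f.func_defaults -> f.__name__ / f.__defaults__
    assert f.__name__ == "f"
    # fix_getcwdu: os.getcwdu() -> os.getcwd()
    cwd = os.getcwd()
]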
+diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_has_key.py.svn-base
[deletes the 109-line text-base copy of fix_has_key.py: expresses d.has_key(k)
as "k in d" and the negated form as "not in"; only calls are converted, bound
references such as "m = d.has_key" are left alone]
+diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_idioms.py.svn-base
[deletes the 134-line text-base copy of fix_idioms.py: explicit fixer that
turns type(x) == T comparisons into isinstance(x, T), "while 1:" into
"while True:", and an assignment followed by .sort() on the same name into a
single sorted() assignment]
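[The output forms, as a runnable illustration with invented data:

    d = {"spam": 1}
    # fix_has_key: d.has_key("spam") -> "spam" in d
    #              not d.has_key("spam") -> "spam" not in d
    assert "spam" in d
    # fix_idioms: type(x) == int -> isinstance(x, int); while 1: -> while True:
    # and the pair "v = list(EXPR); v.sort()" collapses to:
    v = sorted([3, 1, 2])
]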
+diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_import.py.svn-base
[deletes the 90-line text-base copy of fix_import.py: makes imports of
sibling modules explicitly relative -- "import spam" becomes
"from . import spam" and "from spam import eggs" becomes
"from .spam import eggs" -- when a matching file sits next to an __init__.py]
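[In outline, with a hypothetical package layout (the module names here are
placeholders, not from the patch):

    # With __init__.py and spam.py beside the file being fixed:
    #     import spam            ->  from . import spam
    #     from spam import eggs  ->  from .spam import eggs
    # Absolute imports are untouched, and a statement mixing local and
    # absolute names is left alone with a warning.
]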
+diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_imports.py.svn-base
[deletes the 145-line text-base copy of fix_imports.py: renames moved stdlib
modules per its MAPPING table (StringIO and cStringIO -> io, cPickle ->
pickle, __builtin__ -> builtins, Queue -> queue, Tkinter -> tkinter, httplib
-> http.client, urlparse -> urllib.parse, xmlrpclib -> xmlrpc.client, and
many more) and also rewrites bare usages such as thread.foo(bar)]
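[The rewritten spellings import cleanly on Python 3; a small runnable check
of two entries from the MAPPING table:

    import io                     # was: import StringIO
    from pickle import dumps      # was: from cPickle import dumps
    buf = io.StringIO("spam")     # bare StringIO.StringIO(...) usages are
    data = dumps(buf.read())      # rewritten module-by-module as well
]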
+- if "bare_with_attr" not in results and \ +- any([match(obj) for obj in attr_chain(node, "parent")]): +- return False +- return results +- return False +- +- def start_tree(self, tree, filename): +- super(FixImports, self).start_tree(tree, filename) +- self.replace = {} +- +- def transform(self, node, results): +- import_mod = results.get("module_name") +- if import_mod: +- mod_name = import_mod.value +- new_name = self.mapping[mod_name] +- import_mod.replace(Name(new_name, prefix=import_mod.get_prefix())) +- if "name_import" in results: +- # If it's not a "from x import x, y" or "import x as y" import, +- # marked its usage to be replaced. +- self.replace[mod_name] = new_name +- if "multiple_imports" in results: +- # This is a nasty hack to fix multiple imports on a line (e.g., +- # "import StringIO, urlparse"). The problem is that I can't +- # figure out an easy way to make a pattern recognize the keys of +- # MAPPING randomly sprinkled in an import statement. +- results = self.match(node) +- if results: +- self.transform(node, results) +- else: +- # Replace usage of the module. +- bare_name = results["bare_with_attr"][0] +- new_name = self.replace.get(bare_name.value) +- if new_name: +- bare_name.replace(Name(new_name, prefix=bare_name.get_prefix())) +diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_imports2.py.svn-base +--- a/lib2to3/fixes/.svn/text-base/fix_imports2.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,16 +0,0 @@ +-"""Fix incompatible imports and module references that must be fixed after +-fix_imports.""" +-from . import fix_imports +- +- +-MAPPING = { +- 'whichdb': 'dbm', +- 'anydbm': 'dbm', +- } +- +- +-class FixImports2(fix_imports.FixImports): +- +- run_order = 7 +- +- mapping = MAPPING +diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_input.py.svn-base +--- a/lib2to3/fixes/.svn/text-base/fix_input.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,26 +0,0 @@ +-"""Fixer that changes input(...) into eval(input(...)).""" +-# Author: Andre Roberge +- +-# Local imports +-from .. import fixer_base +-from ..fixer_util import Call, Name +-from .. import patcomp +- +- +-context = patcomp.compile_pattern("power< 'eval' trailer< '(' any ')' > >") +- +- +-class FixInput(fixer_base.BaseFix): +- +- PATTERN = """ +- power< 'input' args=trailer< '(' [any] ')' > > +- """ +- +- def transform(self, node, results): +- # If we're already wrapped in a eval() call, we're done. +- if context.match(node.parent.parent): +- return +- +- new = node.clone() +- new.set_prefix("") +- return Call(Name("eval"), [new], prefix=node.get_prefix()) +diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_intern.py.svn-base +--- a/lib2to3/fixes/.svn/text-base/fix_intern.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,44 +0,0 @@ +-# Copyright 2006 Georg Brandl. +-# Licensed to PSF under a Contributor Agreement. +- +-"""Fixer for intern(). +- +-intern(s) -> sys.intern(s)""" +- +-# Local imports +-from .. import pytree +-from .. 
+diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_isinstance.py.svn-base
[deletes the 52-line text-base copy of fix_isinstance.py: removes duplicate
names that earlier fixers leave behind in isinstance() tuples, e.g.
isinstance(x, (int, int)) -> isinstance(x, int)]
+diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_itertools.py.svn-base
[deletes the 41-line text-base copy of fix_itertools.py: maps
itertools.imap/ifilter/izip calls onto the map/filter/zip builtins and
ifilterfalse onto itertools.filterfalse]
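[Using the same driver as in the fix_apply sketch above, again assuming
lib2to3 is importable:

    from lib2to3.refactor import RefactoringTool
    tool = RefactoringTool(["lib2to3.fixes.fix_isinstance"])
    # After fix_long has turned (int, long) into (int, int), this cleans up:
    print(tool.refactor_string("isinstance(x, (int, int))\n", "<example>"))
    # -> isinstance(x, int)
]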
+diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_itertools_imports.py.svn-base
[deletes the 52-line text-base copy of fix_itertools_imports.py: prunes imap,
ifilter and izip from "from itertools import ..." statements and renames
ifilterfalse to filterfalse, dropping the statement entirely if nothing is
left]
+diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_long.py.svn-base
[deletes the 22-line text-base copy of fix_long.py: turns the name long into
int wherever it is probably the builtin]
+diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_map.py.svn-base
[deletes the 82-line text-base copy of fix_map.py: changes map(F, ...) into
list(map(F, ...)) outside consuming contexts, with map(None, X) becoming
list(X) and lambda forms becoming list comprehensions]
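[The resulting Python 3 spellings, as a runnable illustration:

    # fix_long: long -> int (when it is probably the builtin)
    big = int("12345678901234567890")
    # fix_map: map(None, X) -> list(X); a general map(F, X) is wrapped
    squares = list(map(lambda x: x * x, range(4)))
    # fix_itertools / fix_itertools_imports: itertools.izip -> zip, etc.
    pairs = list(zip("ab", [1, 2]))
]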
+diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_metaclass.py.svn-base
[deletes the 227-line text-base copy of fix_metaclass.py: moves a class-body
"__metaclass__ = X" assignment into the class header as a metaclass=X
keyword, normalizing one-liner classes and empty-body corner cases while
preserving the original indentation and spacing]
+diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_methodattrs.py.svn-base
[deletes the 23-line text-base copy of fix_methodattrs.py: renames the
bound-method attributes im_func, im_self and im_class to __func__, __self__
and __self__.__class__]
-> method.__?__). +-""" +-# Author: Christian Heimes +- +-# Local imports +-from .. import fixer_base +-from ..fixer_util import Name +- +-MAP = { +- "im_func" : "__func__", +- "im_self" : "__self__", +- "im_class" : "__self__.__class__" +- } +- +-class FixMethodattrs(fixer_base.BaseFix): +- PATTERN = """ +- power< any+ trailer< '.' attr=('im_func' | 'im_self' | 'im_class') > any* > +- """ +- +- def transform(self, node, results): +- attr = results["attr"][0] +- new = MAP[attr.value] +- attr.replace(Name(new, prefix=attr.get_prefix())) +diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_ne.py.svn-base +--- a/lib2to3/fixes/.svn/text-base/fix_ne.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,22 +0,0 @@ +-# Copyright 2006 Google, Inc. All Rights Reserved. +-# Licensed to PSF under a Contributor Agreement. +- +-"""Fixer that turns <> into !=.""" +- +-# Local imports +-from .. import pytree +-from ..pgen2 import token +-from .. import fixer_base +- +- +-class FixNe(fixer_base.BaseFix): +- # This is so simple that we don't need the pattern compiler. +- +- def match(self, node): +- # Override +- return node.type == token.NOTEQUAL and node.value == "<>" +- +- def transform(self, node, results): +- new = pytree.Leaf(token.NOTEQUAL, "!=") +- new.set_prefix(node.get_prefix()) +- return new +diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_next.py.svn-base +--- a/lib2to3/fixes/.svn/text-base/fix_next.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,103 +0,0 @@ +-"""Fixer for it.next() -> next(it), per PEP 3114.""" +-# Author: Collin Winter +- +-# Things that currently aren't covered: +-# - listcomp "next" names aren't warned +-# - "with" statement targets aren't checked +- +-# Local imports +-from ..pgen2 import token +-from ..pygram import python_symbols as syms +-from .. import fixer_base +-from ..fixer_util import Name, Call, find_binding +- +-bind_warning = "Calls to builtin next() possibly shadowed by global binding" +- +- +-class FixNext(fixer_base.BaseFix): +- PATTERN = """ +- power< base=any+ trailer< '.' attr='next' > trailer< '(' ')' > > +- | +- power< head=any+ trailer< '.' attr='next' > not trailer< '(' ')' > > +- | +- classdef< 'class' any+ ':' +- suite< any* +- funcdef< 'def' +- name='next' +- parameters< '(' NAME ')' > any+ > +- any* > > +- | +- global=global_stmt< 'global' any* 'next' any* > +- """ +- +- order = "pre" # Pre-order tree traversal +- +- def start_tree(self, tree, filename): +- super(FixNext, self).start_tree(tree, filename) +- +- n = find_binding('next', tree) +- if n: +- self.warning(n, bind_warning) +- self.shadowed_next = True +- else: +- self.shadowed_next = False +- +- def transform(self, node, results): +- assert results +- +- base = results.get("base") +- attr = results.get("attr") +- name = results.get("name") +- mod = results.get("mod") +- +- if base: +- if self.shadowed_next: +- attr.replace(Name("__next__", prefix=attr.get_prefix())) +- else: +- base = [n.clone() for n in base] +- base[0].set_prefix("") +- node.replace(Call(Name("next", prefix=node.get_prefix()), base)) +- elif name: +- n = Name("__next__", prefix=name.get_prefix()) +- name.replace(n) +- elif attr: +- # We don't do this transformation if we're assigning to "x.next". +- # Unfortunately, it doesn't seem possible to do this in PATTERN, +- # so it's being done here. 
+- if is_assign_target(node): +- head = results["head"] +- if "".join([str(n) for n in head]).strip() == '__builtin__': +- self.warning(node, bind_warning) +- return +- attr.replace(Name("__next__")) +- elif "global" in results: +- self.warning(node, bind_warning) +- self.shadowed_next = True +- +- +-### The following functions help test if node is part of an assignment +-### target. +- +-def is_assign_target(node): +- assign = find_assign(node) +- if assign is None: +- return False +- +- for child in assign.children: +- if child.type == token.EQUAL: +- return False +- elif is_subtree(child, node): +- return True +- return False +- +-def find_assign(node): +- if node.type == syms.expr_stmt: +- return node +- if node.type == syms.simple_stmt or node.parent is None: +- return None +- return find_assign(node.parent) +- +-def is_subtree(root, node): +- if root == node: +- return True +- return any([is_subtree(c, node) for c in root.children]) +diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_nonzero.py.svn-base +--- a/lib2to3/fixes/.svn/text-base/fix_nonzero.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,20 +0,0 @@ +-"""Fixer for __nonzero__ -> __bool__ methods.""" +-# Author: Collin Winter +- +-# Local imports +-from .. import fixer_base +-from ..fixer_util import Name, syms +- +-class FixNonzero(fixer_base.BaseFix): +- PATTERN = """ +- classdef< 'class' any+ ':' +- suite< any* +- funcdef< 'def' name='__nonzero__' +- parameters< '(' NAME ')' > any+ > +- any* > > +- """ +- +- def transform(self, node, results): +- name = results["name"] +- new = Name("__bool__", prefix=name.get_prefix()) +- name.replace(new) +diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_numliterals.py.svn-base +--- a/lib2to3/fixes/.svn/text-base/fix_numliterals.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,27 +0,0 @@ +-"""Fixer that turns 1L into 1, 0755 into 0o755. +-""" +-# Copyright 2007 Georg Brandl. +-# Licensed to PSF under a Contributor Agreement. +- +-# Local imports +-from ..pgen2 import token +-from .. import fixer_base +-from ..fixer_util import Number +- +- +-class FixNumliterals(fixer_base.BaseFix): +- # This is so simple that we don't need the pattern compiler. +- +- def match(self, node): +- # Override +- return (node.type == token.NUMBER and +- (node.value.startswith("0") or node.value[-1] in "Ll")) +- +- def transform(self, node, results): +- val = node.value +- if val[-1] in 'Ll': +- val = val[:-1] +- elif val.startswith('0') and val.isdigit() and len(set(val)) > 1: +- val = "0o" + val[1:] +- +- return Number(val, prefix=node.get_prefix())
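For reference, the numliterals fixer just above rewrites long-integer and old-style octal literals; a minimal sketch, again illustrative rather than part of the patch (the sample literals are made up):

    from lib2to3.refactor import RefactoringTool

    tool = RefactoringTool(["lib2to3.fixes.fix_numliterals"])
    # The trailing L is dropped and the leading-zero octal becomes 0o...
    print(str(tool.refactor_string("x = 1L; mode = 0755\n", "<example>")))
    # prints: x = 1; mode = 0o755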
+diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_paren.py.svn-base +--- a/lib2to3/fixes/.svn/text-base/fix_paren.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,42 +0,0 @@ +-"""Fixer that adds parentheses where they are required +- +-This converts ``[x for x in 1, 2]`` to ``[x for x in (1, 2)]``.""" +- +-# By Taek Joo Kim and Benjamin Peterson +- +-# Local imports +-from .. import fixer_base +-from ..fixer_util import LParen, RParen +- +-# XXX This doesn't support nested for loops like [x for x in 1, 2 for x in 1, 2] +-class FixParen(fixer_base.BaseFix): +- PATTERN = """ +- atom< ('[' | '(') +- (listmaker< any +- comp_for< +- 'for' NAME 'in' +- target=testlist_safe< any (',' any)+ [','] +- > +- [any] +- > +- > +- | +- testlist_gexp< any +- comp_for< +- 'for' NAME 'in' +- target=testlist_safe< any (',' any)+ [','] +- > +- [any] +- > +- >) +- (']' | ')') > +- """ +- +- def transform(self, node, results): +- target = results["target"] +- +- lparen = LParen() +- lparen.set_prefix(target.get_prefix()) +- target.set_prefix("") # Make it hug the parentheses +- target.insert_child(0, lparen) +- target.append_child(RParen()) +diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_print.py.svn-base +--- a/lib2to3/fixes/.svn/text-base/fix_print.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,90 +0,0 @@ +-# Copyright 2006 Google, Inc. All Rights Reserved. +-# Licensed to PSF under a Contributor Agreement. +- +-"""Fixer for print. +- +-Change: +- 'print' into 'print()' +- 'print ...' into 'print(...)' +- 'print ... ,' into 'print(..., end=" ")' +- 'print >>x, ...' into 'print(..., file=x)' +- +-No changes are applied if print_function is imported from __future__ +- +-""" +- +-# Local imports +-from .. import patcomp +-from .. import pytree +-from ..pgen2 import token +-from .. import fixer_base +-from ..fixer_util import Name, Call, Comma, String, is_tuple +- +- +-parend_expr = patcomp.compile_pattern( +- """atom< '(' [atom|STRING|NAME] ')' >""" +- ) +- +- +-class FixPrint(fixer_base.ConditionalFix): +- +- PATTERN = """ +- simple_stmt< any* bare='print' any* > | print_stmt +- """ +- +- skip_on = '__future__.print_function' +- +- def transform(self, node, results): +- assert results +- +- if self.should_skip(node): +- return +- +- bare_print = results.get("bare") +- +- if bare_print: +- # Special-case print all by itself +- bare_print.replace(Call(Name("print"), [], +- prefix=bare_print.get_prefix())) +- return +- assert node.children[0] == Name("print") +- args = node.children[1:] +- if len(args) == 1 and parend_expr.match(args[0]): +- # We don't want to keep sticking parens around an +- # already-parenthesised expression. +- return +- +- sep = end = file = None +- if args and args[-1] == Comma(): +- args = args[:-1] +- end = " " +- if args and args[0] == pytree.Leaf(token.RIGHTSHIFT, ">>"): +- assert len(args) >= 2 +- file = args[1].clone() +- args = args[3:] # Strip a possible comma after the file expression +- # Now synthesize a print(args, sep=..., end=..., file=...) node.
+- l_args = [arg.clone() for arg in args] +- if l_args: +- l_args[0].set_prefix("") +- if sep is not None or end is not None or file is not None: +- if sep is not None: +- self.add_kwarg(l_args, "sep", String(repr(sep))) +- if end is not None: +- self.add_kwarg(l_args, "end", String(repr(end))) +- if file is not None: +- self.add_kwarg(l_args, "file", file) +- n_stmt = Call(Name("print"), l_args) +- n_stmt.set_prefix(node.get_prefix()) +- return n_stmt +- +- def add_kwarg(self, l_nodes, s_kwd, n_expr): +- # XXX All this prefix-setting may lose comments (though rarely) +- n_expr.set_prefix("") +- n_argument = pytree.Node(self.syms.argument, +- (Name(s_kwd), +- pytree.Leaf(token.EQUAL, "="), +- n_expr)) +- if l_nodes: +- l_nodes.append(Comma()) +- n_argument.set_prefix(" ") +- l_nodes.append(n_argument) +diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_raise.py.svn-base +--- a/lib2to3/fixes/.svn/text-base/fix_raise.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,82 +0,0 @@ +-"""Fixer for 'raise E, V, T' +- +-raise -> raise +-raise E -> raise E +-raise E, V -> raise E(V) +-raise E, V, T -> raise E(V).with_traceback(T) +- +-raise (((E, E'), E''), E'''), V -> raise E(V) +-raise "foo", V, T -> warns about string exceptions +- +- +-CAVEATS: +-1) "raise E, V" will be incorrectly translated if V is an exception +- instance. The correct Python 3 idiom is +- +- raise E from V +- +- but since we can't detect instance-hood by syntax alone and since +- any client code would have to be changed as well, we don't automate +- this. +-""" +-# Author: Collin Winter +- +-# Local imports +-from .. import pytree +-from ..pgen2 import token +-from .. import fixer_base +-from ..fixer_util import Name, Call, Attr, ArgList, is_tuple +- +-class FixRaise(fixer_base.BaseFix): +- +- PATTERN = """ +- raise_stmt< 'raise' exc=any [',' val=any [',' tb=any]] > +- """ +- +- def transform(self, node, results): +- syms = self.syms +- +- exc = results["exc"].clone() +- if exc.type is token.STRING: +- self.cannot_convert(node, "Python 3 does not support string exceptions") +- return +- +- # Python 2 supports +- # raise ((((E1, E2), E3), E4), E5), V +- # as a synonym for +- # raise E1, V +- # Since Python 3 will not support this, we recurse down any tuple +- # literals, always taking the first element. 
+- if is_tuple(exc): +- while is_tuple(exc): +- # exc.children[1:-1] is the unparenthesized tuple +- # exc.children[1].children[0] is the first element of the tuple +- exc = exc.children[1].children[0].clone() +- exc.set_prefix(" ") +- +- if "val" not in results: +- # One-argument raise +- new = pytree.Node(syms.raise_stmt, [Name("raise"), exc]) +- new.set_prefix(node.get_prefix()) +- return new +- +- val = results["val"].clone() +- if is_tuple(val): +- args = [c.clone() for c in val.children[1:-1]] +- else: +- val.set_prefix("") +- args = [val] +- +- if "tb" in results: +- tb = results["tb"].clone() +- tb.set_prefix("") +- +- e = Call(exc, args) +- with_tb = Attr(e, Name('with_traceback')) + [ArgList([tb])] +- new = pytree.Node(syms.simple_stmt, [Name("raise")] + with_tb) +- new.set_prefix(node.get_prefix()) +- return new +- else: +- return pytree.Node(syms.raise_stmt, +- [Name("raise"), Call(exc, args)], +- prefix=node.get_prefix()) +diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_raw_input.py.svn-base +--- a/lib2to3/fixes/.svn/text-base/fix_raw_input.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,16 +0,0 @@ +-"""Fixer that changes raw_input(...) into input(...).""" +-# Author: Andre Roberge +- +-# Local imports +-from .. import fixer_base +-from ..fixer_util import Name +- +-class FixRawInput(fixer_base.BaseFix): +- +- PATTERN = """ +- power< name='raw_input' trailer< '(' [any] ')' > any* > +- """ +- +- def transform(self, node, results): +- name = results["name"] +- name.replace(Name("input", prefix=name.get_prefix())) +diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_reduce.py.svn-base +--- a/lib2to3/fixes/.svn/text-base/fix_reduce.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,33 +0,0 @@ +-# Copyright 2008 Armin Ronacher. +-# Licensed to PSF under a Contributor Agreement. +- +-"""Fixer for reduce(). +- +-Makes sure reduce() is imported from the functools module if reduce is +-used in that module. +-""" +- +-from .. import pytree +-from .. import fixer_base +-from ..fixer_util import Name, Attr, touch_import +- +- +- +-class FixReduce(fixer_base.BaseFix): +- +- PATTERN = """ +- power< 'reduce' +- trailer< '(' +- arglist< ( +- (not(argument<any '=' any>) any ',' +- not(argument<any '=' any>) any) | +- (not(argument<any '=' any>) any ',' +- not(argument<any '=' any>) any ',' +- not(argument<any '=' any>) any) +- ) > +- ')' > +- > +- """ +- +- def transform(self, node, results): +- touch_import('functools', 'reduce', node) +diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_renames.py.svn-base +--- a/lib2to3/fixes/.svn/text-base/fix_renames.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,69 +0,0 @@ +-"""Fix incompatible renames +- +-Fixes: +- * sys.maxint -> sys.maxsize +-""" +-# Author: Christian Heimes +-# based on Collin Winter's fix_import +- +-# Local imports +-from .. 
import fixer_base +-from ..fixer_util import Name, attr_chain +- +-MAPPING = {"sys": {"maxint" : "maxsize"}, +- } +-LOOKUP = {} +- +-def alternates(members): +- return "(" + "|".join(map(repr, members)) + ")" +- +- +-def build_pattern(): +- #bare = set() +- for module, replace in MAPPING.items(): +- for old_attr, new_attr in replace.items(): +- LOOKUP[(module, old_attr)] = new_attr +- #bare.add(module) +- #bare.add(old_attr) +- #yield """ +- # import_name< 'import' (module=%r +- # | dotted_as_names< any* module=%r any* >) > +- # """ % (module, module) +- yield """ +- import_from< 'from' module_name=%r 'import' +- ( attr_name=%r | import_as_name< attr_name=%r 'as' any >) > +- """ % (module, old_attr, old_attr) +- yield """ +- power< module_name=%r trailer< '.' attr_name=%r > any* > +- """ % (module, old_attr) +- #yield """bare_name=%s""" % alternates(bare) +- +- +-class FixRenames(fixer_base.BaseFix): +- PATTERN = "|".join(build_pattern()) +- +- order = "pre" # Pre-order tree traversal +- +- # Don't match the node if it's within another match +- def match(self, node): +- match = super(FixRenames, self).match +- results = match(node) +- if results: +- if any([match(obj) for obj in attr_chain(node, "parent")]): +- return False +- return results +- return False +- +- #def start_tree(self, tree, filename): +- # super(FixRenames, self).start_tree(tree, filename) +- # self.replace = {} +- +- def transform(self, node, results): +- mod_name = results.get("module_name") +- attr_name = results.get("attr_name") +- #bare_name = results.get("bare_name") +- #import_mod = results.get("module") +- +- if mod_name and attr_name: +- new_attr = LOOKUP[(mod_name.value, attr_name.value)] +- attr_name.replace(Name(new_attr, prefix=attr_name.get_prefix())) +diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_repr.py.svn-base +--- a/lib2to3/fixes/.svn/text-base/fix_repr.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,22 +0,0 @@ +-# Copyright 2006 Google, Inc. All Rights Reserved. +-# Licensed to PSF under a Contributor Agreement. +- +-"""Fixer that transforms `xyzzy` into repr(xyzzy).""" +- +-# Local imports +-from .. import fixer_base +-from ..fixer_util import Call, Name, parenthesize +- +- +-class FixRepr(fixer_base.BaseFix): +- +- PATTERN = """ +- atom < '`' expr=any '`' > +- """ +- +- def transform(self, node, results): +- expr = results["expr"].clone() +- +- if expr.type == self.syms.testlist1: +- expr = parenthesize(expr) +- return Call(Name("repr"), [expr], prefix=node.get_prefix()) +diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_set_literal.py.svn-base +--- a/lib2to3/fixes/.svn/text-base/fix_set_literal.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,52 +0,0 @@ +-""" +-Optional fixer to transform set() calls to set literals. 
+-""" +- +-# Author: Benjamin Peterson +- +-from lib2to3 import fixer_base, pytree +-from lib2to3.fixer_util import token, syms +- +- +- +-class FixSetLiteral(fixer_base.BaseFix): +- +- explicit = True +- +- PATTERN = """power< 'set' trailer< '(' +- (atom=atom< '[' (items=listmaker< any ((',' any)* [',']) > +- | +- single=any) ']' > +- | +- atom< '(' items=testlist_gexp< any ((',' any)* [',']) > ')' > +- ) +- ')' > > +- """ +- +- def transform(self, node, results): +- single = results.get("single") +- if single: +- # Make a fake listmaker +- fake = pytree.Node(syms.listmaker, [single.clone()]) +- single.replace(fake) +- items = fake +- else: +- items = results["items"] +- +- # Build the contents of the literal +- literal = [pytree.Leaf(token.LBRACE, "{")] +- literal.extend(n.clone() for n in items.children) +- literal.append(pytree.Leaf(token.RBRACE, "}")) +- # Set the prefix of the right brace to that of the ')' or ']' +- literal[-1].set_prefix(items.next_sibling.get_prefix()) +- maker = pytree.Node(syms.dictsetmaker, literal) +- maker.set_prefix(node.get_prefix()) +- +- # If the original was a one tuple, we need to remove the extra comma. +- if len(maker.children) == 4: +- n = maker.children[2] +- n.remove() +- maker.children[-1].set_prefix(n.get_prefix()) +- +- # Finally, replace the set call with our shiny new literal. +- return maker +diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_standarderror.py.svn-base +--- a/lib2to3/fixes/.svn/text-base/fix_standarderror.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,18 +0,0 @@ +-# Copyright 2007 Google, Inc. All Rights Reserved. +-# Licensed to PSF under a Contributor Agreement. +- +-"""Fixer for StandardError -> Exception.""" +- +-# Local imports +-from .. import fixer_base +-from ..fixer_util import Name +- +- +-class FixStandarderror(fixer_base.BaseFix): +- +- PATTERN = """ +- 'StandardError' +- """ +- +- def transform(self, node, results): +- return Name("Exception", prefix=node.get_prefix()) +diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_sys_exc.py.svn-base +--- a/lib2to3/fixes/.svn/text-base/fix_sys_exc.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,29 +0,0 @@ +-"""Fixer for sys.exc_{type, value, traceback} +- +-sys.exc_type -> sys.exc_info()[0] +-sys.exc_value -> sys.exc_info()[1] +-sys.exc_traceback -> sys.exc_info()[2] +-""" +- +-# By Jeff Balogh and Benjamin Peterson +- +-# Local imports +-from .. import fixer_base +-from ..fixer_util import Attr, Call, Name, Number, Subscript, Node, syms +- +-class FixSysExc(fixer_base.BaseFix): +- # This order matches the ordering of sys.exc_info(). +- exc_info = ["exc_type", "exc_value", "exc_traceback"] +- PATTERN = """ +- power< 'sys' trailer< dot='.' attribute=(%s) > > +- """ % '|'.join("'%s'" % e for e in exc_info) +- +- def transform(self, node, results): +- sys_attr = results["attribute"][0] +- index = Number(self.exc_info.index(sys_attr.value)) +- +- call = Call(Name("exc_info"), prefix=sys_attr.get_prefix()) +- attr = Attr(Name("sys"), call) +- attr[1].children[0].set_prefix(results["dot"].get_prefix()) +- attr.append(Subscript(index)) +- return Node(syms.power, attr, prefix=node.get_prefix()) +diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_throw.py.svn-base +--- a/lib2to3/fixes/.svn/text-base/fix_throw.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,56 +0,0 @@ +-"""Fixer for generator.throw(E, V, T). 
+- +-g.throw(E) -> g.throw(E) +-g.throw(E, V) -> g.throw(E(V)) +-g.throw(E, V, T) -> g.throw(E(V).with_traceback(T)) +- +-g.throw("foo"[, V[, T]]) will warn about string exceptions.""" +-# Author: Collin Winter +- +-# Local imports +-from .. import pytree +-from ..pgen2 import token +-from .. import fixer_base +-from ..fixer_util import Name, Call, ArgList, Attr, is_tuple +- +-class FixThrow(fixer_base.BaseFix): +- +- PATTERN = """ +- power< any trailer< '.' 'throw' > +- trailer< '(' args=arglist< exc=any ',' val=any [',' tb=any] > ')' > +- > +- | +- power< any trailer< '.' 'throw' > trailer< '(' exc=any ')' > > +- """ +- +- def transform(self, node, results): +- syms = self.syms +- +- exc = results["exc"].clone() +- if exc.type is token.STRING: +- self.cannot_convert(node, "Python 3 does not support string exceptions") +- return +- +- # Leave "g.throw(E)" alone +- val = results.get("val") +- if val is None: +- return +- +- val = val.clone() +- if is_tuple(val): +- args = [c.clone() for c in val.children[1:-1]] +- else: +- val.set_prefix("") +- args = [val] +- +- throw_args = results["args"] +- +- if "tb" in results: +- tb = results["tb"].clone() +- tb.set_prefix("") +- +- e = Call(exc, args) +- with_tb = Attr(e, Name('with_traceback')) + [ArgList([tb])] +- throw_args.replace(pytree.Node(syms.power, with_tb)) +- else: +- throw_args.replace(Call(exc, args)) +diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_tuple_params.py.svn-base +--- a/lib2to3/fixes/.svn/text-base/fix_tuple_params.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,169 +0,0 @@ +-"""Fixer for function definitions with tuple parameters. +- +-def func(((a, b), c), d): +- ... +- +- -> +- +-def func(x, d): +- ((a, b), c) = x +- ... +- +-It will also support lambdas: +- +- lambda (x, y): x + y -> lambda t: t[0] + t[1] +- +- # The parens are a syntax error in Python 3 +- lambda (x): x + y -> lambda x: x + y +-""" +-# Author: Collin Winter +- +-# Local imports +-from .. import pytree +-from ..pgen2 import token +-from .. import fixer_base +-from ..fixer_util import Assign, Name, Newline, Number, Subscript, syms +- +-def is_docstring(stmt): +- return isinstance(stmt, pytree.Node) and \ +- stmt.children[0].type == token.STRING +- +-class FixTupleParams(fixer_base.BaseFix): +- PATTERN = """ +- funcdef< 'def' any parameters< '(' args=any ')' > +- ['->' any] ':' suite=any+ > +- | +- lambda= +- lambdef< 'lambda' args=vfpdef< '(' inner=any ')' > +- ':' body=any +- > +- """ +- +- def transform(self, node, results): +- if "lambda" in results: +- return self.transform_lambda(node, results) +- +- new_lines = [] +- suite = results["suite"] +- args = results["args"] +- # This crap is so "def foo(...): x = 5; y = 7" is handled correctly. +- # TODO(cwinter): suite-cleanup +- if suite[0].children[1].type == token.INDENT: +- start = 2 +- indent = suite[0].children[1].value +- end = Newline() +- else: +- start = 0 +- indent = "; " +- end = pytree.Leaf(token.INDENT, "") +- +- # We need access to self for new_name(), and making this a method +- # doesn't feel right. Closing over self and new_lines makes the +- # code below cleaner. 
+- def handle_tuple(tuple_arg, add_prefix=False): +- n = Name(self.new_name()) +- arg = tuple_arg.clone() +- arg.set_prefix("") +- stmt = Assign(arg, n.clone()) +- if add_prefix: +- n.set_prefix(" ") +- tuple_arg.replace(n) +- new_lines.append(pytree.Node(syms.simple_stmt, +- [stmt, end.clone()])) +- +- if args.type == syms.tfpdef: +- handle_tuple(args) +- elif args.type == syms.typedargslist: +- for i, arg in enumerate(args.children): +- if arg.type == syms.tfpdef: +- # Without add_prefix, the emitted code is correct, +- # just ugly. +- handle_tuple(arg, add_prefix=(i > 0)) +- +- if not new_lines: +- return node +- +- # This isn't strictly necessary, but it plays nicely with other fixers. +- # TODO(cwinter) get rid of this when children becomes a smart list +- for line in new_lines: +- line.parent = suite[0] +- +- # TODO(cwinter) suite-cleanup +- after = start +- if start == 0: +- new_lines[0].set_prefix(" ") +- elif is_docstring(suite[0].children[start]): +- new_lines[0].set_prefix(indent) +- after = start + 1 +- +- suite[0].children[after:after] = new_lines +- for i in range(after+1, after+len(new_lines)+1): +- suite[0].children[i].set_prefix(indent) +- suite[0].changed() +- +- def transform_lambda(self, node, results): +- args = results["args"] +- body = results["body"] +- inner = simplify_args(results["inner"]) +- +- # Replace lambda ((((x)))): x with lambda x: x +- if inner.type == token.NAME: +- inner = inner.clone() +- inner.set_prefix(" ") +- args.replace(inner) +- return +- +- params = find_params(args) +- to_index = map_to_index(params) +- tup_name = self.new_name(tuple_name(params)) +- +- new_param = Name(tup_name, prefix=" ") +- args.replace(new_param.clone()) +- for n in body.post_order(): +- if n.type == token.NAME and n.value in to_index: +- subscripts = [c.clone() for c in to_index[n.value]] +- new = pytree.Node(syms.power, +- [new_param.clone()] + subscripts) +- new.set_prefix(n.get_prefix()) +- n.replace(new) +- +- +-### Helper functions for transform_lambda() +- +-def simplify_args(node): +- if node.type in (syms.vfplist, token.NAME): +- return node +- elif node.type == syms.vfpdef: +- # These look like vfpdef< '(' x ')' > where x is NAME +- # or another vfpdef instance (leading to recursion). +- while node.type == syms.vfpdef: +- node = node.children[1] +- return node +- raise RuntimeError("Received unexpected node %s" % node) +- +-def find_params(node): +- if node.type == syms.vfpdef: +- return find_params(node.children[1]) +- elif node.type == token.NAME: +- return node.value +- return [find_params(c) for c in node.children if c.type != token.COMMA] +- +-def map_to_index(param_list, prefix=[], d=None): +- if d is None: +- d = {} +- for i, obj in enumerate(param_list): +- trailer = [Subscript(Number(i))] +- if isinstance(obj, list): +- map_to_index(obj, trailer, d=d) +- else: +- d[obj] = prefix + trailer +- return d +- +-def tuple_name(param_list): +- l = [] +- for obj in param_list: +- if isinstance(obj, list): +- l.append(tuple_name(obj)) +- else: +- l.append(obj) +- return "_".join(l) +diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_types.py.svn-base +--- a/lib2to3/fixes/.svn/text-base/fix_types.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,62 +0,0 @@ +-# Copyright 2007 Google, Inc. All Rights Reserved. +-# Licensed to PSF under a Contributor Agreement. +- +-"""Fixer for removing uses of the types module. +- +-These work for only the known names in the types module. The forms above +-can include types. or not. 
ie, It is assumed the module is imported either as: +- +- import types +- from types import ... # either * or specific types +- +-The import statements are not modified. +- +-There should be another fixer that handles at least the following constants: +- +- type([]) -> list +- type(()) -> tuple +- type('') -> str +- +-""" +- +-# Local imports +-from ..pgen2 import token +-from .. import fixer_base +-from ..fixer_util import Name +- +-_TYPE_MAPPING = { +- 'BooleanType' : 'bool', +- 'BufferType' : 'memoryview', +- 'ClassType' : 'type', +- 'ComplexType' : 'complex', +- 'DictType': 'dict', +- 'DictionaryType' : 'dict', +- 'EllipsisType' : 'type(Ellipsis)', +- #'FileType' : 'io.IOBase', +- 'FloatType': 'float', +- 'IntType': 'int', +- 'ListType': 'list', +- 'LongType': 'int', +- 'ObjectType' : 'object', +- 'NoneType': 'type(None)', +- 'NotImplementedType' : 'type(NotImplemented)', +- 'SliceType' : 'slice', +- 'StringType': 'bytes', # XXX ? +- 'StringTypes' : 'str', # XXX ? +- 'TupleType': 'tuple', +- 'TypeType' : 'type', +- 'UnicodeType': 'str', +- 'XRangeType' : 'range', +- } +- +-_pats = ["power< 'types' trailer< '.' name='%s' > >" % t for t in _TYPE_MAPPING] +- +-class FixTypes(fixer_base.BaseFix): +- +- PATTERN = '|'.join(_pats) +- +- def transform(self, node, results): +- new_value = _TYPE_MAPPING.get(results["name"].value) +- if new_value: +- return Name(new_value, prefix=node.get_prefix()) +- return None +diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_unicode.py.svn-base +--- a/lib2to3/fixes/.svn/text-base/fix_unicode.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,28 +0,0 @@ +-"""Fixer that changes unicode to str, unichr to chr, and u"..." into "...". +- +-""" +- +-import re +-from ..pgen2 import token +-from .. import fixer_base +- +-class FixUnicode(fixer_base.BaseFix): +- +- PATTERN = "STRING | NAME<'unicode' | 'unichr'>" +- +- def transform(self, node, results): +- if node.type == token.NAME: +- if node.value == "unicode": +- new = node.clone() +- new.value = "str" +- return new +- if node.value == "unichr": +- new = node.clone() +- new.value = "chr" +- return new +- # XXX Warn when __unicode__ found? +- elif node.type == token.STRING: +- if re.match(r"[uU][rR]?[\'\"]", node.value): +- new = node.clone() +- new.value = new.value[1:] +- return new +diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_urllib.py.svn-base +--- a/lib2to3/fixes/.svn/text-base/fix_urllib.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,180 +0,0 @@ +-"""Fix changes imports of urllib which are now incompatible. +- This is rather similar to fix_imports, but because of the more +- complex nature of the fixing for urllib, it has its own fixer. +-""" +-# Author: Nick Edds +- +-# Local imports +-from .fix_imports import alternates, FixImports +-from .. 
import fixer_base +-from ..fixer_util import Name, Comma, FromImport, Newline, attr_chain +- +-MAPPING = {'urllib': [ +- ('urllib.request', +- ['URLOpener', 'FancyURLOpener', 'urlretrieve', +- '_urlopener', 'urlcleanup']), +- ('urllib.parse', +- ['quote', 'quote_plus', 'unquote', 'unquote_plus', +- 'urlencode', 'pathname2url', 'url2pathname', 'splitattr', +- 'splithost', 'splitnport', 'splitpasswd', 'splitport', +- 'splitquery', 'splittag', 'splittype', 'splituser', +- 'splitvalue', ]), +- ('urllib.error', +- ['ContentTooShortError'])], +- 'urllib2' : [ +- ('urllib.request', +- ['urlopen', 'install_opener', 'build_opener', +- 'Request', 'OpenerDirector', 'BaseHandler', +- 'HTTPDefaultErrorHandler', 'HTTPRedirectHandler', +- 'HTTPCookieProcessor', 'ProxyHandler', +- 'HTTPPasswordMgr', +- 'HTTPPasswordMgrWithDefaultRealm', +- 'AbstractBasicAuthHandler', +- 'HTTPBasicAuthHandler', 'ProxyBasicAuthHandler', +- 'AbstractDigestAuthHandler', +- 'HTTPDigestAuthHandler', 'ProxyDigestAuthHandler', +- 'HTTPHandler', 'HTTPSHandler', 'FileHandler', +- 'FTPHandler', 'CacheFTPHandler', +- 'UnknownHandler']), +- ('urllib.error', +- ['URLError', 'HTTPError']), +- ] +-} +- +-# Duplicate the url parsing functions for urllib2. +-MAPPING["urllib2"].append(MAPPING["urllib"][1]) +- +- +-def build_pattern(): +- bare = set() +- for old_module, changes in MAPPING.items(): +- for change in changes: +- new_module, members = change +- members = alternates(members) +- yield """import_name< 'import' (module=%r +- | dotted_as_names< any* module=%r any* >) > +- """ % (old_module, old_module) +- yield """import_from< 'from' mod_member=%r 'import' +- ( member=%s | import_as_name< member=%s 'as' any > | +- import_as_names< members=any* >) > +- """ % (old_module, members, members) +- yield """import_from< 'from' module_star=%r 'import' star='*' > +- """ % old_module +- yield """import_name< 'import' +- dotted_as_name< module_as=%r 'as' any > > +- """ % old_module +- yield """power< module_dot=%r trailer< '.' member=%s > any* > +- """ % (old_module, members) +- +- +-class FixUrllib(FixImports): +- +- def build_pattern(self): +- return "|".join(build_pattern()) +- +- def transform_import(self, node, results): +- """Transform for the basic import case. Replaces the old +- import name with a comma separated list of its +- replacements. +- """ +- import_mod = results.get('module') +- pref = import_mod.get_prefix() +- +- names = [] +- +- # create a Node list of the replacement modules +- for name in MAPPING[import_mod.value][:-1]: +- names.extend([Name(name[0], prefix=pref), Comma()]) +- names.append(Name(MAPPING[import_mod.value][-1][0], prefix=pref)) +- import_mod.replace(names) +- +- def transform_member(self, node, results): +- """Transform for imports of specific module elements. Replaces +- the module to be imported from with the appropriate new +- module. 
+- """ +- mod_member = results.get('mod_member') +- pref = mod_member.get_prefix() +- member = results.get('member') +- +- # Simple case with only a single member being imported +- if member: +- # this may be a list of length one, or just a node +- if isinstance(member, list): +- member = member[0] +- new_name = None +- for change in MAPPING[mod_member.value]: +- if member.value in change[1]: +- new_name = change[0] +- break +- if new_name: +- mod_member.replace(Name(new_name, prefix=pref)) +- else: +- self.cannot_convert(node, +- 'This is an invalid module element') +- +- # Multiple members being imported +- else: +- # a dictionary for replacements, order matters +- modules = [] +- mod_dict = {} +- members = results.get('members') +- for member in members: +- member = member.value +- # we only care about the actual members +- if member != ',': +- for change in MAPPING[mod_member.value]: +- if member in change[1]: +- if change[0] in mod_dict: +- mod_dict[change[0]].append(member) +- else: +- mod_dict[change[0]] = [member] +- modules.append(change[0]) +- +- new_nodes = [] +- for module in modules: +- elts = mod_dict[module] +- names = [] +- for elt in elts[:-1]: +- names.extend([Name(elt, prefix=pref), Comma()]) +- names.append(Name(elts[-1], prefix=pref)) +- new_nodes.append(FromImport(module, names)) +- if new_nodes: +- nodes = [] +- for new_node in new_nodes[:-1]: +- nodes.extend([new_node, Newline()]) +- nodes.append(new_nodes[-1]) +- node.replace(nodes) +- else: +- self.cannot_convert(node, 'All module elements are invalid') +- +- def transform_dot(self, node, results): +- """Transform for calls to module members in code.""" +- module_dot = results.get('module_dot') +- member = results.get('member') +- # this may be a list of length one, or just a node +- if isinstance(member, list): +- member = member[0] +- new_name = None +- for change in MAPPING[module_dot.value]: +- if member.value in change[1]: +- new_name = change[0] +- break +- if new_name: +- module_dot.replace(Name(new_name, +- prefix=module_dot.get_prefix())) +- else: +- self.cannot_convert(node, 'This is an invalid module element') +- +- def transform(self, node, results): +- if results.get('module'): +- self.transform_import(node, results) +- elif results.get('mod_member'): +- self.transform_member(node, results) +- elif results.get('module_dot'): +- self.transform_dot(node, results) +- # Renaming and star imports are not supported for these modules. +- elif results.get('module_star'): +- self.cannot_convert(node, 'Cannot handle star imports.') +- elif results.get('module_as'): +- self.cannot_convert(node, 'This module is now multiple modules') +diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_ws_comma.py.svn-base +--- a/lib2to3/fixes/.svn/text-base/fix_ws_comma.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,39 +0,0 @@ +-"""Fixer that changes 'a ,b' into 'a, b'. +- +-This also changes '{a :b}' into '{a: b}', but does not touch other +-uses of colons. It does not touch other uses of whitespace. +- +-""" +- +-from .. import pytree +-from ..pgen2 import token +-from .. 
import fixer_base +- +-class FixWsComma(fixer_base.BaseFix): +- +- explicit = True # The user must ask for this fixer +- +- PATTERN = """ +- any<(not(',') any)+ ',' ((not(',') any)+ ',')* [not(',') any]> +- """ +- +- COMMA = pytree.Leaf(token.COMMA, ",") +- COLON = pytree.Leaf(token.COLON, ":") +- SEPS = (COMMA, COLON) +- +- def transform(self, node, results): +- new = node.clone() +- comma = False +- for child in new.children: +- if child in self.SEPS: +- prefix = child.get_prefix() +- if prefix.isspace() and "\n" not in prefix: +- child.set_prefix("") +- comma = True +- else: +- if comma: +- prefix = child.get_prefix() +- if not prefix: +- child.set_prefix(" ") +- comma = False +- return new +diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_xrange.py.svn-base +--- a/lib2to3/fixes/.svn/text-base/fix_xrange.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,64 +0,0 @@ +-# Copyright 2007 Google, Inc. All Rights Reserved. +-# Licensed to PSF under a Contributor Agreement. +- +-"""Fixer that changes xrange(...) into range(...).""" +- +-# Local imports +-from .. import fixer_base +-from ..fixer_util import Name, Call, consuming_calls +-from .. import patcomp +- +- +-class FixXrange(fixer_base.BaseFix): +- +- PATTERN = """ +- power< +- (name='range'|name='xrange') trailer< '(' args=any ')' > +- rest=any* > +- """ +- +- def transform(self, node, results): +- name = results["name"] +- if name.value == "xrange": +- return self.transform_xrange(node, results) +- elif name.value == "range": +- return self.transform_range(node, results) +- else: +- raise ValueError(repr(name)) +- +- def transform_xrange(self, node, results): +- name = results["name"] +- name.replace(Name("range", prefix=name.get_prefix())) +- +- def transform_range(self, node, results): +- if not self.in_special_context(node): +- range_call = Call(Name("range"), [results["args"].clone()]) +- # Encase the range call in list(). +- list_call = Call(Name("list"), [range_call], +- prefix=node.get_prefix()) +- # Put things that were after the range() call after the list call. +- for n in results["rest"]: +- list_call.append_child(n) +- return list_call +- return node +- +- P1 = "power< func=NAME trailer< '(' node=any ')' > any* >" +- p1 = patcomp.compile_pattern(P1) +- +- P2 = """for_stmt< 'for' any 'in' node=any ':' any* > +- | comp_for< 'for' any 'in' node=any any* > +- | comparison< any 'in' node=any any*> +- """ +- p2 = patcomp.compile_pattern(P2) +- +- def in_special_context(self, node): +- if node.parent is None: +- return False +- results = {} +- if (node.parent.parent is not None and +- self.p1.match(node.parent.parent, results) and +- results["node"] is node): +- # list(d.keys()) -> list(d.keys()), etc. +- return results["func"].value in consuming_calls +- # for ... in d.iterkeys() -> for ... in d.keys(), etc. +- return self.p2.match(node.parent, results) and results["node"] is node
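The xrange fixer above handles the two spellings differently; a minimal sketch of both behaviours (illustrative input, not from this patch):

    from lib2to3.refactor import RefactoringTool

    tool = RefactoringTool(["lib2to3.fixes.fix_xrange"])
    src = "for i in xrange(10): pass\nx = range(5)\n"
    print(str(tool.refactor_string(src, "<example>")))
    # xrange(10) becomes range(10); the bare range(5) becomes
    # list(range(5)), because a for-loop is a special context
    # but an assignment is not.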
+diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_xreadlines.py.svn-base +--- a/lib2to3/fixes/.svn/text-base/fix_xreadlines.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,24 +0,0 @@ +-"""Fix "for x in f.xreadlines()" -> "for x in f". +- +-This fixer will also convert g(f.xreadlines) into g(f.__iter__).""" +-# Author: Collin Winter +- +-# Local imports +-from .. import fixer_base +-from ..fixer_util import Name +- +- +-class FixXreadlines(fixer_base.BaseFix): +- PATTERN = """ +- power< call=any+ trailer< '.' 'xreadlines' > trailer< '(' ')' > > +- | +- power< any+ trailer< '.' no_call='xreadlines' > > +- """ +- +- def transform(self, node, results): +- no_call = results.get("no_call") +- +- if no_call: +- no_call.replace(Name("__iter__", prefix=no_call.get_prefix())) +- else: +- node.replace([x.clone() for x in results["call"]]) +diff -r 531f2e948299 lib2to3/fixes/.svn/text-base/fix_zip.py.svn-base +--- a/lib2to3/fixes/.svn/text-base/fix_zip.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,34 +0,0 @@ +-""" +-Fixer that changes zip(seq0, seq1, ...) into list(zip(seq0, seq1, ...)) +-unless there exists a 'from future_builtins import zip' statement in the +-top-level namespace. +- +-We avoid the transformation if the zip() call is directly contained in +-iter(<>), list(<>), tuple(<>), sorted(<>), ...join(<>), or for V in <>:. +-""" +- +-# Local imports +-from .. import fixer_base +-from ..fixer_util import Name, Call, in_special_context +- +-class FixZip(fixer_base.ConditionalFix): +- +- PATTERN = """ +- power< 'zip' args=trailer< '(' [any] ')' > +- > +- """ +- +- skip_on = "future_builtins.zip" +- +- def transform(self, node, results): +- if self.should_skip(node): +- return +- +- if in_special_context(node): +- return None +- +- new = node.clone() +- new.set_prefix("") +- new = Call(Name("list"), [new]) +- new.set_prefix(node.get_prefix()) +- return new
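The zip fixer above follows the same wrapping rule as map and filter; an illustrative sketch (the names a and b are just sample source text being refactored, not executed):

    from lib2to3.refactor import RefactoringTool

    tool = RefactoringTool(["lib2to3.fixes.fix_zip"])
    print(str(tool.refactor_string("pairs = zip(a, b)\n", "<example>")))
    # prints: pairs = list(zip(a, b))
    # Inside "for x in zip(a, b):" the call would be left alone,
    # since that is one of the special contexts listed in the docstring.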
+diff -r 531f2e948299 lib2to3/fixes/__init__.py +--- a/lib2to3/fixes/__init__.py Mon Mar 30 20:02:09 2009 -0500 ++++ b/lib2to3/fixes/__init__.py Wed Apr 01 13:59:47 2009 -0500 +@@ -1,1 +1,2 @@ +-# Dummy file to make this directory a package. ++from refactor.fixes import from2 ++from refactor.fixes.from2 import * +diff -r 531f2e948299 lib2to3/fixes/fix_apply.py +--- a/lib2to3/fixes/fix_apply.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,58 +0,0 @@ +-# Copyright 2006 Google, Inc. All Rights Reserved. +-# Licensed to PSF under a Contributor Agreement. +- +-"""Fixer for apply(). +- +-This converts apply(func, v, k) into (func)(*v, **k).""" +- +-# Local imports +-from .. import pytree +-from ..pgen2 import token +-from .. import fixer_base +-from ..fixer_util import Call, Comma, parenthesize +- +-class FixApply(fixer_base.BaseFix): +- +- PATTERN = """ +- power< 'apply' +- trailer< +- '(' +- arglist< +- (not argument<NAME '=' any>) func=any ',' +- (not argument<NAME '=' any>) args=any [',' +- (not argument<NAME '=' any>) kwds=any] [','] +- > +- ')' +- > +- > +- """ +- +- def transform(self, node, results): +- syms = self.syms +- assert results +- func = results["func"] +- args = results["args"] +- kwds = results.get("kwds") +- prefix = node.get_prefix() +- func = func.clone() +- if (func.type not in (token.NAME, syms.atom) and +- (func.type != syms.power or +- func.children[-2].type == token.DOUBLESTAR)): +- # Need to parenthesize +- func = parenthesize(func) +- func.set_prefix("") +- args = args.clone() +- args.set_prefix("") +- if kwds is not None: +- kwds = kwds.clone() +- kwds.set_prefix("") +- l_newargs = [pytree.Leaf(token.STAR, "*"), args] +- if kwds is not None: +- l_newargs.extend([Comma(), +- pytree.Leaf(token.DOUBLESTAR, "**"), +- kwds]) +- l_newargs[-2].set_prefix(" ") # that's the ** token +- # XXX Sometimes we could be cleverer, e.g. apply(f, (x, y) + t) +- # can be translated into f(x, y, *t) instead of f(*(x, y) + t) +- #new = pytree.Node(syms.power, (func, ArgList(l_newargs))) +- return Call(func, l_newargs, prefix=prefix) +diff -r 531f2e948299 lib2to3/fixes/fix_basestring.py +--- a/lib2to3/fixes/fix_basestring.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,13 +0,0 @@ +-"""Fixer for basestring -> str.""" +-# Author: Christian Heimes +- +-# Local imports +-from .. import fixer_base +-from ..fixer_util import Name +- +-class FixBasestring(fixer_base.BaseFix): +- +- PATTERN = "'basestring'" +- +- def transform(self, node, results): +- return Name("str", prefix=node.get_prefix()) +diff -r 531f2e948299 lib2to3/fixes/fix_buffer.py +--- a/lib2to3/fixes/fix_buffer.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,21 +0,0 @@ +-# Copyright 2007 Google, Inc. All Rights Reserved. +-# Licensed to PSF under a Contributor Agreement. +- +-"""Fixer that changes buffer(...) into memoryview(...).""" +- +-# Local imports +-from .. import fixer_base +-from ..fixer_util import Name +- +- +-class FixBuffer(fixer_base.BaseFix): +- +- explicit = True # The user must ask for this fixer +- +- PATTERN = """ +- power< name='buffer' trailer< '(' [any] ')' > > +- """ +- +- def transform(self, node, results): +- name = results["name"] +- name.replace(Name("memoryview", prefix=name.get_prefix())) +diff -r 531f2e948299 lib2to3/fixes/fix_callable.py +--- a/lib2to3/fixes/fix_callable.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,31 +0,0 @@ +-# Copyright 2007 Google, Inc. All Rights Reserved. +-# Licensed to PSF under a Contributor Agreement. +- +-"""Fixer for callable(). +- +-This converts callable(obj) into hasattr(obj, '__call__').""" +- +-# Local imports +-from .. import pytree +-from .. import fixer_base +-from ..fixer_util import Call, Name, String +- +-class FixCallable(fixer_base.BaseFix): +- +- # Ignore callable(*args) or use of keywords. +- # Either could be a hint that the builtin callable() is not being used. +- PATTERN = """ +- power< 'callable' +- trailer< lpar='(' +- ( not(arglist | argument<any '=' any>) func=any +- | func=arglist<(not argument<any '=' any>) any ','> ) +- rpar=')' > +- after=any* +- > +- """ +- +- def transform(self, node, results): +- func = results["func"] +- +- args = [func.clone(), String(', '), String("'__call__'")] +- return Call(Name("hasattr"), args, prefix=node.get_prefix()) +diff -r 531f2e948299 lib2to3/fixes/fix_dict.py +--- a/lib2to3/fixes/fix_dict.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,99 +0,0 @@ +-# Copyright 2007 Google, Inc. All Rights Reserved. +-# Licensed to PSF under a Contributor Agreement. +- +-"""Fixer for dict methods. +- +-d.keys() -> list(d.keys()) +-d.items() -> list(d.items()) +-d.values() -> list(d.values()) +- +-d.iterkeys() -> iter(d.keys()) +-d.iteritems() -> iter(d.items()) +-d.itervalues() -> iter(d.values()) +- +-Except in certain very specific contexts: the iter() can be dropped +-when the context is list(), sorted(), iter() or for...in; the list() +-can be dropped when the context is list() or sorted() (but not iter() +-or for...in!). Special contexts that apply to both: list(), sorted(), tuple(), +-set(), any(), all(), sum(). +- +-Note: iter(d.keys()) could be written as iter(d) but since the +-original d.iterkeys() was also redundant we don't fix this. And there +-are (rare) contexts where it makes a difference (e.g. 
when passing it +-as an argument to a function that introspects the argument). +-""" +- +-# Local imports +-from .. import pytree +-from .. import patcomp +-from ..pgen2 import token +-from .. import fixer_base +-from ..fixer_util import Name, Call, LParen, RParen, ArgList, Dot +-from .. import fixer_util +- +- +-iter_exempt = fixer_util.consuming_calls | set(["iter"]) +- +- +-class FixDict(fixer_base.BaseFix): +- PATTERN = """ +- power< head=any+ +- trailer< '.' method=('keys'|'items'|'values'| +- 'iterkeys'|'iteritems'|'itervalues') > +- parens=trailer< '(' ')' > +- tail=any* +- > +- """ +- +- def transform(self, node, results): +- head = results["head"] +- method = results["method"][0] # Extract node for method name +- tail = results["tail"] +- syms = self.syms +- method_name = method.value +- isiter = method_name.startswith("iter") +- if isiter: +- method_name = method_name[4:] +- assert method_name in ("keys", "items", "values"), repr(method) +- head = [n.clone() for n in head] +- tail = [n.clone() for n in tail] +- special = not tail and self.in_special_context(node, isiter) +- args = head + [pytree.Node(syms.trailer, +- [Dot(), +- Name(method_name, +- prefix=method.get_prefix())]), +- results["parens"].clone()] +- new = pytree.Node(syms.power, args) +- if not special: +- new.set_prefix("") +- new = Call(Name(isiter and "iter" or "list"), [new]) +- if tail: +- new = pytree.Node(syms.power, [new] + tail) +- new.set_prefix(node.get_prefix()) +- return new +- +- P1 = "power< func=NAME trailer< '(' node=any ')' > any* >" +- p1 = patcomp.compile_pattern(P1) +- +- P2 = """for_stmt< 'for' any 'in' node=any ':' any* > +- | comp_for< 'for' any 'in' node=any any* > +- """ +- p2 = patcomp.compile_pattern(P2) +- +- def in_special_context(self, node, isiter): +- if node.parent is None: +- return False +- results = {} +- if (node.parent.parent is not None and +- self.p1.match(node.parent.parent, results) and +- results["node"] is node): +- if isiter: +- # iter(d.iterkeys()) -> iter(d.keys()), etc. +- return results["func"].value in iter_exempt +- else: +- # list(d.keys()) -> list(d.keys()), etc. +- return results["func"].value in fixer_util.consuming_calls +- if not isiter: +- return False +- # for ... in d.iterkeys() -> for ... in d.keys(), etc. +- return self.p2.match(node.parent, results) and results["node"] is node +diff -r 531f2e948299 lib2to3/fixes/fix_except.py +--- a/lib2to3/fixes/fix_except.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,92 +0,0 @@ +-"""Fixer for except statements with named exceptions. +- +-The following cases will be converted: +- +-- "except E, T:" where T is a name: +- +- except E as T: +- +-- "except E, T:" where T is not a name, tuple or list: +- +- except E as t: +- T = t +- +- This is done because the target of an "except" clause must be a +- name. +- +-- "except E, T:" where T is a tuple or list literal: +- +- except E as t: +- T = t.args +-""" +-# Author: Collin Winter +- +-# Local imports +-from .. import pytree +-from ..pgen2 import token +-from .. 
import fixer_base +-from ..fixer_util import Assign, Attr, Name, is_tuple, is_list, syms +- +-def find_excepts(nodes): +- for i, n in enumerate(nodes): +- if n.type == syms.except_clause: +- if n.children[0].value == 'except': +- yield (n, nodes[i+2]) +- +-class FixExcept(fixer_base.BaseFix): +- +- PATTERN = """ +- try_stmt< 'try' ':' suite +- cleanup=(except_clause ':' suite)+ +- tail=(['except' ':' suite] +- ['else' ':' suite] +- ['finally' ':' suite]) > +- """ +- +- def transform(self, node, results): +- syms = self.syms +- +- tail = [n.clone() for n in results["tail"]] +- +- try_cleanup = [ch.clone() for ch in results["cleanup"]] +- for except_clause, e_suite in find_excepts(try_cleanup): +- if len(except_clause.children) == 4: +- (E, comma, N) = except_clause.children[1:4] +- comma.replace(Name("as", prefix=" ")) +- +- if N.type != token.NAME: +- # Generate a new N for the except clause +- new_N = Name(self.new_name(), prefix=" ") +- target = N.clone() +- target.set_prefix("") +- N.replace(new_N) +- new_N = new_N.clone() +- +- # Insert "old_N = new_N" as the first statement in +- # the except body. This loop skips leading whitespace +- # and indents +- #TODO(cwinter) suite-cleanup +- suite_stmts = e_suite.children +- for i, stmt in enumerate(suite_stmts): +- if isinstance(stmt, pytree.Node): +- break +- +- # The assignment is different if old_N is a tuple or list +- # In that case, the assignment is old_N = new_N.args +- if is_tuple(N) or is_list(N): +- assign = Assign(target, Attr(new_N, Name('args'))) +- else: +- assign = Assign(target, new_N) +- +- #TODO(cwinter) stopgap until children becomes a smart list +- for child in reversed(suite_stmts[:i]): +- e_suite.insert_child(0, child) +- e_suite.insert_child(i, assign) +- elif N.get_prefix() == "": +- # No space after a comma is legal; no space after "as", +- # not so much. +- N.set_prefix(" ") +- +- #TODO(cwinter) fix this when children becomes a smart list +- children = [c.clone() for c in node.children[:3]] + try_cleanup + tail +- return pytree.Node(node.type, children) +diff -r 531f2e948299 lib2to3/fixes/fix_exec.py +--- a/lib2to3/fixes/fix_exec.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,39 +0,0 @@ +-# Copyright 2006 Google, Inc. All Rights Reserved. +-# Licensed to PSF under a Contributor Agreement. +- +-"""Fixer for exec. +- +-This converts usages of the exec statement into calls to a built-in +-exec() function. +- +-exec code in ns1, ns2 -> exec(code, ns1, ns2) +-""" +- +-# Local imports +-from .. import pytree +-from .. import fixer_base +-from ..fixer_util import Comma, Name, Call +- +- +-class FixExec(fixer_base.BaseFix): +- +- PATTERN = """ +- exec_stmt< 'exec' a=any 'in' b=any [',' c=any] > +- | +- exec_stmt< 'exec' (not atom<'(' [any] ')'>) a=any > +- """ +- +- def transform(self, node, results): +- assert results +- syms = self.syms +- a = results["a"] +- b = results.get("b") +- c = results.get("c") +- args = [a.clone()] +- args[0].set_prefix("") +- if b is not None: +- args.extend([Comma(), b.clone()]) +- if c is not None: +- args.extend([Comma(), c.clone()]) +- +- return Call(Name("exec"), args, prefix=node.get_prefix()) +diff -r 531f2e948299 lib2to3/fixes/fix_execfile.py +--- a/lib2to3/fixes/fix_execfile.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,51 +0,0 @@ +-# Copyright 2006 Google, Inc. All Rights Reserved. +-# Licensed to PSF under a Contributor Agreement. +- +-"""Fixer for execfile. 
+- +-This converts usages of the execfile function into calls to the built-in +-exec() function. +-""" +- +-from .. import fixer_base +-from ..fixer_util import (Comma, Name, Call, LParen, RParen, Dot, Node, +- ArgList, String, syms) +- +- +-class FixExecfile(fixer_base.BaseFix): +- +- PATTERN = """ +- power< 'execfile' trailer< '(' arglist< filename=any [',' globals=any [',' locals=any ] ] > ')' > > +- | +- power< 'execfile' trailer< '(' filename=any ')' > > +- """ +- +- def transform(self, node, results): +- assert results +- filename = results["filename"] +- globals = results.get("globals") +- locals = results.get("locals") +- +- # Copy over the prefix from the right parentheses end of the execfile +- # call. +- execfile_paren = node.children[-1].children[-1].clone() +- # Construct open().read(). +- open_args = ArgList([filename.clone()], rparen=execfile_paren) +- open_call = Node(syms.power, [Name("open"), open_args]) +- read = [Node(syms.trailer, [Dot(), Name('read')]), +- Node(syms.trailer, [LParen(), RParen()])] +- open_expr = [open_call] + read +- # Wrap the open call in a compile call. This is so the filename will be +- # preserved in the execed code. +- filename_arg = filename.clone() +- filename_arg.set_prefix(" ") +- exec_str = String("'exec'", " ") +- compile_args = open_expr + [Comma(), filename_arg, Comma(), exec_str] +- compile_call = Call(Name("compile"), compile_args, "") +- # Finally, replace the execfile call with an exec call. +- args = [compile_call] +- if globals is not None: +- args.extend([Comma(), globals.clone()]) +- if locals is not None: +- args.extend([Comma(), locals.clone()]) +- return Call(Name("exec"), args, prefix=node.get_prefix()) +diff -r 531f2e948299 lib2to3/fixes/fix_filter.py +--- a/lib2to3/fixes/fix_filter.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,75 +0,0 @@ +-# Copyright 2007 Google, Inc. All Rights Reserved. +-# Licensed to PSF under a Contributor Agreement. +- +-"""Fixer that changes filter(F, X) into list(filter(F, X)). +- +-We avoid the transformation if the filter() call is directly contained +-in iter(<>), list(<>), tuple(<>), sorted(<>), ...join(<>), or +-for V in <>:. +- +-NOTE: This is still not correct if the original code was depending on +-filter(F, X) to return a string if X is a string and a tuple if X is a +-tuple. That would require type inference, which we don't do. Let +-Python 2.6 figure it out. +-""" +- +-# Local imports +-from ..pgen2 import token +-from .. 
import fixer_base +-from ..fixer_util import Name, Call, ListComp, in_special_context +- +-class FixFilter(fixer_base.ConditionalFix): +- +- PATTERN = """ +- filter_lambda=power< +- 'filter' +- trailer< +- '(' +- arglist< +- lambdef< 'lambda' +- (fp=NAME | vfpdef< '(' fp=NAME ')'> ) ':' xp=any +- > +- ',' +- it=any +- > +- ')' +- > +- > +- | +- power< +- 'filter' +- trailer< '(' arglist< none='None' ',' seq=any > ')' > +- > +- | +- power< +- 'filter' +- args=trailer< '(' [any] ')' > +- > +- """ +- +- skip_on = "future_builtins.filter" +- +- def transform(self, node, results): +- if self.should_skip(node): +- return +- +- if "filter_lambda" in results: +- new = ListComp(results.get("fp").clone(), +- results.get("fp").clone(), +- results.get("it").clone(), +- results.get("xp").clone()) +- +- elif "none" in results: +- new = ListComp(Name("_f"), +- Name("_f"), +- results["seq"].clone(), +- Name("_f")) +- +- else: +- if in_special_context(node): +- return None +- new = node.clone() +- new.set_prefix("") +- new = Call(Name("list"), [new]) +- new.set_prefix(node.get_prefix()) +- return new +diff -r 531f2e948299 lib2to3/fixes/fix_funcattrs.py +--- a/lib2to3/fixes/fix_funcattrs.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,19 +0,0 @@ +-"""Fix function attribute names (f.func_x -> f.__x__).""" +-# Author: Collin Winter +- +-# Local imports +-from .. import fixer_base +-from ..fixer_util import Name +- +- +-class FixFuncattrs(fixer_base.BaseFix): +- PATTERN = """ +- power< any+ trailer< '.' attr=('func_closure' | 'func_doc' | 'func_globals' +- | 'func_name' | 'func_defaults' | 'func_code' +- | 'func_dict') > any* > +- """ +- +- def transform(self, node, results): +- attr = results["attr"][0] +- attr.replace(Name(("__%s__" % attr.value[5:]), +- prefix=attr.get_prefix())) +diff -r 531f2e948299 lib2to3/fixes/fix_future.py +--- a/lib2to3/fixes/fix_future.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,20 +0,0 @@ +-"""Remove __future__ imports +- +-from __future__ import foo is replaced with an empty line. +-""" +-# Author: Christian Heimes +- +-# Local imports +-from .. import fixer_base +-from ..fixer_util import BlankLine +- +-class FixFuture(fixer_base.BaseFix): +- PATTERN = """import_from< 'from' module_name="__future__" 'import' any >""" +- +- # This should be run last -- some things check for the import +- run_order = 10 +- +- def transform(self, node, results): +- new = BlankLine() +- new.prefix = node.get_prefix() +- return new +diff -r 531f2e948299 lib2to3/fixes/fix_getcwdu.py +--- a/lib2to3/fixes/fix_getcwdu.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,18 +0,0 @@ +-""" +-Fixer that changes os.getcwdu() to os.getcwd(). +-""" +-# Author: Victor Stinner +- +-# Local imports +-from .. import fixer_base +-from ..fixer_util import Name +- +-class FixGetcwdu(fixer_base.BaseFix): +- +- PATTERN = """ +- power< 'os' trailer< dot='.' name='getcwdu' > any* > +- """ +- +- def transform(self, node, results): +- name = results["name"] +- name.replace(Name("getcwd", prefix=name.get_prefix())) +diff -r 531f2e948299 lib2to3/fixes/fix_has_key.py +--- a/lib2to3/fixes/fix_has_key.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,109 +0,0 @@ +-# Copyright 2006 Google, Inc. All Rights Reserved. +-# Licensed to PSF under a Contributor Agreement. +- +-"""Fixer for has_key(). 
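# Illustrative before/after pairs for the filter fixer above (a sketch,
# not from this patch; the list() wrap is skipped in the special contexts
# the docstring lists, where a lazy iterator is safe):
#   filter(lambda x: x > 0, seq)  ->  [x for x in seq if x > 0]
#   filter(None, seq)             ->  [_f for _f in seq if _f]
#   filter(is_ok, seq)            ->  list(filter(is_ok, seq))
seq = [-2, 0, 1, 3]
assert [x for x in seq if x > 0] == list(filter(lambda x: x > 0, seq)) == [1, 3]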
+- +-Calls to .has_key() methods are expressed in terms of the 'in' +-operator: +- +- d.has_key(k) -> k in d +- +-CAVEATS: +-1) While the primary target of this fixer is dict.has_key(), the +- fixer will change any has_key() method call, regardless of its +- class. +- +-2) Cases like this will not be converted: +- +- m = d.has_key +- if m(k): +- ... +- +- Only *calls* to has_key() are converted. While it is possible to +- convert the above to something like +- +- m = d.__contains__ +- if m(k): +- ... +- +- this is currently not done. +-""" +- +-# Local imports +-from .. import pytree +-from ..pgen2 import token +-from .. import fixer_base +-from ..fixer_util import Name, parenthesize +- +- +-class FixHasKey(fixer_base.BaseFix): +- +- PATTERN = """ +- anchor=power< +- before=any+ +- trailer< '.' 'has_key' > +- trailer< +- '(' +- ( not(arglist | argument) arg=any ','> +- ) +- ')' +- > +- after=any* +- > +- | +- negation=not_test< +- 'not' +- anchor=power< +- before=any+ +- trailer< '.' 'has_key' > +- trailer< +- '(' +- ( not(arglist | argument) arg=any ','> +- ) +- ')' +- > +- > +- > +- """ +- +- def transform(self, node, results): +- assert results +- syms = self.syms +- if (node.parent.type == syms.not_test and +- self.pattern.match(node.parent)): +- # Don't transform a node matching the first alternative of the +- # pattern when its parent matches the second alternative +- return None +- negation = results.get("negation") +- anchor = results["anchor"] +- prefix = node.get_prefix() +- before = [n.clone() for n in results["before"]] +- arg = results["arg"].clone() +- after = results.get("after") +- if after: +- after = [n.clone() for n in after] +- if arg.type in (syms.comparison, syms.not_test, syms.and_test, +- syms.or_test, syms.test, syms.lambdef, syms.argument): +- arg = parenthesize(arg) +- if len(before) == 1: +- before = before[0] +- else: +- before = pytree.Node(syms.power, before) +- before.set_prefix(" ") +- n_op = Name("in", prefix=" ") +- if negation: +- n_not = Name("not", prefix=" ") +- n_op = pytree.Node(syms.comp_op, (n_not, n_op)) +- new = pytree.Node(syms.comparison, (arg, n_op, before)) +- if after: +- new = parenthesize(new) +- new = pytree.Node(syms.power, (new,) + tuple(after)) +- if node.parent.type in (syms.comparison, syms.expr, syms.xor_expr, +- syms.and_expr, syms.shift_expr, +- syms.arith_expr, syms.term, +- syms.factor, syms.power): +- new = parenthesize(new) +- new.set_prefix(prefix) +- return new +diff -r 531f2e948299 lib2to3/fixes/fix_idioms.py +--- a/lib2to3/fixes/fix_idioms.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,134 +0,0 @@ +-"""Adjust some old Python 2 idioms to their modern counterparts. +- +-* Change some type comparisons to isinstance() calls: +- type(x) == T -> isinstance(x, T) +- type(x) is T -> isinstance(x, T) +- type(x) != T -> not isinstance(x, T) +- type(x) is not T -> not isinstance(x, T) +- +-* Change "while 1:" into "while True:". +- +-* Change both +- +- v = list(EXPR) +- v.sort() +- foo(v) +- +-and the more general +- +- v = EXPR +- v.sort() +- foo(v) +- +-into +- +- v = sorted(EXPR) +- foo(v) +-""" +-# Author: Jacques Frechet, Collin Winter +- +-# Local imports +-from .. 
import fixer_base +-from ..fixer_util import Call, Comma, Name, Node, syms +- +-CMP = "(n='!=' | '==' | 'is' | n=comp_op< 'is' 'not' >)" +-TYPE = "power< 'type' trailer< '(' x=any ')' > >" +- +-class FixIdioms(fixer_base.BaseFix): +- +- explicit = True # The user must ask for this fixer +- +- PATTERN = r""" +- isinstance=comparison< %s %s T=any > +- | +- isinstance=comparison< T=any %s %s > +- | +- while_stmt< 'while' while='1' ':' any+ > +- | +- sorted=any< +- any* +- simple_stmt< +- expr_stmt< id1=any '=' +- power< list='list' trailer< '(' (not arglist) any ')' > > +- > +- '\n' +- > +- sort= +- simple_stmt< +- power< id2=any +- trailer< '.' 'sort' > trailer< '(' ')' > +- > +- '\n' +- > +- next=any* +- > +- | +- sorted=any< +- any* +- simple_stmt< expr_stmt< id1=any '=' expr=any > '\n' > +- sort= +- simple_stmt< +- power< id2=any +- trailer< '.' 'sort' > trailer< '(' ')' > +- > +- '\n' +- > +- next=any* +- > +- """ % (TYPE, CMP, CMP, TYPE) +- +- def match(self, node): +- r = super(FixIdioms, self).match(node) +- # If we've matched one of the sort/sorted subpatterns above, we +- # want to reject matches where the initial assignment and the +- # subsequent .sort() call involve different identifiers. +- if r and "sorted" in r: +- if r["id1"] == r["id2"]: +- return r +- return None +- return r +- +- def transform(self, node, results): +- if "isinstance" in results: +- return self.transform_isinstance(node, results) +- elif "while" in results: +- return self.transform_while(node, results) +- elif "sorted" in results: +- return self.transform_sort(node, results) +- else: +- raise RuntimeError("Invalid match") +- +- def transform_isinstance(self, node, results): +- x = results["x"].clone() # The thing inside of type() +- T = results["T"].clone() # The type being compared against +- x.set_prefix("") +- T.set_prefix(" ") +- test = Call(Name("isinstance"), [x, Comma(), T]) +- if "n" in results: +- test.set_prefix(" ") +- test = Node(syms.not_test, [Name("not"), test]) +- test.set_prefix(node.get_prefix()) +- return test +- +- def transform_while(self, node, results): +- one = results["while"] +- one.replace(Name("True", prefix=one.get_prefix())) +- +- def transform_sort(self, node, results): +- sort_stmt = results["sort"] +- next_stmt = results["next"] +- list_call = results.get("list") +- simple_expr = results.get("expr") +- +- if list_call: +- list_call.replace(Name("sorted", prefix=list_call.get_prefix())) +- elif simple_expr: +- new = simple_expr.clone() +- new.set_prefix("") +- simple_expr.replace(Call(Name("sorted"), [new], +- prefix=simple_expr.get_prefix())) +- else: +- raise RuntimeError("should not have reached here") +- sort_stmt.remove() +- if next_stmt: +- next_stmt[0].set_prefix(sort_stmt.get_prefix()) +diff -r 531f2e948299 lib2to3/fixes/fix_import.py +--- a/lib2to3/fixes/fix_import.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,90 +0,0 @@ +-"""Fixer for import statements. +-If spam is being imported from the local directory, this import: +- from spam import eggs +-Becomes: +- from .spam import eggs +- +-And this import: +- import spam +-Becomes: +- from . import spam +-""" +- +-# Local imports +-from .. import fixer_base +-from os.path import dirname, join, exists, pathsep +-from ..fixer_util import FromImport, syms, token +- +- +-def traverse_imports(names): +- """ +- Walks over all the names imported in a dotted_as_names node. 
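# A sketch (not from this patch) of why fix_idioms is marked explicit:
# the rewrite is not strictly semantics-preserving, since isinstance()
# also accepts subclasses while the exact-type comparison does not.
class Base(object):
    pass
class Sub(Base):
    pass
x = Sub()
assert isinstance(x, Base)       # the rewritten form matches subclasses
assert not type(x) == Base       # the original exact-type test does not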
+-    """
+-    pending = [names]
+-    while pending:
+-        node = pending.pop()
+-        if node.type == token.NAME:
+-            yield node.value
+-        elif node.type == syms.dotted_name:
+-            yield "".join([ch.value for ch in node.children])
+-        elif node.type == syms.dotted_as_name:
+-            pending.append(node.children[0])
+-        elif node.type == syms.dotted_as_names:
+-            pending.extend(node.children[::-2])
+-        else:
+-            raise AssertionError("unknown node type")
+-
+-
+-class FixImport(fixer_base.BaseFix):
+-
+-    PATTERN = """
+-    import_from< 'from' imp=any 'import' ['('] any [')'] >
+-    |
+-    import_name< 'import' imp=any >
+-    """
+-
+-    def transform(self, node, results):
+-        imp = results['imp']
+-
+-        if node.type == syms.import_from:
+-            # Some imps are top-level (eg: 'import ham')
+-            # some are first level (eg: 'import ham.eggs')
+-            # some are third level (eg: 'import ham.eggs as spam')
+-            # Hence, the loop
+-            while not hasattr(imp, 'value'):
+-                imp = imp.children[0]
+-            if self.probably_a_local_import(imp.value):
+-                imp.value = "." + imp.value
+-                imp.changed()
+-            return node
+-        else:
+-            have_local = False
+-            have_absolute = False
+-            for mod_name in traverse_imports(imp):
+-                if self.probably_a_local_import(mod_name):
+-                    have_local = True
+-                else:
+-                    have_absolute = True
+-            if have_absolute:
+-                if have_local:
+-                    # We won't handle both sibling and absolute imports in the
+-                    # same statement at the moment.
+-                    self.warning(node, "absolute and local imports together")
+-                return
+-
+-            new = FromImport('.', [imp])
+-            new.set_prefix(node.get_prefix())
+-            return new
+-
+-    def probably_a_local_import(self, imp_name):
+-        imp_name = imp_name.split('.', 1)[0]
+-        base_path = dirname(self.filename)
+-        base_path = join(base_path, imp_name)
+-        # If there is no __init__.py next to the file, it's not in a package
+-        # and so can't be a relative import.
+-        if not exists(join(dirname(base_path), '__init__.py')):
+-            return False
+-        for ext in ['.py', pathsep, '.pyc', '.so', '.sl', '.pyd']:
+-            if exists(base_path + ext):
+-                return True
+-        return False
+diff -r 531f2e948299 lib2to3/fixes/fix_imports.py
+--- a/lib2to3/fixes/fix_imports.py Mon Mar 30 20:02:09 2009 -0500
++++ /dev/null Thu Jan 01 00:00:00 1970 +0000
+@@ -1,145 +0,0 @@
+-"""Fix incompatible imports and module references."""
+-# Authors: Collin Winter, Nick Edds
+-
+-# Local imports
+-from ..
import fixer_base +-from ..fixer_util import Name, attr_chain +- +-MAPPING = {'StringIO': 'io', +- 'cStringIO': 'io', +- 'cPickle': 'pickle', +- '__builtin__' : 'builtins', +- 'copy_reg': 'copyreg', +- 'Queue': 'queue', +- 'SocketServer': 'socketserver', +- 'ConfigParser': 'configparser', +- 'repr': 'reprlib', +- 'FileDialog': 'tkinter.filedialog', +- 'tkFileDialog': 'tkinter.filedialog', +- 'SimpleDialog': 'tkinter.simpledialog', +- 'tkSimpleDialog': 'tkinter.simpledialog', +- 'tkColorChooser': 'tkinter.colorchooser', +- 'tkCommonDialog': 'tkinter.commondialog', +- 'Dialog': 'tkinter.dialog', +- 'Tkdnd': 'tkinter.dnd', +- 'tkFont': 'tkinter.font', +- 'tkMessageBox': 'tkinter.messagebox', +- 'ScrolledText': 'tkinter.scrolledtext', +- 'Tkconstants': 'tkinter.constants', +- 'Tix': 'tkinter.tix', +- 'ttk': 'tkinter.ttk', +- 'Tkinter': 'tkinter', +- 'markupbase': '_markupbase', +- '_winreg': 'winreg', +- 'thread': '_thread', +- 'dummy_thread': '_dummy_thread', +- # anydbm and whichdb are handled by fix_imports2 +- 'dbhash': 'dbm.bsd', +- 'dumbdbm': 'dbm.dumb', +- 'dbm': 'dbm.ndbm', +- 'gdbm': 'dbm.gnu', +- 'xmlrpclib': 'xmlrpc.client', +- 'DocXMLRPCServer': 'xmlrpc.server', +- 'SimpleXMLRPCServer': 'xmlrpc.server', +- 'httplib': 'http.client', +- 'htmlentitydefs' : 'html.entities', +- 'HTMLParser' : 'html.parser', +- 'Cookie': 'http.cookies', +- 'cookielib': 'http.cookiejar', +- 'BaseHTTPServer': 'http.server', +- 'SimpleHTTPServer': 'http.server', +- 'CGIHTTPServer': 'http.server', +- #'test.test_support': 'test.support', +- 'commands': 'subprocess', +- 'UserString' : 'collections', +- 'UserList' : 'collections', +- 'urlparse' : 'urllib.parse', +- 'robotparser' : 'urllib.robotparser', +-} +- +- +-def alternates(members): +- return "(" + "|".join(map(repr, members)) + ")" +- +- +-def build_pattern(mapping=MAPPING): +- mod_list = ' | '.join(["module_name='%s'" % key for key in mapping]) +- bare_names = alternates(mapping.keys()) +- +- yield """name_import=import_name< 'import' ((%s) | +- multiple_imports=dotted_as_names< any* (%s) any* >) > +- """ % (mod_list, mod_list) +- yield """import_from< 'from' (%s) 'import' ['('] +- ( any | import_as_name< any 'as' any > | +- import_as_names< any* >) [')'] > +- """ % mod_list +- yield """import_name< 'import' (dotted_as_name< (%s) 'as' any > | +- multiple_imports=dotted_as_names< +- any* dotted_as_name< (%s) 'as' any > any* >) > +- """ % (mod_list, mod_list) +- +- # Find usages of module members in code e.g. thread.foo(bar) +- yield "power< bare_with_attr=(%s) trailer<'.' any > any* >" % bare_names +- +- +-class FixImports(fixer_base.BaseFix): +- +- order = "pre" # Pre-order tree traversal +- +- # This is overridden in fix_imports2. +- mapping = MAPPING +- +- # We want to run this fixer late, so fix_import doesn't try to make stdlib +- # renames into relative imports. +- run_order = 6 +- +- def build_pattern(self): +- return "|".join(build_pattern(self.mapping)) +- +- def compile_pattern(self): +- # We override this, so MAPPING can be pragmatically altered and the +- # changes will be reflected in PATTERN. +- self.PATTERN = self.build_pattern() +- super(FixImports, self).compile_pattern() +- +- # Don't match the node if it's within another match. +- def match(self, node): +- match = super(FixImports, self).match +- results = match(node) +- if results: +- # Module usage could be in the trailer of an attribute lookup, so we +- # might have nested matches when "bare_with_attr" is present. 
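# A driver sketch (illustrative, not from this patch) showing the MAPPING
# renames end to end, including the follow-up rewrite of bare usages:
from lib2to3.refactor import RefactoringTool

src = "import StringIO\nStringIO.StringIO('x')\n"
tool = RefactoringTool(["lib2to3.fixes.fix_imports"])
# Produces "import io" followed by "io.StringIO('x')".
print(str(tool.refactor_string(src, "<example>")))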
+- if "bare_with_attr" not in results and \ +- any([match(obj) for obj in attr_chain(node, "parent")]): +- return False +- return results +- return False +- +- def start_tree(self, tree, filename): +- super(FixImports, self).start_tree(tree, filename) +- self.replace = {} +- +- def transform(self, node, results): +- import_mod = results.get("module_name") +- if import_mod: +- mod_name = import_mod.value +- new_name = self.mapping[mod_name] +- import_mod.replace(Name(new_name, prefix=import_mod.get_prefix())) +- if "name_import" in results: +- # If it's not a "from x import x, y" or "import x as y" import, +- # marked its usage to be replaced. +- self.replace[mod_name] = new_name +- if "multiple_imports" in results: +- # This is a nasty hack to fix multiple imports on a line (e.g., +- # "import StringIO, urlparse"). The problem is that I can't +- # figure out an easy way to make a pattern recognize the keys of +- # MAPPING randomly sprinkled in an import statement. +- results = self.match(node) +- if results: +- self.transform(node, results) +- else: +- # Replace usage of the module. +- bare_name = results["bare_with_attr"][0] +- new_name = self.replace.get(bare_name.value) +- if new_name: +- bare_name.replace(Name(new_name, prefix=bare_name.get_prefix())) +diff -r 531f2e948299 lib2to3/fixes/fix_imports2.py +--- a/lib2to3/fixes/fix_imports2.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,16 +0,0 @@ +-"""Fix incompatible imports and module references that must be fixed after +-fix_imports.""" +-from . import fix_imports +- +- +-MAPPING = { +- 'whichdb': 'dbm', +- 'anydbm': 'dbm', +- } +- +- +-class FixImports2(fix_imports.FixImports): +- +- run_order = 7 +- +- mapping = MAPPING +diff -r 531f2e948299 lib2to3/fixes/fix_input.py +--- a/lib2to3/fixes/fix_input.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,26 +0,0 @@ +-"""Fixer that changes input(...) into eval(input(...)).""" +-# Author: Andre Roberge +- +-# Local imports +-from .. import fixer_base +-from ..fixer_util import Call, Name +-from .. import patcomp +- +- +-context = patcomp.compile_pattern("power< 'eval' trailer< '(' any ')' > >") +- +- +-class FixInput(fixer_base.BaseFix): +- +- PATTERN = """ +- power< 'input' args=trailer< '(' [any] ')' > > +- """ +- +- def transform(self, node, results): +- # If we're already wrapped in a eval() call, we're done. +- if context.match(node.parent.parent): +- return +- +- new = node.clone() +- new.set_prefix("") +- return Call(Name("eval"), [new], prefix=node.get_prefix()) +diff -r 531f2e948299 lib2to3/fixes/fix_intern.py +--- a/lib2to3/fixes/fix_intern.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,44 +0,0 @@ +-# Copyright 2006 Georg Brandl. +-# Licensed to PSF under a Contributor Agreement. +- +-"""Fixer for intern(). +- +-intern(s) -> sys.intern(s)""" +- +-# Local imports +-from .. import pytree +-from .. 
import fixer_base +-from ..fixer_util import Name, Attr, touch_import +- +- +-class FixIntern(fixer_base.BaseFix): +- +- PATTERN = """ +- power< 'intern' +- trailer< lpar='(' +- ( not(arglist | argument) any ','> ) +- rpar=')' > +- after=any* +- > +- """ +- +- def transform(self, node, results): +- syms = self.syms +- obj = results["obj"].clone() +- if obj.type == syms.arglist: +- newarglist = obj.clone() +- else: +- newarglist = pytree.Node(syms.arglist, [obj.clone()]) +- after = results["after"] +- if after: +- after = [n.clone() for n in after] +- new = pytree.Node(syms.power, +- Attr(Name("sys"), Name("intern")) + +- [pytree.Node(syms.trailer, +- [results["lpar"].clone(), +- newarglist, +- results["rpar"].clone()])] + after) +- new.set_prefix(node.get_prefix()) +- touch_import(None, 'sys', node) +- return new +diff -r 531f2e948299 lib2to3/fixes/fix_isinstance.py +--- a/lib2to3/fixes/fix_isinstance.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,52 +0,0 @@ +-# Copyright 2008 Armin Ronacher. +-# Licensed to PSF under a Contributor Agreement. +- +-"""Fixer that cleans up a tuple argument to isinstance after the tokens +-in it were fixed. This is mainly used to remove double occurrences of +-tokens as a leftover of the long -> int / unicode -> str conversion. +- +-eg. isinstance(x, (int, long)) -> isinstance(x, (int, int)) +- -> isinstance(x, int) +-""" +- +-from .. import fixer_base +-from ..fixer_util import token +- +- +-class FixIsinstance(fixer_base.BaseFix): +- +- PATTERN = """ +- power< +- 'isinstance' +- trailer< '(' arglist< any ',' atom< '(' +- args=testlist_gexp< any+ > +- ')' > > ')' > +- > +- """ +- +- run_order = 6 +- +- def transform(self, node, results): +- names_inserted = set() +- testlist = results["args"] +- args = testlist.children +- new_args = [] +- iterator = enumerate(args) +- for idx, arg in iterator: +- if arg.type == token.NAME and arg.value in names_inserted: +- if idx < len(args) - 1 and args[idx + 1].type == token.COMMA: +- iterator.next() +- continue +- else: +- new_args.append(arg) +- if arg.type == token.NAME: +- names_inserted.add(arg.value) +- if new_args and new_args[-1].type == token.COMMA: +- del new_args[-1] +- if len(new_args) == 1: +- atom = testlist.parent +- new_args[0].set_prefix(atom.get_prefix()) +- atom.replace(new_args[0]) +- else: +- args[:] = new_args +- node.changed() +diff -r 531f2e948299 lib2to3/fixes/fix_itertools.py +--- a/lib2to3/fixes/fix_itertools.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,41 +0,0 @@ +-""" Fixer for itertools.(imap|ifilter|izip) --> (map|filter|zip) and +- itertools.ifilterfalse --> itertools.filterfalse (bugs 2360-2363) +- +- imports from itertools are fixed in fix_itertools_import.py +- +- If itertools is imported as something else (ie: import itertools as it; +- it.izip(spam, eggs)) method calls will not get fixed. +- """ +- +-# Local imports +-from .. import fixer_base +-from ..fixer_util import Name +- +-class FixItertools(fixer_base.BaseFix): +- it_funcs = "('imap'|'ifilter'|'izip'|'ifilterfalse')" +- PATTERN = """ +- power< it='itertools' +- trailer< +- dot='.' 
func=%(it_funcs)s > trailer< '(' [any] ')' > > +- | +- power< func=%(it_funcs)s trailer< '(' [any] ')' > > +- """ %(locals()) +- +- # Needs to be run after fix_(map|zip|filter) +- run_order = 6 +- +- def transform(self, node, results): +- prefix = None +- func = results['func'][0] +- if 'it' in results and func.value != 'ifilterfalse': +- dot, it = (results['dot'], results['it']) +- # Remove the 'itertools' +- prefix = it.get_prefix() +- it.remove() +- # Replace the node wich contains ('.', 'function') with the +- # function (to be consistant with the second part of the pattern) +- dot.remove() +- func.parent.replace(func) +- +- prefix = prefix or func.get_prefix() +- func.replace(Name(func.value[1:], prefix=prefix)) +diff -r 531f2e948299 lib2to3/fixes/fix_itertools_imports.py +--- a/lib2to3/fixes/fix_itertools_imports.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,52 +0,0 @@ +-""" Fixer for imports of itertools.(imap|ifilter|izip|ifilterfalse) """ +- +-# Local imports +-from lib2to3 import fixer_base +-from lib2to3.fixer_util import BlankLine, syms, token +- +- +-class FixItertoolsImports(fixer_base.BaseFix): +- PATTERN = """ +- import_from< 'from' 'itertools' 'import' imports=any > +- """ %(locals()) +- +- def transform(self, node, results): +- imports = results['imports'] +- if imports.type == syms.import_as_name or not imports.children: +- children = [imports] +- else: +- children = imports.children +- for child in children[::2]: +- if child.type == token.NAME: +- member = child.value +- name_node = child +- else: +- assert child.type == syms.import_as_name +- name_node = child.children[0] +- member_name = name_node.value +- if member_name in ('imap', 'izip', 'ifilter'): +- child.value = None +- child.remove() +- elif member_name == 'ifilterfalse': +- node.changed() +- name_node.value = 'filterfalse' +- +- # Make sure the import statement is still sane +- children = imports.children[:] or [imports] +- remove_comma = True +- for child in children: +- if remove_comma and child.type == token.COMMA: +- child.remove() +- else: +- remove_comma ^= True +- +- if children[-1].type == token.COMMA: +- children[-1].remove() +- +- # If there are no imports left, just get rid of the entire statement +- if not (imports.children or getattr(imports, 'value', None)) or \ +- imports.parent is None: +- p = node.get_prefix() +- node = BlankLine() +- node.prefix = p +- return node +diff -r 531f2e948299 lib2to3/fixes/fix_long.py +--- a/lib2to3/fixes/fix_long.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,22 +0,0 @@ +-# Copyright 2006 Google, Inc. All Rights Reserved. +-# Licensed to PSF under a Contributor Agreement. +- +-"""Fixer that turns 'long' into 'int' everywhere. +-""" +- +-# Local imports +-from .. import fixer_base +-from ..fixer_util import Name, Number, is_probably_builtin +- +- +-class FixLong(fixer_base.BaseFix): +- +- PATTERN = "'long'" +- +- static_int = Name("int") +- +- def transform(self, node, results): +- if is_probably_builtin(node): +- new = self.static_int.clone() +- new.set_prefix(node.get_prefix()) +- return new +diff -r 531f2e948299 lib2to3/fixes/fix_map.py +--- a/lib2to3/fixes/fix_map.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,82 +0,0 @@ +-# Copyright 2007 Google, Inc. All Rights Reserved. +-# Licensed to PSF under a Contributor Agreement. +- +-"""Fixer that changes map(F, ...) 
into list(map(F, ...)) unless there +-exists a 'from future_builtins import map' statement in the top-level +-namespace. +- +-As a special case, map(None, X) is changed into list(X). (This is +-necessary because the semantics are changed in this case -- the new +-map(None, X) is equivalent to [(x,) for x in X].) +- +-We avoid the transformation (except for the special case mentioned +-above) if the map() call is directly contained in iter(<>), list(<>), +-tuple(<>), sorted(<>), ...join(<>), or for V in <>:. +- +-NOTE: This is still not correct if the original code was depending on +-map(F, X, Y, ...) to go on until the longest argument is exhausted, +-substituting None for missing values -- like zip(), it now stops as +-soon as the shortest argument is exhausted. +-""" +- +-# Local imports +-from ..pgen2 import token +-from .. import fixer_base +-from ..fixer_util import Name, Call, ListComp, in_special_context +-from ..pygram import python_symbols as syms +- +-class FixMap(fixer_base.ConditionalFix): +- +- PATTERN = """ +- map_none=power< +- 'map' +- trailer< '(' arglist< 'None' ',' arg=any [','] > ')' > +- > +- | +- map_lambda=power< +- 'map' +- trailer< +- '(' +- arglist< +- lambdef< 'lambda' +- (fp=NAME | vfpdef< '(' fp=NAME ')'> ) ':' xp=any +- > +- ',' +- it=any +- > +- ')' +- > +- > +- | +- power< +- 'map' +- args=trailer< '(' [any] ')' > +- > +- """ +- +- skip_on = 'future_builtins.map' +- +- def transform(self, node, results): +- if self.should_skip(node): +- return +- +- if node.parent.type == syms.simple_stmt: +- self.warning(node, "You should use a for loop here") +- new = node.clone() +- new.set_prefix("") +- new = Call(Name("list"), [new]) +- elif "map_lambda" in results: +- new = ListComp(results.get("xp").clone(), +- results.get("fp").clone(), +- results.get("it").clone()) +- else: +- if "map_none" in results: +- new = results["arg"].clone() +- else: +- if in_special_context(node): +- return None +- new = node.clone() +- new.set_prefix("") +- new = Call(Name("list"), [new]) +- new.set_prefix(node.get_prefix()) +- return new +diff -r 531f2e948299 lib2to3/fixes/fix_metaclass.py +--- a/lib2to3/fixes/fix_metaclass.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,227 +0,0 @@ +-"""Fixer for __metaclass__ = X -> (metaclass=X) methods. +- +- The various forms of classef (inherits nothing, inherits once, inherints +- many) don't parse the same in the CST so we look at ALL classes for +- a __metaclass__ and if we find one normalize the inherits to all be +- an arglist. +- +- For one-liner classes ('class X: pass') there is no indent/dedent so +- we normalize those into having a suite. +- +- Moving the __metaclass__ into the classdef can also cause the class +- body to be empty so there is some special casing for that as well. +- +- This fixer also tries very hard to keep original indenting and spacing +- in all those corner cases. +- +-""" +-# Author: Jack Diederich +- +-# Local imports +-from .. import fixer_base +-from ..pygram import token +-from ..fixer_util import Name, syms, Node, Leaf +- +- +-def has_metaclass(parent): +- """ we have to check the cls_node without changing it. 
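# A runnable illustration (not from this patch; Python 3 semantics) of
# the caveat in the fix_map docstring above: map() now stops at the
# shortest argument, where Python 2 padded the shorter ones with None.
a, b = [1, 2, 3], ["x"]
assert list(map(lambda p, q: (p, q), a, b)) == [(1, "x")]
from itertools import zip_longest    # spelled izip_longest on Python 2
assert list(zip_longest(a, b)) == [(1, "x"), (2, None), (3, None)]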
+-    There are two possibilities:
+-      1) clsdef => suite => simple_stmt => expr_stmt => Leaf('__meta')
+-      2) clsdef => simple_stmt => expr_stmt => Leaf('__meta')
+-    """
+-    for node in parent.children:
+-        if node.type == syms.suite:
+-            return has_metaclass(node)
+-        elif node.type == syms.simple_stmt and node.children:
+-            expr_node = node.children[0]
+-            if expr_node.type == syms.expr_stmt and expr_node.children:
+-                left_side = expr_node.children[0]
+-                if isinstance(left_side, Leaf) and \
+-                   left_side.value == '__metaclass__':
+-                    return True
+-    return False
+-
+-
+-def fixup_parse_tree(cls_node):
+-    """ one-line classes don't get a suite in the parse tree so we add
+-        one to normalize the tree
+-    """
+-    for node in cls_node.children:
+-        if node.type == syms.suite:
+-            # already in the preferred format, do nothing
+-            return
+-
+-    # !%@#! oneliners have no suite node, we have to fake one up
+-    for i, node in enumerate(cls_node.children):
+-        if node.type == token.COLON:
+-            break
+-    else:
+-        raise ValueError("No class suite and no ':'!")
+-
+-    # move everything into a suite node
+-    suite = Node(syms.suite, [])
+-    while cls_node.children[i+1:]:
+-        move_node = cls_node.children[i+1]
+-        suite.append_child(move_node.clone())
+-        move_node.remove()
+-    cls_node.append_child(suite)
+-    node = suite
+-
+-
+-def fixup_simple_stmt(parent, i, stmt_node):
+-    """ if there is a semi-colon all the parts count as part of the same
+-        simple_stmt. We just want the __metaclass__ part so we move
+-        everything after the semi-colon into its own simple_stmt node
+-    """
+-    for semi_ind, node in enumerate(stmt_node.children):
+-        if node.type == token.SEMI: # *sigh*
+-            break
+-    else:
+-        return
+-
+-    node.remove() # kill the semicolon
+-    new_expr = Node(syms.expr_stmt, [])
+-    new_stmt = Node(syms.simple_stmt, [new_expr])
+-    while stmt_node.children[semi_ind:]:
+-        move_node = stmt_node.children[semi_ind]
+-        new_expr.append_child(move_node.clone())
+-        move_node.remove()
+-    parent.insert_child(i, new_stmt)
+-    new_leaf1 = new_stmt.children[0].children[0]
+-    old_leaf1 = stmt_node.children[0].children[0]
+-    new_leaf1.set_prefix(old_leaf1.get_prefix())
+-
+-
+-def remove_trailing_newline(node):
+-    if node.children and node.children[-1].type == token.NEWLINE:
+-        node.children[-1].remove()
+-
+-
+-def find_metas(cls_node):
+-    # find the suite node (Mmm, sweet nodes)
+-    for node in cls_node.children:
+-        if node.type == syms.suite:
+-            break
+-    else:
+-        raise ValueError("No class suite!")
+-
+-    # look for simple_stmt[ expr_stmt[ Leaf('__metaclass__') ] ]
+-    for i, simple_node in list(enumerate(node.children)):
+-        if simple_node.type == syms.simple_stmt and simple_node.children:
+-            expr_node = simple_node.children[0]
+-            if expr_node.type == syms.expr_stmt and expr_node.children:
+-                # Check if the expr_node is a simple assignment.
+-                left_node = expr_node.children[0]
+-                if isinstance(left_node, Leaf) and \
+-                   left_node.value == '__metaclass__':
+-                    # We found an assignment to __metaclass__.
+- fixup_simple_stmt(node, i, simple_node) +- remove_trailing_newline(simple_node) +- yield (node, i, simple_node) +- +- +-def fixup_indent(suite): +- """ If an INDENT is followed by a thing with a prefix then nuke the prefix +- Otherwise we get in trouble when removing __metaclass__ at suite start +- """ +- kids = suite.children[::-1] +- # find the first indent +- while kids: +- node = kids.pop() +- if node.type == token.INDENT: +- break +- +- # find the first Leaf +- while kids: +- node = kids.pop() +- if isinstance(node, Leaf) and node.type != token.DEDENT: +- if node.prefix: +- node.set_prefix('') +- return +- else: +- kids.extend(node.children[::-1]) +- +- +-class FixMetaclass(fixer_base.BaseFix): +- +- PATTERN = """ +- classdef +- """ +- +- def transform(self, node, results): +- if not has_metaclass(node): +- return node +- +- fixup_parse_tree(node) +- +- # find metaclasses, keep the last one +- last_metaclass = None +- for suite, i, stmt in find_metas(node): +- last_metaclass = stmt +- stmt.remove() +- +- text_type = node.children[0].type # always Leaf(nnn, 'class') +- +- # figure out what kind of classdef we have +- if len(node.children) == 7: +- # Node(classdef, ['class', 'name', '(', arglist, ')', ':', suite]) +- # 0 1 2 3 4 5 6 +- if node.children[3].type == syms.arglist: +- arglist = node.children[3] +- # Node(classdef, ['class', 'name', '(', 'Parent', ')', ':', suite]) +- else: +- parent = node.children[3].clone() +- arglist = Node(syms.arglist, [parent]) +- node.set_child(3, arglist) +- elif len(node.children) == 6: +- # Node(classdef, ['class', 'name', '(', ')', ':', suite]) +- # 0 1 2 3 4 5 +- arglist = Node(syms.arglist, []) +- node.insert_child(3, arglist) +- elif len(node.children) == 4: +- # Node(classdef, ['class', 'name', ':', suite]) +- # 0 1 2 3 +- arglist = Node(syms.arglist, []) +- node.insert_child(2, Leaf(token.RPAR, ')')) +- node.insert_child(2, arglist) +- node.insert_child(2, Leaf(token.LPAR, '(')) +- else: +- raise ValueError("Unexpected class definition") +- +- # now stick the metaclass in the arglist +- meta_txt = last_metaclass.children[0].children[0] +- meta_txt.value = 'metaclass' +- orig_meta_prefix = meta_txt.get_prefix() +- +- if arglist.children: +- arglist.append_child(Leaf(token.COMMA, ',')) +- meta_txt.set_prefix(' ') +- else: +- meta_txt.set_prefix('') +- +- # compact the expression "metaclass = Meta" -> "metaclass=Meta" +- expr_stmt = last_metaclass.children[0] +- assert expr_stmt.type == syms.expr_stmt +- expr_stmt.children[1].set_prefix('') +- expr_stmt.children[2].set_prefix('') +- +- arglist.append_child(last_metaclass) +- +- fixup_indent(suite) +- +- # check for empty suite +- if not suite.children: +- # one-liner that was just __metaclass_ +- suite.remove() +- pass_leaf = Leaf(text_type, 'pass') +- pass_leaf.set_prefix(orig_meta_prefix) +- node.append_child(pass_leaf) +- node.append_child(Leaf(token.NEWLINE, '\n')) +- +- elif len(suite.children) > 1 and \ +- (suite.children[-2].type == token.INDENT and +- suite.children[-1].type == token.DEDENT): +- # there was only one line in the class body and it was __metaclass__ +- pass_leaf = Leaf(text_type, 'pass') +- suite.insert_child(-1, pass_leaf) +- suite.insert_child(-1, Leaf(token.NEWLINE, '\n')) +diff -r 531f2e948299 lib2to3/fixes/fix_methodattrs.py +--- a/lib2to3/fixes/fix_methodattrs.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,23 +0,0 @@ +-"""Fix bound method attributes (method.im_? -> method.__?__). 
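# The shape of the metaclass rewrite, sketched (not from this patch);
# "Meta" is a stand-in name and the keyword form is Python 3 only:
#   class X(Parent):                      class X(Parent, metaclass=Meta):
#       __metaclass__ = Meta       ->         ...
class Meta(type):
    pass
class X(object, metaclass=Meta):
    pass
assert type(X) is Meta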
+-""" +-# Author: Christian Heimes +- +-# Local imports +-from .. import fixer_base +-from ..fixer_util import Name +- +-MAP = { +- "im_func" : "__func__", +- "im_self" : "__self__", +- "im_class" : "__self__.__class__" +- } +- +-class FixMethodattrs(fixer_base.BaseFix): +- PATTERN = """ +- power< any+ trailer< '.' attr=('im_func' | 'im_self' | 'im_class') > any* > +- """ +- +- def transform(self, node, results): +- attr = results["attr"][0] +- new = MAP[attr.value] +- attr.replace(Name(new, prefix=attr.get_prefix())) +diff -r 531f2e948299 lib2to3/fixes/fix_ne.py +--- a/lib2to3/fixes/fix_ne.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,22 +0,0 @@ +-# Copyright 2006 Google, Inc. All Rights Reserved. +-# Licensed to PSF under a Contributor Agreement. +- +-"""Fixer that turns <> into !=.""" +- +-# Local imports +-from .. import pytree +-from ..pgen2 import token +-from .. import fixer_base +- +- +-class FixNe(fixer_base.BaseFix): +- # This is so simple that we don't need the pattern compiler. +- +- def match(self, node): +- # Override +- return node.type == token.NOTEQUAL and node.value == "<>" +- +- def transform(self, node, results): +- new = pytree.Leaf(token.NOTEQUAL, "!=") +- new.set_prefix(node.get_prefix()) +- return new +diff -r 531f2e948299 lib2to3/fixes/fix_next.py +--- a/lib2to3/fixes/fix_next.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,103 +0,0 @@ +-"""Fixer for it.next() -> next(it), per PEP 3114.""" +-# Author: Collin Winter +- +-# Things that currently aren't covered: +-# - listcomp "next" names aren't warned +-# - "with" statement targets aren't checked +- +-# Local imports +-from ..pgen2 import token +-from ..pygram import python_symbols as syms +-from .. import fixer_base +-from ..fixer_util import Name, Call, find_binding +- +-bind_warning = "Calls to builtin next() possibly shadowed by global binding" +- +- +-class FixNext(fixer_base.BaseFix): +- PATTERN = """ +- power< base=any+ trailer< '.' attr='next' > trailer< '(' ')' > > +- | +- power< head=any+ trailer< '.' attr='next' > not trailer< '(' ')' > > +- | +- classdef< 'class' any+ ':' +- suite< any* +- funcdef< 'def' +- name='next' +- parameters< '(' NAME ')' > any+ > +- any* > > +- | +- global=global_stmt< 'global' any* 'next' any* > +- """ +- +- order = "pre" # Pre-order tree traversal +- +- def start_tree(self, tree, filename): +- super(FixNext, self).start_tree(tree, filename) +- +- n = find_binding('next', tree) +- if n: +- self.warning(n, bind_warning) +- self.shadowed_next = True +- else: +- self.shadowed_next = False +- +- def transform(self, node, results): +- assert results +- +- base = results.get("base") +- attr = results.get("attr") +- name = results.get("name") +- mod = results.get("mod") +- +- if base: +- if self.shadowed_next: +- attr.replace(Name("__next__", prefix=attr.get_prefix())) +- else: +- base = [n.clone() for n in base] +- base[0].set_prefix("") +- node.replace(Call(Name("next", prefix=node.get_prefix()), base)) +- elif name: +- n = Name("__next__", prefix=name.get_prefix()) +- name.replace(n) +- elif attr: +- # We don't do this transformation if we're assigning to "x.next". +- # Unfortunately, it doesn't seem possible to do this in PATTERN, +- # so it's being done here. 
+- if is_assign_target(node): +- head = results["head"] +- if "".join([str(n) for n in head]).strip() == '__builtin__': +- self.warning(node, bind_warning) +- return +- attr.replace(Name("__next__")) +- elif "global" in results: +- self.warning(node, bind_warning) +- self.shadowed_next = True +- +- +-### The following functions help test if node is part of an assignment +-### target. +- +-def is_assign_target(node): +- assign = find_assign(node) +- if assign is None: +- return False +- +- for child in assign.children: +- if child.type == token.EQUAL: +- return False +- elif is_subtree(child, node): +- return True +- return False +- +-def find_assign(node): +- if node.type == syms.expr_stmt: +- return node +- if node.type == syms.simple_stmt or node.parent is None: +- return None +- return find_assign(node.parent) +- +-def is_subtree(root, node): +- if root == node: +- return True +- return any([is_subtree(c, node) for c in root.children]) +diff -r 531f2e948299 lib2to3/fixes/fix_nonzero.py +--- a/lib2to3/fixes/fix_nonzero.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,20 +0,0 @@ +-"""Fixer for __nonzero__ -> __bool__ methods.""" +-# Author: Collin Winter +- +-# Local imports +-from .. import fixer_base +-from ..fixer_util import Name, syms +- +-class FixNonzero(fixer_base.BaseFix): +- PATTERN = """ +- classdef< 'class' any+ ':' +- suite< any* +- funcdef< 'def' name='__nonzero__' +- parameters< '(' NAME ')' > any+ > +- any* > > +- """ +- +- def transform(self, node, results): +- name = results["name"] +- new = Name("__bool__", prefix=name.get_prefix()) +- name.replace(new) +diff -r 531f2e948299 lib2to3/fixes/fix_numliterals.py +--- a/lib2to3/fixes/fix_numliterals.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,27 +0,0 @@ +-"""Fixer that turns 1L into 1, 0755 into 0o755. +-""" +-# Copyright 2007 Georg Brandl. +-# Licensed to PSF under a Contributor Agreement. +- +-# Local imports +-from ..pgen2 import token +-from .. import fixer_base +-from ..fixer_util import Number +- +- +-class FixNumliterals(fixer_base.BaseFix): +- # This is so simple that we don't need the pattern compiler. +- +- def match(self, node): +- # Override +- return (node.type == token.NUMBER and +- (node.value.startswith("0") or node.value[-1] in "Ll")) +- +- def transform(self, node, results): +- val = node.value +- if val[-1] in 'Ll': +- val = val[:-1] +- elif val.startswith('0') and val.isdigit() and len(set(val)) > 1: +- val = "0o" + val[1:] +- +- return Number(val, prefix=node.get_prefix()) +diff -r 531f2e948299 lib2to3/fixes/fix_paren.py +--- a/lib2to3/fixes/fix_paren.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,42 +0,0 @@ +-"""Fixer that addes parentheses where they are required +- +-This converts ``[x for x in 1, 2]`` to ``[x for x in (1, 2)]``.""" +- +-# By Taek Joo Kim and Benjamin Peterson +- +-# Local imports +-from .. 
import fixer_base +-from ..fixer_util import LParen, RParen +- +-# XXX This doesn't support nested for loops like [x for x in 1, 2 for x in 1, 2] +-class FixParen(fixer_base.BaseFix): +- PATTERN = """ +- atom< ('[' | '(') +- (listmaker< any +- comp_for< +- 'for' NAME 'in' +- target=testlist_safe< any (',' any)+ [','] +- > +- [any] +- > +- > +- | +- testlist_gexp< any +- comp_for< +- 'for' NAME 'in' +- target=testlist_safe< any (',' any)+ [','] +- > +- [any] +- > +- >) +- (']' | ')') > +- """ +- +- def transform(self, node, results): +- target = results["target"] +- +- lparen = LParen() +- lparen.set_prefix(target.get_prefix()) +- target.set_prefix("") # Make it hug the parentheses +- target.insert_child(0, lparen) +- target.append_child(RParen()) +diff -r 531f2e948299 lib2to3/fixes/fix_print.py +--- a/lib2to3/fixes/fix_print.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,90 +0,0 @@ +-# Copyright 2006 Google, Inc. All Rights Reserved. +-# Licensed to PSF under a Contributor Agreement. +- +-"""Fixer for print. +- +-Change: +- 'print' into 'print()' +- 'print ...' into 'print(...)' +- 'print ... ,' into 'print(..., end=" ")' +- 'print >>x, ...' into 'print(..., file=x)' +- +-No changes are applied if print_function is imported from __future__ +- +-""" +- +-# Local imports +-from .. import patcomp +-from .. import pytree +-from ..pgen2 import token +-from .. import fixer_base +-from ..fixer_util import Name, Call, Comma, String, is_tuple +- +- +-parend_expr = patcomp.compile_pattern( +- """atom< '(' [atom|STRING|NAME] ')' >""" +- ) +- +- +-class FixPrint(fixer_base.ConditionalFix): +- +- PATTERN = """ +- simple_stmt< any* bare='print' any* > | print_stmt +- """ +- +- skip_on = '__future__.print_function' +- +- def transform(self, node, results): +- assert results +- +- if self.should_skip(node): +- return +- +- bare_print = results.get("bare") +- +- if bare_print: +- # Special-case print all by itself +- bare_print.replace(Call(Name("print"), [], +- prefix=bare_print.get_prefix())) +- return +- assert node.children[0] == Name("print") +- args = node.children[1:] +- if len(args) == 1 and parend_expr.match(args[0]): +- # We don't want to keep sticking parens around an +- # already-parenthesised expression. +- return +- +- sep = end = file = None +- if args and args[-1] == Comma(): +- args = args[:-1] +- end = " " +- if args and args[0] == pytree.Leaf(token.RIGHTSHIFT, ">>"): +- assert len(args) >= 2 +- file = args[1].clone() +- args = args[3:] # Strip a possible comma after the file expression +- # Now synthesize a print(args, sep=..., end=..., file=...) node. 
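# The four rewrites from the fix_print docstring, as a sketch (not from
# this patch); the target spelling also runs on 2.6 with the future import:
#   print            ->  print()
#   print x          ->  print(x)
#   print x,         ->  print(x, end=" ")
#   print >>f, x     ->  print(x, file=f)
from __future__ import print_function
import sys
print("spam", end=" ", file=sys.stderr)
print("eggs", file=sys.stderr)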
+- l_args = [arg.clone() for arg in args] +- if l_args: +- l_args[0].set_prefix("") +- if sep is not None or end is not None or file is not None: +- if sep is not None: +- self.add_kwarg(l_args, "sep", String(repr(sep))) +- if end is not None: +- self.add_kwarg(l_args, "end", String(repr(end))) +- if file is not None: +- self.add_kwarg(l_args, "file", file) +- n_stmt = Call(Name("print"), l_args) +- n_stmt.set_prefix(node.get_prefix()) +- return n_stmt +- +- def add_kwarg(self, l_nodes, s_kwd, n_expr): +- # XXX All this prefix-setting may lose comments (though rarely) +- n_expr.set_prefix("") +- n_argument = pytree.Node(self.syms.argument, +- (Name(s_kwd), +- pytree.Leaf(token.EQUAL, "="), +- n_expr)) +- if l_nodes: +- l_nodes.append(Comma()) +- n_argument.set_prefix(" ") +- l_nodes.append(n_argument) +diff -r 531f2e948299 lib2to3/fixes/fix_raise.py +--- a/lib2to3/fixes/fix_raise.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,82 +0,0 @@ +-"""Fixer for 'raise E, V, T' +- +-raise -> raise +-raise E -> raise E +-raise E, V -> raise E(V) +-raise E, V, T -> raise E(V).with_traceback(T) +- +-raise (((E, E'), E''), E'''), V -> raise E(V) +-raise "foo", V, T -> warns about string exceptions +- +- +-CAVEATS: +-1) "raise E, V" will be incorrectly translated if V is an exception +- instance. The correct Python 3 idiom is +- +- raise E from V +- +- but since we can't detect instance-hood by syntax alone and since +- any client code would have to be changed as well, we don't automate +- this. +-""" +-# Author: Collin Winter +- +-# Local imports +-from .. import pytree +-from ..pgen2 import token +-from .. import fixer_base +-from ..fixer_util import Name, Call, Attr, ArgList, is_tuple +- +-class FixRaise(fixer_base.BaseFix): +- +- PATTERN = """ +- raise_stmt< 'raise' exc=any [',' val=any [',' tb=any]] > +- """ +- +- def transform(self, node, results): +- syms = self.syms +- +- exc = results["exc"].clone() +- if exc.type is token.STRING: +- self.cannot_convert(node, "Python 3 does not support string exceptions") +- return +- +- # Python 2 supports +- # raise ((((E1, E2), E3), E4), E5), V +- # as a synonym for +- # raise E1, V +- # Since Python 3 will not support this, we recurse down any tuple +- # literals, always taking the first element. +- if is_tuple(exc): +- while is_tuple(exc): +- # exc.children[1:-1] is the unparenthesized tuple +- # exc.children[1].children[0] is the first element of the tuple +- exc = exc.children[1].children[0].clone() +- exc.set_prefix(" ") +- +- if "val" not in results: +- # One-argument raise +- new = pytree.Node(syms.raise_stmt, [Name("raise"), exc]) +- new.set_prefix(node.get_prefix()) +- return new +- +- val = results["val"].clone() +- if is_tuple(val): +- args = [c.clone() for c in val.children[1:-1]] +- else: +- val.set_prefix("") +- args = [val] +- +- if "tb" in results: +- tb = results["tb"].clone() +- tb.set_prefix("") +- +- e = Call(exc, args) +- with_tb = Attr(e, Name('with_traceback')) + [ArgList([tb])] +- new = pytree.Node(syms.simple_stmt, [Name("raise")] + with_tb) +- new.set_prefix(node.get_prefix()) +- return new +- else: +- return pytree.Node(syms.raise_stmt, +- [Name("raise"), Call(exc, args)], +- prefix=node.get_prefix()) +diff -r 531f2e948299 lib2to3/fixes/fix_raw_input.py +--- a/lib2to3/fixes/fix_raw_input.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,16 +0,0 @@ +-"""Fixer that changes raw_input(...) 
into input(...).""" +-# Author: Andre Roberge +- +-# Local imports +-from .. import fixer_base +-from ..fixer_util import Name +- +-class FixRawInput(fixer_base.BaseFix): +- +- PATTERN = """ +- power< name='raw_input' trailer< '(' [any] ')' > any* > +- """ +- +- def transform(self, node, results): +- name = results["name"] +- name.replace(Name("input", prefix=name.get_prefix())) +diff -r 531f2e948299 lib2to3/fixes/fix_reduce.py +--- a/lib2to3/fixes/fix_reduce.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,33 +0,0 @@ +-# Copyright 2008 Armin Ronacher. +-# Licensed to PSF under a Contributor Agreement. +- +-"""Fixer for reduce(). +- +-Makes sure reduce() is imported from the functools module if reduce is +-used in that module. +-""" +- +-from .. import pytree +-from .. import fixer_base +-from ..fixer_util import Name, Attr, touch_import +- +- +- +-class FixReduce(fixer_base.BaseFix): +- +- PATTERN = """ +- power< 'reduce' +- trailer< '(' +- arglist< ( +- (not(argument) any ',' +- not(argument +- > +- """ +- +- def transform(self, node, results): +- touch_import('functools', 'reduce', node) +diff -r 531f2e948299 lib2to3/fixes/fix_renames.py +--- a/lib2to3/fixes/fix_renames.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,69 +0,0 @@ +-"""Fix incompatible renames +- +-Fixes: +- * sys.maxint -> sys.maxsize +-""" +-# Author: Christian Heimes +-# based on Collin Winter's fix_import +- +-# Local imports +-from .. import fixer_base +-from ..fixer_util import Name, attr_chain +- +-MAPPING = {"sys": {"maxint" : "maxsize"}, +- } +-LOOKUP = {} +- +-def alternates(members): +- return "(" + "|".join(map(repr, members)) + ")" +- +- +-def build_pattern(): +- #bare = set() +- for module, replace in MAPPING.items(): +- for old_attr, new_attr in replace.items(): +- LOOKUP[(module, old_attr)] = new_attr +- #bare.add(module) +- #bare.add(old_attr) +- #yield """ +- # import_name< 'import' (module=%r +- # | dotted_as_names< any* module=%r any* >) > +- # """ % (module, module) +- yield """ +- import_from< 'from' module_name=%r 'import' +- ( attr_name=%r | import_as_name< attr_name=%r 'as' any >) > +- """ % (module, old_attr, old_attr) +- yield """ +- power< module_name=%r trailer< '.' attr_name=%r > any* > +- """ % (module, old_attr) +- #yield """bare_name=%s""" % alternates(bare) +- +- +-class FixRenames(fixer_base.BaseFix): +- PATTERN = "|".join(build_pattern()) +- +- order = "pre" # Pre-order tree traversal +- +- # Don't match the node if it's within another match +- def match(self, node): +- match = super(FixRenames, self).match +- results = match(node) +- if results: +- if any([match(obj) for obj in attr_chain(node, "parent")]): +- return False +- return results +- return False +- +- #def start_tree(self, tree, filename): +- # super(FixRenames, self).start_tree(tree, filename) +- # self.replace = {} +- +- def transform(self, node, results): +- mod_name = results.get("module_name") +- attr_name = results.get("attr_name") +- #bare_name = results.get("bare_name") +- #import_mod = results.get("module") +- +- if mod_name and attr_name: +- new_attr = LOOKUP[(mod_name.value, attr_name.value)] +- attr_name.replace(Name(new_attr, prefix=attr_name.get_prefix())) +diff -r 531f2e948299 lib2to3/fixes/fix_repr.py +--- a/lib2to3/fixes/fix_repr.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,22 +0,0 @@ +-# Copyright 2006 Google, Inc. All Rights Reserved. +-# Licensed to PSF under a Contributor Agreement. 
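# Target spellings for the two fixers above, runnable on Python 3
# (a sketch, not from this patch):
from functools import reduce     # the import fix_reduce's touch_import adds
assert reduce(lambda a, b: a + b, [1, 2, 3]) == 6
# raw_input("prompt")  ->  input("prompt"), which returns a str either way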
+- +-"""Fixer that transforms `xyzzy` into repr(xyzzy).""" +- +-# Local imports +-from .. import fixer_base +-from ..fixer_util import Call, Name, parenthesize +- +- +-class FixRepr(fixer_base.BaseFix): +- +- PATTERN = """ +- atom < '`' expr=any '`' > +- """ +- +- def transform(self, node, results): +- expr = results["expr"].clone() +- +- if expr.type == self.syms.testlist1: +- expr = parenthesize(expr) +- return Call(Name("repr"), [expr], prefix=node.get_prefix()) +diff -r 531f2e948299 lib2to3/fixes/fix_set_literal.py +--- a/lib2to3/fixes/fix_set_literal.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,52 +0,0 @@ +-""" +-Optional fixer to transform set() calls to set literals. +-""" +- +-# Author: Benjamin Peterson +- +-from lib2to3 import fixer_base, pytree +-from lib2to3.fixer_util import token, syms +- +- +- +-class FixSetLiteral(fixer_base.BaseFix): +- +- explicit = True +- +- PATTERN = """power< 'set' trailer< '(' +- (atom=atom< '[' (items=listmaker< any ((',' any)* [',']) > +- | +- single=any) ']' > +- | +- atom< '(' items=testlist_gexp< any ((',' any)* [',']) > ')' > +- ) +- ')' > > +- """ +- +- def transform(self, node, results): +- single = results.get("single") +- if single: +- # Make a fake listmaker +- fake = pytree.Node(syms.listmaker, [single.clone()]) +- single.replace(fake) +- items = fake +- else: +- items = results["items"] +- +- # Build the contents of the literal +- literal = [pytree.Leaf(token.LBRACE, "{")] +- literal.extend(n.clone() for n in items.children) +- literal.append(pytree.Leaf(token.RBRACE, "}")) +- # Set the prefix of the right brace to that of the ')' or ']' +- literal[-1].set_prefix(items.next_sibling.get_prefix()) +- maker = pytree.Node(syms.dictsetmaker, literal) +- maker.set_prefix(node.get_prefix()) +- +- # If the original was a one tuple, we need to remove the extra comma. +- if len(maker.children) == 4: +- n = maker.children[2] +- n.remove() +- maker.children[-1].set_prefix(n.get_prefix()) +- +- # Finally, replace the set call with our shiny new literal. +- return maker +diff -r 531f2e948299 lib2to3/fixes/fix_standarderror.py +--- a/lib2to3/fixes/fix_standarderror.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,18 +0,0 @@ +-# Copyright 2007 Google, Inc. All Rights Reserved. +-# Licensed to PSF under a Contributor Agreement. +- +-"""Fixer for StandardError -> Exception.""" +- +-# Local imports +-from .. import fixer_base +-from ..fixer_util import Name +- +- +-class FixStandarderror(fixer_base.BaseFix): +- +- PATTERN = """ +- 'StandardError' +- """ +- +- def transform(self, node, results): +- return Name("Exception", prefix=node.get_prefix()) +diff -r 531f2e948299 lib2to3/fixes/fix_sys_exc.py +--- a/lib2to3/fixes/fix_sys_exc.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,29 +0,0 @@ +-"""Fixer for sys.exc_{type, value, traceback} +- +-sys.exc_type -> sys.exc_info()[0] +-sys.exc_value -> sys.exc_info()[1] +-sys.exc_traceback -> sys.exc_info()[2] +-""" +- +-# By Jeff Balogh and Benjamin Peterson +- +-# Local imports +-from .. import fixer_base +-from ..fixer_util import Attr, Call, Name, Number, Subscript, Node, syms +- +-class FixSysExc(fixer_base.BaseFix): +- # This order matches the ordering of sys.exc_info(). +- exc_info = ["exc_type", "exc_value", "exc_traceback"] +- PATTERN = """ +- power< 'sys' trailer< dot='.' 
attribute=(%s) > > +- """ % '|'.join("'%s'" % e for e in exc_info) +- +- def transform(self, node, results): +- sys_attr = results["attribute"][0] +- index = Number(self.exc_info.index(sys_attr.value)) +- +- call = Call(Name("exc_info"), prefix=sys_attr.get_prefix()) +- attr = Attr(Name("sys"), call) +- attr[1].children[0].set_prefix(results["dot"].get_prefix()) +- attr.append(Subscript(index)) +- return Node(syms.power, attr, prefix=node.get_prefix()) +diff -r 531f2e948299 lib2to3/fixes/fix_throw.py +--- a/lib2to3/fixes/fix_throw.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,56 +0,0 @@ +-"""Fixer for generator.throw(E, V, T). +- +-g.throw(E) -> g.throw(E) +-g.throw(E, V) -> g.throw(E(V)) +-g.throw(E, V, T) -> g.throw(E(V).with_traceback(T)) +- +-g.throw("foo"[, V[, T]]) will warn about string exceptions.""" +-# Author: Collin Winter +- +-# Local imports +-from .. import pytree +-from ..pgen2 import token +-from .. import fixer_base +-from ..fixer_util import Name, Call, ArgList, Attr, is_tuple +- +-class FixThrow(fixer_base.BaseFix): +- +- PATTERN = """ +- power< any trailer< '.' 'throw' > +- trailer< '(' args=arglist< exc=any ',' val=any [',' tb=any] > ')' > +- > +- | +- power< any trailer< '.' 'throw' > trailer< '(' exc=any ')' > > +- """ +- +- def transform(self, node, results): +- syms = self.syms +- +- exc = results["exc"].clone() +- if exc.type is token.STRING: +- self.cannot_convert(node, "Python 3 does not support string exceptions") +- return +- +- # Leave "g.throw(E)" alone +- val = results.get("val") +- if val is None: +- return +- +- val = val.clone() +- if is_tuple(val): +- args = [c.clone() for c in val.children[1:-1]] +- else: +- val.set_prefix("") +- args = [val] +- +- throw_args = results["args"] +- +- if "tb" in results: +- tb = results["tb"].clone() +- tb.set_prefix("") +- +- e = Call(exc, args) +- with_tb = Attr(e, Name('with_traceback')) + [ArgList([tb])] +- throw_args.replace(pytree.Node(syms.power, with_tb)) +- else: +- throw_args.replace(Call(exc, args)) +diff -r 531f2e948299 lib2to3/fixes/fix_tuple_params.py +--- a/lib2to3/fixes/fix_tuple_params.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,169 +0,0 @@ +-"""Fixer for function definitions with tuple parameters. +- +-def func(((a, b), c), d): +- ... +- +- -> +- +-def func(x, d): +- ((a, b), c) = x +- ... +- +-It will also support lambdas: +- +- lambda (x, y): x + y -> lambda t: t[0] + t[1] +- +- # The parens are a syntax error in Python 3 +- lambda (x): x + y -> lambda x: x + y +-""" +-# Author: Collin Winter +- +-# Local imports +-from .. import pytree +-from ..pgen2 import token +-from .. import fixer_base +-from ..fixer_util import Assign, Name, Newline, Number, Subscript, syms +- +-def is_docstring(stmt): +- return isinstance(stmt, pytree.Node) and \ +- stmt.children[0].type == token.STRING +- +-class FixTupleParams(fixer_base.BaseFix): +- PATTERN = """ +- funcdef< 'def' any parameters< '(' args=any ')' > +- ['->' any] ':' suite=any+ > +- | +- lambda= +- lambdef< 'lambda' args=vfpdef< '(' inner=any ')' > +- ':' body=any +- > +- """ +- +- def transform(self, node, results): +- if "lambda" in results: +- return self.transform_lambda(node, results) +- +- new_lines = [] +- suite = results["suite"] +- args = results["args"] +- # This crap is so "def foo(...): x = 5; y = 7" is handled correctly. 
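# A sketch (not from this patch) of the E(V).with_traceback(T) shape that
# fix_raise and fix_throw both emit; it also uses the sys.exc_info()
# spelling that fix_sys_exc above rewrites to. Python 3 only.
import sys

def reraise_with_tb():
    try:
        1 // 0
    except ZeroDivisionError:
        tb = sys.exc_info()[2]    # sys.exc_traceback -> sys.exc_info()[2]
        raise RuntimeError("boom").with_traceback(tb)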
+- # TODO(cwinter): suite-cleanup +- if suite[0].children[1].type == token.INDENT: +- start = 2 +- indent = suite[0].children[1].value +- end = Newline() +- else: +- start = 0 +- indent = "; " +- end = pytree.Leaf(token.INDENT, "") +- +- # We need access to self for new_name(), and making this a method +- # doesn't feel right. Closing over self and new_lines makes the +- # code below cleaner. +- def handle_tuple(tuple_arg, add_prefix=False): +- n = Name(self.new_name()) +- arg = tuple_arg.clone() +- arg.set_prefix("") +- stmt = Assign(arg, n.clone()) +- if add_prefix: +- n.set_prefix(" ") +- tuple_arg.replace(n) +- new_lines.append(pytree.Node(syms.simple_stmt, +- [stmt, end.clone()])) +- +- if args.type == syms.tfpdef: +- handle_tuple(args) +- elif args.type == syms.typedargslist: +- for i, arg in enumerate(args.children): +- if arg.type == syms.tfpdef: +- # Without add_prefix, the emitted code is correct, +- # just ugly. +- handle_tuple(arg, add_prefix=(i > 0)) +- +- if not new_lines: +- return node +- +- # This isn't strictly necessary, but it plays nicely with other fixers. +- # TODO(cwinter) get rid of this when children becomes a smart list +- for line in new_lines: +- line.parent = suite[0] +- +- # TODO(cwinter) suite-cleanup +- after = start +- if start == 0: +- new_lines[0].set_prefix(" ") +- elif is_docstring(suite[0].children[start]): +- new_lines[0].set_prefix(indent) +- after = start + 1 +- +- suite[0].children[after:after] = new_lines +- for i in range(after+1, after+len(new_lines)+1): +- suite[0].children[i].set_prefix(indent) +- suite[0].changed() +- +- def transform_lambda(self, node, results): +- args = results["args"] +- body = results["body"] +- inner = simplify_args(results["inner"]) +- +- # Replace lambda ((((x)))): x with lambda x: x +- if inner.type == token.NAME: +- inner = inner.clone() +- inner.set_prefix(" ") +- args.replace(inner) +- return +- +- params = find_params(args) +- to_index = map_to_index(params) +- tup_name = self.new_name(tuple_name(params)) +- +- new_param = Name(tup_name, prefix=" ") +- args.replace(new_param.clone()) +- for n in body.post_order(): +- if n.type == token.NAME and n.value in to_index: +- subscripts = [c.clone() for c in to_index[n.value]] +- new = pytree.Node(syms.power, +- [new_param.clone()] + subscripts) +- new.set_prefix(n.get_prefix()) +- n.replace(new) +- +- +-### Helper functions for transform_lambda() +- +-def simplify_args(node): +- if node.type in (syms.vfplist, token.NAME): +- return node +- elif node.type == syms.vfpdef: +- # These look like vfpdef< '(' x ')' > where x is NAME +- # or another vfpdef instance (leading to recursion). 
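# Editor's note, not part of the patch quoted here: for input such as
#     lambda (((x))): x
# the tree nests as vfpdef< '(' vfpdef< '(' vfpdef< '(' NAME ')' > ')' > ')' >
# and the loop below peels one parenthesis layer per iteration until only
# the bare NAME is left, so the redundant parentheses can be dropped.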
+- while node.type == syms.vfpdef: +- node = node.children[1] +- return node +- raise RuntimeError("Received unexpected node %s" % node) +- +-def find_params(node): +- if node.type == syms.vfpdef: +- return find_params(node.children[1]) +- elif node.type == token.NAME: +- return node.value +- return [find_params(c) for c in node.children if c.type != token.COMMA] +- +-def map_to_index(param_list, prefix=[], d=None): +- if d is None: +- d = {} +- for i, obj in enumerate(param_list): +- trailer = [Subscript(Number(i))] +- if isinstance(obj, list): +- map_to_index(obj, trailer, d=d) +- else: +- d[obj] = prefix + trailer +- return d +- +-def tuple_name(param_list): +- l = [] +- for obj in param_list: +- if isinstance(obj, list): +- l.append(tuple_name(obj)) +- else: +- l.append(obj) +- return "_".join(l) +diff -r 531f2e948299 lib2to3/fixes/fix_types.py +--- a/lib2to3/fixes/fix_types.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,62 +0,0 @@ +-# Copyright 2007 Google, Inc. All Rights Reserved. +-# Licensed to PSF under a Contributor Agreement. +- +-"""Fixer for removing uses of the types module. +- +-These work for only the known names in the types module. The forms above +-can include types. or not. ie, It is assumed the module is imported either as: +- +- import types +- from types import ... # either * or specific types +- +-The import statements are not modified. +- +-There should be another fixer that handles at least the following constants: +- +- type([]) -> list +- type(()) -> tuple +- type('') -> str +- +-""" +- +-# Local imports +-from ..pgen2 import token +-from .. import fixer_base +-from ..fixer_util import Name +- +-_TYPE_MAPPING = { +- 'BooleanType' : 'bool', +- 'BufferType' : 'memoryview', +- 'ClassType' : 'type', +- 'ComplexType' : 'complex', +- 'DictType': 'dict', +- 'DictionaryType' : 'dict', +- 'EllipsisType' : 'type(Ellipsis)', +- #'FileType' : 'io.IOBase', +- 'FloatType': 'float', +- 'IntType': 'int', +- 'ListType': 'list', +- 'LongType': 'int', +- 'ObjectType' : 'object', +- 'NoneType': 'type(None)', +- 'NotImplementedType' : 'type(NotImplemented)', +- 'SliceType' : 'slice', +- 'StringType': 'bytes', # XXX ? +- 'StringTypes' : 'str', # XXX ? +- 'TupleType': 'tuple', +- 'TypeType' : 'type', +- 'UnicodeType': 'str', +- 'XRangeType' : 'range', +- } +- +-_pats = ["power< 'types' trailer< '.' name='%s' > >" % t for t in _TYPE_MAPPING] +- +-class FixTypes(fixer_base.BaseFix): +- +- PATTERN = '|'.join(_pats) +- +- def transform(self, node, results): +- new_value = _TYPE_MAPPING.get(results["name"].value) +- if new_value: +- return Name(new_value, prefix=node.get_prefix()) +- return None +diff -r 531f2e948299 lib2to3/fixes/fix_unicode.py +--- a/lib2to3/fixes/fix_unicode.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,28 +0,0 @@ +-"""Fixer that changes unicode to str, unichr to chr, and u"..." into "...". +- +-""" +- +-import re +-from ..pgen2 import token +-from .. import fixer_base +- +-class FixUnicode(fixer_base.BaseFix): +- +- PATTERN = "STRING | NAME<'unicode' | 'unichr'>" +- +- def transform(self, node, results): +- if node.type == token.NAME: +- if node.value == "unicode": +- new = node.clone() +- new.value = "str" +- return new +- if node.value == "unichr": +- new = node.clone() +- new.value = "chr" +- return new +- # XXX Warn when __unicode__ found? 
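# Editor's note, illustrative only: the STRING branch below strips just
# the leading u/U character and keeps any raw prefix intact:
#     u"abc"   ->  "abc"
#     UR'\d+'  ->  R'\d+'
# because the regex is anchored at the first character and
# new.value[1:] drops exactly one character.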
+- elif node.type == token.STRING: +- if re.match(r"[uU][rR]?[\'\"]", node.value): +- new = node.clone() +- new.value = new.value[1:] +- return new +diff -r 531f2e948299 lib2to3/fixes/fix_urllib.py +--- a/lib2to3/fixes/fix_urllib.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,180 +0,0 @@ +-"""Fix changes imports of urllib which are now incompatible. +- This is rather similar to fix_imports, but because of the more +- complex nature of the fixing for urllib, it has its own fixer. +-""" +-# Author: Nick Edds +- +-# Local imports +-from .fix_imports import alternates, FixImports +-from .. import fixer_base +-from ..fixer_util import Name, Comma, FromImport, Newline, attr_chain +- +-MAPPING = {'urllib': [ +- ('urllib.request', +- ['URLOpener', 'FancyURLOpener', 'urlretrieve', +- '_urlopener', 'urlcleanup']), +- ('urllib.parse', +- ['quote', 'quote_plus', 'unquote', 'unquote_plus', +- 'urlencode', 'pathname2url', 'url2pathname', 'splitattr', +- 'splithost', 'splitnport', 'splitpasswd', 'splitport', +- 'splitquery', 'splittag', 'splittype', 'splituser', +- 'splitvalue', ]), +- ('urllib.error', +- ['ContentTooShortError'])], +- 'urllib2' : [ +- ('urllib.request', +- ['urlopen', 'install_opener', 'build_opener', +- 'Request', 'OpenerDirector', 'BaseHandler', +- 'HTTPDefaultErrorHandler', 'HTTPRedirectHandler', +- 'HTTPCookieProcessor', 'ProxyHandler', +- 'HTTPPasswordMgr', +- 'HTTPPasswordMgrWithDefaultRealm', +- 'AbstractBasicAuthHandler', +- 'HTTPBasicAuthHandler', 'ProxyBasicAuthHandler', +- 'AbstractDigestAuthHandler', +- 'HTTPDigestAuthHandler', 'ProxyDigestAuthHandler', +- 'HTTPHandler', 'HTTPSHandler', 'FileHandler', +- 'FTPHandler', 'CacheFTPHandler', +- 'UnknownHandler']), +- ('urllib.error', +- ['URLError', 'HTTPError']), +- ] +-} +- +-# Duplicate the url parsing functions for urllib2. +-MAPPING["urllib2"].append(MAPPING["urllib"][1]) +- +- +-def build_pattern(): +- bare = set() +- for old_module, changes in MAPPING.items(): +- for change in changes: +- new_module, members = change +- members = alternates(members) +- yield """import_name< 'import' (module=%r +- | dotted_as_names< any* module=%r any* >) > +- """ % (old_module, old_module) +- yield """import_from< 'from' mod_member=%r 'import' +- ( member=%s | import_as_name< member=%s 'as' any > | +- import_as_names< members=any* >) > +- """ % (old_module, members, members) +- yield """import_from< 'from' module_star=%r 'import' star='*' > +- """ % old_module +- yield """import_name< 'import' +- dotted_as_name< module_as=%r 'as' any > > +- """ % old_module +- yield """power< module_dot=%r trailer< '.' member=%s > any* > +- """ % (old_module, members) +- +- +-class FixUrllib(FixImports): +- +- def build_pattern(self): +- return "|".join(build_pattern()) +- +- def transform_import(self, node, results): +- """Transform for the basic import case. Replaces the old +- import name with a comma separated list of its +- replacements. +- """ +- import_mod = results.get('module') +- pref = import_mod.get_prefix() +- +- names = [] +- +- # create a Node list of the replacement modules +- for name in MAPPING[import_mod.value][:-1]: +- names.extend([Name(name[0], prefix=pref), Comma()]) +- names.append(Name(MAPPING[import_mod.value][-1][0], prefix=pref)) +- import_mod.replace(names) +- +- def transform_member(self, node, results): +- """Transform for imports of specific module elements. Replaces +- the module to be imported from with the appropriate new +- module. 
+- """ +- mod_member = results.get('mod_member') +- pref = mod_member.get_prefix() +- member = results.get('member') +- +- # Simple case with only a single member being imported +- if member: +- # this may be a list of length one, or just a node +- if isinstance(member, list): +- member = member[0] +- new_name = None +- for change in MAPPING[mod_member.value]: +- if member.value in change[1]: +- new_name = change[0] +- break +- if new_name: +- mod_member.replace(Name(new_name, prefix=pref)) +- else: +- self.cannot_convert(node, +- 'This is an invalid module element') +- +- # Multiple members being imported +- else: +- # a dictionary for replacements, order matters +- modules = [] +- mod_dict = {} +- members = results.get('members') +- for member in members: +- member = member.value +- # we only care about the actual members +- if member != ',': +- for change in MAPPING[mod_member.value]: +- if member in change[1]: +- if change[0] in mod_dict: +- mod_dict[change[0]].append(member) +- else: +- mod_dict[change[0]] = [member] +- modules.append(change[0]) +- +- new_nodes = [] +- for module in modules: +- elts = mod_dict[module] +- names = [] +- for elt in elts[:-1]: +- names.extend([Name(elt, prefix=pref), Comma()]) +- names.append(Name(elts[-1], prefix=pref)) +- new_nodes.append(FromImport(module, names)) +- if new_nodes: +- nodes = [] +- for new_node in new_nodes[:-1]: +- nodes.extend([new_node, Newline()]) +- nodes.append(new_nodes[-1]) +- node.replace(nodes) +- else: +- self.cannot_convert(node, 'All module elements are invalid') +- +- def transform_dot(self, node, results): +- """Transform for calls to module members in code.""" +- module_dot = results.get('module_dot') +- member = results.get('member') +- # this may be a list of length one, or just a node +- if isinstance(member, list): +- member = member[0] +- new_name = None +- for change in MAPPING[module_dot.value]: +- if member.value in change[1]: +- new_name = change[0] +- break +- if new_name: +- module_dot.replace(Name(new_name, +- prefix=module_dot.get_prefix())) +- else: +- self.cannot_convert(node, 'This is an invalid module element') +- +- def transform(self, node, results): +- if results.get('module'): +- self.transform_import(node, results) +- elif results.get('mod_member'): +- self.transform_member(node, results) +- elif results.get('module_dot'): +- self.transform_dot(node, results) +- # Renaming and star imports are not supported for these modules. +- elif results.get('module_star'): +- self.cannot_convert(node, 'Cannot handle star imports.') +- elif results.get('module_as'): +- self.cannot_convert(node, 'This module is now multiple modules') +diff -r 531f2e948299 lib2to3/fixes/fix_ws_comma.py +--- a/lib2to3/fixes/fix_ws_comma.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,39 +0,0 @@ +-"""Fixer that changes 'a ,b' into 'a, b'. +- +-This also changes '{a :b}' into '{a: b}', but does not touch other +-uses of colons. It does not touch other uses of whitespace. +- +-""" +- +-from .. import pytree +-from ..pgen2 import token +-from .. 
import fixer_base +- +-class FixWsComma(fixer_base.BaseFix): +- +- explicit = True # The user must ask for this fixers +- +- PATTERN = """ +- any<(not(',') any)+ ',' ((not(',') any)+ ',')* [not(',') any]> +- """ +- +- COMMA = pytree.Leaf(token.COMMA, ",") +- COLON = pytree.Leaf(token.COLON, ":") +- SEPS = (COMMA, COLON) +- +- def transform(self, node, results): +- new = node.clone() +- comma = False +- for child in new.children: +- if child in self.SEPS: +- prefix = child.get_prefix() +- if prefix.isspace() and "\n" not in prefix: +- child.set_prefix("") +- comma = True +- else: +- if comma: +- prefix = child.get_prefix() +- if not prefix: +- child.set_prefix(" ") +- comma = False +- return new +diff -r 531f2e948299 lib2to3/fixes/fix_xrange.py +--- a/lib2to3/fixes/fix_xrange.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,64 +0,0 @@ +-# Copyright 2007 Google, Inc. All Rights Reserved. +-# Licensed to PSF under a Contributor Agreement. +- +-"""Fixer that changes xrange(...) into range(...).""" +- +-# Local imports +-from .. import fixer_base +-from ..fixer_util import Name, Call, consuming_calls +-from .. import patcomp +- +- +-class FixXrange(fixer_base.BaseFix): +- +- PATTERN = """ +- power< +- (name='range'|name='xrange') trailer< '(' args=any ')' > +- rest=any* > +- """ +- +- def transform(self, node, results): +- name = results["name"] +- if name.value == "xrange": +- return self.transform_xrange(node, results) +- elif name.value == "range": +- return self.transform_range(node, results) +- else: +- raise ValueError(repr(name)) +- +- def transform_xrange(self, node, results): +- name = results["name"] +- name.replace(Name("range", prefix=name.get_prefix())) +- +- def transform_range(self, node, results): +- if not self.in_special_context(node): +- range_call = Call(Name("range"), [results["args"].clone()]) +- # Encase the range call in list(). +- list_call = Call(Name("list"), [range_call], +- prefix=node.get_prefix()) +- # Put things that were after the range() call after the list call. +- for n in results["rest"]: +- list_call.append_child(n) +- return list_call +- return node +- +- P1 = "power< func=NAME trailer< '(' node=any ')' > any* >" +- p1 = patcomp.compile_pattern(P1) +- +- P2 = """for_stmt< 'for' any 'in' node=any ':' any* > +- | comp_for< 'for' any 'in' node=any any* > +- | comparison< any 'in' node=any any*> +- """ +- p2 = patcomp.compile_pattern(P2) +- +- def in_special_context(self, node): +- if node.parent is None: +- return False +- results = {} +- if (node.parent.parent is not None and +- self.p1.match(node.parent.parent, results) and +- results["node"] is node): +- # list(d.keys()) -> list(d.keys()), etc. +- return results["func"].value in consuming_calls +- # for ... in d.iterkeys() -> for ... in d.keys(), etc. +- return self.p2.match(node.parent, results) and results["node"] is node +diff -r 531f2e948299 lib2to3/fixes/fix_xreadlines.py +--- a/lib2to3/fixes/fix_xreadlines.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,24 +0,0 @@ +-"""Fix "for x in f.xreadlines()" -> "for x in f". +- +-This fixer will also convert g(f.xreadlines) into g(f.__iter__).""" +-# Author: Collin Winter +- +-# Local imports +-from .. import fixer_base +-from ..fixer_util import Name +- +- +-class FixXreadlines(fixer_base.BaseFix): +- PATTERN = """ +- power< call=any+ trailer< '.' 'xreadlines' > trailer< '(' ')' > > +- | +- power< any+ trailer< '.' 
no_call='xreadlines' > > +- """ +- +- def transform(self, node, results): +- no_call = results.get("no_call") +- +- if no_call: +- no_call.replace(Name("__iter__", prefix=no_call.get_prefix())) +- else: +- node.replace([x.clone() for x in results["call"]]) +diff -r 531f2e948299 lib2to3/fixes/fix_zip.py +--- a/lib2to3/fixes/fix_zip.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,34 +0,0 @@ +-""" +-Fixer that changes zip(seq0, seq1, ...) into list(zip(seq0, seq1, ...) +-unless there exists a 'from future_builtins import zip' statement in the +-top-level namespace. +- +-We avoid the transformation if the zip() call is directly contained in +-iter(<>), list(<>), tuple(<>), sorted(<>), ...join(<>), or for V in <>:. +-""" +- +-# Local imports +-from .. import fixer_base +-from ..fixer_util import Name, Call, in_special_context +- +-class FixZip(fixer_base.ConditionalFix): +- +- PATTERN = """ +- power< 'zip' args=trailer< '(' [any] ')' > +- > +- """ +- +- skip_on = "future_builtins.zip" +- +- def transform(self, node, results): +- if self.should_skip(node): +- return +- +- if in_special_context(node): +- return None +- +- new = node.clone() +- new.set_prefix("") +- new = Call(Name("list"), [new]) +- new.set_prefix(node.get_prefix()) +- return new +diff -r 531f2e948299 lib2to3/main.py +--- a/lib2to3/main.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,133 +0,0 @@ +-""" +-Main program for 2to3. +-""" +- +-import sys +-import os +-import logging +-import shutil +-import optparse +- +-from . import refactor +- +- +-class StdoutRefactoringTool(refactor.RefactoringTool): +- """ +- Prints output to stdout. +- """ +- +- def __init__(self, fixers, options, explicit, nobackups): +- self.nobackups = nobackups +- super(StdoutRefactoringTool, self).__init__(fixers, options, explicit) +- +- def log_error(self, msg, *args, **kwargs): +- self.errors.append((msg, args, kwargs)) +- self.logger.error(msg, *args, **kwargs) +- +- def write_file(self, new_text, filename, old_text): +- if not self.nobackups: +- # Make backup +- backup = filename + ".bak" +- if os.path.lexists(backup): +- try: +- os.remove(backup) +- except os.error, err: +- self.log_message("Can't remove backup %s", backup) +- try: +- os.rename(filename, backup) +- except os.error, err: +- self.log_message("Can't rename %s to %s", filename, backup) +- # Actually write the new file +- super(StdoutRefactoringTool, self).write_file(new_text, +- filename, old_text) +- if not self.nobackups: +- shutil.copymode(backup, filename) +- +- def print_output(self, lines): +- for line in lines: +- print line +- +- +-def main(fixer_pkg, args=None): +- """Main program. +- +- Args: +- fixer_pkg: the name of a package where the fixers are located. +- args: optional; a list of command line arguments. If omitted, +- sys.argv[1:] is used. +- +- Returns a suggested exit status (0, 1, 2). 
+- """ +- # Set up option parser +- parser = optparse.OptionParser(usage="2to3 [options] file|dir ...") +- parser.add_option("-d", "--doctests_only", action="store_true", +- help="Fix up doctests only") +- parser.add_option("-f", "--fix", action="append", default=[], +- help="Each FIX specifies a transformation; default: all") +- parser.add_option("-x", "--nofix", action="append", default=[], +- help="Prevent a fixer from being run.") +- parser.add_option("-l", "--list-fixes", action="store_true", +- help="List available transformations (fixes/fix_*.py)") +- parser.add_option("-p", "--print-function", action="store_true", +- help="Modify the grammar so that print() is a function") +- parser.add_option("-v", "--verbose", action="store_true", +- help="More verbose logging") +- parser.add_option("-w", "--write", action="store_true", +- help="Write back modified files") +- parser.add_option("-n", "--nobackups", action="store_true", default=False, +- help="Don't write backups for modified files.") +- +- # Parse command line arguments +- refactor_stdin = False +- options, args = parser.parse_args(args) +- if not options.write and options.nobackups: +- parser.error("Can't use -n without -w") +- if options.list_fixes: +- print "Available transformations for the -f/--fix option:" +- for fixname in refactor.get_all_fix_names(fixer_pkg): +- print fixname +- if not args: +- return 0 +- if not args: +- print >>sys.stderr, "At least one file or directory argument required." +- print >>sys.stderr, "Use --help to show usage." +- return 2 +- if "-" in args: +- refactor_stdin = True +- if options.write: +- print >>sys.stderr, "Can't write to stdin." +- return 2 +- +- # Set up logging handler +- level = logging.DEBUG if options.verbose else logging.INFO +- logging.basicConfig(format='%(name)s: %(message)s', level=level) +- +- # Initialize the refactoring tool +- rt_opts = {"print_function" : options.print_function} +- avail_fixes = set(refactor.get_fixers_from_package(fixer_pkg)) +- unwanted_fixes = set(fixer_pkg + ".fix_" + fix for fix in options.nofix) +- explicit = set() +- if options.fix: +- all_present = False +- for fix in options.fix: +- if fix == "all": +- all_present = True +- else: +- explicit.add(fixer_pkg + ".fix_" + fix) +- requested = avail_fixes.union(explicit) if all_present else explicit +- else: +- requested = avail_fixes.union(explicit) +- fixer_names = requested.difference(unwanted_fixes) +- rt = StdoutRefactoringTool(sorted(fixer_names), rt_opts, sorted(explicit), +- options.nobackups) +- +- # Refactor all files and directories passed as arguments +- if not rt.errors: +- if refactor_stdin: +- rt.refactor_stdin() +- else: +- rt.refactor(args, options.write, options.doctests_only) +- rt.summarize() +- +- # Return error status (0 if rt.errors is zero) +- return int(bool(rt.errors)) +diff -r 531f2e948299 lib2to3/patcomp.py +--- a/lib2to3/patcomp.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,186 +0,0 @@ +-# Copyright 2006 Google, Inc. All Rights Reserved. +-# Licensed to PSF under a Contributor Agreement. +- +-"""Pattern compiler. +- +-The grammer is taken from PatternGrammar.txt. +- +-The compiler compiles a pattern to a pytree.*Pattern instance. +-""" +- +-__author__ = "Guido van Rossum " +- +-# Python imports +-import os +- +-# Fairly local imports +-from .pgen2 import driver +-from .pgen2 import literals +-from .pgen2 import token +-from .pgen2 import tokenize +- +-# Really local imports +-from . import pytree +-from . 
import pygram +- +-# The pattern grammar file +-_PATTERN_GRAMMAR_FILE = os.path.join(os.path.dirname(__file__), +- "PatternGrammar.txt") +- +- +-def tokenize_wrapper(input): +- """Tokenizes a string suppressing significant whitespace.""" +- skip = set((token.NEWLINE, token.INDENT, token.DEDENT)) +- tokens = tokenize.generate_tokens(driver.generate_lines(input).next) +- for quintuple in tokens: +- type, value, start, end, line_text = quintuple +- if type not in skip: +- yield quintuple +- +- +-class PatternCompiler(object): +- +- def __init__(self, grammar_file=_PATTERN_GRAMMAR_FILE): +- """Initializer. +- +- Takes an optional alternative filename for the pattern grammar. +- """ +- self.grammar = driver.load_grammar(grammar_file) +- self.syms = pygram.Symbols(self.grammar) +- self.pygrammar = pygram.python_grammar +- self.pysyms = pygram.python_symbols +- self.driver = driver.Driver(self.grammar, convert=pattern_convert) +- +- def compile_pattern(self, input, debug=False): +- """Compiles a pattern string to a nested pytree.*Pattern object.""" +- tokens = tokenize_wrapper(input) +- root = self.driver.parse_tokens(tokens, debug=debug) +- return self.compile_node(root) +- +- def compile_node(self, node): +- """Compiles a node, recursively. +- +- This is one big switch on the node type. +- """ +- # XXX Optimize certain Wildcard-containing-Wildcard patterns +- # that can be merged +- if node.type == self.syms.Matcher: +- node = node.children[0] # Avoid unneeded recursion +- +- if node.type == self.syms.Alternatives: +- # Skip the odd children since they are just '|' tokens +- alts = [self.compile_node(ch) for ch in node.children[::2]] +- if len(alts) == 1: +- return alts[0] +- p = pytree.WildcardPattern([[a] for a in alts], min=1, max=1) +- return p.optimize() +- +- if node.type == self.syms.Alternative: +- units = [self.compile_node(ch) for ch in node.children] +- if len(units) == 1: +- return units[0] +- p = pytree.WildcardPattern([units], min=1, max=1) +- return p.optimize() +- +- if node.type == self.syms.NegatedUnit: +- pattern = self.compile_basic(node.children[1:]) +- p = pytree.NegatedPattern(pattern) +- return p.optimize() +- +- assert node.type == self.syms.Unit +- +- name = None +- nodes = node.children +- if len(nodes) >= 3 and nodes[1].type == token.EQUAL: +- name = nodes[0].value +- nodes = nodes[2:] +- repeat = None +- if len(nodes) >= 2 and nodes[-1].type == self.syms.Repeater: +- repeat = nodes[-1] +- nodes = nodes[:-1] +- +- # Now we've reduced it to: STRING | NAME [Details] | (...) | [...] +- pattern = self.compile_basic(nodes, repeat) +- +- if repeat is not None: +- assert repeat.type == self.syms.Repeater +- children = repeat.children +- child = children[0] +- if child.type == token.STAR: +- min = 0 +- max = pytree.HUGE +- elif child.type == token.PLUS: +- min = 1 +- max = pytree.HUGE +- elif child.type == token.LBRACE: +- assert children[-1].type == token.RBRACE +- assert len(children) in (3, 5) +- min = max = self.get_int(children[1]) +- if len(children) == 5: +- max = self.get_int(children[3]) +- else: +- assert False +- if min != 1 or max != 1: +- pattern = pattern.optimize() +- pattern = pytree.WildcardPattern([[pattern]], min=min, max=max) +- +- if name is not None: +- pattern.name = name +- return pattern.optimize() +- +- def compile_basic(self, nodes, repeat=None): +- # Compile STRING | NAME [Details] | (...) | [...] 
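# Editor's sketch, not in the original diff: the module-level
# compile_pattern() helper defined further down makes it easy to watch
# compile_basic at work (Python 2):

from lib2to3 import patcomp

_pat = patcomp.compile_pattern("power< 'xrange' trailer< '(' args=any ')' > >")

# Here 'xrange' compiles to a LeafPattern on the literal token text,
# "any" to a pattern with type=None that matches any node, and
# power/trailer to NodePatterns on grammar symbols; the "args=" name is
# stored on the sub-pattern and keys the results dict on a match.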
+- assert len(nodes) >= 1 +- node = nodes[0] +- if node.type == token.STRING: +- value = literals.evalString(node.value) +- return pytree.LeafPattern(content=value) +- elif node.type == token.NAME: +- value = node.value +- if value.isupper(): +- if value not in TOKEN_MAP: +- raise SyntaxError("Invalid token: %r" % value) +- return pytree.LeafPattern(TOKEN_MAP[value]) +- else: +- if value == "any": +- type = None +- elif not value.startswith("_"): +- type = getattr(self.pysyms, value, None) +- if type is None: +- raise SyntaxError("Invalid symbol: %r" % value) +- if nodes[1:]: # Details present +- content = [self.compile_node(nodes[1].children[1])] +- else: +- content = None +- return pytree.NodePattern(type, content) +- elif node.value == "(": +- return self.compile_node(nodes[1]) +- elif node.value == "[": +- assert repeat is None +- subpattern = self.compile_node(nodes[1]) +- return pytree.WildcardPattern([[subpattern]], min=0, max=1) +- assert False, node +- +- def get_int(self, node): +- assert node.type == token.NUMBER +- return int(node.value) +- +- +-# Map named tokens to the type value for a LeafPattern +-TOKEN_MAP = {"NAME": token.NAME, +- "STRING": token.STRING, +- "NUMBER": token.NUMBER, +- "TOKEN": None} +- +- +-def pattern_convert(grammar, raw_node_info): +- """Converts raw node information to a Node or Leaf instance.""" +- type, value, context, children = raw_node_info +- if children or type in grammar.number2symbol: +- return pytree.Node(type, children, context=context) +- else: +- return pytree.Leaf(type, value, context=context) +- +- +-def compile_pattern(pattern): +- return PatternCompiler().compile_pattern(pattern) +diff -r 531f2e948299 lib2to3/pgen2/.svn/all-wcprops +--- a/lib2to3/pgen2/.svn/all-wcprops Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,59 +0,0 @@ +-K 25 +-svn:wc:ra_dav:version-url +-V 57 +-/projects/!svn/ver/68340/sandbox/trunk/2to3/lib2to3/pgen2 +-END +-tokenize.py +-K 25 +-svn:wc:ra_dav:version-url +-V 69 +-/projects/!svn/ver/61441/sandbox/trunk/2to3/lib2to3/pgen2/tokenize.py +-END +-pgen.py +-K 25 +-svn:wc:ra_dav:version-url +-V 65 +-/projects/!svn/ver/61629/sandbox/trunk/2to3/lib2to3/pgen2/pgen.py +-END +-parse.py +-K 25 +-svn:wc:ra_dav:version-url +-V 66 +-/projects/!svn/ver/67389/sandbox/trunk/2to3/lib2to3/pgen2/parse.py +-END +-driver.py +-K 25 +-svn:wc:ra_dav:version-url +-V 67 +-/projects/!svn/ver/68340/sandbox/trunk/2to3/lib2to3/pgen2/driver.py +-END +-__init__.py +-K 25 +-svn:wc:ra_dav:version-url +-V 69 +-/projects/!svn/ver/61441/sandbox/trunk/2to3/lib2to3/pgen2/__init__.py +-END +-literals.py +-K 25 +-svn:wc:ra_dav:version-url +-V 69 +-/projects/!svn/ver/61441/sandbox/trunk/2to3/lib2to3/pgen2/literals.py +-END +-token.py +-K 25 +-svn:wc:ra_dav:version-url +-V 66 +-/projects/!svn/ver/61441/sandbox/trunk/2to3/lib2to3/pgen2/token.py +-END +-conv.py +-K 25 +-svn:wc:ra_dav:version-url +-V 65 +-/projects/!svn/ver/61441/sandbox/trunk/2to3/lib2to3/pgen2/conv.py +-END +-grammar.py +-K 25 +-svn:wc:ra_dav:version-url +-V 68 +-/projects/!svn/ver/61441/sandbox/trunk/2to3/lib2to3/pgen2/grammar.py +-END +diff -r 531f2e948299 lib2to3/pgen2/.svn/dir-prop-base +--- a/lib2to3/pgen2/.svn/dir-prop-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,8 +0,0 @@ +-K 10 +-svn:ignore +-V 13 +-*.pyc +-*.pyo +- +- +-END +diff -r 531f2e948299 lib2to3/pgen2/.svn/entries +--- a/lib2to3/pgen2/.svn/entries Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,334 +0,0 @@ +-9 
+- +-dir +-70785 +-http://svn.python.org/projects/sandbox/trunk/2to3/lib2to3/pgen2 +-http://svn.python.org/projects +- +- +- +-2009-01-05T08:11:39.704315Z +-68340 +-georg.brandl +-has-props +- +-svn:special svn:externals svn:needs-lock +- +- +- +- +- +- +- +- +- +- +- +-6015fed2-1504-0410-9fe1-9d1591cc4771 +- +-tokenize.py +-file +- +- +- +- +-2009-03-31T00:29:32.000000Z +-06aea8121aa7b0fc71345d011813d4b4 +-2008-03-17T16:59:51.273602Z +-61441 +-martin.v.loewis +-has-props +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +-16184 +- +-pgen.py +-file +- +- +- +- +-2009-03-31T00:29:32.000000Z +-40f1eec8af5247a511bf6acc34eac994 +-2008-03-19T16:58:19.069158Z +-61629 +-collin.winter +-has-props +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +-13740 +- +-parse.py +-file +- +- +- +- +-2009-03-31T00:29:32.000000Z +-80c0ee069eab8de116e1c13572d6cd4b +-2008-11-25T23:13:17.968453Z +-67389 +-benjamin.peterson +-has-props +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +-8053 +- +-driver.py +-file +- +- +- +- +-2009-03-31T00:29:32.000000Z +-e2c063aca0163f8f47fefeab1a5cdff7 +-2009-01-05T08:11:39.704315Z +-68340 +-georg.brandl +-has-props +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +-4809 +- +-__init__.py +-file +- +- +- +- +-2009-03-31T00:29:32.000000Z +-5cb6bc9b6c96e165df87b615f2df9f1a +-2006-11-29T17:38:40.278528Z +-52858 +-guido.van.rossum +-has-props +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +-143 +- +-literals.py +-file +- +- +- +- +-2009-03-31T00:29:32.000000Z +-e3b1d03cade5fa0c3a1a5324e0b1e539 +-2006-11-29T17:38:40.278528Z +-52858 +-guido.van.rossum +-has-props +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +-1614 +- +-token.py +-file +- +- +- +- +-2009-03-31T00:29:32.000000Z +-8fd1f5c3fc2ad1b2afa7e17064b0ba04 +-2007-02-12T23:59:44.048119Z +-53758 +-collin.winter +-has-props +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +-1244 +- +-conv.py +-file +- +- +- +- +-2009-03-31T00:29:32.000000Z +-942a8910f37b9e5d202806ea05f7b2f1 +-2007-02-12T23:59:44.048119Z +-53758 +-collin.winter +-has-props +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +-9625 +- +-grammar.py +-file +- +- +- +- +-2009-03-31T00:29:32.000000Z +-612ee8e1a84660a7c44f7d5af3e7db69 +-2008-03-17T16:59:51.273602Z +-61441 +-martin.v.loewis +-has-props +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +- +-4947 +- +diff -r 531f2e948299 lib2to3/pgen2/.svn/format +--- a/lib2to3/pgen2/.svn/format Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,1 +0,0 @@ +-9 +diff -r 531f2e948299 lib2to3/pgen2/.svn/prop-base/__init__.py.svn-base +--- a/lib2to3/pgen2/.svn/prop-base/__init__.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,9 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-K 12 +-svn:keywords +-V 2 +-Id +-END +diff -r 531f2e948299 lib2to3/pgen2/.svn/prop-base/conv.py.svn-base +--- a/lib2to3/pgen2/.svn/prop-base/conv.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,9 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-K 12 +-svn:keywords +-V 2 +-Id +-END +diff -r 531f2e948299 lib2to3/pgen2/.svn/prop-base/driver.py.svn-base +--- a/lib2to3/pgen2/.svn/prop-base/driver.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,9 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-K 12 +-svn:keywords +-V 2 +-Id +-END +diff -r 531f2e948299 lib2to3/pgen2/.svn/prop-base/grammar.py.svn-base 
+--- a/lib2to3/pgen2/.svn/prop-base/grammar.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,9 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-K 12 +-svn:keywords +-V 2 +-Id +-END +diff -r 531f2e948299 lib2to3/pgen2/.svn/prop-base/literals.py.svn-base +--- a/lib2to3/pgen2/.svn/prop-base/literals.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,9 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-K 12 +-svn:keywords +-V 2 +-Id +-END +diff -r 531f2e948299 lib2to3/pgen2/.svn/prop-base/parse.py.svn-base +--- a/lib2to3/pgen2/.svn/prop-base/parse.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,9 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-K 12 +-svn:keywords +-V 2 +-Id +-END +diff -r 531f2e948299 lib2to3/pgen2/.svn/prop-base/pgen.py.svn-base +--- a/lib2to3/pgen2/.svn/prop-base/pgen.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,9 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-K 12 +-svn:keywords +-V 2 +-Id +-END +diff -r 531f2e948299 lib2to3/pgen2/.svn/prop-base/token.py.svn-base +--- a/lib2to3/pgen2/.svn/prop-base/token.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,13 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-K 14 +-svn:executable +-V 1 +-* +-K 12 +-svn:keywords +-V 2 +-Id +-END +diff -r 531f2e948299 lib2to3/pgen2/.svn/prop-base/tokenize.py.svn-base +--- a/lib2to3/pgen2/.svn/prop-base/tokenize.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,9 +0,0 @@ +-K 13 +-svn:eol-style +-V 6 +-native +-K 12 +-svn:keywords +-V 23 +-Author Date Id Revision +-END +diff -r 531f2e948299 lib2to3/pgen2/.svn/text-base/__init__.py.svn-base +--- a/lib2to3/pgen2/.svn/text-base/__init__.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,4 +0,0 @@ +-# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved. +-# Licensed to PSF under a Contributor Agreement. +- +-"""The pgen2 package.""" +diff -r 531f2e948299 lib2to3/pgen2/.svn/text-base/conv.py.svn-base +--- a/lib2to3/pgen2/.svn/text-base/conv.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,257 +0,0 @@ +-# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved. +-# Licensed to PSF under a Contributor Agreement. +- +-"""Convert graminit.[ch] spit out by pgen to Python code. +- +-Pgen is the Python parser generator. It is useful to quickly create a +-parser from a grammar file in Python's grammar notation. But I don't +-want my parsers to be written in C (yet), so I'm translating the +-parsing tables to Python data structures and writing a Python parse +-engine. +- +-Note that the token numbers are constants determined by the standard +-Python tokenizer. The standard token module defines these numbers and +-their names (the names are not used much). The token numbers are +-hardcoded into the Python tokenizer and into pgen. A Python +-implementation of the Python tokenizer is also available, in the +-standard tokenize module. +- +-On the other hand, symbol numbers (representing the grammar's +-non-terminals) are assigned by pgen based on the actual grammar +-input. +- +-Note: this module is pretty much obsolete; the pgen module generates +-equivalent grammar tables directly from the Grammar.txt input file +-without having to invoke the Python pgen C program. 
+- +-""" +- +-# Python imports +-import re +- +-# Local imports +-from pgen2 import grammar, token +- +- +-class Converter(grammar.Grammar): +- """Grammar subclass that reads classic pgen output files. +- +- The run() method reads the tables as produced by the pgen parser +- generator, typically contained in two C files, graminit.h and +- graminit.c. The other methods are for internal use only. +- +- See the base class for more documentation. +- +- """ +- +- def run(self, graminit_h, graminit_c): +- """Load the grammar tables from the text files written by pgen.""" +- self.parse_graminit_h(graminit_h) +- self.parse_graminit_c(graminit_c) +- self.finish_off() +- +- def parse_graminit_h(self, filename): +- """Parse the .h file writen by pgen. (Internal) +- +- This file is a sequence of #define statements defining the +- nonterminals of the grammar as numbers. We build two tables +- mapping the numbers to names and back. +- +- """ +- try: +- f = open(filename) +- except IOError, err: +- print "Can't open %s: %s" % (filename, err) +- return False +- self.symbol2number = {} +- self.number2symbol = {} +- lineno = 0 +- for line in f: +- lineno += 1 +- mo = re.match(r"^#define\s+(\w+)\s+(\d+)$", line) +- if not mo and line.strip(): +- print "%s(%s): can't parse %s" % (filename, lineno, +- line.strip()) +- else: +- symbol, number = mo.groups() +- number = int(number) +- assert symbol not in self.symbol2number +- assert number not in self.number2symbol +- self.symbol2number[symbol] = number +- self.number2symbol[number] = symbol +- return True +- +- def parse_graminit_c(self, filename): +- """Parse the .c file writen by pgen. (Internal) +- +- The file looks as follows. The first two lines are always this: +- +- #include "pgenheaders.h" +- #include "grammar.h" +- +- After that come four blocks: +- +- 1) one or more state definitions +- 2) a table defining dfas +- 3) a table defining labels +- 4) a struct defining the grammar +- +- A state definition has the following form: +- - one or more arc arrays, each of the form: +- static arc arcs__[] = { +- {, }, +- ... +- }; +- - followed by a state array, of the form: +- static state states_[] = { +- {, arcs__}, +- ... +- }; +- +- """ +- try: +- f = open(filename) +- except IOError, err: +- print "Can't open %s: %s" % (filename, err) +- return False +- # The code below essentially uses f's iterator-ness! 
+- lineno = 0 +- +- # Expect the two #include lines +- lineno, line = lineno+1, f.next() +- assert line == '#include "pgenheaders.h"\n', (lineno, line) +- lineno, line = lineno+1, f.next() +- assert line == '#include "grammar.h"\n', (lineno, line) +- +- # Parse the state definitions +- lineno, line = lineno+1, f.next() +- allarcs = {} +- states = [] +- while line.startswith("static arc "): +- while line.startswith("static arc "): +- mo = re.match(r"static arc arcs_(\d+)_(\d+)\[(\d+)\] = {$", +- line) +- assert mo, (lineno, line) +- n, m, k = map(int, mo.groups()) +- arcs = [] +- for _ in range(k): +- lineno, line = lineno+1, f.next() +- mo = re.match(r"\s+{(\d+), (\d+)},$", line) +- assert mo, (lineno, line) +- i, j = map(int, mo.groups()) +- arcs.append((i, j)) +- lineno, line = lineno+1, f.next() +- assert line == "};\n", (lineno, line) +- allarcs[(n, m)] = arcs +- lineno, line = lineno+1, f.next() +- mo = re.match(r"static state states_(\d+)\[(\d+)\] = {$", line) +- assert mo, (lineno, line) +- s, t = map(int, mo.groups()) +- assert s == len(states), (lineno, line) +- state = [] +- for _ in range(t): +- lineno, line = lineno+1, f.next() +- mo = re.match(r"\s+{(\d+), arcs_(\d+)_(\d+)},$", line) +- assert mo, (lineno, line) +- k, n, m = map(int, mo.groups()) +- arcs = allarcs[n, m] +- assert k == len(arcs), (lineno, line) +- state.append(arcs) +- states.append(state) +- lineno, line = lineno+1, f.next() +- assert line == "};\n", (lineno, line) +- lineno, line = lineno+1, f.next() +- self.states = states +- +- # Parse the dfas +- dfas = {} +- mo = re.match(r"static dfa dfas\[(\d+)\] = {$", line) +- assert mo, (lineno, line) +- ndfas = int(mo.group(1)) +- for i in range(ndfas): +- lineno, line = lineno+1, f.next() +- mo = re.match(r'\s+{(\d+), "(\w+)", (\d+), (\d+), states_(\d+),$', +- line) +- assert mo, (lineno, line) +- symbol = mo.group(2) +- number, x, y, z = map(int, mo.group(1, 3, 4, 5)) +- assert self.symbol2number[symbol] == number, (lineno, line) +- assert self.number2symbol[number] == symbol, (lineno, line) +- assert x == 0, (lineno, line) +- state = states[z] +- assert y == len(state), (lineno, line) +- lineno, line = lineno+1, f.next() +- mo = re.match(r'\s+("(?:\\\d\d\d)*")},$', line) +- assert mo, (lineno, line) +- first = {} +- rawbitset = eval(mo.group(1)) +- for i, c in enumerate(rawbitset): +- byte = ord(c) +- for j in range(8): +- if byte & (1<= os.path.getmtime(b) +diff -r 531f2e948299 lib2to3/pgen2/.svn/text-base/grammar.py.svn-base +--- a/lib2to3/pgen2/.svn/text-base/grammar.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,171 +0,0 @@ +-# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved. +-# Licensed to PSF under a Contributor Agreement. +- +-"""This module defines the data structures used to represent a grammar. +- +-These are a bit arcane because they are derived from the data +-structures used by Python's 'pgen' parser generator. +- +-There's also a table here mapping operators to their names in the +-token module; the Python tokenize module reports all operators as the +-fallback token code OP, but the parser needs the actual token code. +- +-""" +- +-# Python imports +-import pickle +- +-# Local imports +-from . import token, tokenize +- +- +-class Grammar(object): +- """Pgen parsing tables tables conversion class. +- +- Once initialized, this class supplies the grammar tables for the +- parsing engine implemented by parse.py. The parsing engine +- accesses the instance variables directly. 
The class here does not +- provide initialization of the tables; several subclasses exist to +- do this (see the conv and pgen modules). +- +- The load() method reads the tables from a pickle file, which is +- much faster than the other ways offered by subclasses. The pickle +- file is written by calling dump() (after loading the grammar +- tables using a subclass). The report() method prints a readable +- representation of the tables to stdout, for debugging. +- +- The instance variables are as follows: +- +- symbol2number -- a dict mapping symbol names to numbers. Symbol +- numbers are always 256 or higher, to distinguish +- them from token numbers, which are between 0 and +- 255 (inclusive). +- +- number2symbol -- a dict mapping numbers to symbol names; +- these two are each other's inverse. +- +- states -- a list of DFAs, where each DFA is a list of +- states, each state is is a list of arcs, and each +- arc is a (i, j) pair where i is a label and j is +- a state number. The DFA number is the index into +- this list. (This name is slightly confusing.) +- Final states are represented by a special arc of +- the form (0, j) where j is its own state number. +- +- dfas -- a dict mapping symbol numbers to (DFA, first) +- pairs, where DFA is an item from the states list +- above, and first is a set of tokens that can +- begin this grammar rule (represented by a dict +- whose values are always 1). +- +- labels -- a list of (x, y) pairs where x is either a token +- number or a symbol number, and y is either None +- or a string; the strings are keywords. The label +- number is the index in this list; label numbers +- are used to mark state transitions (arcs) in the +- DFAs. +- +- start -- the number of the grammar's start symbol. +- +- keywords -- a dict mapping keyword strings to arc labels. +- +- tokens -- a dict mapping token numbers to arc labels. +- +- """ +- +- def __init__(self): +- self.symbol2number = {} +- self.number2symbol = {} +- self.states = [] +- self.dfas = {} +- self.labels = [(0, "EMPTY")] +- self.keywords = {} +- self.tokens = {} +- self.symbol2label = {} +- self.start = 256 +- +- def dump(self, filename): +- """Dump the grammar tables to a pickle file.""" +- f = open(filename, "wb") +- pickle.dump(self.__dict__, f, 2) +- f.close() +- +- def load(self, filename): +- """Load the grammar tables from a pickle file.""" +- f = open(filename, "rb") +- d = pickle.load(f) +- f.close() +- self.__dict__.update(d) +- +- def report(self): +- """Dump the grammar tables to standard output, for debugging.""" +- from pprint import pprint +- print "s2n" +- pprint(self.symbol2number) +- print "n2s" +- pprint(self.number2symbol) +- print "states" +- pprint(self.states) +- print "dfas" +- pprint(self.dfas) +- print "labels" +- pprint(self.labels) +- print "start", self.start +- +- +-# Map from operator to number (since tokenize doesn't do this) +- +-opmap_raw = """ +-( LPAR +-) RPAR +-[ LSQB +-] RSQB +-: COLON +-, COMMA +-; SEMI +-+ PLUS +-- MINUS +-* STAR +-/ SLASH +-| VBAR +-& AMPER +-< LESS +-> GREATER +-= EQUAL +-. 
DOT +-% PERCENT +-` BACKQUOTE +-{ LBRACE +-} RBRACE +-@ AT +-== EQEQUAL +-!= NOTEQUAL +-<> NOTEQUAL +-<= LESSEQUAL +->= GREATEREQUAL +-~ TILDE +-^ CIRCUMFLEX +-<< LEFTSHIFT +->> RIGHTSHIFT +-** DOUBLESTAR +-+= PLUSEQUAL +--= MINEQUAL +-*= STAREQUAL +-/= SLASHEQUAL +-%= PERCENTEQUAL +-&= AMPEREQUAL +-|= VBAREQUAL +-^= CIRCUMFLEXEQUAL +-<<= LEFTSHIFTEQUAL +->>= RIGHTSHIFTEQUAL +-**= DOUBLESTAREQUAL +-// DOUBLESLASH +-//= DOUBLESLASHEQUAL +--> RARROW +-""" +- +-opmap = {} +-for line in opmap_raw.splitlines(): +- if line: +- op, name = line.split() +- opmap[op] = getattr(token, name) +diff -r 531f2e948299 lib2to3/pgen2/.svn/text-base/literals.py.svn-base +--- a/lib2to3/pgen2/.svn/text-base/literals.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,60 +0,0 @@ +-# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved. +-# Licensed to PSF under a Contributor Agreement. +- +-"""Safely evaluate Python string literals without using eval().""" +- +-import re +- +-simple_escapes = {"a": "\a", +- "b": "\b", +- "f": "\f", +- "n": "\n", +- "r": "\r", +- "t": "\t", +- "v": "\v", +- "'": "'", +- '"': '"', +- "\\": "\\"} +- +-def escape(m): +- all, tail = m.group(0, 1) +- assert all.startswith("\\") +- esc = simple_escapes.get(tail) +- if esc is not None: +- return esc +- if tail.startswith("x"): +- hexes = tail[1:] +- if len(hexes) < 2: +- raise ValueError("invalid hex string escape ('\\%s')" % tail) +- try: +- i = int(hexes, 16) +- except ValueError: +- raise ValueError("invalid hex string escape ('\\%s')" % tail) +- else: +- try: +- i = int(tail, 8) +- except ValueError: +- raise ValueError("invalid octal string escape ('\\%s')" % tail) +- return chr(i) +- +-def evalString(s): +- assert s.startswith("'") or s.startswith('"'), repr(s[:1]) +- q = s[0] +- if s[:3] == q*3: +- q = q*3 +- assert s.endswith(q), repr(s[-len(q):]) +- assert len(s) >= 2*len(q) +- s = s[len(q):-len(q)] +- return re.sub(r"\\(\'|\"|\\|[abfnrtv]|x.{0,2}|[0-7]{1,3})", escape, s) +- +-def test(): +- for i in range(256): +- c = chr(i) +- s = repr(c) +- e = evalString(s) +- if e != c: +- print i, c, s, e +- +- +-if __name__ == "__main__": +- test() +diff -r 531f2e948299 lib2to3/pgen2/.svn/text-base/parse.py.svn-base +--- a/lib2to3/pgen2/.svn/text-base/parse.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,201 +0,0 @@ +-# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved. +-# Licensed to PSF under a Contributor Agreement. +- +-"""Parser engine for the grammar tables generated by pgen. +- +-The grammar table must be loaded first. +- +-See Parser/parser.c in the Python distribution for additional info on +-how this parsing engine works. +- +-""" +- +-# Local imports +-from . import token +- +-class ParseError(Exception): +- """Exception to signal the parser is stuck.""" +- +- def __init__(self, msg, type, value, context): +- Exception.__init__(self, "%s: type=%r, value=%r, context=%r" % +- (msg, type, value, context)) +- self.msg = msg +- self.type = type +- self.value = value +- self.context = context +- +-class Parser(object): +- """Parser engine. +- +- The proper usage sequence is: +- +- p = Parser(grammar, [converter]) # create instance +- p.setup([start]) # prepare for parsing +- : +- if p.addtoken(...): # parse a token; may raise ParseError +- break +- root = p.rootnode # root of abstract syntax tree +- +- A Parser instance may be reused by calling setup() repeatedly. 
+- +- A Parser instance contains state pertaining to the current token +- sequence, and should not be used concurrently by different threads +- to parse separate token sequences. +- +- See driver.py for how to get input tokens by tokenizing a file or +- string. +- +- Parsing is complete when addtoken() returns True; the root of the +- abstract syntax tree can then be retrieved from the rootnode +- instance variable. When a syntax error occurs, addtoken() raises +- the ParseError exception. There is no error recovery; the parser +- cannot be used after a syntax error was reported (but it can be +- reinitialized by calling setup()). +- +- """ +- +- def __init__(self, grammar, convert=None): +- """Constructor. +- +- The grammar argument is a grammar.Grammar instance; see the +- grammar module for more information. +- +- The parser is not ready yet for parsing; you must call the +- setup() method to get it started. +- +- The optional convert argument is a function mapping concrete +- syntax tree nodes to abstract syntax tree nodes. If not +- given, no conversion is done and the syntax tree produced is +- the concrete syntax tree. If given, it must be a function of +- two arguments, the first being the grammar (a grammar.Grammar +- instance), and the second being the concrete syntax tree node +- to be converted. The syntax tree is converted from the bottom +- up. +- +- A concrete syntax tree node is a (type, value, context, nodes) +- tuple, where type is the node type (a token or symbol number), +- value is None for symbols and a string for tokens, context is +- None or an opaque value used for error reporting (typically a +- (lineno, offset) pair), and nodes is a list of children for +- symbols, and None for tokens. +- +- An abstract syntax tree node may be anything; this is entirely +- up to the converter function. +- +- """ +- self.grammar = grammar +- self.convert = convert or (lambda grammar, node: node) +- +- def setup(self, start=None): +- """Prepare for parsing. +- +- This *must* be called before starting to parse. +- +- The optional argument is an alternative start symbol; it +- defaults to the grammar's start symbol. +- +- You can use a Parser instance to parse any number of programs; +- each time you call setup() the parser is reset to an initial +- state determined by the (implicit or explicit) start symbol. +- +- """ +- if start is None: +- start = self.grammar.start +- # Each stack entry is a tuple: (dfa, state, node). +- # A node is a tuple: (type, value, context, children), +- # where children is a list of nodes or None, and context may be None. 
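# Editor's sketch, illustrative only: for the default start symbol the
# stack built below therefore starts out as
#     [(grammar.dfas[start], 0, (start, None, None, []))]
# i.e. a single entry holding the start symbol's DFA, state 0, and an
# empty in-progress node whose children are added by shift() and push().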
+- newnode = (start, None, None, []) +- stackentry = (self.grammar.dfas[start], 0, newnode) +- self.stack = [stackentry] +- self.rootnode = None +- self.used_names = set() # Aliased to self.rootnode.used_names in pop() +- +- def addtoken(self, type, value, context): +- """Add a token; return True iff this is the end of the program.""" +- # Map from token to label +- ilabel = self.classify(type, value, context) +- # Loop until the token is shifted; may raise exceptions +- while True: +- dfa, state, node = self.stack[-1] +- states, first = dfa +- arcs = states[state] +- # Look for a state with this label +- for i, newstate in arcs: +- t, v = self.grammar.labels[i] +- if ilabel == i: +- # Look it up in the list of labels +- assert t < 256 +- # Shift a token; we're done with it +- self.shift(type, value, newstate, context) +- # Pop while we are in an accept-only state +- state = newstate +- while states[state] == [(0, state)]: +- self.pop() +- if not self.stack: +- # Done parsing! +- return True +- dfa, state, node = self.stack[-1] +- states, first = dfa +- # Done with this token +- return False +- elif t >= 256: +- # See if it's a symbol and if we're in its first set +- itsdfa = self.grammar.dfas[t] +- itsstates, itsfirst = itsdfa +- if ilabel in itsfirst: +- # Push a symbol +- self.push(t, self.grammar.dfas[t], newstate, context) +- break # To continue the outer while loop +- else: +- if (0, state) in arcs: +- # An accepting state, pop it and try something else +- self.pop() +- if not self.stack: +- # Done parsing, but another token is input +- raise ParseError("too much input", +- type, value, context) +- else: +- # No success finding a transition +- raise ParseError("bad input", type, value, context) +- +- def classify(self, type, value, context): +- """Turn a token into a label. (Internal)""" +- if type == token.NAME: +- # Keep a listing of all used names +- self.used_names.add(value) +- # Check for reserved words +- ilabel = self.grammar.keywords.get(value) +- if ilabel is not None: +- return ilabel +- ilabel = self.grammar.tokens.get(type) +- if ilabel is None: +- raise ParseError("bad token", type, value, context) +- return ilabel +- +- def shift(self, type, value, newstate, context): +- """Shift a token. (Internal)""" +- dfa, state, node = self.stack[-1] +- newnode = (type, value, context, None) +- newnode = self.convert(self.grammar, newnode) +- if newnode is not None: +- node[-1].append(newnode) +- self.stack[-1] = (dfa, newstate, node) +- +- def push(self, type, newdfa, newstate, context): +- """Push a nonterminal. (Internal)""" +- dfa, state, node = self.stack[-1] +- newnode = (type, None, context, []) +- self.stack[-1] = (dfa, newstate, node) +- self.stack.append((newdfa, 0, newnode)) +- +- def pop(self): +- """Pop a nonterminal. (Internal)""" +- popdfa, popstate, popnode = self.stack.pop() +- newnode = self.convert(self.grammar, popnode) +- if newnode is not None: +- if self.stack: +- dfa, state, node = self.stack[-1] +- node[-1].append(newnode) +- else: +- self.rootnode = newnode +- self.rootnode.used_names = self.used_names +diff -r 531f2e948299 lib2to3/pgen2/.svn/text-base/pgen.py.svn-base +--- a/lib2to3/pgen2/.svn/text-base/pgen.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,384 +0,0 @@ +-# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved. +-# Licensed to PSF under a Contributor Agreement. +- +-# Pgen imports +-from . 
import grammar, token, tokenize +- +-class PgenGrammar(grammar.Grammar): +- pass +- +-class ParserGenerator(object): +- +- def __init__(self, filename, stream=None): +- close_stream = None +- if stream is None: +- stream = open(filename) +- close_stream = stream.close +- self.filename = filename +- self.stream = stream +- self.generator = tokenize.generate_tokens(stream.readline) +- self.gettoken() # Initialize lookahead +- self.dfas, self.startsymbol = self.parse() +- if close_stream is not None: +- close_stream() +- self.first = {} # map from symbol name to set of tokens +- self.addfirstsets() +- +- def make_grammar(self): +- c = PgenGrammar() +- names = self.dfas.keys() +- names.sort() +- names.remove(self.startsymbol) +- names.insert(0, self.startsymbol) +- for name in names: +- i = 256 + len(c.symbol2number) +- c.symbol2number[name] = i +- c.number2symbol[i] = name +- for name in names: +- dfa = self.dfas[name] +- states = [] +- for state in dfa: +- arcs = [] +- for label, next in state.arcs.iteritems(): +- arcs.append((self.make_label(c, label), dfa.index(next))) +- if state.isfinal: +- arcs.append((0, dfa.index(state))) +- states.append(arcs) +- c.states.append(states) +- c.dfas[c.symbol2number[name]] = (states, self.make_first(c, name)) +- c.start = c.symbol2number[self.startsymbol] +- return c +- +- def make_first(self, c, name): +- rawfirst = self.first[name] +- first = {} +- for label in rawfirst: +- ilabel = self.make_label(c, label) +- ##assert ilabel not in first # XXX failed on <> ... != +- first[ilabel] = 1 +- return first +- +- def make_label(self, c, label): +- # XXX Maybe this should be a method on a subclass of converter? +- ilabel = len(c.labels) +- if label[0].isalpha(): +- # Either a symbol name or a named token +- if label in c.symbol2number: +- # A symbol name (a non-terminal) +- if label in c.symbol2label: +- return c.symbol2label[label] +- else: +- c.labels.append((c.symbol2number[label], None)) +- c.symbol2label[label] = ilabel +- return ilabel +- else: +- # A named token (NAME, NUMBER, STRING) +- itoken = getattr(token, label, None) +- assert isinstance(itoken, int), label +- assert itoken in token.tok_name, label +- if itoken in c.tokens: +- return c.tokens[itoken] +- else: +- c.labels.append((itoken, None)) +- c.tokens[itoken] = ilabel +- return ilabel +- else: +- # Either a keyword or an operator +- assert label[0] in ('"', "'"), label +- value = eval(label) +- if value[0].isalpha(): +- # A keyword +- if value in c.keywords: +- return c.keywords[value] +- else: +- c.labels.append((token.NAME, value)) +- c.keywords[value] = ilabel +- return ilabel +- else: +- # An operator (any non-numeric token) +- itoken = grammar.opmap[value] # Fails if unknown token +- if itoken in c.tokens: +- return c.tokens[itoken] +- else: +- c.labels.append((itoken, None)) +- c.tokens[itoken] = ilabel +- return ilabel +- +- def addfirstsets(self): +- names = self.dfas.keys() +- names.sort() +- for name in names: +- if name not in self.first: +- self.calcfirst(name) +- #print name, self.first[name].keys() +- +- def calcfirst(self, name): +- dfa = self.dfas[name] +- self.first[name] = None # dummy to detect left recursion +- state = dfa[0] +- totalset = {} +- overlapcheck = {} +- for label, next in state.arcs.iteritems(): +- if label in self.dfas: +- if label in self.first: +- fset = self.first[label] +- if fset is None: +- raise ValueError("recursion for rule %r" % name) +- else: +- self.calcfirst(label) +- fset = self.first[label] +- totalset.update(fset) +- overlapcheck[label] = 
fset +- else: +- totalset[label] = 1 +- overlapcheck[label] = {label: 1} +- inverse = {} +- for label, itsfirst in overlapcheck.iteritems(): +- for symbol in itsfirst: +- if symbol in inverse: +- raise ValueError("rule %s is ambiguous; %s is in the" +- " first sets of %s as well as %s" % +- (name, symbol, label, inverse[symbol])) +- inverse[symbol] = label +- self.first[name] = totalset +- +- def parse(self): +- dfas = {} +- startsymbol = None +- # MSTART: (NEWLINE | RULE)* ENDMARKER +- while self.type != token.ENDMARKER: +- while self.type == token.NEWLINE: +- self.gettoken() +- # RULE: NAME ':' RHS NEWLINE +- name = self.expect(token.NAME) +- self.expect(token.OP, ":") +- a, z = self.parse_rhs() +- self.expect(token.NEWLINE) +- #self.dump_nfa(name, a, z) +- dfa = self.make_dfa(a, z) +- #self.dump_dfa(name, dfa) +- oldlen = len(dfa) +- self.simplify_dfa(dfa) +- newlen = len(dfa) +- dfas[name] = dfa +- #print name, oldlen, newlen +- if startsymbol is None: +- startsymbol = name +- return dfas, startsymbol +- +- def make_dfa(self, start, finish): +- # To turn an NFA into a DFA, we define the states of the DFA +- # to correspond to *sets* of states of the NFA. Then do some +- # state reduction. Let's represent sets as dicts with 1 for +- # values. +- assert isinstance(start, NFAState) +- assert isinstance(finish, NFAState) +- def closure(state): +- base = {} +- addclosure(state, base) +- return base +- def addclosure(state, base): +- assert isinstance(state, NFAState) +- if state in base: +- return +- base[state] = 1 +- for label, next in state.arcs: +- if label is None: +- addclosure(next, base) +- states = [DFAState(closure(start), finish)] +- for state in states: # NB states grows while we're iterating +- arcs = {} +- for nfastate in state.nfaset: +- for label, next in nfastate.arcs: +- if label is not None: +- addclosure(next, arcs.setdefault(label, {})) +- for label, nfaset in arcs.iteritems(): +- for st in states: +- if st.nfaset == nfaset: +- break +- else: +- st = DFAState(nfaset, finish) +- states.append(st) +- state.addarc(st, label) +- return states # List of DFAState instances; first one is start +- +- def dump_nfa(self, name, start, finish): +- print "Dump of NFA for", name +- todo = [start] +- for i, state in enumerate(todo): +- print " State", i, state is finish and "(final)" or "" +- for label, next in state.arcs: +- if next in todo: +- j = todo.index(next) +- else: +- j = len(todo) +- todo.append(next) +- if label is None: +- print " -> %d" % j +- else: +- print " %s -> %d" % (label, j) +- +- def dump_dfa(self, name, dfa): +- print "Dump of DFA for", name +- for i, state in enumerate(dfa): +- print " State", i, state.isfinal and "(final)" or "" +- for label, next in state.arcs.iteritems(): +- print " %s -> %d" % (label, dfa.index(next)) +- +- def simplify_dfa(self, dfa): +- # This is not theoretically optimal, but works well enough. +- # Algorithm: repeatedly look for two states that have the same +- # set of arcs (same labels pointing to the same nodes) and +- # unify them, until things stop changing. 
+- +- # dfa is a list of DFAState instances +- changes = True +- while changes: +- changes = False +- for i, state_i in enumerate(dfa): +- for j in range(i+1, len(dfa)): +- state_j = dfa[j] +- if state_i == state_j: +- #print " unify", i, j +- del dfa[j] +- for state in dfa: +- state.unifystate(state_j, state_i) +- changes = True +- break +- +- def parse_rhs(self): +- # RHS: ALT ('|' ALT)* +- a, z = self.parse_alt() +- if self.value != "|": +- return a, z +- else: +- aa = NFAState() +- zz = NFAState() +- aa.addarc(a) +- z.addarc(zz) +- while self.value == "|": +- self.gettoken() +- a, z = self.parse_alt() +- aa.addarc(a) +- z.addarc(zz) +- return aa, zz +- +- def parse_alt(self): +- # ALT: ITEM+ +- a, b = self.parse_item() +- while (self.value in ("(", "[") or +- self.type in (token.NAME, token.STRING)): +- c, d = self.parse_item() +- b.addarc(c) +- b = d +- return a, b +- +- def parse_item(self): +- # ITEM: '[' RHS ']' | ATOM ['+' | '*'] +- if self.value == "[": +- self.gettoken() +- a, z = self.parse_rhs() +- self.expect(token.OP, "]") +- a.addarc(z) +- return a, z +- else: +- a, z = self.parse_atom() +- value = self.value +- if value not in ("+", "*"): +- return a, z +- self.gettoken() +- z.addarc(a) +- if value == "+": +- return a, z +- else: +- return a, a +- +- def parse_atom(self): +- # ATOM: '(' RHS ')' | NAME | STRING +- if self.value == "(": +- self.gettoken() +- a, z = self.parse_rhs() +- self.expect(token.OP, ")") +- return a, z +- elif self.type in (token.NAME, token.STRING): +- a = NFAState() +- z = NFAState() +- a.addarc(z, self.value) +- self.gettoken() +- return a, z +- else: +- self.raise_error("expected (...) or NAME or STRING, got %s/%s", +- self.type, self.value) +- +- def expect(self, type, value=None): +- if self.type != type or (value is not None and self.value != value): +- self.raise_error("expected %s/%s, got %s/%s", +- type, value, self.type, self.value) +- value = self.value +- self.gettoken() +- return value +- +- def gettoken(self): +- tup = self.generator.next() +- while tup[0] in (tokenize.COMMENT, tokenize.NL): +- tup = self.generator.next() +- self.type, self.value, self.begin, self.end, self.line = tup +- #print token.tok_name[self.type], repr(self.value) +- +- def raise_error(self, msg, *args): +- if args: +- try: +- msg = msg % args +- except: +- msg = " ".join([msg] + map(str, args)) +- raise SyntaxError(msg, (self.filename, self.end[0], +- self.end[1], self.line)) +- +-class NFAState(object): +- +- def __init__(self): +- self.arcs = [] # list of (label, NFAState) pairs +- +- def addarc(self, next, label=None): +- assert label is None or isinstance(label, str) +- assert isinstance(next, NFAState) +- self.arcs.append((label, next)) +- +-class DFAState(object): +- +- def __init__(self, nfaset, final): +- assert isinstance(nfaset, dict) +- assert isinstance(iter(nfaset).next(), NFAState) +- assert isinstance(final, NFAState) +- self.nfaset = nfaset +- self.isfinal = final in nfaset +- self.arcs = {} # map from label to DFAState +- +- def addarc(self, next, label): +- assert isinstance(label, str) +- assert label not in self.arcs +- assert isinstance(next, DFAState) +- self.arcs[label] = next +- +- def unifystate(self, old, new): +- for label, next in self.arcs.iteritems(): +- if next is old: +- self.arcs[label] = new +- +- def __eq__(self, other): +- # Equality test -- ignore the nfaset instance variable +- assert isinstance(other, DFAState) +- if self.isfinal != other.isfinal: +- return False +- # Can't just return self.arcs == other.arcs, because that 
+- # would invoke this method recursively, with cycles... +- if len(self.arcs) != len(other.arcs): +- return False +- for label, next in self.arcs.iteritems(): +- if next is not other.arcs.get(label): +- return False +- return True +- +-def generate_grammar(filename="Grammar.txt"): +- p = ParserGenerator(filename) +- return p.make_grammar() +diff -r 531f2e948299 lib2to3/pgen2/.svn/text-base/token.py.svn-base +--- a/lib2to3/pgen2/.svn/text-base/token.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,82 +0,0 @@ +-#! /usr/bin/env python +- +-"""Token constants (from "token.h").""" +- +-# Taken from Python (r53757) and modified to include some tokens +-# originally monkeypatched in by pgen2.tokenize +- +-#--start constants-- +-ENDMARKER = 0 +-NAME = 1 +-NUMBER = 2 +-STRING = 3 +-NEWLINE = 4 +-INDENT = 5 +-DEDENT = 6 +-LPAR = 7 +-RPAR = 8 +-LSQB = 9 +-RSQB = 10 +-COLON = 11 +-COMMA = 12 +-SEMI = 13 +-PLUS = 14 +-MINUS = 15 +-STAR = 16 +-SLASH = 17 +-VBAR = 18 +-AMPER = 19 +-LESS = 20 +-GREATER = 21 +-EQUAL = 22 +-DOT = 23 +-PERCENT = 24 +-BACKQUOTE = 25 +-LBRACE = 26 +-RBRACE = 27 +-EQEQUAL = 28 +-NOTEQUAL = 29 +-LESSEQUAL = 30 +-GREATEREQUAL = 31 +-TILDE = 32 +-CIRCUMFLEX = 33 +-LEFTSHIFT = 34 +-RIGHTSHIFT = 35 +-DOUBLESTAR = 36 +-PLUSEQUAL = 37 +-MINEQUAL = 38 +-STAREQUAL = 39 +-SLASHEQUAL = 40 +-PERCENTEQUAL = 41 +-AMPEREQUAL = 42 +-VBAREQUAL = 43 +-CIRCUMFLEXEQUAL = 44 +-LEFTSHIFTEQUAL = 45 +-RIGHTSHIFTEQUAL = 46 +-DOUBLESTAREQUAL = 47 +-DOUBLESLASH = 48 +-DOUBLESLASHEQUAL = 49 +-AT = 50 +-OP = 51 +-COMMENT = 52 +-NL = 53 +-RARROW = 54 +-ERRORTOKEN = 55 +-N_TOKENS = 56 +-NT_OFFSET = 256 +-#--end constants-- +- +-tok_name = {} +-for _name, _value in globals().items(): +- if type(_value) is type(0): +- tok_name[_value] = _name +- +- +-def ISTERMINAL(x): +- return x < NT_OFFSET +- +-def ISNONTERMINAL(x): +- return x >= NT_OFFSET +- +-def ISEOF(x): +- return x == ENDMARKER +diff -r 531f2e948299 lib2to3/pgen2/.svn/text-base/tokenize.py.svn-base +--- a/lib2to3/pgen2/.svn/text-base/tokenize.py.svn-base Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,405 +0,0 @@ +-# Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006 Python Software Foundation. +-# All rights reserved. +- +-"""Tokenization help for Python programs. +- +-generate_tokens(readline) is a generator that breaks a stream of +-text into Python tokens. It accepts a readline-like method which is called +-repeatedly to get the next line of input (or "" for EOF). It generates +-5-tuples with these members: +- +- the token type (see token.py) +- the token (a string) +- the starting (row, column) indices of the token (a 2-tuple of ints) +- the ending (row, column) indices of the token (a 2-tuple of ints) +- the original line (string) +- +-It is designed to match the working of the Python tokenizer exactly, except +-that it produces COMMENT tokens for comments and gives type OP for all +-operators +- +-Older entry points +- tokenize_loop(readline, tokeneater) +- tokenize(readline, tokeneater=printtoken) +-are the same, except instead of generating tokens, tokeneater is a callback +-function to which the 5 fields described above are passed as 5 arguments, +-each time a new token is found.""" +- +-__author__ = 'Ka-Ping Yee ' +-__credits__ = \ +- 'GvR, ESR, Tim Peters, Thomas Wouters, Fred Drake, Skip Montanaro' +- +-import string, re +-from lib2to3.pgen2.token import * +- +-from . 
import token +-__all__ = [x for x in dir(token) if x[0] != '_'] + ["tokenize", +- "generate_tokens", "untokenize"] +-del token +- +-def group(*choices): return '(' + '|'.join(choices) + ')' +-def any(*choices): return group(*choices) + '*' +-def maybe(*choices): return group(*choices) + '?' +- +-Whitespace = r'[ \f\t]*' +-Comment = r'#[^\r\n]*' +-Ignore = Whitespace + any(r'\\\r?\n' + Whitespace) + maybe(Comment) +-Name = r'[a-zA-Z_]\w*' +- +-Binnumber = r'0[bB][01]*' +-Hexnumber = r'0[xX][\da-fA-F]*[lL]?' +-Octnumber = r'0[oO]?[0-7]*[lL]?' +-Decnumber = r'[1-9]\d*[lL]?' +-Intnumber = group(Binnumber, Hexnumber, Octnumber, Decnumber) +-Exponent = r'[eE][-+]?\d+' +-Pointfloat = group(r'\d+\.\d*', r'\.\d+') + maybe(Exponent) +-Expfloat = r'\d+' + Exponent +-Floatnumber = group(Pointfloat, Expfloat) +-Imagnumber = group(r'\d+[jJ]', Floatnumber + r'[jJ]') +-Number = group(Imagnumber, Floatnumber, Intnumber) +- +-# Tail end of ' string. +-Single = r"[^'\\]*(?:\\.[^'\\]*)*'" +-# Tail end of " string. +-Double = r'[^"\\]*(?:\\.[^"\\]*)*"' +-# Tail end of ''' string. +-Single3 = r"[^'\\]*(?:(?:\\.|'(?!''))[^'\\]*)*'''" +-# Tail end of """ string. +-Double3 = r'[^"\\]*(?:(?:\\.|"(?!""))[^"\\]*)*"""' +-Triple = group("[ubUB]?[rR]?'''", '[ubUB]?[rR]?"""') +-# Single-line ' or " string. +-String = group(r"[uU]?[rR]?'[^\n'\\]*(?:\\.[^\n'\\]*)*'", +- r'[uU]?[rR]?"[^\n"\\]*(?:\\.[^\n"\\]*)*"') +- +-# Because of leftmost-then-longest match semantics, be sure to put the +-# longest operators first (e.g., if = came before ==, == would get +-# recognized as two instances of =). +-Operator = group(r"\*\*=?", r">>=?", r"<<=?", r"<>", r"!=", +- r"//=?", r"->", +- r"[+\-*/%&|^=<>]=?", +- r"~") +- +-Bracket = '[][(){}]' +-Special = group(r'\r?\n', r'[:;.,`@]') +-Funny = group(Operator, Bracket, Special) +- +-PlainToken = group(Number, Funny, String, Name) +-Token = Ignore + PlainToken +- +-# First (or only) line of ' or " string. 
+-ContStr = group(r"[uUbB]?[rR]?'[^\n'\\]*(?:\\.[^\n'\\]*)*" + +- group("'", r'\\\r?\n'), +- r'[uUbB]?[rR]?"[^\n"\\]*(?:\\.[^\n"\\]*)*' + +- group('"', r'\\\r?\n')) +-PseudoExtras = group(r'\\\r?\n', Comment, Triple) +-PseudoToken = Whitespace + group(PseudoExtras, Number, Funny, ContStr, Name) +- +-tokenprog, pseudoprog, single3prog, double3prog = map( +- re.compile, (Token, PseudoToken, Single3, Double3)) +-endprogs = {"'": re.compile(Single), '"': re.compile(Double), +- "'''": single3prog, '"""': double3prog, +- "r'''": single3prog, 'r"""': double3prog, +- "u'''": single3prog, 'u"""': double3prog, +- "b'''": single3prog, 'b"""': double3prog, +- "ur'''": single3prog, 'ur"""': double3prog, +- "br'''": single3prog, 'br"""': double3prog, +- "R'''": single3prog, 'R"""': double3prog, +- "U'''": single3prog, 'U"""': double3prog, +- "B'''": single3prog, 'B"""': double3prog, +- "uR'''": single3prog, 'uR"""': double3prog, +- "Ur'''": single3prog, 'Ur"""': double3prog, +- "UR'''": single3prog, 'UR"""': double3prog, +- "bR'''": single3prog, 'bR"""': double3prog, +- "Br'''": single3prog, 'Br"""': double3prog, +- "BR'''": single3prog, 'BR"""': double3prog, +- 'r': None, 'R': None, +- 'u': None, 'U': None, +- 'b': None, 'B': None} +- +-triple_quoted = {} +-for t in ("'''", '"""', +- "r'''", 'r"""', "R'''", 'R"""', +- "u'''", 'u"""', "U'''", 'U"""', +- "b'''", 'b"""', "B'''", 'B"""', +- "ur'''", 'ur"""', "Ur'''", 'Ur"""', +- "uR'''", 'uR"""', "UR'''", 'UR"""', +- "br'''", 'br"""', "Br'''", 'Br"""', +- "bR'''", 'bR"""', "BR'''", 'BR"""',): +- triple_quoted[t] = t +-single_quoted = {} +-for t in ("'", '"', +- "r'", 'r"', "R'", 'R"', +- "u'", 'u"', "U'", 'U"', +- "b'", 'b"', "B'", 'B"', +- "ur'", 'ur"', "Ur'", 'Ur"', +- "uR'", 'uR"', "UR'", 'UR"', +- "br'", 'br"', "Br'", 'Br"', +- "bR'", 'bR"', "BR'", 'BR"', ): +- single_quoted[t] = t +- +-tabsize = 8 +- +-class TokenError(Exception): pass +- +-class StopTokenizing(Exception): pass +- +-def printtoken(type, token, (srow, scol), (erow, ecol), line): # for testing +- print "%d,%d-%d,%d:\t%s\t%s" % \ +- (srow, scol, erow, ecol, tok_name[type], repr(token)) +- +-def tokenize(readline, tokeneater=printtoken): +- """ +- The tokenize() function accepts two parameters: one representing the +- input stream, and one providing an output mechanism for tokenize(). +- +- The first parameter, readline, must be a callable object which provides +- the same interface as the readline() method of built-in file objects. +- Each call to the function should return one line of input as a string. +- +- The second parameter, tokeneater, must also be a callable object. It is +- called once for each token, with five arguments, corresponding to the +- tuples generated by generate_tokens(). 
+- """ +- try: +- tokenize_loop(readline, tokeneater) +- except StopTokenizing: +- pass +- +-# backwards compatible interface +-def tokenize_loop(readline, tokeneater): +- for token_info in generate_tokens(readline): +- tokeneater(*token_info) +- +-class Untokenizer: +- +- def __init__(self): +- self.tokens = [] +- self.prev_row = 1 +- self.prev_col = 0 +- +- def add_whitespace(self, start): +- row, col = start +- assert row <= self.prev_row +- col_offset = col - self.prev_col +- if col_offset: +- self.tokens.append(" " * col_offset) +- +- def untokenize(self, iterable): +- for t in iterable: +- if len(t) == 2: +- self.compat(t, iterable) +- break +- tok_type, token, start, end, line = t +- self.add_whitespace(start) +- self.tokens.append(token) +- self.prev_row, self.prev_col = end +- if tok_type in (NEWLINE, NL): +- self.prev_row += 1 +- self.prev_col = 0 +- return "".join(self.tokens) +- +- def compat(self, token, iterable): +- startline = False +- indents = [] +- toks_append = self.tokens.append +- toknum, tokval = token +- if toknum in (NAME, NUMBER): +- tokval += ' ' +- if toknum in (NEWLINE, NL): +- startline = True +- for tok in iterable: +- toknum, tokval = tok[:2] +- +- if toknum in (NAME, NUMBER): +- tokval += ' ' +- +- if toknum == INDENT: +- indents.append(tokval) +- continue +- elif toknum == DEDENT: +- indents.pop() +- continue +- elif toknum in (NEWLINE, NL): +- startline = True +- elif startline and indents: +- toks_append(indents[-1]) +- startline = False +- toks_append(tokval) +- +-def untokenize(iterable): +- """Transform tokens back into Python source code. +- +- Each element returned by the iterable must be a token sequence +- with at least two elements, a token number and token value. If +- only two tokens are passed, the resulting output is poor. +- +- Round-trip invariant for full input: +- Untokenized source will match input source exactly +- +- Round-trip invariant for limited intput: +- # Output text will tokenize the back to the input +- t1 = [tok[:2] for tok in generate_tokens(f.readline)] +- newcode = untokenize(t1) +- readline = iter(newcode.splitlines(1)).next +- t2 = [tok[:2] for tokin generate_tokens(readline)] +- assert t1 == t2 +- """ +- ut = Untokenizer() +- return ut.untokenize(iterable) +- +-def generate_tokens(readline): +- """ +- The generate_tokens() generator requires one argment, readline, which +- must be a callable object which provides the same interface as the +- readline() method of built-in file objects. Each call to the function +- should return one line of input as a string. Alternately, readline +- can be a callable function terminating with StopIteration: +- readline = open(myfile).next # Example of alternate readline +- +- The generator produces 5-tuples with these members: the token type; the +- token string; a 2-tuple (srow, scol) of ints specifying the row and +- column where the token begins in the source; a 2-tuple (erow, ecol) of +- ints specifying the row and column where the token ends in the source; +- and the line on which the token was found. The line passed is the +- logical line; continuation lines are included. 
+- """ +- lnum = parenlev = continued = 0 +- namechars, numchars = string.ascii_letters + '_', '0123456789' +- contstr, needcont = '', 0 +- contline = None +- indents = [0] +- +- while 1: # loop over lines in stream +- try: +- line = readline() +- except StopIteration: +- line = '' +- lnum = lnum + 1 +- pos, max = 0, len(line) +- +- if contstr: # continued string +- if not line: +- raise TokenError, ("EOF in multi-line string", strstart) +- endmatch = endprog.match(line) +- if endmatch: +- pos = end = endmatch.end(0) +- yield (STRING, contstr + line[:end], +- strstart, (lnum, end), contline + line) +- contstr, needcont = '', 0 +- contline = None +- elif needcont and line[-2:] != '\\\n' and line[-3:] != '\\\r\n': +- yield (ERRORTOKEN, contstr + line, +- strstart, (lnum, len(line)), contline) +- contstr = '' +- contline = None +- continue +- else: +- contstr = contstr + line +- contline = contline + line +- continue +- +- elif parenlev == 0 and not continued: # new statement +- if not line: break +- column = 0 +- while pos < max: # measure leading whitespace +- if line[pos] == ' ': column = column + 1 +- elif line[pos] == '\t': column = (column/tabsize + 1)*tabsize +- elif line[pos] == '\f': column = 0 +- else: break +- pos = pos + 1 +- if pos == max: break +- +- if line[pos] in '#\r\n': # skip comments or blank lines +- if line[pos] == '#': +- comment_token = line[pos:].rstrip('\r\n') +- nl_pos = pos + len(comment_token) +- yield (COMMENT, comment_token, +- (lnum, pos), (lnum, pos + len(comment_token)), line) +- yield (NL, line[nl_pos:], +- (lnum, nl_pos), (lnum, len(line)), line) +- else: +- yield ((NL, COMMENT)[line[pos] == '#'], line[pos:], +- (lnum, pos), (lnum, len(line)), line) +- continue +- +- if column > indents[-1]: # count indents or dedents +- indents.append(column) +- yield (INDENT, line[:pos], (lnum, 0), (lnum, pos), line) +- while column < indents[-1]: +- if column not in indents: +- raise IndentationError( +- "unindent does not match any outer indentation level", +- ("", lnum, pos, line)) +- indents = indents[:-1] +- yield (DEDENT, '', (lnum, pos), (lnum, pos), line) +- +- else: # continued statement +- if not line: +- raise TokenError, ("EOF in multi-line statement", (lnum, 0)) +- continued = 0 +- +- while pos < max: +- pseudomatch = pseudoprog.match(line, pos) +- if pseudomatch: # scan for tokens +- start, end = pseudomatch.span(1) +- spos, epos, pos = (lnum, start), (lnum, end), end +- token, initial = line[start:end], line[start] +- +- if initial in numchars or \ +- (initial == '.' 
and token != '.'): # ordinary number +- yield (NUMBER, token, spos, epos, line) +- elif initial in '\r\n': +- newline = NEWLINE +- if parenlev > 0: +- newline = NL +- yield (newline, token, spos, epos, line) +- elif initial == '#': +- assert not token.endswith("\n") +- yield (COMMENT, token, spos, epos, line) +- elif token in triple_quoted: +- endprog = endprogs[token] +- endmatch = endprog.match(line, pos) +- if endmatch: # all on one line +- pos = endmatch.end(0) +- token = line[start:pos] +- yield (STRING, token, spos, (lnum, pos), line) +- else: +- strstart = (lnum, start) # multiple lines +- contstr = line[start:] +- contline = line +- break +- elif initial in single_quoted or \ +- token[:2] in single_quoted or \ +- token[:3] in single_quoted: +- if token[-1] == '\n': # continued string +- strstart = (lnum, start) +- endprog = (endprogs[initial] or endprogs[token[1]] or +- endprogs[token[2]]) +- contstr, needcont = line[start:], 1 +- contline = line +- break +- else: # ordinary string +- yield (STRING, token, spos, epos, line) +- elif initial in namechars: # ordinary name +- yield (NAME, token, spos, epos, line) +- elif initial == '\\': # continued stmt +- # This yield is new; needed for better idempotency: +- yield (NL, token, spos, (lnum, pos), line) +- continued = 1 +- else: +- if initial in '([{': parenlev = parenlev + 1 +- elif initial in ')]}': parenlev = parenlev - 1 +- yield (OP, token, spos, epos, line) +- else: +- yield (ERRORTOKEN, line[pos], +- (lnum, pos), (lnum, pos+1), line) +- pos = pos + 1 +- +- for indent in indents[1:]: # pop remaining indent levels +- yield (DEDENT, '', (lnum, 0), (lnum, 0), '') +- yield (ENDMARKER, '', (lnum, 0), (lnum, 0), '') +- +-if __name__ == '__main__': # testing +- import sys +- if len(sys.argv) > 1: tokenize(open(sys.argv[1]).readline) +- else: tokenize(sys.stdin.readline) +diff -r 531f2e948299 lib2to3/pgen2/__init__.py +--- a/lib2to3/pgen2/__init__.py Mon Mar 30 20:02:09 2009 -0500 ++++ b/lib2to3/pgen2/__init__.py Wed Apr 01 13:59:47 2009 -0500 +@@ -1,4 +1,1 @@ +-# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved. +-# Licensed to PSF under a Contributor Agreement. +- +-"""The pgen2 package.""" ++from refactor.pgen2 import * +diff -r 531f2e948299 lib2to3/pgen2/conv.py +--- a/lib2to3/pgen2/conv.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,257 +0,0 @@ +-# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved. +-# Licensed to PSF under a Contributor Agreement. +- +-"""Convert graminit.[ch] spit out by pgen to Python code. +- +-Pgen is the Python parser generator. It is useful to quickly create a +-parser from a grammar file in Python's grammar notation. But I don't +-want my parsers to be written in C (yet), so I'm translating the +-parsing tables to Python data structures and writing a Python parse +-engine. +- +-Note that the token numbers are constants determined by the standard +-Python tokenizer. The standard token module defines these numbers and +-their names (the names are not used much). The token numbers are +-hardcoded into the Python tokenizer and into pgen. A Python +-implementation of the Python tokenizer is also available, in the +-standard tokenize module. +- +-On the other hand, symbol numbers (representing the grammar's +-non-terminals) are assigned by pgen based on the actual grammar +-input. 
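The docstring above draws the line between token numbers (fixed by the tokenizer, below 256) and symbol numbers (assigned by pgen, 256 and up); the ISTERMINAL()/ISNONTERMINAL() helpers in pgen2/token.py, removed later in this patch, encode exactly that split. A minimal sketch, assuming the pre-removal lib2to3.pgen2 package is still importable; the symbol number 257 is a made-up example:

    from lib2to3.pgen2 import token

    print token.ISTERMINAL(token.NAME)     # True: NAME == 1, a token number
    print token.ISNONTERMINAL(token.NAME)  # False
    print token.ISNONTERMINAL(257)         # True: 257 would be a grammar symbol
    print token.ISEOF(token.ENDMARKER)     # True: ENDMARKER == 0 ends the input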
+-
+-Note: this module is pretty much obsolete; the pgen module generates
+-equivalent grammar tables directly from the Grammar.txt input file
+-without having to invoke the Python pgen C program.
+-
+-"""
+-
+-# Python imports
+-import re
+-
+-# Local imports
+-from pgen2 import grammar, token
+-
+-
+-class Converter(grammar.Grammar):
+-    """Grammar subclass that reads classic pgen output files.
+-
+-    The run() method reads the tables as produced by the pgen parser
+-    generator, typically contained in two C files, graminit.h and
+-    graminit.c.  The other methods are for internal use only.
+-
+-    See the base class for more documentation.
+-
+-    """
+-
+-    def run(self, graminit_h, graminit_c):
+-        """Load the grammar tables from the text files written by pgen."""
+-        self.parse_graminit_h(graminit_h)
+-        self.parse_graminit_c(graminit_c)
+-        self.finish_off()
+-
+-    def parse_graminit_h(self, filename):
+-        """Parse the .h file written by pgen.  (Internal)
+-
+-        This file is a sequence of #define statements defining the
+-        nonterminals of the grammar as numbers.  We build two tables
+-        mapping the numbers to names and back.
+-
+-        """
+-        try:
+-            f = open(filename)
+-        except IOError, err:
+-            print "Can't open %s: %s" % (filename, err)
+-            return False
+-        self.symbol2number = {}
+-        self.number2symbol = {}
+-        lineno = 0
+-        for line in f:
+-            lineno += 1
+-            mo = re.match(r"^#define\s+(\w+)\s+(\d+)$", line)
+-            if not mo and line.strip():
+-                print "%s(%s): can't parse %s" % (filename, lineno,
+-                                                  line.strip())
+-            else:
+-                symbol, number = mo.groups()
+-                number = int(number)
+-                assert symbol not in self.symbol2number
+-                assert number not in self.number2symbol
+-                self.symbol2number[symbol] = number
+-                self.number2symbol[number] = symbol
+-        return True
+-
+-    def parse_graminit_c(self, filename):
+-        """Parse the .c file written by pgen.  (Internal)
+-
+-        The file looks as follows.  The first two lines are always this:
+-
+-        #include "pgenheaders.h"
+-        #include "grammar.h"
+-
+-        After that come four blocks:
+-
+-        1) one or more state definitions
+-        2) a table defining dfas
+-        3) a table defining labels
+-        4) a struct defining the grammar
+-
+-        A state definition has the following form:
+-        - one or more arc arrays, each of the form:
+-          static arc arcs_<n>_<m>[<k>] = {
+-              {<i>, <j>},
+-              ...
+-          };
+-        - followed by a state array, of the form:
+-          static state states_<s>[<t>] = {
+-              {<k>, arcs_<n>_<m>},
+-              ...
+-          };
+-
+-        """
+-        try:
+-            f = open(filename)
+-        except IOError, err:
+-            print "Can't open %s: %s" % (filename, err)
+-            return False
+-        # The code below essentially uses f's iterator-ness!
+- lineno = 0 +- +- # Expect the two #include lines +- lineno, line = lineno+1, f.next() +- assert line == '#include "pgenheaders.h"\n', (lineno, line) +- lineno, line = lineno+1, f.next() +- assert line == '#include "grammar.h"\n', (lineno, line) +- +- # Parse the state definitions +- lineno, line = lineno+1, f.next() +- allarcs = {} +- states = [] +- while line.startswith("static arc "): +- while line.startswith("static arc "): +- mo = re.match(r"static arc arcs_(\d+)_(\d+)\[(\d+)\] = {$", +- line) +- assert mo, (lineno, line) +- n, m, k = map(int, mo.groups()) +- arcs = [] +- for _ in range(k): +- lineno, line = lineno+1, f.next() +- mo = re.match(r"\s+{(\d+), (\d+)},$", line) +- assert mo, (lineno, line) +- i, j = map(int, mo.groups()) +- arcs.append((i, j)) +- lineno, line = lineno+1, f.next() +- assert line == "};\n", (lineno, line) +- allarcs[(n, m)] = arcs +- lineno, line = lineno+1, f.next() +- mo = re.match(r"static state states_(\d+)\[(\d+)\] = {$", line) +- assert mo, (lineno, line) +- s, t = map(int, mo.groups()) +- assert s == len(states), (lineno, line) +- state = [] +- for _ in range(t): +- lineno, line = lineno+1, f.next() +- mo = re.match(r"\s+{(\d+), arcs_(\d+)_(\d+)},$", line) +- assert mo, (lineno, line) +- k, n, m = map(int, mo.groups()) +- arcs = allarcs[n, m] +- assert k == len(arcs), (lineno, line) +- state.append(arcs) +- states.append(state) +- lineno, line = lineno+1, f.next() +- assert line == "};\n", (lineno, line) +- lineno, line = lineno+1, f.next() +- self.states = states +- +- # Parse the dfas +- dfas = {} +- mo = re.match(r"static dfa dfas\[(\d+)\] = {$", line) +- assert mo, (lineno, line) +- ndfas = int(mo.group(1)) +- for i in range(ndfas): +- lineno, line = lineno+1, f.next() +- mo = re.match(r'\s+{(\d+), "(\w+)", (\d+), (\d+), states_(\d+),$', +- line) +- assert mo, (lineno, line) +- symbol = mo.group(2) +- number, x, y, z = map(int, mo.group(1, 3, 4, 5)) +- assert self.symbol2number[symbol] == number, (lineno, line) +- assert self.number2symbol[number] == symbol, (lineno, line) +- assert x == 0, (lineno, line) +- state = states[z] +- assert y == len(state), (lineno, line) +- lineno, line = lineno+1, f.next() +- mo = re.match(r'\s+("(?:\\\d\d\d)*")},$', line) +- assert mo, (lineno, line) +- first = {} +- rawbitset = eval(mo.group(1)) +- for i, c in enumerate(rawbitset): +- byte = ord(c) +- for j in range(8): +- if byte & (1<= os.path.getmtime(b) +diff -r 531f2e948299 lib2to3/pgen2/grammar.py +--- a/lib2to3/pgen2/grammar.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,171 +0,0 @@ +-# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved. +-# Licensed to PSF under a Contributor Agreement. +- +-"""This module defines the data structures used to represent a grammar. +- +-These are a bit arcane because they are derived from the data +-structures used by Python's 'pgen' parser generator. +- +-There's also a table here mapping operators to their names in the +-token module; the Python tokenize module reports all operators as the +-fallback token code OP, but the parser needs the actual token code. +- +-""" +- +-# Python imports +-import pickle +- +-# Local imports +-from . import token, tokenize +- +- +-class Grammar(object): +- """Pgen parsing tables tables conversion class. +- +- Once initialized, this class supplies the grammar tables for the +- parsing engine implemented by parse.py. The parsing engine +- accesses the instance variables directly. 
The class here does not +- provide initialization of the tables; several subclasses exist to +- do this (see the conv and pgen modules). +- +- The load() method reads the tables from a pickle file, which is +- much faster than the other ways offered by subclasses. The pickle +- file is written by calling dump() (after loading the grammar +- tables using a subclass). The report() method prints a readable +- representation of the tables to stdout, for debugging. +- +- The instance variables are as follows: +- +- symbol2number -- a dict mapping symbol names to numbers. Symbol +- numbers are always 256 or higher, to distinguish +- them from token numbers, which are between 0 and +- 255 (inclusive). +- +- number2symbol -- a dict mapping numbers to symbol names; +- these two are each other's inverse. +- +- states -- a list of DFAs, where each DFA is a list of +- states, each state is is a list of arcs, and each +- arc is a (i, j) pair where i is a label and j is +- a state number. The DFA number is the index into +- this list. (This name is slightly confusing.) +- Final states are represented by a special arc of +- the form (0, j) where j is its own state number. +- +- dfas -- a dict mapping symbol numbers to (DFA, first) +- pairs, where DFA is an item from the states list +- above, and first is a set of tokens that can +- begin this grammar rule (represented by a dict +- whose values are always 1). +- +- labels -- a list of (x, y) pairs where x is either a token +- number or a symbol number, and y is either None +- or a string; the strings are keywords. The label +- number is the index in this list; label numbers +- are used to mark state transitions (arcs) in the +- DFAs. +- +- start -- the number of the grammar's start symbol. +- +- keywords -- a dict mapping keyword strings to arc labels. +- +- tokens -- a dict mapping token numbers to arc labels. +- +- """ +- +- def __init__(self): +- self.symbol2number = {} +- self.number2symbol = {} +- self.states = [] +- self.dfas = {} +- self.labels = [(0, "EMPTY")] +- self.keywords = {} +- self.tokens = {} +- self.symbol2label = {} +- self.start = 256 +- +- def dump(self, filename): +- """Dump the grammar tables to a pickle file.""" +- f = open(filename, "wb") +- pickle.dump(self.__dict__, f, 2) +- f.close() +- +- def load(self, filename): +- """Load the grammar tables from a pickle file.""" +- f = open(filename, "rb") +- d = pickle.load(f) +- f.close() +- self.__dict__.update(d) +- +- def report(self): +- """Dump the grammar tables to standard output, for debugging.""" +- from pprint import pprint +- print "s2n" +- pprint(self.symbol2number) +- print "n2s" +- pprint(self.number2symbol) +- print "states" +- pprint(self.states) +- print "dfas" +- pprint(self.dfas) +- print "labels" +- pprint(self.labels) +- print "start", self.start +- +- +-# Map from operator to number (since tokenize doesn't do this) +- +-opmap_raw = """ +-( LPAR +-) RPAR +-[ LSQB +-] RSQB +-: COLON +-, COMMA +-; SEMI +-+ PLUS +-- MINUS +-* STAR +-/ SLASH +-| VBAR +-& AMPER +-< LESS +-> GREATER +-= EQUAL +-. 
DOT +-% PERCENT +-` BACKQUOTE +-{ LBRACE +-} RBRACE +-@ AT +-== EQEQUAL +-!= NOTEQUAL +-<> NOTEQUAL +-<= LESSEQUAL +->= GREATEREQUAL +-~ TILDE +-^ CIRCUMFLEX +-<< LEFTSHIFT +->> RIGHTSHIFT +-** DOUBLESTAR +-+= PLUSEQUAL +--= MINEQUAL +-*= STAREQUAL +-/= SLASHEQUAL +-%= PERCENTEQUAL +-&= AMPEREQUAL +-|= VBAREQUAL +-^= CIRCUMFLEXEQUAL +-<<= LEFTSHIFTEQUAL +->>= RIGHTSHIFTEQUAL +-**= DOUBLESTAREQUAL +-// DOUBLESLASH +-//= DOUBLESLASHEQUAL +--> RARROW +-""" +- +-opmap = {} +-for line in opmap_raw.splitlines(): +- if line: +- op, name = line.split() +- opmap[op] = getattr(token, name) +diff -r 531f2e948299 lib2to3/pgen2/literals.py +--- a/lib2to3/pgen2/literals.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,60 +0,0 @@ +-# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved. +-# Licensed to PSF under a Contributor Agreement. +- +-"""Safely evaluate Python string literals without using eval().""" +- +-import re +- +-simple_escapes = {"a": "\a", +- "b": "\b", +- "f": "\f", +- "n": "\n", +- "r": "\r", +- "t": "\t", +- "v": "\v", +- "'": "'", +- '"': '"', +- "\\": "\\"} +- +-def escape(m): +- all, tail = m.group(0, 1) +- assert all.startswith("\\") +- esc = simple_escapes.get(tail) +- if esc is not None: +- return esc +- if tail.startswith("x"): +- hexes = tail[1:] +- if len(hexes) < 2: +- raise ValueError("invalid hex string escape ('\\%s')" % tail) +- try: +- i = int(hexes, 16) +- except ValueError: +- raise ValueError("invalid hex string escape ('\\%s')" % tail) +- else: +- try: +- i = int(tail, 8) +- except ValueError: +- raise ValueError("invalid octal string escape ('\\%s')" % tail) +- return chr(i) +- +-def evalString(s): +- assert s.startswith("'") or s.startswith('"'), repr(s[:1]) +- q = s[0] +- if s[:3] == q*3: +- q = q*3 +- assert s.endswith(q), repr(s[-len(q):]) +- assert len(s) >= 2*len(q) +- s = s[len(q):-len(q)] +- return re.sub(r"\\(\'|\"|\\|[abfnrtv]|x.{0,2}|[0-7]{1,3})", escape, s) +- +-def test(): +- for i in range(256): +- c = chr(i) +- s = repr(c) +- e = evalString(s) +- if e != c: +- print i, c, s, e +- +- +-if __name__ == "__main__": +- test() +diff -r 531f2e948299 lib2to3/pgen2/parse.py +--- a/lib2to3/pgen2/parse.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,201 +0,0 @@ +-# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved. +-# Licensed to PSF under a Contributor Agreement. +- +-"""Parser engine for the grammar tables generated by pgen. +- +-The grammar table must be loaded first. +- +-See Parser/parser.c in the Python distribution for additional info on +-how this parsing engine works. +- +-""" +- +-# Local imports +-from . import token +- +-class ParseError(Exception): +- """Exception to signal the parser is stuck.""" +- +- def __init__(self, msg, type, value, context): +- Exception.__init__(self, "%s: type=%r, value=%r, context=%r" % +- (msg, type, value, context)) +- self.msg = msg +- self.type = type +- self.value = value +- self.context = context +- +-class Parser(object): +- """Parser engine. +- +- The proper usage sequence is: +- +- p = Parser(grammar, [converter]) # create instance +- p.setup([start]) # prepare for parsing +- : +- if p.addtoken(...): # parse a token; may raise ParseError +- break +- root = p.rootnode # root of abstract syntax tree +- +- A Parser instance may be reused by calling setup() repeatedly. 
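A runnable version of the usage sequence quoted above, assuming the pre-removal lib2to3.pgen2 modules; the grammar file name, the input file name, and the toy Node/convert pair are illustrative stand-ins, not what lib2to3 itself layers on top (raw nodes are (type, value, context, children) tuples, as documented in the constructor docstring below):

    from lib2to3.pgen2 import pgen, tokenize
    from lib2to3.pgen2.parse import Parser

    class Node(object):
        # minimal stand-in for a richer syntax-tree node class
        def __init__(self, type, value, context, children):
            self.type, self.value, self.children = type, value, children

    def convert(grammar, raw_node):
        # called bottom-up for every finished node
        type, value, context, children = raw_node
        return Node(type, value, context, children)

    grammar = pgen.generate_grammar("Grammar.txt")   # illustrative grammar file
    p = Parser(grammar, convert)
    p.setup()
    stream = open("example.py")                      # illustrative input file
    for type, value, start, end, line in \
            tokenize.generate_tokens(stream.readline):
        if type in (tokenize.COMMENT, tokenize.NL):
            continue             # the grammar has no labels for these
        if p.addtoken(type, value, start):
            break                # ENDMARKER shifted; parsing is complete
    root = p.rootnode            # converted tree, built bottom-up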
+- +- A Parser instance contains state pertaining to the current token +- sequence, and should not be used concurrently by different threads +- to parse separate token sequences. +- +- See driver.py for how to get input tokens by tokenizing a file or +- string. +- +- Parsing is complete when addtoken() returns True; the root of the +- abstract syntax tree can then be retrieved from the rootnode +- instance variable. When a syntax error occurs, addtoken() raises +- the ParseError exception. There is no error recovery; the parser +- cannot be used after a syntax error was reported (but it can be +- reinitialized by calling setup()). +- +- """ +- +- def __init__(self, grammar, convert=None): +- """Constructor. +- +- The grammar argument is a grammar.Grammar instance; see the +- grammar module for more information. +- +- The parser is not ready yet for parsing; you must call the +- setup() method to get it started. +- +- The optional convert argument is a function mapping concrete +- syntax tree nodes to abstract syntax tree nodes. If not +- given, no conversion is done and the syntax tree produced is +- the concrete syntax tree. If given, it must be a function of +- two arguments, the first being the grammar (a grammar.Grammar +- instance), and the second being the concrete syntax tree node +- to be converted. The syntax tree is converted from the bottom +- up. +- +- A concrete syntax tree node is a (type, value, context, nodes) +- tuple, where type is the node type (a token or symbol number), +- value is None for symbols and a string for tokens, context is +- None or an opaque value used for error reporting (typically a +- (lineno, offset) pair), and nodes is a list of children for +- symbols, and None for tokens. +- +- An abstract syntax tree node may be anything; this is entirely +- up to the converter function. +- +- """ +- self.grammar = grammar +- self.convert = convert or (lambda grammar, node: node) +- +- def setup(self, start=None): +- """Prepare for parsing. +- +- This *must* be called before starting to parse. +- +- The optional argument is an alternative start symbol; it +- defaults to the grammar's start symbol. +- +- You can use a Parser instance to parse any number of programs; +- each time you call setup() the parser is reset to an initial +- state determined by the (implicit or explicit) start symbol. +- +- """ +- if start is None: +- start = self.grammar.start +- # Each stack entry is a tuple: (dfa, state, node). +- # A node is a tuple: (type, value, context, children), +- # where children is a list of nodes or None, and context may be None. 
+- newnode = (start, None, None, []) +- stackentry = (self.grammar.dfas[start], 0, newnode) +- self.stack = [stackentry] +- self.rootnode = None +- self.used_names = set() # Aliased to self.rootnode.used_names in pop() +- +- def addtoken(self, type, value, context): +- """Add a token; return True iff this is the end of the program.""" +- # Map from token to label +- ilabel = self.classify(type, value, context) +- # Loop until the token is shifted; may raise exceptions +- while True: +- dfa, state, node = self.stack[-1] +- states, first = dfa +- arcs = states[state] +- # Look for a state with this label +- for i, newstate in arcs: +- t, v = self.grammar.labels[i] +- if ilabel == i: +- # Look it up in the list of labels +- assert t < 256 +- # Shift a token; we're done with it +- self.shift(type, value, newstate, context) +- # Pop while we are in an accept-only state +- state = newstate +- while states[state] == [(0, state)]: +- self.pop() +- if not self.stack: +- # Done parsing! +- return True +- dfa, state, node = self.stack[-1] +- states, first = dfa +- # Done with this token +- return False +- elif t >= 256: +- # See if it's a symbol and if we're in its first set +- itsdfa = self.grammar.dfas[t] +- itsstates, itsfirst = itsdfa +- if ilabel in itsfirst: +- # Push a symbol +- self.push(t, self.grammar.dfas[t], newstate, context) +- break # To continue the outer while loop +- else: +- if (0, state) in arcs: +- # An accepting state, pop it and try something else +- self.pop() +- if not self.stack: +- # Done parsing, but another token is input +- raise ParseError("too much input", +- type, value, context) +- else: +- # No success finding a transition +- raise ParseError("bad input", type, value, context) +- +- def classify(self, type, value, context): +- """Turn a token into a label. (Internal)""" +- if type == token.NAME: +- # Keep a listing of all used names +- self.used_names.add(value) +- # Check for reserved words +- ilabel = self.grammar.keywords.get(value) +- if ilabel is not None: +- return ilabel +- ilabel = self.grammar.tokens.get(type) +- if ilabel is None: +- raise ParseError("bad token", type, value, context) +- return ilabel +- +- def shift(self, type, value, newstate, context): +- """Shift a token. (Internal)""" +- dfa, state, node = self.stack[-1] +- newnode = (type, value, context, None) +- newnode = self.convert(self.grammar, newnode) +- if newnode is not None: +- node[-1].append(newnode) +- self.stack[-1] = (dfa, newstate, node) +- +- def push(self, type, newdfa, newstate, context): +- """Push a nonterminal. (Internal)""" +- dfa, state, node = self.stack[-1] +- newnode = (type, None, context, []) +- self.stack[-1] = (dfa, newstate, node) +- self.stack.append((newdfa, 0, newnode)) +- +- def pop(self): +- """Pop a nonterminal. (Internal)""" +- popdfa, popstate, popnode = self.stack.pop() +- newnode = self.convert(self.grammar, popnode) +- if newnode is not None: +- if self.stack: +- dfa, state, node = self.stack[-1] +- node[-1].append(newnode) +- else: +- self.rootnode = newnode +- self.rootnode.used_names = self.used_names +diff -r 531f2e948299 lib2to3/pgen2/pgen.py +--- a/lib2to3/pgen2/pgen.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,384 +0,0 @@ +-# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved. +-# Licensed to PSF under a Contributor Agreement. +- +-# Pgen imports +-from . 
import grammar, token, tokenize +- +-class PgenGrammar(grammar.Grammar): +- pass +- +-class ParserGenerator(object): +- +- def __init__(self, filename, stream=None): +- close_stream = None +- if stream is None: +- stream = open(filename) +- close_stream = stream.close +- self.filename = filename +- self.stream = stream +- self.generator = tokenize.generate_tokens(stream.readline) +- self.gettoken() # Initialize lookahead +- self.dfas, self.startsymbol = self.parse() +- if close_stream is not None: +- close_stream() +- self.first = {} # map from symbol name to set of tokens +- self.addfirstsets() +- +- def make_grammar(self): +- c = PgenGrammar() +- names = self.dfas.keys() +- names.sort() +- names.remove(self.startsymbol) +- names.insert(0, self.startsymbol) +- for name in names: +- i = 256 + len(c.symbol2number) +- c.symbol2number[name] = i +- c.number2symbol[i] = name +- for name in names: +- dfa = self.dfas[name] +- states = [] +- for state in dfa: +- arcs = [] +- for label, next in state.arcs.iteritems(): +- arcs.append((self.make_label(c, label), dfa.index(next))) +- if state.isfinal: +- arcs.append((0, dfa.index(state))) +- states.append(arcs) +- c.states.append(states) +- c.dfas[c.symbol2number[name]] = (states, self.make_first(c, name)) +- c.start = c.symbol2number[self.startsymbol] +- return c +- +- def make_first(self, c, name): +- rawfirst = self.first[name] +- first = {} +- for label in rawfirst: +- ilabel = self.make_label(c, label) +- ##assert ilabel not in first # XXX failed on <> ... != +- first[ilabel] = 1 +- return first +- +- def make_label(self, c, label): +- # XXX Maybe this should be a method on a subclass of converter? +- ilabel = len(c.labels) +- if label[0].isalpha(): +- # Either a symbol name or a named token +- if label in c.symbol2number: +- # A symbol name (a non-terminal) +- if label in c.symbol2label: +- return c.symbol2label[label] +- else: +- c.labels.append((c.symbol2number[label], None)) +- c.symbol2label[label] = ilabel +- return ilabel +- else: +- # A named token (NAME, NUMBER, STRING) +- itoken = getattr(token, label, None) +- assert isinstance(itoken, int), label +- assert itoken in token.tok_name, label +- if itoken in c.tokens: +- return c.tokens[itoken] +- else: +- c.labels.append((itoken, None)) +- c.tokens[itoken] = ilabel +- return ilabel +- else: +- # Either a keyword or an operator +- assert label[0] in ('"', "'"), label +- value = eval(label) +- if value[0].isalpha(): +- # A keyword +- if value in c.keywords: +- return c.keywords[value] +- else: +- c.labels.append((token.NAME, value)) +- c.keywords[value] = ilabel +- return ilabel +- else: +- # An operator (any non-numeric token) +- itoken = grammar.opmap[value] # Fails if unknown token +- if itoken in c.tokens: +- return c.tokens[itoken] +- else: +- c.labels.append((itoken, None)) +- c.tokens[itoken] = ilabel +- return ilabel +- +- def addfirstsets(self): +- names = self.dfas.keys() +- names.sort() +- for name in names: +- if name not in self.first: +- self.calcfirst(name) +- #print name, self.first[name].keys() +- +- def calcfirst(self, name): +- dfa = self.dfas[name] +- self.first[name] = None # dummy to detect left recursion +- state = dfa[0] +- totalset = {} +- overlapcheck = {} +- for label, next in state.arcs.iteritems(): +- if label in self.dfas: +- if label in self.first: +- fset = self.first[label] +- if fset is None: +- raise ValueError("recursion for rule %r" % name) +- else: +- self.calcfirst(label) +- fset = self.first[label] +- totalset.update(fset) +- overlapcheck[label] = 
fset +- else: +- totalset[label] = 1 +- overlapcheck[label] = {label: 1} +- inverse = {} +- for label, itsfirst in overlapcheck.iteritems(): +- for symbol in itsfirst: +- if symbol in inverse: +- raise ValueError("rule %s is ambiguous; %s is in the" +- " first sets of %s as well as %s" % +- (name, symbol, label, inverse[symbol])) +- inverse[symbol] = label +- self.first[name] = totalset +- +- def parse(self): +- dfas = {} +- startsymbol = None +- # MSTART: (NEWLINE | RULE)* ENDMARKER +- while self.type != token.ENDMARKER: +- while self.type == token.NEWLINE: +- self.gettoken() +- # RULE: NAME ':' RHS NEWLINE +- name = self.expect(token.NAME) +- self.expect(token.OP, ":") +- a, z = self.parse_rhs() +- self.expect(token.NEWLINE) +- #self.dump_nfa(name, a, z) +- dfa = self.make_dfa(a, z) +- #self.dump_dfa(name, dfa) +- oldlen = len(dfa) +- self.simplify_dfa(dfa) +- newlen = len(dfa) +- dfas[name] = dfa +- #print name, oldlen, newlen +- if startsymbol is None: +- startsymbol = name +- return dfas, startsymbol +- +- def make_dfa(self, start, finish): +- # To turn an NFA into a DFA, we define the states of the DFA +- # to correspond to *sets* of states of the NFA. Then do some +- # state reduction. Let's represent sets as dicts with 1 for +- # values. +- assert isinstance(start, NFAState) +- assert isinstance(finish, NFAState) +- def closure(state): +- base = {} +- addclosure(state, base) +- return base +- def addclosure(state, base): +- assert isinstance(state, NFAState) +- if state in base: +- return +- base[state] = 1 +- for label, next in state.arcs: +- if label is None: +- addclosure(next, base) +- states = [DFAState(closure(start), finish)] +- for state in states: # NB states grows while we're iterating +- arcs = {} +- for nfastate in state.nfaset: +- for label, next in nfastate.arcs: +- if label is not None: +- addclosure(next, arcs.setdefault(label, {})) +- for label, nfaset in arcs.iteritems(): +- for st in states: +- if st.nfaset == nfaset: +- break +- else: +- st = DFAState(nfaset, finish) +- states.append(st) +- state.addarc(st, label) +- return states # List of DFAState instances; first one is start +- +- def dump_nfa(self, name, start, finish): +- print "Dump of NFA for", name +- todo = [start] +- for i, state in enumerate(todo): +- print " State", i, state is finish and "(final)" or "" +- for label, next in state.arcs: +- if next in todo: +- j = todo.index(next) +- else: +- j = len(todo) +- todo.append(next) +- if label is None: +- print " -> %d" % j +- else: +- print " %s -> %d" % (label, j) +- +- def dump_dfa(self, name, dfa): +- print "Dump of DFA for", name +- for i, state in enumerate(dfa): +- print " State", i, state.isfinal and "(final)" or "" +- for label, next in state.arcs.iteritems(): +- print " %s -> %d" % (label, dfa.index(next)) +- +- def simplify_dfa(self, dfa): +- # This is not theoretically optimal, but works well enough. +- # Algorithm: repeatedly look for two states that have the same +- # set of arcs (same labels pointing to the same nodes) and +- # unify them, until things stop changing. 
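The fixed-point scheme described in the comment above is easier to see on a toy table; a sketch with states as list indices and arcs as plain dicts, purely illustrative rather than the DFAState machinery this module defines:

    def simplify(arcs):
        # arcs[i] maps a label to a target state index; merge any two
        # states with identical arcs, as the comment above describes.
        changes = True
        while changes:
            changes = False
            for i in range(len(arcs)):
                for j in range(i + 1, len(arcs)):
                    if arcs[i] == arcs[j]:
                        del arcs[j]                 # unify state j into i
                        for state in arcs:
                            for label, target in state.items():
                                if target == j:
                                    state[label] = i
                                elif target > j:
                                    state[label] = target - 1
                        changes = True              # rescan from the top
                        break
                if changes:
                    break
        return arcs

    # States 1 and 2 have identical arcs and get merged:
    print simplify([{'a': 1, 'b': 2}, {'c': 0}, {'c': 0}])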
+- +- # dfa is a list of DFAState instances +- changes = True +- while changes: +- changes = False +- for i, state_i in enumerate(dfa): +- for j in range(i+1, len(dfa)): +- state_j = dfa[j] +- if state_i == state_j: +- #print " unify", i, j +- del dfa[j] +- for state in dfa: +- state.unifystate(state_j, state_i) +- changes = True +- break +- +- def parse_rhs(self): +- # RHS: ALT ('|' ALT)* +- a, z = self.parse_alt() +- if self.value != "|": +- return a, z +- else: +- aa = NFAState() +- zz = NFAState() +- aa.addarc(a) +- z.addarc(zz) +- while self.value == "|": +- self.gettoken() +- a, z = self.parse_alt() +- aa.addarc(a) +- z.addarc(zz) +- return aa, zz +- +- def parse_alt(self): +- # ALT: ITEM+ +- a, b = self.parse_item() +- while (self.value in ("(", "[") or +- self.type in (token.NAME, token.STRING)): +- c, d = self.parse_item() +- b.addarc(c) +- b = d +- return a, b +- +- def parse_item(self): +- # ITEM: '[' RHS ']' | ATOM ['+' | '*'] +- if self.value == "[": +- self.gettoken() +- a, z = self.parse_rhs() +- self.expect(token.OP, "]") +- a.addarc(z) +- return a, z +- else: +- a, z = self.parse_atom() +- value = self.value +- if value not in ("+", "*"): +- return a, z +- self.gettoken() +- z.addarc(a) +- if value == "+": +- return a, z +- else: +- return a, a +- +- def parse_atom(self): +- # ATOM: '(' RHS ')' | NAME | STRING +- if self.value == "(": +- self.gettoken() +- a, z = self.parse_rhs() +- self.expect(token.OP, ")") +- return a, z +- elif self.type in (token.NAME, token.STRING): +- a = NFAState() +- z = NFAState() +- a.addarc(z, self.value) +- self.gettoken() +- return a, z +- else: +- self.raise_error("expected (...) or NAME or STRING, got %s/%s", +- self.type, self.value) +- +- def expect(self, type, value=None): +- if self.type != type or (value is not None and self.value != value): +- self.raise_error("expected %s/%s, got %s/%s", +- type, value, self.type, self.value) +- value = self.value +- self.gettoken() +- return value +- +- def gettoken(self): +- tup = self.generator.next() +- while tup[0] in (tokenize.COMMENT, tokenize.NL): +- tup = self.generator.next() +- self.type, self.value, self.begin, self.end, self.line = tup +- #print token.tok_name[self.type], repr(self.value) +- +- def raise_error(self, msg, *args): +- if args: +- try: +- msg = msg % args +- except: +- msg = " ".join([msg] + map(str, args)) +- raise SyntaxError(msg, (self.filename, self.end[0], +- self.end[1], self.line)) +- +-class NFAState(object): +- +- def __init__(self): +- self.arcs = [] # list of (label, NFAState) pairs +- +- def addarc(self, next, label=None): +- assert label is None or isinstance(label, str) +- assert isinstance(next, NFAState) +- self.arcs.append((label, next)) +- +-class DFAState(object): +- +- def __init__(self, nfaset, final): +- assert isinstance(nfaset, dict) +- assert isinstance(iter(nfaset).next(), NFAState) +- assert isinstance(final, NFAState) +- self.nfaset = nfaset +- self.isfinal = final in nfaset +- self.arcs = {} # map from label to DFAState +- +- def addarc(self, next, label): +- assert isinstance(label, str) +- assert label not in self.arcs +- assert isinstance(next, DFAState) +- self.arcs[label] = next +- +- def unifystate(self, old, new): +- for label, next in self.arcs.iteritems(): +- if next is old: +- self.arcs[label] = new +- +- def __eq__(self, other): +- # Equality test -- ignore the nfaset instance variable +- assert isinstance(other, DFAState) +- if self.isfinal != other.isfinal: +- return False +- # Can't just return self.arcs == other.arcs, because that 
+- # would invoke this method recursively, with cycles... +- if len(self.arcs) != len(other.arcs): +- return False +- for label, next in self.arcs.iteritems(): +- if next is not other.arcs.get(label): +- return False +- return True +- +-def generate_grammar(filename="Grammar.txt"): +- p = ParserGenerator(filename) +- return p.make_grammar() +diff -r 531f2e948299 lib2to3/pgen2/token.py +--- a/lib2to3/pgen2/token.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,82 +0,0 @@ +-#! /usr/bin/env python +- +-"""Token constants (from "token.h").""" +- +-# Taken from Python (r53757) and modified to include some tokens +-# originally monkeypatched in by pgen2.tokenize +- +-#--start constants-- +-ENDMARKER = 0 +-NAME = 1 +-NUMBER = 2 +-STRING = 3 +-NEWLINE = 4 +-INDENT = 5 +-DEDENT = 6 +-LPAR = 7 +-RPAR = 8 +-LSQB = 9 +-RSQB = 10 +-COLON = 11 +-COMMA = 12 +-SEMI = 13 +-PLUS = 14 +-MINUS = 15 +-STAR = 16 +-SLASH = 17 +-VBAR = 18 +-AMPER = 19 +-LESS = 20 +-GREATER = 21 +-EQUAL = 22 +-DOT = 23 +-PERCENT = 24 +-BACKQUOTE = 25 +-LBRACE = 26 +-RBRACE = 27 +-EQEQUAL = 28 +-NOTEQUAL = 29 +-LESSEQUAL = 30 +-GREATEREQUAL = 31 +-TILDE = 32 +-CIRCUMFLEX = 33 +-LEFTSHIFT = 34 +-RIGHTSHIFT = 35 +-DOUBLESTAR = 36 +-PLUSEQUAL = 37 +-MINEQUAL = 38 +-STAREQUAL = 39 +-SLASHEQUAL = 40 +-PERCENTEQUAL = 41 +-AMPEREQUAL = 42 +-VBAREQUAL = 43 +-CIRCUMFLEXEQUAL = 44 +-LEFTSHIFTEQUAL = 45 +-RIGHTSHIFTEQUAL = 46 +-DOUBLESTAREQUAL = 47 +-DOUBLESLASH = 48 +-DOUBLESLASHEQUAL = 49 +-AT = 50 +-OP = 51 +-COMMENT = 52 +-NL = 53 +-RARROW = 54 +-ERRORTOKEN = 55 +-N_TOKENS = 56 +-NT_OFFSET = 256 +-#--end constants-- +- +-tok_name = {} +-for _name, _value in globals().items(): +- if type(_value) is type(0): +- tok_name[_value] = _name +- +- +-def ISTERMINAL(x): +- return x < NT_OFFSET +- +-def ISNONTERMINAL(x): +- return x >= NT_OFFSET +- +-def ISEOF(x): +- return x == ENDMARKER +diff -r 531f2e948299 lib2to3/pgen2/tokenize.py +--- a/lib2to3/pgen2/tokenize.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,405 +0,0 @@ +-# Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006 Python Software Foundation. +-# All rights reserved. +- +-"""Tokenization help for Python programs. +- +-generate_tokens(readline) is a generator that breaks a stream of +-text into Python tokens. It accepts a readline-like method which is called +-repeatedly to get the next line of input (or "" for EOF). It generates +-5-tuples with these members: +- +- the token type (see token.py) +- the token (a string) +- the starting (row, column) indices of the token (a 2-tuple of ints) +- the ending (row, column) indices of the token (a 2-tuple of ints) +- the original line (string) +- +-It is designed to match the working of the Python tokenizer exactly, except +-that it produces COMMENT tokens for comments and gives type OP for all +-operators +- +-Older entry points +- tokenize_loop(readline, tokeneater) +- tokenize(readline, tokeneater=printtoken) +-are the same, except instead of generating tokens, tokeneater is a callback +-function to which the 5 fields described above are passed as 5 arguments, +-each time a new token is found.""" +- +-__author__ = 'Ka-Ping Yee ' +-__credits__ = \ +- 'GvR, ESR, Tim Peters, Thomas Wouters, Fred Drake, Skip Montanaro' +- +-import string, re +-from lib2to3.pgen2.token import * +- +-from . 
import token +-__all__ = [x for x in dir(token) if x[0] != '_'] + ["tokenize", +- "generate_tokens", "untokenize"] +-del token +- +-def group(*choices): return '(' + '|'.join(choices) + ')' +-def any(*choices): return group(*choices) + '*' +-def maybe(*choices): return group(*choices) + '?' +- +-Whitespace = r'[ \f\t]*' +-Comment = r'#[^\r\n]*' +-Ignore = Whitespace + any(r'\\\r?\n' + Whitespace) + maybe(Comment) +-Name = r'[a-zA-Z_]\w*' +- +-Binnumber = r'0[bB][01]*' +-Hexnumber = r'0[xX][\da-fA-F]*[lL]?' +-Octnumber = r'0[oO]?[0-7]*[lL]?' +-Decnumber = r'[1-9]\d*[lL]?' +-Intnumber = group(Binnumber, Hexnumber, Octnumber, Decnumber) +-Exponent = r'[eE][-+]?\d+' +-Pointfloat = group(r'\d+\.\d*', r'\.\d+') + maybe(Exponent) +-Expfloat = r'\d+' + Exponent +-Floatnumber = group(Pointfloat, Expfloat) +-Imagnumber = group(r'\d+[jJ]', Floatnumber + r'[jJ]') +-Number = group(Imagnumber, Floatnumber, Intnumber) +- +-# Tail end of ' string. +-Single = r"[^'\\]*(?:\\.[^'\\]*)*'" +-# Tail end of " string. +-Double = r'[^"\\]*(?:\\.[^"\\]*)*"' +-# Tail end of ''' string. +-Single3 = r"[^'\\]*(?:(?:\\.|'(?!''))[^'\\]*)*'''" +-# Tail end of """ string. +-Double3 = r'[^"\\]*(?:(?:\\.|"(?!""))[^"\\]*)*"""' +-Triple = group("[ubUB]?[rR]?'''", '[ubUB]?[rR]?"""') +-# Single-line ' or " string. +-String = group(r"[uU]?[rR]?'[^\n'\\]*(?:\\.[^\n'\\]*)*'", +- r'[uU]?[rR]?"[^\n"\\]*(?:\\.[^\n"\\]*)*"') +- +-# Because of leftmost-then-longest match semantics, be sure to put the +-# longest operators first (e.g., if = came before ==, == would get +-# recognized as two instances of =). +-Operator = group(r"\*\*=?", r">>=?", r"<<=?", r"<>", r"!=", +- r"//=?", r"->", +- r"[+\-*/%&|^=<>]=?", +- r"~") +- +-Bracket = '[][(){}]' +-Special = group(r'\r?\n', r'[:;.,`@]') +-Funny = group(Operator, Bracket, Special) +- +-PlainToken = group(Number, Funny, String, Name) +-Token = Ignore + PlainToken +- +-# First (or only) line of ' or " string. 
+-ContStr = group(r"[uUbB]?[rR]?'[^\n'\\]*(?:\\.[^\n'\\]*)*" + +- group("'", r'\\\r?\n'), +- r'[uUbB]?[rR]?"[^\n"\\]*(?:\\.[^\n"\\]*)*' + +- group('"', r'\\\r?\n')) +-PseudoExtras = group(r'\\\r?\n', Comment, Triple) +-PseudoToken = Whitespace + group(PseudoExtras, Number, Funny, ContStr, Name) +- +-tokenprog, pseudoprog, single3prog, double3prog = map( +- re.compile, (Token, PseudoToken, Single3, Double3)) +-endprogs = {"'": re.compile(Single), '"': re.compile(Double), +- "'''": single3prog, '"""': double3prog, +- "r'''": single3prog, 'r"""': double3prog, +- "u'''": single3prog, 'u"""': double3prog, +- "b'''": single3prog, 'b"""': double3prog, +- "ur'''": single3prog, 'ur"""': double3prog, +- "br'''": single3prog, 'br"""': double3prog, +- "R'''": single3prog, 'R"""': double3prog, +- "U'''": single3prog, 'U"""': double3prog, +- "B'''": single3prog, 'B"""': double3prog, +- "uR'''": single3prog, 'uR"""': double3prog, +- "Ur'''": single3prog, 'Ur"""': double3prog, +- "UR'''": single3prog, 'UR"""': double3prog, +- "bR'''": single3prog, 'bR"""': double3prog, +- "Br'''": single3prog, 'Br"""': double3prog, +- "BR'''": single3prog, 'BR"""': double3prog, +- 'r': None, 'R': None, +- 'u': None, 'U': None, +- 'b': None, 'B': None} +- +-triple_quoted = {} +-for t in ("'''", '"""', +- "r'''", 'r"""', "R'''", 'R"""', +- "u'''", 'u"""', "U'''", 'U"""', +- "b'''", 'b"""', "B'''", 'B"""', +- "ur'''", 'ur"""', "Ur'''", 'Ur"""', +- "uR'''", 'uR"""', "UR'''", 'UR"""', +- "br'''", 'br"""', "Br'''", 'Br"""', +- "bR'''", 'bR"""', "BR'''", 'BR"""',): +- triple_quoted[t] = t +-single_quoted = {} +-for t in ("'", '"', +- "r'", 'r"', "R'", 'R"', +- "u'", 'u"', "U'", 'U"', +- "b'", 'b"', "B'", 'B"', +- "ur'", 'ur"', "Ur'", 'Ur"', +- "uR'", 'uR"', "UR'", 'UR"', +- "br'", 'br"', "Br'", 'Br"', +- "bR'", 'bR"', "BR'", 'BR"', ): +- single_quoted[t] = t +- +-tabsize = 8 +- +-class TokenError(Exception): pass +- +-class StopTokenizing(Exception): pass +- +-def printtoken(type, token, (srow, scol), (erow, ecol), line): # for testing +- print "%d,%d-%d,%d:\t%s\t%s" % \ +- (srow, scol, erow, ecol, tok_name[type], repr(token)) +- +-def tokenize(readline, tokeneater=printtoken): +- """ +- The tokenize() function accepts two parameters: one representing the +- input stream, and one providing an output mechanism for tokenize(). +- +- The first parameter, readline, must be a callable object which provides +- the same interface as the readline() method of built-in file objects. +- Each call to the function should return one line of input as a string. +- +- The second parameter, tokeneater, must also be a callable object. It is +- called once for each token, with five arguments, corresponding to the +- tuples generated by generate_tokens(). 
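As a concrete illustration of the callback interface this docstring describes, a minimal tokeneater (a sketch, assuming the pre-rename lib2to3.pgen2 import path used throughout this patch):

    from StringIO import StringIO
    from lib2to3.pgen2 import tokenize

    def eater(type, token, start, end, line):
        # Called once per token with the five fields described above.
        print "%-10s %r" % (tokenize.tok_name[type], token)

    tokenize.tokenize(StringIO("x = 1\n").readline, eater)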
+- """ +- try: +- tokenize_loop(readline, tokeneater) +- except StopTokenizing: +- pass +- +-# backwards compatible interface +-def tokenize_loop(readline, tokeneater): +- for token_info in generate_tokens(readline): +- tokeneater(*token_info) +- +-class Untokenizer: +- +- def __init__(self): +- self.tokens = [] +- self.prev_row = 1 +- self.prev_col = 0 +- +- def add_whitespace(self, start): +- row, col = start +- assert row <= self.prev_row +- col_offset = col - self.prev_col +- if col_offset: +- self.tokens.append(" " * col_offset) +- +- def untokenize(self, iterable): +- for t in iterable: +- if len(t) == 2: +- self.compat(t, iterable) +- break +- tok_type, token, start, end, line = t +- self.add_whitespace(start) +- self.tokens.append(token) +- self.prev_row, self.prev_col = end +- if tok_type in (NEWLINE, NL): +- self.prev_row += 1 +- self.prev_col = 0 +- return "".join(self.tokens) +- +- def compat(self, token, iterable): +- startline = False +- indents = [] +- toks_append = self.tokens.append +- toknum, tokval = token +- if toknum in (NAME, NUMBER): +- tokval += ' ' +- if toknum in (NEWLINE, NL): +- startline = True +- for tok in iterable: +- toknum, tokval = tok[:2] +- +- if toknum in (NAME, NUMBER): +- tokval += ' ' +- +- if toknum == INDENT: +- indents.append(tokval) +- continue +- elif toknum == DEDENT: +- indents.pop() +- continue +- elif toknum in (NEWLINE, NL): +- startline = True +- elif startline and indents: +- toks_append(indents[-1]) +- startline = False +- toks_append(tokval) +- +-def untokenize(iterable): +- """Transform tokens back into Python source code. +- +- Each element returned by the iterable must be a token sequence +- with at least two elements, a token number and token value. If +- only two tokens are passed, the resulting output is poor. +- +- Round-trip invariant for full input: +- Untokenized source will match input source exactly +- +- Round-trip invariant for limited intput: +- # Output text will tokenize the back to the input +- t1 = [tok[:2] for tok in generate_tokens(f.readline)] +- newcode = untokenize(t1) +- readline = iter(newcode.splitlines(1)).next +- t2 = [tok[:2] for tokin generate_tokens(readline)] +- assert t1 == t2 +- """ +- ut = Untokenizer() +- return ut.untokenize(iterable) +- +-def generate_tokens(readline): +- """ +- The generate_tokens() generator requires one argment, readline, which +- must be a callable object which provides the same interface as the +- readline() method of built-in file objects. Each call to the function +- should return one line of input as a string. Alternately, readline +- can be a callable function terminating with StopIteration: +- readline = open(myfile).next # Example of alternate readline +- +- The generator produces 5-tuples with these members: the token type; the +- token string; a 2-tuple (srow, scol) of ints specifying the row and +- column where the token begins in the source; a 2-tuple (erow, ecol) of +- ints specifying the row and column where the token ends in the source; +- and the line on which the token was found. The line passed is the +- logical line; continuation lines are included. 
+- """ +- lnum = parenlev = continued = 0 +- namechars, numchars = string.ascii_letters + '_', '0123456789' +- contstr, needcont = '', 0 +- contline = None +- indents = [0] +- +- while 1: # loop over lines in stream +- try: +- line = readline() +- except StopIteration: +- line = '' +- lnum = lnum + 1 +- pos, max = 0, len(line) +- +- if contstr: # continued string +- if not line: +- raise TokenError, ("EOF in multi-line string", strstart) +- endmatch = endprog.match(line) +- if endmatch: +- pos = end = endmatch.end(0) +- yield (STRING, contstr + line[:end], +- strstart, (lnum, end), contline + line) +- contstr, needcont = '', 0 +- contline = None +- elif needcont and line[-2:] != '\\\n' and line[-3:] != '\\\r\n': +- yield (ERRORTOKEN, contstr + line, +- strstart, (lnum, len(line)), contline) +- contstr = '' +- contline = None +- continue +- else: +- contstr = contstr + line +- contline = contline + line +- continue +- +- elif parenlev == 0 and not continued: # new statement +- if not line: break +- column = 0 +- while pos < max: # measure leading whitespace +- if line[pos] == ' ': column = column + 1 +- elif line[pos] == '\t': column = (column/tabsize + 1)*tabsize +- elif line[pos] == '\f': column = 0 +- else: break +- pos = pos + 1 +- if pos == max: break +- +- if line[pos] in '#\r\n': # skip comments or blank lines +- if line[pos] == '#': +- comment_token = line[pos:].rstrip('\r\n') +- nl_pos = pos + len(comment_token) +- yield (COMMENT, comment_token, +- (lnum, pos), (lnum, pos + len(comment_token)), line) +- yield (NL, line[nl_pos:], +- (lnum, nl_pos), (lnum, len(line)), line) +- else: +- yield ((NL, COMMENT)[line[pos] == '#'], line[pos:], +- (lnum, pos), (lnum, len(line)), line) +- continue +- +- if column > indents[-1]: # count indents or dedents +- indents.append(column) +- yield (INDENT, line[:pos], (lnum, 0), (lnum, pos), line) +- while column < indents[-1]: +- if column not in indents: +- raise IndentationError( +- "unindent does not match any outer indentation level", +- ("", lnum, pos, line)) +- indents = indents[:-1] +- yield (DEDENT, '', (lnum, pos), (lnum, pos), line) +- +- else: # continued statement +- if not line: +- raise TokenError, ("EOF in multi-line statement", (lnum, 0)) +- continued = 0 +- +- while pos < max: +- pseudomatch = pseudoprog.match(line, pos) +- if pseudomatch: # scan for tokens +- start, end = pseudomatch.span(1) +- spos, epos, pos = (lnum, start), (lnum, end), end +- token, initial = line[start:end], line[start] +- +- if initial in numchars or \ +- (initial == '.' 
and token != '.'): # ordinary number +- yield (NUMBER, token, spos, epos, line) +- elif initial in '\r\n': +- newline = NEWLINE +- if parenlev > 0: +- newline = NL +- yield (newline, token, spos, epos, line) +- elif initial == '#': +- assert not token.endswith("\n") +- yield (COMMENT, token, spos, epos, line) +- elif token in triple_quoted: +- endprog = endprogs[token] +- endmatch = endprog.match(line, pos) +- if endmatch: # all on one line +- pos = endmatch.end(0) +- token = line[start:pos] +- yield (STRING, token, spos, (lnum, pos), line) +- else: +- strstart = (lnum, start) # multiple lines +- contstr = line[start:] +- contline = line +- break +- elif initial in single_quoted or \ +- token[:2] in single_quoted or \ +- token[:3] in single_quoted: +- if token[-1] == '\n': # continued string +- strstart = (lnum, start) +- endprog = (endprogs[initial] or endprogs[token[1]] or +- endprogs[token[2]]) +- contstr, needcont = line[start:], 1 +- contline = line +- break +- else: # ordinary string +- yield (STRING, token, spos, epos, line) +- elif initial in namechars: # ordinary name +- yield (NAME, token, spos, epos, line) +- elif initial == '\\': # continued stmt +- # This yield is new; needed for better idempotency: +- yield (NL, token, spos, (lnum, pos), line) +- continued = 1 +- else: +- if initial in '([{': parenlev = parenlev + 1 +- elif initial in ')]}': parenlev = parenlev - 1 +- yield (OP, token, spos, epos, line) +- else: +- yield (ERRORTOKEN, line[pos], +- (lnum, pos), (lnum, pos+1), line) +- pos = pos + 1 +- +- for indent in indents[1:]: # pop remaining indent levels +- yield (DEDENT, '', (lnum, 0), (lnum, 0), '') +- yield (ENDMARKER, '', (lnum, 0), (lnum, 0), '') +- +-if __name__ == '__main__': # testing +- import sys +- if len(sys.argv) > 1: tokenize(open(sys.argv[1]).readline) +- else: tokenize(sys.stdin.readline) +diff -r 531f2e948299 lib2to3/pygram.py +--- a/lib2to3/pygram.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,31 +0,0 @@ +-# Copyright 2006 Google, Inc. All Rights Reserved. +-# Licensed to PSF under a Contributor Agreement. +- +-"""Export the Python grammar and symbols.""" +- +-# Python imports +-import os +- +-# Local imports +-from .pgen2 import token +-from .pgen2 import driver +-from . import pytree +- +-# The grammar file +-_GRAMMAR_FILE = os.path.join(os.path.dirname(__file__), "Grammar.txt") +- +- +-class Symbols(object): +- +- def __init__(self, grammar): +- """Initializer. +- +- Creates an attribute for each grammar symbol (nonterminal), +- whose value is the symbol's type (an int >= 256). +- """ +- for name, symbol in grammar.symbol2number.iteritems(): +- setattr(self, name, symbol) +- +- +-python_grammar = driver.load_grammar(_GRAMMAR_FILE) +-python_symbols = Symbols(python_grammar) +diff -r 531f2e948299 lib2to3/pytree.py +--- a/lib2to3/pytree.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,846 +0,0 @@ +-# Copyright 2006 Google, Inc. All Rights Reserved. +-# Licensed to PSF under a Contributor Agreement. +- +-""" +-Python parse tree definitions. +- +-This is a very concrete parse tree; we need to keep every token and +-even the comments and whitespace between tokens. +- +-There's also a pattern matching implementation here. 
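For what the Symbols wrapper above buys you in practice -- every nonterminal becomes a plain integer attribute -- a quick sketch (pre-rename import path; the exact number assigned to funcdef depends on Grammar.txt):

    from lib2to3 import pygram

    sym = pygram.python_symbols.funcdef       # some int >= 256
    assert sym >= 256
    assert pygram.python_grammar.number2symbol[sym] == "funcdef"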
+-""" +- +-__author__ = "Guido van Rossum " +- +-import sys +-from StringIO import StringIO +- +- +-HUGE = 0x7FFFFFFF # maximum repeat count, default max +- +-_type_reprs = {} +-def type_repr(type_num): +- global _type_reprs +- if not _type_reprs: +- from .pygram import python_symbols +- # printing tokens is possible but not as useful +- # from .pgen2 import token // token.__dict__.items(): +- for name, val in python_symbols.__dict__.items(): +- if type(val) == int: _type_reprs[val] = name +- return _type_reprs.setdefault(type_num, type_num) +- +- +-class Base(object): +- +- """ +- Abstract base class for Node and Leaf. +- +- This provides some default functionality and boilerplate using the +- template pattern. +- +- A node may be a subnode of at most one parent. +- """ +- +- # Default values for instance variables +- type = None # int: token number (< 256) or symbol number (>= 256) +- parent = None # Parent node pointer, or None +- children = () # Tuple of subnodes +- was_changed = False +- +- def __new__(cls, *args, **kwds): +- """Constructor that prevents Base from being instantiated.""" +- assert cls is not Base, "Cannot instantiate Base" +- return object.__new__(cls) +- +- def __eq__(self, other): +- """ +- Compare two nodes for equality. +- +- This calls the method _eq(). +- """ +- if self.__class__ is not other.__class__: +- return NotImplemented +- return self._eq(other) +- +- def __ne__(self, other): +- """ +- Compare two nodes for inequality. +- +- This calls the method _eq(). +- """ +- if self.__class__ is not other.__class__: +- return NotImplemented +- return not self._eq(other) +- +- def _eq(self, other): +- """ +- Compare two nodes for equality. +- +- This is called by __eq__ and __ne__. It is only called if the two nodes +- have the same type. This must be implemented by the concrete subclass. +- Nodes should be considered equal if they have the same structure, +- ignoring the prefix string and other context information. +- """ +- raise NotImplementedError +- +- def clone(self): +- """ +- Return a cloned (deep) copy of self. +- +- This must be implemented by the concrete subclass. +- """ +- raise NotImplementedError +- +- def post_order(self): +- """ +- Return a post-order iterator for the tree. +- +- This must be implemented by the concrete subclass. +- """ +- raise NotImplementedError +- +- def pre_order(self): +- """ +- Return a pre-order iterator for the tree. +- +- This must be implemented by the concrete subclass. +- """ +- raise NotImplementedError +- +- def set_prefix(self, prefix): +- """ +- Set the prefix for the node (see Leaf class). +- +- This must be implemented by the concrete subclass. +- """ +- raise NotImplementedError +- +- def get_prefix(self): +- """ +- Return the prefix for the node (see Leaf class). +- +- This must be implemented by the concrete subclass. 
+- """ +- raise NotImplementedError +- +- def replace(self, new): +- """Replace this node with a new one in the parent.""" +- assert self.parent is not None, str(self) +- assert new is not None +- if not isinstance(new, list): +- new = [new] +- l_children = [] +- found = False +- for ch in self.parent.children: +- if ch is self: +- assert not found, (self.parent.children, self, new) +- if new is not None: +- l_children.extend(new) +- found = True +- else: +- l_children.append(ch) +- assert found, (self.children, self, new) +- self.parent.changed() +- self.parent.children = l_children +- for x in new: +- x.parent = self.parent +- self.parent = None +- +- def get_lineno(self): +- """Return the line number which generated the invocant node.""" +- node = self +- while not isinstance(node, Leaf): +- if not node.children: +- return +- node = node.children[0] +- return node.lineno +- +- def changed(self): +- if self.parent: +- self.parent.changed() +- self.was_changed = True +- +- def remove(self): +- """ +- Remove the node from the tree. Returns the position of the node in its +- parent's children before it was removed. +- """ +- if self.parent: +- for i, node in enumerate(self.parent.children): +- if node is self: +- self.parent.changed() +- del self.parent.children[i] +- self.parent = None +- return i +- +- @property +- def next_sibling(self): +- """ +- The node immediately following the invocant in their parent's children +- list. If the invocant does not have a next sibling, it is None +- """ +- if self.parent is None: +- return None +- +- # Can't use index(); we need to test by identity +- for i, child in enumerate(self.parent.children): +- if child is self: +- try: +- return self.parent.children[i+1] +- except IndexError: +- return None +- +- @property +- def prev_sibling(self): +- """ +- The node immediately preceding the invocant in their parent's children +- list. If the invocant does not have a previous sibling, it is None. +- """ +- if self.parent is None: +- return None +- +- # Can't use index(); we need to test by identity +- for i, child in enumerate(self.parent.children): +- if child is self: +- if i == 0: +- return None +- return self.parent.children[i-1] +- +- def get_suffix(self): +- """ +- Return the string immediately following the invocant node. This is +- effectively equivalent to node.next_sibling.get_prefix() +- """ +- next_sib = self.next_sibling +- if next_sib is None: +- return "" +- return next_sib.get_prefix() +- +- +-class Node(Base): +- +- """Concrete implementation for interior nodes.""" +- +- def __init__(self, type, children, context=None, prefix=None): +- """ +- Initializer. +- +- Takes a type constant (a symbol number >= 256), a sequence of +- child nodes, and an optional context keyword argument. +- +- As a side effect, the parent pointers of the children are updated. +- """ +- assert type >= 256, type +- self.type = type +- self.children = list(children) +- for ch in self.children: +- assert ch.parent is None, repr(ch) +- ch.parent = self +- if prefix is not None: +- self.set_prefix(prefix) +- +- def __repr__(self): +- """Return a canonical string representation.""" +- return "%s(%s, %r)" % (self.__class__.__name__, +- type_repr(self.type), +- self.children) +- +- def __str__(self): +- """ +- Return a pretty string representation. +- +- This reproduces the input source exactly. 
+- """ +- return "".join(map(str, self.children)) +- +- def _eq(self, other): +- """Compare two nodes for equality.""" +- return (self.type, self.children) == (other.type, other.children) +- +- def clone(self): +- """Return a cloned (deep) copy of self.""" +- return Node(self.type, [ch.clone() for ch in self.children]) +- +- def post_order(self): +- """Return a post-order iterator for the tree.""" +- for child in self.children: +- for node in child.post_order(): +- yield node +- yield self +- +- def pre_order(self): +- """Return a pre-order iterator for the tree.""" +- yield self +- for child in self.children: +- for node in child.post_order(): +- yield node +- +- def set_prefix(self, prefix): +- """ +- Set the prefix for the node. +- +- This passes the responsibility on to the first child. +- """ +- if self.children: +- self.children[0].set_prefix(prefix) +- +- def get_prefix(self): +- """ +- Return the prefix for the node. +- +- This passes the call on to the first child. +- """ +- if not self.children: +- return "" +- return self.children[0].get_prefix() +- +- def set_child(self, i, child): +- """ +- Equivalent to 'node.children[i] = child'. This method also sets the +- child's parent attribute appropriately. +- """ +- child.parent = self +- self.children[i].parent = None +- self.children[i] = child +- self.changed() +- +- def insert_child(self, i, child): +- """ +- Equivalent to 'node.children.insert(i, child)'. This method also sets +- the child's parent attribute appropriately. +- """ +- child.parent = self +- self.children.insert(i, child) +- self.changed() +- +- def append_child(self, child): +- """ +- Equivalent to 'node.children.append(child)'. This method also sets the +- child's parent attribute appropriately. +- """ +- child.parent = self +- self.children.append(child) +- self.changed() +- +- +-class Leaf(Base): +- +- """Concrete implementation for leaf nodes.""" +- +- # Default values for instance variables +- prefix = "" # Whitespace and comments preceding this token in the input +- lineno = 0 # Line where this token starts in the input +- column = 0 # Column where this token tarts in the input +- +- def __init__(self, type, value, context=None, prefix=None): +- """ +- Initializer. +- +- Takes a type constant (a token number < 256), a string value, and an +- optional context keyword argument. +- """ +- assert 0 <= type < 256, type +- if context is not None: +- self.prefix, (self.lineno, self.column) = context +- self.type = type +- self.value = value +- if prefix is not None: +- self.prefix = prefix +- +- def __repr__(self): +- """Return a canonical string representation.""" +- return "%s(%r, %r)" % (self.__class__.__name__, +- self.type, +- self.value) +- +- def __str__(self): +- """ +- Return a pretty string representation. +- +- This reproduces the input source exactly. 
+- """ +- return self.prefix + str(self.value) +- +- def _eq(self, other): +- """Compare two nodes for equality.""" +- return (self.type, self.value) == (other.type, other.value) +- +- def clone(self): +- """Return a cloned (deep) copy of self.""" +- return Leaf(self.type, self.value, +- (self.prefix, (self.lineno, self.column))) +- +- def post_order(self): +- """Return a post-order iterator for the tree.""" +- yield self +- +- def pre_order(self): +- """Return a pre-order iterator for the tree.""" +- yield self +- +- def set_prefix(self, prefix): +- """Set the prefix for the node.""" +- self.changed() +- self.prefix = prefix +- +- def get_prefix(self): +- """Return the prefix for the node.""" +- return self.prefix +- +- +-def convert(gr, raw_node): +- """ +- Convert raw node information to a Node or Leaf instance. +- +- This is passed to the parser driver which calls it whenever a reduction of a +- grammar rule produces a new complete node, so that the tree is build +- strictly bottom-up. +- """ +- type, value, context, children = raw_node +- if children or type in gr.number2symbol: +- # If there's exactly one child, return that child instead of +- # creating a new node. +- if len(children) == 1: +- return children[0] +- return Node(type, children, context=context) +- else: +- return Leaf(type, value, context=context) +- +- +-class BasePattern(object): +- +- """ +- A pattern is a tree matching pattern. +- +- It looks for a specific node type (token or symbol), and +- optionally for a specific content. +- +- This is an abstract base class. There are three concrete +- subclasses: +- +- - LeafPattern matches a single leaf node; +- - NodePattern matches a single node (usually non-leaf); +- - WildcardPattern matches a sequence of nodes of variable length. +- """ +- +- # Defaults for instance variables +- type = None # Node type (token if < 256, symbol if >= 256) +- content = None # Optional content matching pattern +- name = None # Optional name used to store match in results dict +- +- def __new__(cls, *args, **kwds): +- """Constructor that prevents BasePattern from being instantiated.""" +- assert cls is not BasePattern, "Cannot instantiate BasePattern" +- return object.__new__(cls) +- +- def __repr__(self): +- args = [type_repr(self.type), self.content, self.name] +- while args and args[-1] is None: +- del args[-1] +- return "%s(%s)" % (self.__class__.__name__, ", ".join(map(repr, args))) +- +- def optimize(self): +- """ +- A subclass can define this as a hook for optimizations. +- +- Returns either self or another node with the same effect. +- """ +- return self +- +- def match(self, node, results=None): +- """ +- Does this pattern exactly match a node? +- +- Returns True if it matches, False if not. +- +- If results is not None, it must be a dict which will be +- updated with the nodes matching named subpatterns. +- +- Default implementation for non-wildcard patterns. +- """ +- if self.type is not None and node.type != self.type: +- return False +- if self.content is not None: +- r = None +- if results is not None: +- r = {} +- if not self._submatch(node, r): +- return False +- if r: +- results.update(r) +- if results is not None and self.name: +- results[self.name] = node +- return True +- +- def match_seq(self, nodes, results=None): +- """ +- Does this pattern exactly match a sequence of nodes? +- +- Default implementation for non-wildcard patterns. 
+- """ +- if len(nodes) != 1: +- return False +- return self.match(nodes[0], results) +- +- def generate_matches(self, nodes): +- """ +- Generator yielding all matches for this pattern. +- +- Default implementation for non-wildcard patterns. +- """ +- r = {} +- if nodes and self.match(nodes[0], r): +- yield 1, r +- +- +-class LeafPattern(BasePattern): +- +- def __init__(self, type=None, content=None, name=None): +- """ +- Initializer. Takes optional type, content, and name. +- +- The type, if given must be a token type (< 256). If not given, +- this matches any *leaf* node; the content may still be required. +- +- The content, if given, must be a string. +- +- If a name is given, the matching node is stored in the results +- dict under that key. +- """ +- if type is not None: +- assert 0 <= type < 256, type +- if content is not None: +- assert isinstance(content, basestring), repr(content) +- self.type = type +- self.content = content +- self.name = name +- +- def match(self, node, results=None): +- """Override match() to insist on a leaf node.""" +- if not isinstance(node, Leaf): +- return False +- return BasePattern.match(self, node, results) +- +- def _submatch(self, node, results=None): +- """ +- Match the pattern's content to the node's children. +- +- This assumes the node type matches and self.content is not None. +- +- Returns True if it matches, False if not. +- +- If results is not None, it must be a dict which will be +- updated with the nodes matching named subpatterns. +- +- When returning False, the results dict may still be updated. +- """ +- return self.content == node.value +- +- +-class NodePattern(BasePattern): +- +- wildcards = False +- +- def __init__(self, type=None, content=None, name=None): +- """ +- Initializer. Takes optional type, content, and name. +- +- The type, if given, must be a symbol type (>= 256). If the +- type is None this matches *any* single node (leaf or not), +- except if content is not None, in which it only matches +- non-leaf nodes that also match the content pattern. +- +- The content, if not None, must be a sequence of Patterns that +- must match the node's children exactly. If the content is +- given, the type must not be None. +- +- If a name is given, the matching node is stored in the results +- dict under that key. +- """ +- if type is not None: +- assert type >= 256, type +- if content is not None: +- assert not isinstance(content, basestring), repr(content) +- content = list(content) +- for i, item in enumerate(content): +- assert isinstance(item, BasePattern), (i, item) +- if isinstance(item, WildcardPattern): +- self.wildcards = True +- self.type = type +- self.content = content +- self.name = name +- +- def _submatch(self, node, results=None): +- """ +- Match the pattern's content to the node's children. +- +- This assumes the node type matches and self.content is not None. +- +- Returns True if it matches, False if not. +- +- If results is not None, it must be a dict which will be +- updated with the nodes matching named subpatterns. +- +- When returning False, the results dict may still be updated. 
+- """ +- if self.wildcards: +- for c, r in generate_matches(self.content, node.children): +- if c == len(node.children): +- if results is not None: +- results.update(r) +- return True +- return False +- if len(self.content) != len(node.children): +- return False +- for subpattern, child in zip(self.content, node.children): +- if not subpattern.match(child, results): +- return False +- return True +- +- +-class WildcardPattern(BasePattern): +- +- """ +- A wildcard pattern can match zero or more nodes. +- +- This has all the flexibility needed to implement patterns like: +- +- .* .+ .? .{m,n} +- (a b c | d e | f) +- (...)* (...)+ (...)? (...){m,n} +- +- except it always uses non-greedy matching. +- """ +- +- def __init__(self, content=None, min=0, max=HUGE, name=None): +- """ +- Initializer. +- +- Args: +- content: optional sequence of subsequences of patterns; +- if absent, matches one node; +- if present, each subsequence is an alternative [*] +- min: optinal minumum number of times to match, default 0 +- max: optional maximum number of times tro match, default HUGE +- name: optional name assigned to this match +- +- [*] Thus, if content is [[a, b, c], [d, e], [f, g, h]] this is +- equivalent to (a b c | d e | f g h); if content is None, +- this is equivalent to '.' in regular expression terms. +- The min and max parameters work as follows: +- min=0, max=maxint: .* +- min=1, max=maxint: .+ +- min=0, max=1: .? +- min=1, max=1: . +- If content is not None, replace the dot with the parenthesized +- list of alternatives, e.g. (a b c | d e | f g h)* +- """ +- assert 0 <= min <= max <= HUGE, (min, max) +- if content is not None: +- content = tuple(map(tuple, content)) # Protect against alterations +- # Check sanity of alternatives +- assert len(content), repr(content) # Can't have zero alternatives +- for alt in content: +- assert len(alt), repr(alt) # Can have empty alternatives +- self.content = content +- self.min = min +- self.max = max +- self.name = name +- +- def optimize(self): +- """Optimize certain stacked wildcard patterns.""" +- subpattern = None +- if (self.content is not None and +- len(self.content) == 1 and len(self.content[0]) == 1): +- subpattern = self.content[0][0] +- if self.min == 1 and self.max == 1: +- if self.content is None: +- return NodePattern(name=self.name) +- if subpattern is not None and self.name == subpattern.name: +- return subpattern.optimize() +- if (self.min <= 1 and isinstance(subpattern, WildcardPattern) and +- subpattern.min <= 1 and self.name == subpattern.name): +- return WildcardPattern(subpattern.content, +- self.min*subpattern.min, +- self.max*subpattern.max, +- subpattern.name) +- return self +- +- def match(self, node, results=None): +- """Does this pattern exactly match a node?""" +- return self.match_seq([node], results) +- +- def match_seq(self, nodes, results=None): +- """Does this pattern exactly match a sequence of nodes?""" +- for c, r in self.generate_matches(nodes): +- if c == len(nodes): +- if results is not None: +- results.update(r) +- if self.name: +- results[self.name] = list(nodes) +- return True +- return False +- +- def generate_matches(self, nodes): +- """ +- Generator yielding matches for a sequence of nodes. +- +- Args: +- nodes: sequence of nodes +- +- Yields: +- (count, results) tuples where: +- count: the match comprises nodes[:count]; +- results: dict containing named submatches. 
+- """ +- if self.content is None: +- # Shortcut for special case (see __init__.__doc__) +- for count in xrange(self.min, 1 + min(len(nodes), self.max)): +- r = {} +- if self.name: +- r[self.name] = nodes[:count] +- yield count, r +- elif self.name == "bare_name": +- yield self._bare_name_matches(nodes) +- else: +- # The reason for this is that hitting the recursion limit usually +- # results in some ugly messages about how RuntimeErrors are being +- # ignored. +- save_stderr = sys.stderr +- sys.stderr = StringIO() +- try: +- for count, r in self._recursive_matches(nodes, 0): +- if self.name: +- r[self.name] = nodes[:count] +- yield count, r +- except RuntimeError: +- # We fall back to the iterative pattern matching scheme if the recursive +- # scheme hits the recursion limit. +- for count, r in self._iterative_matches(nodes): +- if self.name: +- r[self.name] = nodes[:count] +- yield count, r +- finally: +- sys.stderr = save_stderr +- +- def _iterative_matches(self, nodes): +- """Helper to iteratively yield the matches.""" +- nodelen = len(nodes) +- if 0 >= self.min: +- yield 0, {} +- +- results = [] +- # generate matches that use just one alt from self.content +- for alt in self.content: +- for c, r in generate_matches(alt, nodes): +- yield c, r +- results.append((c, r)) +- +- # for each match, iterate down the nodes +- while results: +- new_results = [] +- for c0, r0 in results: +- # stop if the entire set of nodes has been matched +- if c0 < nodelen and c0 <= self.max: +- for alt in self.content: +- for c1, r1 in generate_matches(alt, nodes[c0:]): +- if c1 > 0: +- r = {} +- r.update(r0) +- r.update(r1) +- yield c0 + c1, r +- new_results.append((c0 + c1, r)) +- results = new_results +- +- def _bare_name_matches(self, nodes): +- """Special optimized matcher for bare_name.""" +- count = 0 +- r = {} +- done = False +- max = len(nodes) +- while not done and count < max: +- done = True +- for leaf in self.content: +- if leaf[0].match(nodes[count], r): +- count += 1 +- done = False +- break +- r[self.name] = nodes[:count] +- return count, r +- +- def _recursive_matches(self, nodes, count): +- """Helper to recursively yield the matches.""" +- assert self.content is not None +- if count >= self.min: +- yield 0, {} +- if count < self.max: +- for alt in self.content: +- for c0, r0 in generate_matches(alt, nodes): +- for c1, r1 in self._recursive_matches(nodes[c0:], count+1): +- r = {} +- r.update(r0) +- r.update(r1) +- yield c0 + c1, r +- +- +-class NegatedPattern(BasePattern): +- +- def __init__(self, content=None): +- """ +- Initializer. +- +- The argument is either a pattern or None. If it is None, this +- only matches an empty sequence (effectively '$' in regex +- lingo). If it is not None, this matches whenever the argument +- pattern doesn't have any matches. 
+- """ +- if content is not None: +- assert isinstance(content, BasePattern), repr(content) +- self.content = content +- +- def match(self, node): +- # We never match a node in its entirety +- return False +- +- def match_seq(self, nodes): +- # We only match an empty sequence of nodes in its entirety +- return len(nodes) == 0 +- +- def generate_matches(self, nodes): +- if self.content is None: +- # Return a match if there is an empty sequence +- if len(nodes) == 0: +- yield 0, {} +- else: +- # Return a match if the argument pattern has no matches +- for c, r in self.content.generate_matches(nodes): +- return +- yield 0, {} +- +- +-def generate_matches(patterns, nodes): +- """ +- Generator yielding matches for a sequence of patterns and nodes. +- +- Args: +- patterns: a sequence of patterns +- nodes: a sequence of nodes +- +- Yields: +- (count, results) tuples where: +- count: the entire sequence of patterns matches nodes[:count]; +- results: dict containing named submatches. +- """ +- if not patterns: +- yield 0, {} +- else: +- p, rest = patterns[0], patterns[1:] +- for c0, r0 in p.generate_matches(nodes): +- if not rest: +- yield c0, r0 +- else: +- for c1, r1 in generate_matches(rest, nodes[c0:]): +- r = {} +- r.update(r0) +- r.update(r1) +- yield c0 + c1, r +diff -r 531f2e948299 lib2to3/refactor.py +--- a/lib2to3/refactor.py Mon Mar 30 20:02:09 2009 -0500 ++++ /dev/null Thu Jan 01 00:00:00 1970 +0000 +@@ -1,515 +0,0 @@ +-#!/usr/bin/env python2.5 +-# Copyright 2006 Google, Inc. All Rights Reserved. +-# Licensed to PSF under a Contributor Agreement. +- +-"""Refactoring framework. +- +-Used as a main program, this can refactor any number of files and/or +-recursively descend down directories. Imported as a module, this +-provides infrastructure to write your own refactoring tool. +-""" +- +-__author__ = "Guido van Rossum " +- +- +-# Python imports +-import os +-import sys +-import difflib +-import logging +-import operator +-from collections import defaultdict +-from itertools import chain +- +-# Local imports +-from .pgen2 import driver +-from .pgen2 import tokenize +- +-from . import pytree +-from . import patcomp +-from . import fixes +-from . import pygram +- +- +-def get_all_fix_names(fixer_pkg, remove_prefix=True): +- """Return a sorted list of all available fix names in the given package.""" +- pkg = __import__(fixer_pkg, [], [], ["*"]) +- fixer_dir = os.path.dirname(pkg.__file__) +- fix_names = [] +- for name in sorted(os.listdir(fixer_dir)): +- if name.startswith("fix_") and name.endswith(".py"): +- if remove_prefix: +- name = name[4:] +- fix_names.append(name[:-3]) +- return fix_names +- +-def get_head_types(pat): +- """ Accepts a pytree Pattern Node and returns a set +- of the pattern types which will match first. """ +- +- if isinstance(pat, (pytree.NodePattern, pytree.LeafPattern)): +- # NodePatters must either have no type and no content +- # or a type and content -- so they don't get any farther +- # Always return leafs +- return set([pat.type]) +- +- if isinstance(pat, pytree.NegatedPattern): +- if pat.content: +- return get_head_types(pat.content) +- return set([None]) # Negated Patterns don't have a type +- +- if isinstance(pat, pytree.WildcardPattern): +- # Recurse on each node in content +- r = set() +- for p in pat.content: +- for x in p: +- r.update(get_head_types(x)) +- return r +- +- raise Exception("Oh no! 
I don't understand pattern %s" %(pat)) +- +-def get_headnode_dict(fixer_list): +- """ Accepts a list of fixers and returns a dictionary +- of head node type --> fixer list. """ +- head_nodes = defaultdict(list) +- for fixer in fixer_list: +- if not fixer.pattern: +- head_nodes[None].append(fixer) +- continue +- for t in get_head_types(fixer.pattern): +- head_nodes[t].append(fixer) +- return head_nodes +- +-def get_fixers_from_package(pkg_name): +- """ +- Return the fully qualified names for fixers in the package pkg_name. +- """ +- return [pkg_name + "." + fix_name +- for fix_name in get_all_fix_names(pkg_name, False)] +- +- +-class FixerError(Exception): +- """A fixer could not be loaded.""" +- +- +-class RefactoringTool(object): +- +- _default_options = {"print_function": False} +- +- CLASS_PREFIX = "Fix" # The prefix for fixer classes +- FILE_PREFIX = "fix_" # The prefix for modules with a fixer within +- +- def __init__(self, fixer_names, options=None, explicit=None): +- """Initializer. +- +- Args: +- fixer_names: a list of fixers to import +- options: an dict with configuration. +- explicit: a list of fixers to run even if they are explicit. +- """ +- self.fixers = fixer_names +- self.explicit = explicit or [] +- self.options = self._default_options.copy() +- if options is not None: +- self.options.update(options) +- self.errors = [] +- self.logger = logging.getLogger("RefactoringTool") +- self.fixer_log = [] +- self.wrote = False +- if self.options["print_function"]: +- del pygram.python_grammar.keywords["print"] +- self.driver = driver.Driver(pygram.python_grammar, +- convert=pytree.convert, +- logger=self.logger) +- self.pre_order, self.post_order = self.get_fixers() +- +- self.pre_order_heads = get_headnode_dict(self.pre_order) +- self.post_order_heads = get_headnode_dict(self.post_order) +- +- self.files = [] # List of files that were or should be modified +- +- def get_fixers(self): +- """Inspects the options to load the requested patterns and handlers. +- +- Returns: +- (pre_order, post_order), where pre_order is the list of fixers that +- want a pre-order AST traversal, and post_order is the list that want +- post-order traversal. 
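End to end, the constructor above is typically driven like this (a sketch using refactor_string(), defined further down, and the pre-rename lib2to3 module paths):

    from lib2to3 import refactor

    fixers = refactor.get_fixers_from_package("lib2to3.fixes")
    rt = refactor.RefactoringTool(fixers)
    tree = rt.refactor_string("print 'nevermore'\n", "<example>")
    print str(tree)     # e.g. print('nevermore')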
+- """ +- pre_order_fixers = [] +- post_order_fixers = [] +- for fix_mod_path in self.fixers: +- mod = __import__(fix_mod_path, {}, {}, ["*"]) +- fix_name = fix_mod_path.rsplit(".", 1)[-1] +- if fix_name.startswith(self.FILE_PREFIX): +- fix_name = fix_name[len(self.FILE_PREFIX):] +- parts = fix_name.split("_") +- class_name = self.CLASS_PREFIX + "".join([p.title() for p in parts]) +- try: +- fix_class = getattr(mod, class_name) +- except AttributeError: +- raise FixerError("Can't find %s.%s" % (fix_name, class_name)) +- fixer = fix_class(self.options, self.fixer_log) +- if fixer.explicit and self.explicit is not True and \ +- fix_mod_path not in self.explicit: +- self.log_message("Skipping implicit fixer: %s", fix_name) +- continue +- +- self.log_debug("Adding transformation: %s", fix_name) +- if fixer.order == "pre": +- pre_order_fixers.append(fixer) +- elif fixer.order == "post": +- post_order_fixers.append(fixer) +- else: +- raise FixerError("Illegal fixer order: %r" % fixer.order) +- +- key_func = operator.attrgetter("run_order") +- pre_order_fixers.sort(key=key_func) +- post_order_fixers.sort(key=key_func) +- return (pre_order_fixers, post_order_fixers) +- +- def log_error(self, msg, *args, **kwds): +- """Called when an error occurs.""" +- raise +- +- def log_message(self, msg, *args): +- """Hook to log a message.""" +- if args: +- msg = msg % args +- self.logger.info(msg) +- +- def log_debug(self, msg, *args): +- if args: +- msg = msg % args +- self.logger.debug(msg) +- +- def print_output(self, lines): +- """Called with lines of output to give to the user.""" +- pass +- +- def refactor(self, items, write=False, doctests_only=False): +- """Refactor a list of files and directories.""" +- for dir_or_file in items: +- if os.path.isdir(dir_or_file): +- self.refactor_dir(dir_or_file, write, doctests_only) +- else: +- self.refactor_file(dir_or_file, write, doctests_only) +- +- def refactor_dir(self, dir_name, write=False, doctests_only=False): +- """Descends down a directory and refactor every Python file found. +- +- Python files are assumed to have a .py extension. +- +- Files and subdirectories starting with '.' are skipped. 
+- """ +- for dirpath, dirnames, filenames in os.walk(dir_name): +- self.log_debug("Descending into %s", dirpath) +- dirnames.sort() +- filenames.sort() +- for name in filenames: +- if not name.startswith(".") and name.endswith("py"): +- fullname = os.path.join(dirpath, name) +- self.refactor_file(fullname, write, doctests_only) +- # Modify dirnames in-place to remove subdirs with leading dots +- dirnames[:] = [dn for dn in dirnames if not dn.startswith(".")] +- +- def refactor_file(self, filename, write=False, doctests_only=False): +- """Refactors a file.""" +- try: +- f = open(filename) +- except IOError, err: +- self.log_error("Can't open %s: %s", filename, err) +- return +- try: +- input = f.read() + "\n" # Silence certain parse errors +- finally: +- f.close() +- if doctests_only: +- self.log_debug("Refactoring doctests in %s", filename) +- output = self.refactor_docstring(input, filename) +- if output != input: +- self.processed_file(output, filename, input, write=write) +- else: +- self.log_debug("No doctest changes in %s", filename) +- else: +- tree = self.refactor_string(input, filename) +- if tree and tree.was_changed: +- # The [:-1] is to take off the \n we added earlier +- self.processed_file(str(tree)[:-1], filename, write=write) +- else: +- self.log_debug("No changes in %s", filename) +- +- def refactor_string(self, data, name): +- """Refactor a given input string. +- +- Args: +- data: a string holding the code to be refactored. +- name: a human-readable name for use in error/log messages. +- +- Returns: +- An AST corresponding to the refactored input stream; None if +- there were errors during the parse. +- """ +- try: +- tree = self.driver.parse_string(data) +- except Exception, err: +- self.log_error("Can't parse %s: %s: %s", +- name, err.__class__.__name__, err) +- return +- self.log_debug("Refactoring %s", name) +- self.refactor_tree(tree, name) +- return tree +- +- def refactor_stdin(self, doctests_only=False): +- input = sys.stdin.read() +- if doctests_only: +- self.log_debug("Refactoring doctests in stdin") +- output = self.refactor_docstring(input, "") +- if output != input: +- self.processed_file(output, "", input) +- else: +- self.log_debug("No doctest changes in stdin") +- else: +- tree = self.refactor_string(input, "") +- if tree and tree.was_changed: +- self.processed_file(str(tree), "", input) +- else: +- self.log_debug("No changes in stdin") +- +- def refactor_tree(self, tree, name): +- """Refactors a parse tree (modifying the tree in place). +- +- Args: +- tree: a pytree.Node instance representing the root of the tree +- to be refactored. +- name: a human-readable name for this tree. +- +- Returns: +- True if the tree was modified, False otherwise. +- """ +- for fixer in chain(self.pre_order, self.post_order): +- fixer.start_tree(tree, name) +- +- self.traverse_by(self.pre_order_heads, tree.pre_order()) +- self.traverse_by(self.post_order_heads, tree.post_order()) +- +- for fixer in chain(self.pre_order, self.post_order): +- fixer.finish_tree(tree, name) +- return tree.was_changed +- +- def traverse_by(self, fixers, traversal): +- """Traverse an AST, applying a set of fixers to each node. +- +- This is a helper method for refactor_tree(). +- +- Args: +- fixers: a list of fixer instances. +- traversal: a generator that yields AST nodes. 
+- +- Returns: +- None +- """ +- if not fixers: +- return +- for node in traversal: +- for fixer in fixers[node.type] + fixers[None]: +- results = fixer.match(node) +- if results: +- new = fixer.transform(node, results) +- if new is not None and (new != node or +- str(new) != str(node)): +- node.replace(new) +- node = new +- +- def processed_file(self, new_text, filename, old_text=None, write=False): +- """ +- Called when a file has been refactored, and there are changes. +- """ +- self.files.append(filename) +- if old_text is None: +- try: +- f = open(filename, "r") +- except IOError, err: +- self.log_error("Can't read %s: %s", filename, err) +- return +- try: +- old_text = f.read() +- finally: +- f.close() +- if old_text == new_text: +- self.log_debug("No changes to %s", filename) +- return +- self.print_output(diff_texts(old_text, new_text, filename)) +- if write: +- self.write_file(new_text, filename, old_text) +- else: +- self.log_debug("Not writing changes to %s", filename) +- +- def write_file(self, new_text, filename, old_text): +- """Writes a string to a file. +- +- It first shows a unified diff between the old text and the new text, and +- then rewrites the file; the latter is only done if the write option is +- set. +- """ +- try: +- f = open(filename, "w") +- except os.error, err: +- self.log_error("Can't create %s: %s", filename, err) +- return +- try: +- f.write(new_text) +- except os.error, err: +- self.log_error("Can't write %s: %s", filename, err) +- finally: +- f.close() +- self.log_debug("Wrote changes to %s", filename) +- self.wrote = True +- +- PS1 = ">>> " +- PS2 = "... " +- +- def refactor_docstring(self, input, filename): +- """Refactors a docstring, looking for doctests. +- +- This returns a modified version of the input string. It looks +- for doctests, which start with a ">>>" prompt, and may be +- continued with "..." prompts, as long as the "..." is indented +- the same as the ">>>". +- +- (Unfortunately we can't use the doctest module's parser, +- since, like most parsers, it is not geared towards preserving +- the original source.) +- """ +- result = [] +- block = None +- block_lineno = None +- indent = None +- lineno = 0 +- for line in input.splitlines(True): +- lineno += 1 +- if line.lstrip().startswith(self.PS1): +- if block is not None: +- result.extend(self.refactor_doctest(block, block_lineno, +- indent, filename)) +- block_lineno = lineno +- block = [line] +- i = line.find(self.PS1) +- indent = line[:i] +- elif (indent is not None and +- (line.startswith(indent + self.PS2) or +- line == indent + self.PS2.rstrip() + "\n")): +- block.append(line) +- else: +- if block is not None: +- result.extend(self.refactor_doctest(block, block_lineno, +- indent, filename)) +- block = None +- indent = None +- result.append(line) +- if block is not None: +- result.extend(self.refactor_doctest(block, block_lineno, +- indent, filename)) +- return "".join(result) +- +- def refactor_doctest(self, block, lineno, indent, filename): +- """Refactors one doctest. +- +- A doctest is given as a block of lines, the first of which starts +- with ">>>" (possibly indented), while the remaining lines start +- with "..." (identically indented). 
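A sketch of the doctest path described above (fix_print assumed available under the pre-rename path):

    from lib2to3 import refactor

    rt = refactor.RefactoringTool(["lib2to3.fixes.fix_print"])
    doc = "Example:\n    >>> print 'hi'\n    hi\n"
    print rt.refactor_docstring(doc, "<doc>")
    # Should print the same docstring with only the prompt line rewritten,
    # e.g. ">>> print('hi')"; the "hi" output line is left untouched.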
+- +- """ +- try: +- tree = self.parse_block(block, lineno, indent) +- except Exception, err: +- if self.log.isEnabledFor(logging.DEBUG): +- for line in block: +- self.log_debug("Source: %s", line.rstrip("\n")) +- self.log_error("Can't parse docstring in %s line %s: %s: %s", +- filename, lineno, err.__class__.__name__, err) +- return block +- if self.refactor_tree(tree, filename): +- new = str(tree).splitlines(True) +- # Undo the adjustment of the line numbers in wrap_toks() below. +- clipped, new = new[:lineno-1], new[lineno-1:] +- assert clipped == ["\n"] * (lineno-1), clipped +- if not new[-1].endswith("\n"): +- new[-1] += "\n" +- block = [indent + self.PS1 + new.pop(0)] +- if new: +- block += [indent + self.PS2 + line for line in new] +- return block +- +- def summarize(self): +- if self.wrote: +- were = "were" +- else: +- were = "need to be" +- if not self.files: +- self.log_message("No files %s modified.", were) +- else: +- self.log_message("Files that %s modified:", were) +- for file in self.files: +- self.log_message(file) +- if self.fixer_log: +- self.log_message("Warnings/messages while refactoring:") +- for message in self.fixer_log: +- self.log_message(message) +- if self.errors: +- if len(self.errors) == 1: +- self.log_message("There was 1 error:") +- else: +- self.log_message("There were %d errors:", len(self.errors)) +- for msg, args, kwds in self.errors: +- self.log_message(msg, *args, **kwds) +- +- def parse_block(self, block, lineno, indent): +- """Parses a block into a tree. +- +- This is necessary to get correct line number / offset information +- in the parser diagnostics and embedded into the parse tree. +- """ +- return self.driver.parse_tokens(self.wrap_toks(block, lineno, indent)) +- +- def wrap_toks(self, block, lineno, indent): +- """Wraps a tokenize stream to systematically modify start/end.""" +- tokens = tokenize.generate_tokens(self.gen_lines(block, indent).next) +- for type, value, (line0, col0), (line1, col1), line_text in tokens: +- line0 += lineno - 1 +- line1 += lineno - 1 +- # Don't bother updating the columns; this is too complicated +- # since line_text would also have to be updated and it would +- # still break for tokens spanning lines. Let the user guess +- # that the column numbers for doctests are relative to the +- # end of the prompt string (PS1 or PS2). +- yield type, value, (line0, col0), (line1, col1), line_text +- +- +- def gen_lines(self, block, indent): +- """Generates lines as expected by tokenize from a list of lines. +- +- This strips the first len(indent + self.PS1) characters off each line. 
+- """ +- prefix1 = indent + self.PS1 +- prefix2 = indent + self.PS2 +- prefix = prefix1 +- for line in block: +- if line.startswith(prefix): +- yield line[len(prefix):] +- elif line == prefix.rstrip() + "\n": +- yield "\n" +- else: +- raise AssertionError("line=%r, prefix=%r" % (line, prefix)) +- prefix = prefix2 +- while True: +- yield "" +- +- +-def diff_texts(a, b, filename): +- """Return a unified diff of two strings.""" +- a = a.splitlines() +- b = b.splitlines() +- return difflib.unified_diff(a, b, filename, filename, +- "(original)", "(refactored)", +- lineterm="") +diff -r 531f2e948299 lib2to3/tests/.svn/entries +--- a/lib2to3/tests/.svn/entries Mon Mar 30 20:02:09 2009 -0500 ++++ b/lib2to3/tests/.svn/entries Wed Apr 01 13:59:47 2009 -0500 +@@ -1,7 +1,7 @@ + 9 + + dir +-70785 ++70822 + http://svn.python.org/projects/sandbox/trunk/2to3/lib2to3/tests + http://svn.python.org/projects + +diff -r 531f2e948299 lib2to3/tests/data/.svn/entries +--- a/lib2to3/tests/data/.svn/entries Mon Mar 30 20:02:09 2009 -0500 ++++ b/lib2to3/tests/data/.svn/entries Wed Apr 01 13:59:47 2009 -0500 +@@ -1,7 +1,7 @@ + 9 + + dir +-70785 ++70822 + http://svn.python.org/projects/sandbox/trunk/2to3/lib2to3/tests/data + http://svn.python.org/projects + +diff -r 531f2e948299 lib2to3/tests/data/fixers/.svn/entries +--- a/lib2to3/tests/data/fixers/.svn/entries Mon Mar 30 20:02:09 2009 -0500 ++++ b/lib2to3/tests/data/fixers/.svn/entries Wed Apr 01 13:59:47 2009 -0500 +@@ -1,7 +1,7 @@ + 9 + + dir +-70785 ++70822 + http://svn.python.org/projects/sandbox/trunk/2to3/lib2to3/tests/data/fixers + http://svn.python.org/projects + +diff -r 531f2e948299 lib2to3/tests/data/fixers/bad_order.py +--- a/lib2to3/tests/data/fixers/bad_order.py Mon Mar 30 20:02:09 2009 -0500 ++++ b/lib2to3/tests/data/fixers/bad_order.py Wed Apr 01 13:59:47 2009 -0500 +@@ -1,4 +1,4 @@ +-from lib2to3.fixer_base import BaseFix ++from refactor.fixer_base import BaseFix + + class FixBadOrder(BaseFix): + +diff -r 531f2e948299 lib2to3/tests/data/fixers/myfixes/.svn/entries +--- a/lib2to3/tests/data/fixers/myfixes/.svn/entries Mon Mar 30 20:02:09 2009 -0500 ++++ b/lib2to3/tests/data/fixers/myfixes/.svn/entries Wed Apr 01 13:59:47 2009 -0500 +@@ -1,7 +1,7 @@ + 9 + + dir +-70785 ++70822 + http://svn.python.org/projects/sandbox/trunk/2to3/lib2to3/tests/data/fixers/myfixes + http://svn.python.org/projects + +diff -r 531f2e948299 lib2to3/tests/data/fixers/myfixes/fix_explicit.py +--- a/lib2to3/tests/data/fixers/myfixes/fix_explicit.py Mon Mar 30 20:02:09 2009 -0500 ++++ b/lib2to3/tests/data/fixers/myfixes/fix_explicit.py Wed Apr 01 13:59:47 2009 -0500 +@@ -1,4 +1,4 @@ +-from lib2to3.fixer_base import BaseFix ++from refactor.fixer_base import BaseFix + + class FixExplicit(BaseFix): + explicit = True +diff -r 531f2e948299 lib2to3/tests/data/fixers/myfixes/fix_first.py +--- a/lib2to3/tests/data/fixers/myfixes/fix_first.py Mon Mar 30 20:02:09 2009 -0500 ++++ b/lib2to3/tests/data/fixers/myfixes/fix_first.py Wed Apr 01 13:59:47 2009 -0500 +@@ -1,4 +1,4 @@ +-from lib2to3.fixer_base import BaseFix ++from refactor.fixer_base import BaseFix + + class FixFirst(BaseFix): + run_order = 1 +diff -r 531f2e948299 lib2to3/tests/data/fixers/myfixes/fix_last.py +--- a/lib2to3/tests/data/fixers/myfixes/fix_last.py Mon Mar 30 20:02:09 2009 -0500 ++++ b/lib2to3/tests/data/fixers/myfixes/fix_last.py Wed Apr 01 13:59:47 2009 -0500 +@@ -1,4 +1,4 @@ +-from lib2to3.fixer_base import BaseFix ++from refactor.fixer_base import BaseFix + + class FixLast(BaseFix): + +diff -r 531f2e948299 
lib2to3/tests/data/fixers/myfixes/fix_parrot.py +--- a/lib2to3/tests/data/fixers/myfixes/fix_parrot.py Mon Mar 30 20:02:09 2009 -0500 ++++ b/lib2to3/tests/data/fixers/myfixes/fix_parrot.py Wed Apr 01 13:59:47 2009 -0500 +@@ -1,5 +1,5 @@ +-from lib2to3.fixer_base import BaseFix +-from lib2to3.fixer_util import Name ++from refactor.fixer_base import BaseFix ++from refactor.fixer_util import Name + + class FixParrot(BaseFix): + """ +diff -r 531f2e948299 lib2to3/tests/data/fixers/myfixes/fix_preorder.py +--- a/lib2to3/tests/data/fixers/myfixes/fix_preorder.py Mon Mar 30 20:02:09 2009 -0500 ++++ b/lib2to3/tests/data/fixers/myfixes/fix_preorder.py Wed Apr 01 13:59:47 2009 -0500 +@@ -1,4 +1,4 @@ +-from lib2to3.fixer_base import BaseFix ++from refactor.fixer_base import BaseFix + + class FixPreorder(BaseFix): + order = "pre" +diff -r 531f2e948299 lib2to3/tests/support.py +--- a/lib2to3/tests/support.py Mon Mar 30 20:02:09 2009 -0500 ++++ b/lib2to3/tests/support.py Wed Apr 01 13:59:47 2009 -0500 +@@ -1,5 +1,5 @@ + """Support code for test_*.py files""" +-# Author: Collin Winter ++# Original Author: Collin Winter + + # Python imports + import unittest +@@ -16,12 +16,26 @@ + from .. import refactor + from ..pgen2 import driver + ++test_pkg = "refactor.fixes" + test_dir = os.path.dirname(__file__) + proj_dir = os.path.normpath(os.path.join(test_dir, "..")) + grammar_path = os.path.join(test_dir, "..", "Grammar.txt") + grammar = driver.load_grammar(grammar_path) + driver = driver.Driver(grammar, convert=pytree.convert) + ++def parse_version(version_string): ++ """Returns a version tuple matching input version_string.""" ++ if not version_string: ++ return () ++ ++ version_list = [] ++ for token in version_string.split('.'): ++ try: ++ version_list.append(int(token)) ++ except ValueError: ++ version_list.append(token) ++ return tuple(version_list) ++ + def parse_string(string): + return driver.parse_string(reformat(string), debug=True) + +@@ -39,18 +53,19 @@ + def reformat(string): + return dedent(string) + "\n\n" + +-def get_refactorer(fixers=None, options=None): ++def get_refactorer(fixers=None, options=None, pkg_name=None): + """ + A convenience function for creating a RefactoringTool for tests. + + fixers is a list of fixers for the RefactoringTool to use. By default +- "lib2to3.fixes.*" is used. options is an optional dictionary of options to ++ "refactor.fixes.*" is used. options is an optional dictionary of options to + be passed to the RefactoringTool. 
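The parse_version() helper added above is tiny but worth pinning down: numeric components become ints and anything else stays a string, so version tuples compare sensibly:

    from lib2to3.tests.support import parse_version

    assert parse_version("2.5.1") == (2, 5, 1)
    assert parse_version("2.6b1") == (2, "6b1")   # non-numeric parts stay strings
    assert parse_version("") == ()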
+ """ ++ pkg_name = pkg_name or test_pkg + if fixers is not None: +- fixers = ["lib2to3.fixes.fix_" + fix for fix in fixers] ++ fixers = [pkg_name + ".fix_" + fix for fix in fixers] + else: +- fixers = refactor.get_fixers_from_package("lib2to3.fixes") ++ fixers = refactor.get_fixers_from_package(pkg_name) + options = options or {} + return refactor.RefactoringTool(fixers, options, explicit=True) + +diff -r 531f2e948299 lib2to3/tests/test_fixers.py +--- a/lib2to3/tests/test_fixers.py Mon Mar 30 20:02:09 2009 -0500 ++++ b/lib2to3/tests/test_fixers.py Wed Apr 01 13:59:47 2009 -0500 +@@ -23,7 +23,7 @@ + if fix_list is None: + fix_list = [self.fixer] + options = {"print_function" : False} +- self.refactor = support.get_refactorer(fix_list, options) ++ self.refactor = support.get_refactorer(fix_list, options, "refactor.fixes.from2") + self.fixer_log = [] + self.filename = "" + +@@ -1625,7 +1625,7 @@ + + class Test_imports(FixerTestCase, ImportsFixerTests): + fixer = "imports" +- from ..fixes.fix_imports import MAPPING as modules ++ from refactor.fixes.from2.fix_imports import MAPPING as modules + + def test_multiple_imports(self): + b = """import urlparse, cStringIO""" +@@ -1646,23 +1646,23 @@ + + class Test_imports2(FixerTestCase, ImportsFixerTests): + fixer = "imports2" +- from ..fixes.fix_imports2 import MAPPING as modules ++ from refactor.fixes.from2.fix_imports2 import MAPPING as modules + + + class Test_imports_fixer_order(FixerTestCase, ImportsFixerTests): + + def setUp(self): + super(Test_imports_fixer_order, self).setUp(['imports', 'imports2']) +- from ..fixes.fix_imports2 import MAPPING as mapping2 ++ from refactor.fixes.from2.fix_imports2 import MAPPING as mapping2 + self.modules = mapping2.copy() +- from ..fixes.fix_imports import MAPPING as mapping1 ++ from refactor.fixes.from2.fix_imports import MAPPING as mapping1 + for key in ('dbhash', 'dumbdbm', 'dbm', 'gdbm'): + self.modules[key] = mapping1[key] + + + class Test_urllib(FixerTestCase): + fixer = "urllib" +- from ..fixes.fix_urllib import MAPPING as modules ++ from refactor.fixes.from2.fix_urllib import MAPPING as modules + + def test_import_module(self): + for old, changes in self.modules.items(): +@@ -3449,7 +3449,7 @@ + self.files_checked.append(name) + return self.always_exists or (name in self.present_files) + +- from ..fixes import fix_import ++ from refactor.fixes.from2 import fix_import + fix_import.exists = fake_exists + + def tearDown(self): +diff -r 531f2e948299 lib2to3/tests/test_parser.py +--- a/lib2to3/tests/test_parser.py Mon Mar 30 20:02:09 2009 -0500 ++++ b/lib2to3/tests/test_parser.py Wed Apr 01 13:59:47 2009 -0500 +@@ -17,7 +17,7 @@ + import os.path + + # Local imports +-from ..pgen2.parse import ParseError ++from refactor.pgen2.parse import ParseError + + + class GrammarTest(support.TestCase): +diff -r 531f2e948299 lib2to3/tests/test_util.py +--- a/lib2to3/tests/test_util.py Mon Mar 30 20:02:09 2009 -0500 ++++ b/lib2to3/tests/test_util.py Wed Apr 01 13:59:47 2009 -0500 +@@ -11,7 +11,7 @@ + # Local imports + from .. import pytree + from .. 
import fixer_util +-from ..fixer_util import Attr, Name ++from refactor.fixer_util import Attr, Name + + + def parse(code, strip_levels=0):
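For illustration, a minimal sketch of how the new parse_version() helper added to lib2to3/tests/support.py above behaves, assuming the implementation shown in that hunk (this snippet is illustrative only and is not part of the patch):

    from lib2to3.tests.support import parse_version

    # Purely numeric components become ints ...
    assert parse_version("2.6.1") == (2, 6, 1)
    # ... while components that don't parse as ints stay strings.
    assert parse_version("2.6b1") == (2, '6b1')
    # A false-y version string yields the empty tuple.
    assert parse_version("") == ()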
+diff -r 531f2e948299 refactor/.svn/text-base/Grammar.txt.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/.svn/text-base/Grammar.txt.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,155 @@ ++# Grammar for Python ++ ++# Note: Changing the grammar specified in this file will most likely ++# require corresponding changes in the parser module ++# (../Modules/parsermodule.c). If you can't make the changes to ++# that module yourself, please co-ordinate the required changes ++# with someone who can; ask around on python-dev for help. Fred ++# Drake will probably be listening there. ++ ++# NOTE WELL: You should also follow all the steps listed in PEP 306, ++# "How to Change Python's Grammar" ++ ++# Commands for Kees Blom's railroad program ++#diagram:token NAME ++#diagram:token NUMBER ++#diagram:token STRING ++#diagram:token NEWLINE ++#diagram:token ENDMARKER ++#diagram:token INDENT ++#diagram:output\input python.bla ++#diagram:token DEDENT ++#diagram:output\textwidth 20.04cm\oddsidemargin 0.0cm\evensidemargin 0.0cm ++#diagram:rules ++ ++# Start symbols for the grammar: ++# file_input is a module or sequence of commands read from an input file; ++# single_input is a single interactive statement; ++# eval_input is the input for the eval() and input() functions. ++# NB: compound_stmt in single_input is followed by extra NEWLINE!
++file_input: (NEWLINE | stmt)* ENDMARKER ++single_input: NEWLINE | simple_stmt | compound_stmt NEWLINE ++eval_input: testlist NEWLINE* ENDMARKER ++ ++decorator: '@' dotted_name [ '(' [arglist] ')' ] NEWLINE ++decorators: decorator+ ++decorated: decorators (classdef | funcdef) ++funcdef: 'def' NAME parameters ['->' test] ':' suite ++parameters: '(' [typedargslist] ')' ++typedargslist: ((tfpdef ['=' test] ',')* ++ ('*' [tname] (',' tname ['=' test])* [',' '**' tname] | '**' tname) ++ | tfpdef ['=' test] (',' tfpdef ['=' test])* [',']) ++tname: NAME [':' test] ++tfpdef: tname | '(' tfplist ')' ++tfplist: tfpdef (',' tfpdef)* [','] ++varargslist: ((vfpdef ['=' test] ',')* ++ ('*' [vname] (',' vname ['=' test])* [',' '**' vname] | '**' vname) ++ | vfpdef ['=' test] (',' vfpdef ['=' test])* [',']) ++vname: NAME ++vfpdef: vname | '(' vfplist ')' ++vfplist: vfpdef (',' vfpdef)* [','] ++ ++stmt: simple_stmt | compound_stmt ++simple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE ++small_stmt: (expr_stmt | print_stmt | del_stmt | pass_stmt | flow_stmt | ++ import_stmt | global_stmt | exec_stmt | assert_stmt) ++expr_stmt: testlist (augassign (yield_expr|testlist) | ++ ('=' (yield_expr|testlist))*) ++augassign: ('+=' | '-=' | '*=' | '/=' | '%=' | '&=' | '|=' | '^=' | ++ '<<=' | '>>=' | '**=' | '//=') ++# For normal assignments, additional restrictions enforced by the interpreter ++print_stmt: 'print' ( [ test (',' test)* [','] ] | ++ '>>' test [ (',' test)+ [','] ] ) ++del_stmt: 'del' exprlist ++pass_stmt: 'pass' ++flow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt | yield_stmt ++break_stmt: 'break' ++continue_stmt: 'continue' ++return_stmt: 'return' [testlist] ++yield_stmt: yield_expr ++raise_stmt: 'raise' [test ['from' test | ',' test [',' test]]] ++import_stmt: import_name | import_from ++import_name: 'import' dotted_as_names ++import_from: ('from' ('.'* dotted_name | '.'+) ++ 'import' ('*' | '(' import_as_names ')' | import_as_names)) ++import_as_name: NAME ['as' NAME] ++dotted_as_name: dotted_name ['as' NAME] ++import_as_names: import_as_name (',' import_as_name)* [','] ++dotted_as_names: dotted_as_name (',' dotted_as_name)* ++dotted_name: NAME ('.' 
NAME)* ++global_stmt: ('global' | 'nonlocal') NAME (',' NAME)* ++exec_stmt: 'exec' expr ['in' test [',' test]] ++assert_stmt: 'assert' test [',' test] ++ ++compound_stmt: if_stmt | while_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef | decorated ++if_stmt: 'if' test ':' suite ('elif' test ':' suite)* ['else' ':' suite] ++while_stmt: 'while' test ':' suite ['else' ':' suite] ++for_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite] ++try_stmt: ('try' ':' suite ++ ((except_clause ':' suite)+ ++ ['else' ':' suite] ++ ['finally' ':' suite] | ++ 'finally' ':' suite)) ++with_stmt: 'with' test [ with_var ] ':' suite ++with_var: 'as' expr ++# NB compile.c makes sure that the default except clause is last ++except_clause: 'except' [test [(',' | 'as') test]] ++suite: simple_stmt | NEWLINE INDENT stmt+ DEDENT ++ ++# Backward compatibility cruft to support: ++# [ x for x in lambda: True, lambda: False if x() ] ++# even while also allowing: ++# lambda x: 5 if x else 2 ++# (But not a mix of the two) ++testlist_safe: old_test [(',' old_test)+ [',']] ++old_test: or_test | old_lambdef ++old_lambdef: 'lambda' [varargslist] ':' old_test ++ ++test: or_test ['if' or_test 'else' test] | lambdef ++or_test: and_test ('or' and_test)* ++and_test: not_test ('and' not_test)* ++not_test: 'not' not_test | comparison ++comparison: expr (comp_op expr)* ++comp_op: '<'|'>'|'=='|'>='|'<='|'<>'|'!='|'in'|'not' 'in'|'is'|'is' 'not' ++expr: xor_expr ('|' xor_expr)* ++xor_expr: and_expr ('^' and_expr)* ++and_expr: shift_expr ('&' shift_expr)* ++shift_expr: arith_expr (('<<'|'>>') arith_expr)* ++arith_expr: term (('+'|'-') term)* ++term: factor (('*'|'/'|'%'|'//') factor)* ++factor: ('+'|'-'|'~') factor | power ++power: atom trailer* ['**' factor] ++atom: ('(' [yield_expr|testlist_gexp] ')' | ++ '[' [listmaker] ']' | ++ '{' [dictsetmaker] '}' | ++ '`' testlist1 '`' | ++ NAME | NUMBER | STRING+ | '.' '.' '.') ++listmaker: test ( comp_for | (',' test)* [','] ) ++testlist_gexp: test ( comp_for | (',' test)* [','] ) ++lambdef: 'lambda' [varargslist] ':' test ++trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME ++subscriptlist: subscript (',' subscript)* [','] ++subscript: test | [test] ':' [test] [sliceop] ++sliceop: ':' [test] ++exprlist: expr (',' expr)* [','] ++testlist: test (',' test)* [','] ++dictsetmaker: ( (test ':' test (comp_for | (',' test ':' test)* [','])) | ++ (test (comp_for | (',' test)* [','])) ) ++ ++classdef: 'class' NAME ['(' [arglist] ')'] ':' suite ++ ++arglist: (argument ',')* (argument [','] ++ |'*' test (',' argument)* [',' '**' test] ++ |'**' test) ++argument: test [comp_for] | test '=' test # Really [keyword '='] test ++ ++comp_iter: comp_for | comp_if ++comp_for: 'for' exprlist 'in' testlist_safe [comp_iter] ++comp_if: 'if' old_test [comp_iter] ++ ++testlist1: test (',' test)* ++ ++# not used in grammar, but may appear in "node" passed from Parser to Compiler ++encoding_decl: NAME ++ ++yield_expr: 'yield' [testlist] +diff -r 531f2e948299 refactor/.svn/text-base/PatternGrammar.txt.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/.svn/text-base/PatternGrammar.txt.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,28 @@ ++# Copyright 2006 Google, Inc. All Rights Reserved. ++# Licensed to PSF under a Contributor Agreement. ++ ++# A grammar to describe tree matching patterns. ++# Not shown here: ++# - 'TOKEN' stands for any token (leaf node) ++# - 'any' stands for any node (leaf or interior) ++# With 'any' we can still specify the sub-structure. 
++ ++# The start symbol is 'Matcher'. ++ ++Matcher: Alternatives ENDMARKER ++ ++Alternatives: Alternative ('|' Alternative)* ++ ++Alternative: (Unit | NegatedUnit)+ ++ ++Unit: [NAME '='] ( STRING [Repeater] ++ | NAME [Details] [Repeater] ++ | '(' Alternatives ')' [Repeater] ++ | '[' Alternatives ']' ++ ) ++ ++NegatedUnit: 'not' (STRING | NAME [Details] | '(' Alternatives ')') ++ ++Repeater: '*' | '+' | '{' NUMBER [',' NUMBER] '}' ++ ++Details: '<' Alternatives '>' +diff -r 531f2e948299 refactor/.svn/text-base/__init__.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/.svn/text-base/__init__.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,1 @@ ++#empty +diff -r 531f2e948299 refactor/.svn/text-base/fixer_base.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/.svn/text-base/fixer_base.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,178 @@ ++# Copyright 2006 Google, Inc. All Rights Reserved. ++# Licensed to PSF under a Contributor Agreement. ++ ++"""Base class for fixers (optional, but recommended).""" ++ ++# Python imports ++import logging ++import itertools ++ ++# Local imports ++from .patcomp import PatternCompiler ++from . import pygram ++from .fixer_util import does_tree_import ++ ++class BaseFix(object): ++ ++ """Optional base class for fixers. ++ ++ The subclass name must be FixFooBar where FooBar is the result of ++ removing underscores and capitalizing the words of the fix name. ++ For example, the class name for a fixer named 'has_key' should be ++ FixHasKey. ++ """ ++ ++ PATTERN = None # Most subclasses should override with a string literal ++ pattern = None # Compiled pattern, set by compile_pattern() ++ options = None # Options object passed to initializer ++ filename = None # The filename (set by set_filename) ++ logger = None # A logger (set by set_filename) ++ numbers = itertools.count(1) # For new_name() ++ used_names = set() # A set of all used NAMEs ++ order = "post" # Does the fixer prefer pre- or post-order traversal ++ explicit = False # Is this ignored by refactor.py -f all? ++ run_order = 5 # Fixers will be sorted by run order before execution ++ # Lower numbers will be run first. ++ ++ # Shortcut for access to Python grammar symbols ++ syms = pygram.python_symbols ++ ++ def __init__(self, options, log): ++ """Initializer. Subclass may override. ++ ++ Args: ++ options: a dict containing the options passed to RefactoringTool ++ that could be used to customize the fixer through the command line. ++ log: a list to append warnings and other messages to. ++ """ ++ self.options = options ++ self.log = log ++ self.compile_pattern() ++ ++ def compile_pattern(self): ++ """Compiles self.PATTERN into self.pattern. ++ ++ Subclass may override if it doesn't want to use ++ self.{pattern,PATTERN} in .match(). ++ """ ++ if self.PATTERN is not None: ++ self.pattern = PatternCompiler().compile_pattern(self.PATTERN) ++ ++ def set_filename(self, filename): ++ """Set the filename, and a logger derived from it. ++ ++ The main refactoring tool should call this. ++ """ ++ self.filename = filename ++ self.logger = logging.getLogger(filename) ++ ++ def match(self, node): ++ """Returns match for a given parse tree node. ++ ++ Should return a true or false object (not necessarily a bool). ++ It may return a non-empty dict of matching sub-nodes as ++ returned by a matching pattern. ++ ++ Subclass may override.
++ """ ++ results = {"node": node} ++ return self.pattern.match(node, results) and results ++ ++ def transform(self, node, results): ++ """Returns the transformation for a given parse tree node. ++ ++ Args: ++ node: the root of the parse tree that matched the fixer. ++ results: a dict mapping symbolic names to part of the match. ++ ++ Returns: ++ None, or a node that is a modified copy of the ++ argument node. The node argument may also be modified in-place to ++ effect the same change. ++ ++ Subclass *must* override. ++ """ ++ raise NotImplementedError() ++ ++ def new_name(self, template="xxx_todo_changeme"): ++ """Return a string suitable for use as an identifier ++ ++ The new name is guaranteed not to conflict with other identifiers. ++ """ ++ name = template ++ while name in self.used_names: ++ name = template + str(self.numbers.next()) ++ self.used_names.add(name) ++ return name ++ ++ def log_message(self, message): ++ if self.first_log: ++ self.first_log = False ++ self.log.append("### In file %s ###" % self.filename) ++ self.log.append(message) ++ ++ def cannot_convert(self, node, reason=None): ++ """Warn the user that a given chunk of code is not valid Python 3, ++ but that it cannot be converted automatically. ++ ++ First argument is the top-level node for the code in question. ++ Optional second argument is why it can't be converted. ++ """ ++ lineno = node.get_lineno() ++ for_output = node.clone() ++ for_output.set_prefix("") ++ msg = "Line %d: could not convert: %s" ++ self.log_message(msg % (lineno, for_output)) ++ if reason: ++ self.log_message(reason) ++ ++ def warning(self, node, reason): ++ """Used for warning the user about possible uncertainty in the ++ translation. ++ ++ First argument is the top-level node for the code in question. ++ Optional second argument is why it can't be converted. ++ """ ++ lineno = node.get_lineno() ++ self.log_message("Line %d: %s" % (lineno, reason)) ++ ++ def start_tree(self, tree, filename): ++ """Some fixers need to maintain tree-wide state. ++ This method is called once, at the start of tree fix-up. ++ ++ tree - the root node of the tree to be processed. ++ filename - the name of the file the tree came from. ++ """ ++ self.used_names = tree.used_names ++ self.set_filename(filename) ++ self.numbers = itertools.count(1) ++ self.first_log = True ++ ++ def finish_tree(self, tree, filename): ++ """Some fixers need to maintain tree-wide state. ++ This method is called once, at the conclusion of tree fix-up. ++ ++ tree - the root node of the tree to be processed. ++ filename - the name of the file the tree came from. ++ """ ++ pass ++ ++ ++class ConditionalFix(BaseFix): ++ """ Base class for fixers which not execute if an import is found. 
""" ++ ++ # This is the name of the import which, if found, will cause the test to be skipped ++ skip_on = None ++ ++ def start_tree(self, *args): ++ super(ConditionalFix, self).start_tree(*args) ++ self._should_skip = None ++ ++ def should_skip(self, node): ++ if self._should_skip is not None: ++ return self._should_skip ++ pkg = self.skip_on.split(".") ++ name = pkg[-1] ++ pkg = ".".join(pkg[:-1]) ++ self._should_skip = does_tree_import(pkg, name, node) ++ return self._should_skip +diff -r 531f2e948299 refactor/.svn/text-base/fixer_util.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/.svn/text-base/fixer_util.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,425 @@ ++"""Utility functions, node construction macros, etc.""" ++# Author: Collin Winter ++ ++# Local imports ++from .pgen2 import token ++from .pytree import Leaf, Node ++from .pygram import python_symbols as syms ++from . import patcomp ++ ++ ++########################################################### ++### Common node-construction "macros" ++########################################################### ++ ++def KeywordArg(keyword, value): ++ return Node(syms.argument, ++ [keyword, Leaf(token.EQUAL, '='), value]) ++ ++def LParen(): ++ return Leaf(token.LPAR, "(") ++ ++def RParen(): ++ return Leaf(token.RPAR, ")") ++ ++def Assign(target, source): ++ """Build an assignment statement""" ++ if not isinstance(target, list): ++ target = [target] ++ if not isinstance(source, list): ++ source.set_prefix(" ") ++ source = [source] ++ ++ return Node(syms.atom, ++ target + [Leaf(token.EQUAL, "=", prefix=" ")] + source) ++ ++def Name(name, prefix=None): ++ """Return a NAME leaf""" ++ return Leaf(token.NAME, name, prefix=prefix) ++ ++def Attr(obj, attr): ++ """A node tuple for obj.attr""" ++ return [obj, Node(syms.trailer, [Dot(), attr])] ++ ++def Comma(): ++ """A comma leaf""" ++ return Leaf(token.COMMA, ",") ++ ++def Dot(): ++ """A period (.) leaf""" ++ return Leaf(token.DOT, ".") ++ ++def ArgList(args, lparen=LParen(), rparen=RParen()): ++ """A parenthesised argument list, used by Call()""" ++ node = Node(syms.trailer, [lparen.clone(), rparen.clone()]) ++ if args: ++ node.insert_child(1, Node(syms.arglist, args)) ++ return node ++ ++def Call(func_name, args=None, prefix=None): ++ """A function call""" ++ node = Node(syms.power, [func_name, ArgList(args)]) ++ if prefix is not None: ++ node.set_prefix(prefix) ++ return node ++ ++def Newline(): ++ """A newline literal""" ++ return Leaf(token.NEWLINE, "\n") ++ ++def BlankLine(): ++ """A blank line""" ++ return Leaf(token.NEWLINE, "") ++ ++def Number(n, prefix=None): ++ return Leaf(token.NUMBER, n, prefix=prefix) ++ ++def Subscript(index_node): ++ """A numeric or string subscript""" ++ return Node(syms.trailer, [Leaf(token.LBRACE, '['), ++ index_node, ++ Leaf(token.RBRACE, ']')]) ++ ++def String(string, prefix=None): ++ """A string leaf""" ++ return Leaf(token.STRING, string, prefix=prefix) ++ ++def ListComp(xp, fp, it, test=None): ++ """A list comprehension of the form [xp for fp in it if test]. ++ ++ If test is None, the "if test" part is omitted. 
++ """ ++ xp.set_prefix("") ++ fp.set_prefix(" ") ++ it.set_prefix(" ") ++ for_leaf = Leaf(token.NAME, "for") ++ for_leaf.set_prefix(" ") ++ in_leaf = Leaf(token.NAME, "in") ++ in_leaf.set_prefix(" ") ++ inner_args = [for_leaf, fp, in_leaf, it] ++ if test: ++ test.set_prefix(" ") ++ if_leaf = Leaf(token.NAME, "if") ++ if_leaf.set_prefix(" ") ++ inner_args.append(Node(syms.comp_if, [if_leaf, test])) ++ inner = Node(syms.listmaker, [xp, Node(syms.comp_for, inner_args)]) ++ return Node(syms.atom, ++ [Leaf(token.LBRACE, "["), ++ inner, ++ Leaf(token.RBRACE, "]")]) ++ ++def FromImport(package_name, name_leafs): ++ """ Return an import statement in the form: ++ from package import name_leafs""" ++ # XXX: May not handle dotted imports properly (eg, package_name='foo.bar') ++ #assert package_name == '.' or '.' not in package_name, "FromImport has "\ ++ # "not been tested with dotted package names -- use at your own "\ ++ # "peril!" ++ ++ for leaf in name_leafs: ++ # Pull the leaves out of their old tree ++ leaf.remove() ++ ++ children = [Leaf(token.NAME, 'from'), ++ Leaf(token.NAME, package_name, prefix=" "), ++ Leaf(token.NAME, 'import', prefix=" "), ++ Node(syms.import_as_names, name_leafs)] ++ imp = Node(syms.import_from, children) ++ return imp ++ ++ ++########################################################### ++### Determine whether a node represents a given literal ++########################################################### ++ ++def is_tuple(node): ++ """Does the node represent a tuple literal?""" ++ if isinstance(node, Node) and node.children == [LParen(), RParen()]: ++ return True ++ return (isinstance(node, Node) ++ and len(node.children) == 3 ++ and isinstance(node.children[0], Leaf) ++ and isinstance(node.children[1], Node) ++ and isinstance(node.children[2], Leaf) ++ and node.children[0].value == "(" ++ and node.children[2].value == ")") ++ ++def is_list(node): ++ """Does the node represent a list literal?""" ++ return (isinstance(node, Node) ++ and len(node.children) > 1 ++ and isinstance(node.children[0], Leaf) ++ and isinstance(node.children[-1], Leaf) ++ and node.children[0].value == "[" ++ and node.children[-1].value == "]") ++ ++ ++########################################################### ++### Misc ++########################################################### ++ ++def parenthesize(node): ++ return Node(syms.atom, [LParen(), node, RParen()]) ++ ++ ++consuming_calls = set(["sorted", "list", "set", "any", "all", "tuple", "sum", ++ "min", "max"]) ++ ++def attr_chain(obj, attr): ++ """Follow an attribute chain. ++ ++ If you have a chain of objects where a.foo -> b, b.foo-> c, etc, ++ use this to iterate over all objects in the chain. Iteration is ++ terminated by getattr(x, attr) is None. ++ ++ Args: ++ obj: the starting object ++ attr: the name of the chaining attribute ++ ++ Yields: ++ Each successive object in the chain. ++ """ ++ next = getattr(obj, attr) ++ while next: ++ yield next ++ next = getattr(next, attr) ++ ++p0 = """for_stmt< 'for' any 'in' node=any ':' any* > ++ | comp_for< 'for' any 'in' node=any any* > ++ """ ++p1 = """ ++power< ++ ( 'iter' | 'list' | 'tuple' | 'sorted' | 'set' | 'sum' | ++ 'any' | 'all' | (any* trailer< '.' 
'join' >) ) ++ trailer< '(' node=any ')' > ++ any* ++> ++""" ++p2 = """ ++power< ++ 'sorted' ++ trailer< '(' arglist ')' > ++ any* ++> ++""" ++pats_built = False ++def in_special_context(node): ++ """ Returns true if node is in an environment where all that is required ++ of it is being iterable (i.e., it doesn't matter if it returns a list ++ or an iterator). ++ See test_map_nochange in test_fixers.py for some examples and tests. ++ """ ++ global p0, p1, p2, pats_built ++ if not pats_built: ++ p1 = patcomp.compile_pattern(p1) ++ p0 = patcomp.compile_pattern(p0) ++ p2 = patcomp.compile_pattern(p2) ++ pats_built = True ++ patterns = [p0, p1, p2] ++ for pattern, parent in zip(patterns, attr_chain(node, "parent")): ++ results = {} ++ if pattern.match(parent, results) and results["node"] is node: ++ return True ++ return False ++ ++def is_probably_builtin(node): ++ """ ++ Check that something isn't an attribute or function name etc. ++ """ ++ prev = node.prev_sibling ++ if prev is not None and prev.type == token.DOT: ++ # Attribute lookup. ++ return False ++ parent = node.parent ++ if parent.type in (syms.funcdef, syms.classdef): ++ return False ++ if parent.type == syms.expr_stmt and parent.children[0] is node: ++ # Assignment. ++ return False ++ if parent.type == syms.parameters or \ ++ (parent.type == syms.typedargslist and ( ++ (prev is not None and prev.type == token.COMMA) or ++ parent.children[0] is node ++ )): ++ # The name of an argument. ++ return False ++ return True ++ ++########################################################### ++### The following functions are to find bindings in a suite ++########################################################### ++ ++def make_suite(node): ++ if node.type == syms.suite: ++ return node ++ node = node.clone() ++ parent, node.parent = node.parent, None ++ suite = Node(syms.suite, [node]) ++ suite.parent = parent ++ return suite ++ ++def find_root(node): ++ """Find the top level namespace.""" ++ # Scamper up to the top level namespace ++ while node.type != syms.file_input: ++ assert node.parent, "Tree is insane! root found before "\ ++ "file_input node was found." ++ node = node.parent ++ return node ++ ++def does_tree_import(package, name, node): ++ """ Returns true if name is imported from package at the ++ top level of the tree which node belongs to. ++ To cover the case of an import like 'import foo', use ++ None for the package and 'foo' for the name. """ ++ binding = find_binding(name, find_root(node), package) ++ return bool(binding) ++ ++def is_import(node): ++ """Returns true if the node is an import statement.""" ++ return node.type in (syms.import_name, syms.import_from) ++ ++def touch_import(package, name, node): ++ """ Works like `does_tree_import` but adds an import statement ++ if it was not imported. """ ++ def is_import_stmt(node): ++ return node.type == syms.simple_stmt and node.children and \ ++ is_import(node.children[0]) ++ ++ root = find_root(node) ++ ++ if does_tree_import(package, name, root): ++ return ++ ++ add_newline_before = False ++ ++ # figure out where to insert the new import. First try to find ++ # the first import and then skip to the last one. ++ insert_pos = offset = 0 ++ for idx, node in enumerate(root.children): ++ if not is_import_stmt(node): ++ continue ++ for offset, node2 in enumerate(root.children[idx:]): ++ if not is_import_stmt(node2): ++ break ++ insert_pos = idx + offset ++ break ++ ++ # if there are no imports where we can insert, find the docstring.
++ # if that also fails, we stick to the beginning of the file ++ if insert_pos == 0: ++ for idx, node in enumerate(root.children): ++ if node.type == syms.simple_stmt and node.children and \ ++ node.children[0].type == token.STRING: ++ insert_pos = idx + 1 ++ add_newline_before = True ++ break ++ ++ if package is None: ++ import_ = Node(syms.import_name, [ ++ Leaf(token.NAME, 'import'), ++ Leaf(token.NAME, name, prefix=' ') ++ ]) ++ else: ++ import_ = FromImport(package, [Leaf(token.NAME, name, prefix=' ')]) ++ ++ children = [import_, Newline()] ++ if add_newline_before: ++ children.insert(0, Newline()) ++ root.insert_child(insert_pos, Node(syms.simple_stmt, children)) ++ ++ ++_def_syms = set([syms.classdef, syms.funcdef]) ++def find_binding(name, node, package=None): ++ """ Returns the node which binds variable name, otherwise None. ++ If optional argument package is supplied, only imports will ++ be returned. ++ See test cases for examples.""" ++ for child in node.children: ++ ret = None ++ if child.type == syms.for_stmt: ++ if _find(name, child.children[1]): ++ return child ++ n = find_binding(name, make_suite(child.children[-1]), package) ++ if n: ret = n ++ elif child.type in (syms.if_stmt, syms.while_stmt): ++ n = find_binding(name, make_suite(child.children[-1]), package) ++ if n: ret = n ++ elif child.type == syms.try_stmt: ++ n = find_binding(name, make_suite(child.children[2]), package) ++ if n: ++ ret = n ++ else: ++ for i, kid in enumerate(child.children[3:]): ++ if kid.type == token.COLON and kid.value == ":": ++ # i+3 is the colon, i+4 is the suite ++ n = find_binding(name, make_suite(child.children[i+4]), package) ++ if n: ret = n ++ elif child.type in _def_syms and child.children[1].value == name: ++ ret = child ++ elif _is_import_binding(child, name, package): ++ ret = child ++ elif child.type == syms.simple_stmt: ++ ret = find_binding(name, child, package) ++ elif child.type == syms.expr_stmt: ++ if _find(name, child.children[0]): ++ ret = child ++ ++ if ret: ++ if not package: ++ return ret ++ if is_import(ret): ++ return ret ++ return None ++ ++_block_syms = set([syms.funcdef, syms.classdef, syms.trailer]) ++def _find(name, node): ++ nodes = [node] ++ while nodes: ++ node = nodes.pop() ++ if node.type > 256 and node.type not in _block_syms: ++ nodes.extend(node.children) ++ elif node.type == token.NAME and node.value == name: ++ return node ++ return None ++ ++def _is_import_binding(node, name, package=None): ++ """ Will return node if node will import name, or node ++ will import * from package. None is returned otherwise. ++ See test cases for examples. """ ++ ++ if node.type == syms.import_name and not package: ++ imp = node.children[1] ++ if imp.type == syms.dotted_as_names: ++ for child in imp.children: ++ if child.type == syms.dotted_as_name: ++ if child.children[2].value == name: ++ return node ++ elif child.type == token.NAME and child.value == name: ++ return node ++ elif imp.type == syms.dotted_as_name: ++ last = imp.children[-1] ++ if last.type == token.NAME and last.value == name: ++ return node ++ elif imp.type == token.NAME and imp.value == name: ++ return node ++ elif node.type == syms.import_from: ++ # unicode(...) is used to make life easier here, because ++ # from a.b import parses to ['import', ['a', '.', 'b'], ...]
++ if package and unicode(node.children[1]).strip() != package: ++ return None ++ n = node.children[3] ++ if package and _find('as', n): ++ # See test_from_import_as for explanation ++ return None ++ elif n.type == syms.import_as_names and _find(name, n): ++ return node ++ elif n.type == syms.import_as_name: ++ child = n.children[2] ++ if child.type == token.NAME and child.value == name: ++ return node ++ elif n.type == token.NAME and n.value == name: ++ return node ++ elif package and n.type == token.STAR: ++ return node ++ return None +diff -r 531f2e948299 refactor/.svn/text-base/main.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/.svn/text-base/main.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,133 @@ ++""" ++Main program for 2to3. ++""" ++ ++import sys ++import os ++import logging ++import shutil ++import optparse ++ ++from . import refactor ++ ++ ++class StdoutRefactoringTool(refactor.RefactoringTool): ++ """ ++ Prints output to stdout. ++ """ ++ ++ def __init__(self, fixers, options, explicit, nobackups): ++ self.nobackups = nobackups ++ super(StdoutRefactoringTool, self).__init__(fixers, options, explicit) ++ ++ def log_error(self, msg, *args, **kwargs): ++ self.errors.append((msg, args, kwargs)) ++ self.logger.error(msg, *args, **kwargs) ++ ++ def write_file(self, new_text, filename, old_text): ++ if not self.nobackups: ++ # Make backup ++ backup = filename + ".bak" ++ if os.path.lexists(backup): ++ try: ++ os.remove(backup) ++ except os.error, err: ++ self.log_message("Can't remove backup %s", backup) ++ try: ++ os.rename(filename, backup) ++ except os.error, err: ++ self.log_message("Can't rename %s to %s", filename, backup) ++ # Actually write the new file ++ super(StdoutRefactoringTool, self).write_file(new_text, ++ filename, old_text) ++ if not self.nobackups: ++ shutil.copymode(backup, filename) ++ ++ def print_output(self, lines): ++ for line in lines: ++ print line ++ ++ ++def main(fixer_pkg, args=None): ++ """Main program. ++ ++ Args: ++ fixer_pkg: the name of a package where the fixers are located. ++ args: optional; a list of command line arguments. If omitted, ++ sys.argv[1:] is used. ++ ++ Returns a suggested exit status (0, 1, 2). 
++ """ ++ # Set up option parser ++ parser = optparse.OptionParser(usage="2to3 [options] file|dir ...") ++ parser.add_option("-d", "--doctests_only", action="store_true", ++ help="Fix up doctests only") ++ parser.add_option("-f", "--fix", action="append", default=[], ++ help="Each FIX specifies a transformation; default: all") ++ parser.add_option("-x", "--nofix", action="append", default=[], ++ help="Prevent a fixer from being run.") ++ parser.add_option("-l", "--list-fixes", action="store_true", ++ help="List available transformations (fixes/fix_*.py)") ++ parser.add_option("-p", "--print-function", action="store_true", ++ help="Modify the grammar so that print() is a function") ++ parser.add_option("-v", "--verbose", action="store_true", ++ help="More verbose logging") ++ parser.add_option("-w", "--write", action="store_true", ++ help="Write back modified files") ++ parser.add_option("-n", "--nobackups", action="store_true", default=False, ++ help="Don't write backups for modified files.") ++ ++ # Parse command line arguments ++ refactor_stdin = False ++ options, args = parser.parse_args(args) ++ if not options.write and options.nobackups: ++ parser.error("Can't use -n without -w") ++ if options.list_fixes: ++ print "Available transformations for the -f/--fix option:" ++ for fixname in refactor.get_all_fix_names(fixer_pkg): ++ print fixname ++ if not args: ++ return 0 ++ if not args: ++ print >>sys.stderr, "At least one file or directory argument required." ++ print >>sys.stderr, "Use --help to show usage." ++ return 2 ++ if "-" in args: ++ refactor_stdin = True ++ if options.write: ++ print >>sys.stderr, "Can't write to stdin." ++ return 2 ++ ++ # Set up logging handler ++ level = logging.DEBUG if options.verbose else logging.INFO ++ logging.basicConfig(format='%(name)s: %(message)s', level=level) ++ ++ # Initialize the refactoring tool ++ rt_opts = {"print_function" : options.print_function} ++ avail_fixes = set(refactor.get_fixers_from_package(fixer_pkg)) ++ unwanted_fixes = set(fixer_pkg + ".fix_" + fix for fix in options.nofix) ++ explicit = set() ++ if options.fix: ++ all_present = False ++ for fix in options.fix: ++ if fix == "all": ++ all_present = True ++ else: ++ explicit.add(fixer_pkg + ".fix_" + fix) ++ requested = avail_fixes.union(explicit) if all_present else explicit ++ else: ++ requested = avail_fixes.union(explicit) ++ fixer_names = requested.difference(unwanted_fixes) ++ rt = StdoutRefactoringTool(sorted(fixer_names), rt_opts, sorted(explicit), ++ options.nobackups) ++ ++ # Refactor all files and directories passed as arguments ++ if not rt.errors: ++ if refactor_stdin: ++ rt.refactor_stdin() ++ else: ++ rt.refactor(args, options.write, options.doctests_only) ++ rt.summarize() ++ ++ # Return error status (0 if rt.errors is zero) ++ return int(bool(rt.errors)) +diff -r 531f2e948299 refactor/.svn/text-base/patcomp.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/.svn/text-base/patcomp.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,186 @@ ++# Copyright 2006 Google, Inc. All Rights Reserved. ++# Licensed to PSF under a Contributor Agreement. ++ ++"""Pattern compiler. ++ ++The grammer is taken from PatternGrammar.txt. ++ ++The compiler compiles a pattern to a pytree.*Pattern instance. ++""" ++ ++__author__ = "Guido van Rossum " ++ ++# Python imports ++import os ++ ++# Fairly local imports ++from .pgen2 import driver ++from .pgen2 import literals ++from .pgen2 import token ++from .pgen2 import tokenize ++ ++# Really local imports ++from . 
import pytree ++from . import pygram ++ ++# The pattern grammar file ++_PATTERN_GRAMMAR_FILE = os.path.join(os.path.dirname(__file__), ++ "PatternGrammar.txt") ++ ++ ++def tokenize_wrapper(input): ++ """Tokenizes a string suppressing significant whitespace.""" ++ skip = set((token.NEWLINE, token.INDENT, token.DEDENT)) ++ tokens = tokenize.generate_tokens(driver.generate_lines(input).next) ++ for quintuple in tokens: ++ type, value, start, end, line_text = quintuple ++ if type not in skip: ++ yield quintuple ++ ++ ++class PatternCompiler(object): ++ ++ def __init__(self, grammar_file=_PATTERN_GRAMMAR_FILE): ++ """Initializer. ++ ++ Takes an optional alternative filename for the pattern grammar. ++ """ ++ self.grammar = driver.load_grammar(grammar_file) ++ self.syms = pygram.Symbols(self.grammar) ++ self.pygrammar = pygram.python_grammar ++ self.pysyms = pygram.python_symbols ++ self.driver = driver.Driver(self.grammar, convert=pattern_convert) ++ ++ def compile_pattern(self, input, debug=False): ++ """Compiles a pattern string to a nested pytree.*Pattern object.""" ++ tokens = tokenize_wrapper(input) ++ root = self.driver.parse_tokens(tokens, debug=debug) ++ return self.compile_node(root) ++ ++ def compile_node(self, node): ++ """Compiles a node, recursively. ++ ++ This is one big switch on the node type. ++ """ ++ # XXX Optimize certain Wildcard-containing-Wildcard patterns ++ # that can be merged ++ if node.type == self.syms.Matcher: ++ node = node.children[0] # Avoid unneeded recursion ++ ++ if node.type == self.syms.Alternatives: ++ # Skip the odd children since they are just '|' tokens ++ alts = [self.compile_node(ch) for ch in node.children[::2]] ++ if len(alts) == 1: ++ return alts[0] ++ p = pytree.WildcardPattern([[a] for a in alts], min=1, max=1) ++ return p.optimize() ++ ++ if node.type == self.syms.Alternative: ++ units = [self.compile_node(ch) for ch in node.children] ++ if len(units) == 1: ++ return units[0] ++ p = pytree.WildcardPattern([units], min=1, max=1) ++ return p.optimize() ++ ++ if node.type == self.syms.NegatedUnit: ++ pattern = self.compile_basic(node.children[1:]) ++ p = pytree.NegatedPattern(pattern) ++ return p.optimize() ++ ++ assert node.type == self.syms.Unit ++ ++ name = None ++ nodes = node.children ++ if len(nodes) >= 3 and nodes[1].type == token.EQUAL: ++ name = nodes[0].value ++ nodes = nodes[2:] ++ repeat = None ++ if len(nodes) >= 2 and nodes[-1].type == self.syms.Repeater: ++ repeat = nodes[-1] ++ nodes = nodes[:-1] ++ ++ # Now we've reduced it to: STRING | NAME [Details] | (...) | [...] ++ pattern = self.compile_basic(nodes, repeat) ++ ++ if repeat is not None: ++ assert repeat.type == self.syms.Repeater ++ children = repeat.children ++ child = children[0] ++ if child.type == token.STAR: ++ min = 0 ++ max = pytree.HUGE ++ elif child.type == token.PLUS: ++ min = 1 ++ max = pytree.HUGE ++ elif child.type == token.LBRACE: ++ assert children[-1].type == token.RBRACE ++ assert len(children) in (3, 5) ++ min = max = self.get_int(children[1]) ++ if len(children) == 5: ++ max = self.get_int(children[3]) ++ else: ++ assert False ++ if min != 1 or max != 1: ++ pattern = pattern.optimize() ++ pattern = pytree.WildcardPattern([[pattern]], min=min, max=max) ++ ++ if name is not None: ++ pattern.name = name ++ return pattern.optimize() ++ ++ def compile_basic(self, nodes, repeat=None): ++ # Compile STRING | NAME [Details] | (...) | [...] 
++ assert len(nodes) >= 1 ++ node = nodes[0] ++ if node.type == token.STRING: ++ value = literals.evalString(node.value) ++ return pytree.LeafPattern(content=value) ++ elif node.type == token.NAME: ++ value = node.value ++ if value.isupper(): ++ if value not in TOKEN_MAP: ++ raise SyntaxError("Invalid token: %r" % value) ++ return pytree.LeafPattern(TOKEN_MAP[value]) ++ else: ++ if value == "any": ++ type = None ++ elif not value.startswith("_"): ++ type = getattr(self.pysyms, value, None) ++ if type is None: ++ raise SyntaxError("Invalid symbol: %r" % value) ++ if nodes[1:]: # Details present ++ content = [self.compile_node(nodes[1].children[1])] ++ else: ++ content = None ++ return pytree.NodePattern(type, content) ++ elif node.value == "(": ++ return self.compile_node(nodes[1]) ++ elif node.value == "[": ++ assert repeat is None ++ subpattern = self.compile_node(nodes[1]) ++ return pytree.WildcardPattern([[subpattern]], min=0, max=1) ++ assert False, node ++ ++ def get_int(self, node): ++ assert node.type == token.NUMBER ++ return int(node.value) ++ ++ ++# Map named tokens to the type value for a LeafPattern ++TOKEN_MAP = {"NAME": token.NAME, ++ "STRING": token.STRING, ++ "NUMBER": token.NUMBER, ++ "TOKEN": None} ++ ++ ++def pattern_convert(grammar, raw_node_info): ++ """Converts raw node information to a Node or Leaf instance.""" ++ type, value, context, children = raw_node_info ++ if children or type in grammar.number2symbol: ++ return pytree.Node(type, children, context=context) ++ else: ++ return pytree.Leaf(type, value, context=context) ++ ++ ++def compile_pattern(pattern): ++ return PatternCompiler().compile_pattern(pattern) +diff -r 531f2e948299 refactor/.svn/text-base/pygram.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/.svn/text-base/pygram.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,31 @@ ++# Copyright 2006 Google, Inc. All Rights Reserved. ++# Licensed to PSF under a Contributor Agreement. ++ ++"""Export the Python grammar and symbols.""" ++ ++# Python imports ++import os ++ ++# Local imports ++from .pgen2 import token ++from .pgen2 import driver ++from . import pytree ++ ++# The grammar file ++_GRAMMAR_FILE = os.path.join(os.path.dirname(__file__), "Grammar.txt") ++ ++ ++class Symbols(object): ++ ++ def __init__(self, grammar): ++ """Initializer. ++ ++ Creates an attribute for each grammar symbol (nonterminal), ++ whose value is the symbol's type (an int >= 256). ++ """ ++ for name, symbol in grammar.symbol2number.iteritems(): ++ setattr(self, name, symbol) ++ ++ ++python_grammar = driver.load_grammar(_GRAMMAR_FILE) ++python_symbols = Symbols(python_grammar) +diff -r 531f2e948299 refactor/.svn/text-base/pytree.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/.svn/text-base/pytree.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,846 @@ ++# Copyright 2006 Google, Inc. All Rights Reserved. ++# Licensed to PSF under a Contributor Agreement. ++ ++""" ++Python parse tree definitions. ++ ++This is a very concrete parse tree; we need to keep every token and ++even the comments and whitespace between tokens. ++ ++There's also a pattern matching implementation here. 
++""" ++ ++__author__ = "Guido van Rossum " ++ ++import sys ++from StringIO import StringIO ++ ++ ++HUGE = 0x7FFFFFFF # maximum repeat count, default max ++ ++_type_reprs = {} ++def type_repr(type_num): ++ global _type_reprs ++ if not _type_reprs: ++ from .pygram import python_symbols ++ # printing tokens is possible but not as useful ++ # from .pgen2 import token // token.__dict__.items(): ++ for name, val in python_symbols.__dict__.items(): ++ if type(val) == int: _type_reprs[val] = name ++ return _type_reprs.setdefault(type_num, type_num) ++ ++ ++class Base(object): ++ ++ """ ++ Abstract base class for Node and Leaf. ++ ++ This provides some default functionality and boilerplate using the ++ template pattern. ++ ++ A node may be a subnode of at most one parent. ++ """ ++ ++ # Default values for instance variables ++ type = None # int: token number (< 256) or symbol number (>= 256) ++ parent = None # Parent node pointer, or None ++ children = () # Tuple of subnodes ++ was_changed = False ++ ++ def __new__(cls, *args, **kwds): ++ """Constructor that prevents Base from being instantiated.""" ++ assert cls is not Base, "Cannot instantiate Base" ++ return object.__new__(cls) ++ ++ def __eq__(self, other): ++ """ ++ Compare two nodes for equality. ++ ++ This calls the method _eq(). ++ """ ++ if self.__class__ is not other.__class__: ++ return NotImplemented ++ return self._eq(other) ++ ++ def __ne__(self, other): ++ """ ++ Compare two nodes for inequality. ++ ++ This calls the method _eq(). ++ """ ++ if self.__class__ is not other.__class__: ++ return NotImplemented ++ return not self._eq(other) ++ ++ def _eq(self, other): ++ """ ++ Compare two nodes for equality. ++ ++ This is called by __eq__ and __ne__. It is only called if the two nodes ++ have the same type. This must be implemented by the concrete subclass. ++ Nodes should be considered equal if they have the same structure, ++ ignoring the prefix string and other context information. ++ """ ++ raise NotImplementedError ++ ++ def clone(self): ++ """ ++ Return a cloned (deep) copy of self. ++ ++ This must be implemented by the concrete subclass. ++ """ ++ raise NotImplementedError ++ ++ def post_order(self): ++ """ ++ Return a post-order iterator for the tree. ++ ++ This must be implemented by the concrete subclass. ++ """ ++ raise NotImplementedError ++ ++ def pre_order(self): ++ """ ++ Return a pre-order iterator for the tree. ++ ++ This must be implemented by the concrete subclass. ++ """ ++ raise NotImplementedError ++ ++ def set_prefix(self, prefix): ++ """ ++ Set the prefix for the node (see Leaf class). ++ ++ This must be implemented by the concrete subclass. ++ """ ++ raise NotImplementedError ++ ++ def get_prefix(self): ++ """ ++ Return the prefix for the node (see Leaf class). ++ ++ This must be implemented by the concrete subclass. 
++ """ ++ raise NotImplementedError ++ ++ def replace(self, new): ++ """Replace this node with a new one in the parent.""" ++ assert self.parent is not None, str(self) ++ assert new is not None ++ if not isinstance(new, list): ++ new = [new] ++ l_children = [] ++ found = False ++ for ch in self.parent.children: ++ if ch is self: ++ assert not found, (self.parent.children, self, new) ++ if new is not None: ++ l_children.extend(new) ++ found = True ++ else: ++ l_children.append(ch) ++ assert found, (self.children, self, new) ++ self.parent.changed() ++ self.parent.children = l_children ++ for x in new: ++ x.parent = self.parent ++ self.parent = None ++ ++ def get_lineno(self): ++ """Return the line number which generated the invocant node.""" ++ node = self ++ while not isinstance(node, Leaf): ++ if not node.children: ++ return ++ node = node.children[0] ++ return node.lineno ++ ++ def changed(self): ++ if self.parent: ++ self.parent.changed() ++ self.was_changed = True ++ ++ def remove(self): ++ """ ++ Remove the node from the tree. Returns the position of the node in its ++ parent's children before it was removed. ++ """ ++ if self.parent: ++ for i, node in enumerate(self.parent.children): ++ if node is self: ++ self.parent.changed() ++ del self.parent.children[i] ++ self.parent = None ++ return i ++ ++ @property ++ def next_sibling(self): ++ """ ++ The node immediately following the invocant in their parent's children ++ list. If the invocant does not have a next sibling, it is None ++ """ ++ if self.parent is None: ++ return None ++ ++ # Can't use index(); we need to test by identity ++ for i, child in enumerate(self.parent.children): ++ if child is self: ++ try: ++ return self.parent.children[i+1] ++ except IndexError: ++ return None ++ ++ @property ++ def prev_sibling(self): ++ """ ++ The node immediately preceding the invocant in their parent's children ++ list. If the invocant does not have a previous sibling, it is None. ++ """ ++ if self.parent is None: ++ return None ++ ++ # Can't use index(); we need to test by identity ++ for i, child in enumerate(self.parent.children): ++ if child is self: ++ if i == 0: ++ return None ++ return self.parent.children[i-1] ++ ++ def get_suffix(self): ++ """ ++ Return the string immediately following the invocant node. This is ++ effectively equivalent to node.next_sibling.get_prefix() ++ """ ++ next_sib = self.next_sibling ++ if next_sib is None: ++ return "" ++ return next_sib.get_prefix() ++ ++ ++class Node(Base): ++ ++ """Concrete implementation for interior nodes.""" ++ ++ def __init__(self, type, children, context=None, prefix=None): ++ """ ++ Initializer. ++ ++ Takes a type constant (a symbol number >= 256), a sequence of ++ child nodes, and an optional context keyword argument. ++ ++ As a side effect, the parent pointers of the children are updated. ++ """ ++ assert type >= 256, type ++ self.type = type ++ self.children = list(children) ++ for ch in self.children: ++ assert ch.parent is None, repr(ch) ++ ch.parent = self ++ if prefix is not None: ++ self.set_prefix(prefix) ++ ++ def __repr__(self): ++ """Return a canonical string representation.""" ++ return "%s(%s, %r)" % (self.__class__.__name__, ++ type_repr(self.type), ++ self.children) ++ ++ def __str__(self): ++ """ ++ Return a pretty string representation. ++ ++ This reproduces the input source exactly. 
++ """ ++ return "".join(map(str, self.children)) ++ ++ def _eq(self, other): ++ """Compare two nodes for equality.""" ++ return (self.type, self.children) == (other.type, other.children) ++ ++ def clone(self): ++ """Return a cloned (deep) copy of self.""" ++ return Node(self.type, [ch.clone() for ch in self.children]) ++ ++ def post_order(self): ++ """Return a post-order iterator for the tree.""" ++ for child in self.children: ++ for node in child.post_order(): ++ yield node ++ yield self ++ ++ def pre_order(self): ++ """Return a pre-order iterator for the tree.""" ++ yield self ++ for child in self.children: ++ for node in child.post_order(): ++ yield node ++ ++ def set_prefix(self, prefix): ++ """ ++ Set the prefix for the node. ++ ++ This passes the responsibility on to the first child. ++ """ ++ if self.children: ++ self.children[0].set_prefix(prefix) ++ ++ def get_prefix(self): ++ """ ++ Return the prefix for the node. ++ ++ This passes the call on to the first child. ++ """ ++ if not self.children: ++ return "" ++ return self.children[0].get_prefix() ++ ++ def set_child(self, i, child): ++ """ ++ Equivalent to 'node.children[i] = child'. This method also sets the ++ child's parent attribute appropriately. ++ """ ++ child.parent = self ++ self.children[i].parent = None ++ self.children[i] = child ++ self.changed() ++ ++ def insert_child(self, i, child): ++ """ ++ Equivalent to 'node.children.insert(i, child)'. This method also sets ++ the child's parent attribute appropriately. ++ """ ++ child.parent = self ++ self.children.insert(i, child) ++ self.changed() ++ ++ def append_child(self, child): ++ """ ++ Equivalent to 'node.children.append(child)'. This method also sets the ++ child's parent attribute appropriately. ++ """ ++ child.parent = self ++ self.children.append(child) ++ self.changed() ++ ++ ++class Leaf(Base): ++ ++ """Concrete implementation for leaf nodes.""" ++ ++ # Default values for instance variables ++ prefix = "" # Whitespace and comments preceding this token in the input ++ lineno = 0 # Line where this token starts in the input ++ column = 0 # Column where this token tarts in the input ++ ++ def __init__(self, type, value, context=None, prefix=None): ++ """ ++ Initializer. ++ ++ Takes a type constant (a token number < 256), a string value, and an ++ optional context keyword argument. ++ """ ++ assert 0 <= type < 256, type ++ if context is not None: ++ self.prefix, (self.lineno, self.column) = context ++ self.type = type ++ self.value = value ++ if prefix is not None: ++ self.prefix = prefix ++ ++ def __repr__(self): ++ """Return a canonical string representation.""" ++ return "%s(%r, %r)" % (self.__class__.__name__, ++ self.type, ++ self.value) ++ ++ def __str__(self): ++ """ ++ Return a pretty string representation. ++ ++ This reproduces the input source exactly. 
++ """ ++ return self.prefix + str(self.value) ++ ++ def _eq(self, other): ++ """Compare two nodes for equality.""" ++ return (self.type, self.value) == (other.type, other.value) ++ ++ def clone(self): ++ """Return a cloned (deep) copy of self.""" ++ return Leaf(self.type, self.value, ++ (self.prefix, (self.lineno, self.column))) ++ ++ def post_order(self): ++ """Return a post-order iterator for the tree.""" ++ yield self ++ ++ def pre_order(self): ++ """Return a pre-order iterator for the tree.""" ++ yield self ++ ++ def set_prefix(self, prefix): ++ """Set the prefix for the node.""" ++ self.changed() ++ self.prefix = prefix ++ ++ def get_prefix(self): ++ """Return the prefix for the node.""" ++ return self.prefix ++ ++ ++def convert(gr, raw_node): ++ """ ++ Convert raw node information to a Node or Leaf instance. ++ ++ This is passed to the parser driver which calls it whenever a reduction of a ++ grammar rule produces a new complete node, so that the tree is build ++ strictly bottom-up. ++ """ ++ type, value, context, children = raw_node ++ if children or type in gr.number2symbol: ++ # If there's exactly one child, return that child instead of ++ # creating a new node. ++ if len(children) == 1: ++ return children[0] ++ return Node(type, children, context=context) ++ else: ++ return Leaf(type, value, context=context) ++ ++ ++class BasePattern(object): ++ ++ """ ++ A pattern is a tree matching pattern. ++ ++ It looks for a specific node type (token or symbol), and ++ optionally for a specific content. ++ ++ This is an abstract base class. There are three concrete ++ subclasses: ++ ++ - LeafPattern matches a single leaf node; ++ - NodePattern matches a single node (usually non-leaf); ++ - WildcardPattern matches a sequence of nodes of variable length. ++ """ ++ ++ # Defaults for instance variables ++ type = None # Node type (token if < 256, symbol if >= 256) ++ content = None # Optional content matching pattern ++ name = None # Optional name used to store match in results dict ++ ++ def __new__(cls, *args, **kwds): ++ """Constructor that prevents BasePattern from being instantiated.""" ++ assert cls is not BasePattern, "Cannot instantiate BasePattern" ++ return object.__new__(cls) ++ ++ def __repr__(self): ++ args = [type_repr(self.type), self.content, self.name] ++ while args and args[-1] is None: ++ del args[-1] ++ return "%s(%s)" % (self.__class__.__name__, ", ".join(map(repr, args))) ++ ++ def optimize(self): ++ """ ++ A subclass can define this as a hook for optimizations. ++ ++ Returns either self or another node with the same effect. ++ """ ++ return self ++ ++ def match(self, node, results=None): ++ """ ++ Does this pattern exactly match a node? ++ ++ Returns True if it matches, False if not. ++ ++ If results is not None, it must be a dict which will be ++ updated with the nodes matching named subpatterns. ++ ++ Default implementation for non-wildcard patterns. ++ """ ++ if self.type is not None and node.type != self.type: ++ return False ++ if self.content is not None: ++ r = None ++ if results is not None: ++ r = {} ++ if not self._submatch(node, r): ++ return False ++ if r: ++ results.update(r) ++ if results is not None and self.name: ++ results[self.name] = node ++ return True ++ ++ def match_seq(self, nodes, results=None): ++ """ ++ Does this pattern exactly match a sequence of nodes? ++ ++ Default implementation for non-wildcard patterns. 
++ """ ++ if len(nodes) != 1: ++ return False ++ return self.match(nodes[0], results) ++ ++ def generate_matches(self, nodes): ++ """ ++ Generator yielding all matches for this pattern. ++ ++ Default implementation for non-wildcard patterns. ++ """ ++ r = {} ++ if nodes and self.match(nodes[0], r): ++ yield 1, r ++ ++ ++class LeafPattern(BasePattern): ++ ++ def __init__(self, type=None, content=None, name=None): ++ """ ++ Initializer. Takes optional type, content, and name. ++ ++ The type, if given must be a token type (< 256). If not given, ++ this matches any *leaf* node; the content may still be required. ++ ++ The content, if given, must be a string. ++ ++ If a name is given, the matching node is stored in the results ++ dict under that key. ++ """ ++ if type is not None: ++ assert 0 <= type < 256, type ++ if content is not None: ++ assert isinstance(content, basestring), repr(content) ++ self.type = type ++ self.content = content ++ self.name = name ++ ++ def match(self, node, results=None): ++ """Override match() to insist on a leaf node.""" ++ if not isinstance(node, Leaf): ++ return False ++ return BasePattern.match(self, node, results) ++ ++ def _submatch(self, node, results=None): ++ """ ++ Match the pattern's content to the node's children. ++ ++ This assumes the node type matches and self.content is not None. ++ ++ Returns True if it matches, False if not. ++ ++ If results is not None, it must be a dict which will be ++ updated with the nodes matching named subpatterns. ++ ++ When returning False, the results dict may still be updated. ++ """ ++ return self.content == node.value ++ ++ ++class NodePattern(BasePattern): ++ ++ wildcards = False ++ ++ def __init__(self, type=None, content=None, name=None): ++ """ ++ Initializer. Takes optional type, content, and name. ++ ++ The type, if given, must be a symbol type (>= 256). If the ++ type is None this matches *any* single node (leaf or not), ++ except if content is not None, in which it only matches ++ non-leaf nodes that also match the content pattern. ++ ++ The content, if not None, must be a sequence of Patterns that ++ must match the node's children exactly. If the content is ++ given, the type must not be None. ++ ++ If a name is given, the matching node is stored in the results ++ dict under that key. ++ """ ++ if type is not None: ++ assert type >= 256, type ++ if content is not None: ++ assert not isinstance(content, basestring), repr(content) ++ content = list(content) ++ for i, item in enumerate(content): ++ assert isinstance(item, BasePattern), (i, item) ++ if isinstance(item, WildcardPattern): ++ self.wildcards = True ++ self.type = type ++ self.content = content ++ self.name = name ++ ++ def _submatch(self, node, results=None): ++ """ ++ Match the pattern's content to the node's children. ++ ++ This assumes the node type matches and self.content is not None. ++ ++ Returns True if it matches, False if not. ++ ++ If results is not None, it must be a dict which will be ++ updated with the nodes matching named subpatterns. ++ ++ When returning False, the results dict may still be updated. 
++ """ ++ if self.wildcards: ++ for c, r in generate_matches(self.content, node.children): ++ if c == len(node.children): ++ if results is not None: ++ results.update(r) ++ return True ++ return False ++ if len(self.content) != len(node.children): ++ return False ++ for subpattern, child in zip(self.content, node.children): ++ if not subpattern.match(child, results): ++ return False ++ return True ++ ++ ++class WildcardPattern(BasePattern): ++ ++ """ ++ A wildcard pattern can match zero or more nodes. ++ ++ This has all the flexibility needed to implement patterns like: ++ ++ .* .+ .? .{m,n} ++ (a b c | d e | f) ++ (...)* (...)+ (...)? (...){m,n} ++ ++ except it always uses non-greedy matching. ++ """ ++ ++ def __init__(self, content=None, min=0, max=HUGE, name=None): ++ """ ++ Initializer. ++ ++ Args: ++ content: optional sequence of subsequences of patterns; ++ if absent, matches one node; ++ if present, each subsequence is an alternative [*] ++ min: optinal minumum number of times to match, default 0 ++ max: optional maximum number of times tro match, default HUGE ++ name: optional name assigned to this match ++ ++ [*] Thus, if content is [[a, b, c], [d, e], [f, g, h]] this is ++ equivalent to (a b c | d e | f g h); if content is None, ++ this is equivalent to '.' in regular expression terms. ++ The min and max parameters work as follows: ++ min=0, max=maxint: .* ++ min=1, max=maxint: .+ ++ min=0, max=1: .? ++ min=1, max=1: . ++ If content is not None, replace the dot with the parenthesized ++ list of alternatives, e.g. (a b c | d e | f g h)* ++ """ ++ assert 0 <= min <= max <= HUGE, (min, max) ++ if content is not None: ++ content = tuple(map(tuple, content)) # Protect against alterations ++ # Check sanity of alternatives ++ assert len(content), repr(content) # Can't have zero alternatives ++ for alt in content: ++ assert len(alt), repr(alt) # Can have empty alternatives ++ self.content = content ++ self.min = min ++ self.max = max ++ self.name = name ++ ++ def optimize(self): ++ """Optimize certain stacked wildcard patterns.""" ++ subpattern = None ++ if (self.content is not None and ++ len(self.content) == 1 and len(self.content[0]) == 1): ++ subpattern = self.content[0][0] ++ if self.min == 1 and self.max == 1: ++ if self.content is None: ++ return NodePattern(name=self.name) ++ if subpattern is not None and self.name == subpattern.name: ++ return subpattern.optimize() ++ if (self.min <= 1 and isinstance(subpattern, WildcardPattern) and ++ subpattern.min <= 1 and self.name == subpattern.name): ++ return WildcardPattern(subpattern.content, ++ self.min*subpattern.min, ++ self.max*subpattern.max, ++ subpattern.name) ++ return self ++ ++ def match(self, node, results=None): ++ """Does this pattern exactly match a node?""" ++ return self.match_seq([node], results) ++ ++ def match_seq(self, nodes, results=None): ++ """Does this pattern exactly match a sequence of nodes?""" ++ for c, r in self.generate_matches(nodes): ++ if c == len(nodes): ++ if results is not None: ++ results.update(r) ++ if self.name: ++ results[self.name] = list(nodes) ++ return True ++ return False ++ ++ def generate_matches(self, nodes): ++ """ ++ Generator yielding matches for a sequence of nodes. ++ ++ Args: ++ nodes: sequence of nodes ++ ++ Yields: ++ (count, results) tuples where: ++ count: the match comprises nodes[:count]; ++ results: dict containing named submatches. 
++ """ ++ if self.content is None: ++ # Shortcut for special case (see __init__.__doc__) ++ for count in xrange(self.min, 1 + min(len(nodes), self.max)): ++ r = {} ++ if self.name: ++ r[self.name] = nodes[:count] ++ yield count, r ++ elif self.name == "bare_name": ++ yield self._bare_name_matches(nodes) ++ else: ++ # The reason for this is that hitting the recursion limit usually ++ # results in some ugly messages about how RuntimeErrors are being ++ # ignored. ++ save_stderr = sys.stderr ++ sys.stderr = StringIO() ++ try: ++ for count, r in self._recursive_matches(nodes, 0): ++ if self.name: ++ r[self.name] = nodes[:count] ++ yield count, r ++ except RuntimeError: ++ # We fall back to the iterative pattern matching scheme if the recursive ++ # scheme hits the recursion limit. ++ for count, r in self._iterative_matches(nodes): ++ if self.name: ++ r[self.name] = nodes[:count] ++ yield count, r ++ finally: ++ sys.stderr = save_stderr ++ ++ def _iterative_matches(self, nodes): ++ """Helper to iteratively yield the matches.""" ++ nodelen = len(nodes) ++ if 0 >= self.min: ++ yield 0, {} ++ ++ results = [] ++ # generate matches that use just one alt from self.content ++ for alt in self.content: ++ for c, r in generate_matches(alt, nodes): ++ yield c, r ++ results.append((c, r)) ++ ++ # for each match, iterate down the nodes ++ while results: ++ new_results = [] ++ for c0, r0 in results: ++ # stop if the entire set of nodes has been matched ++ if c0 < nodelen and c0 <= self.max: ++ for alt in self.content: ++ for c1, r1 in generate_matches(alt, nodes[c0:]): ++ if c1 > 0: ++ r = {} ++ r.update(r0) ++ r.update(r1) ++ yield c0 + c1, r ++ new_results.append((c0 + c1, r)) ++ results = new_results ++ ++ def _bare_name_matches(self, nodes): ++ """Special optimized matcher for bare_name.""" ++ count = 0 ++ r = {} ++ done = False ++ max = len(nodes) ++ while not done and count < max: ++ done = True ++ for leaf in self.content: ++ if leaf[0].match(nodes[count], r): ++ count += 1 ++ done = False ++ break ++ r[self.name] = nodes[:count] ++ return count, r ++ ++ def _recursive_matches(self, nodes, count): ++ """Helper to recursively yield the matches.""" ++ assert self.content is not None ++ if count >= self.min: ++ yield 0, {} ++ if count < self.max: ++ for alt in self.content: ++ for c0, r0 in generate_matches(alt, nodes): ++ for c1, r1 in self._recursive_matches(nodes[c0:], count+1): ++ r = {} ++ r.update(r0) ++ r.update(r1) ++ yield c0 + c1, r ++ ++ ++class NegatedPattern(BasePattern): ++ ++ def __init__(self, content=None): ++ """ ++ Initializer. ++ ++ The argument is either a pattern or None. If it is None, this ++ only matches an empty sequence (effectively '$' in regex ++ lingo). If it is not None, this matches whenever the argument ++ pattern doesn't have any matches. 
++ """ ++ if content is not None: ++ assert isinstance(content, BasePattern), repr(content) ++ self.content = content ++ ++ def match(self, node): ++ # We never match a node in its entirety ++ return False ++ ++ def match_seq(self, nodes): ++ # We only match an empty sequence of nodes in its entirety ++ return len(nodes) == 0 ++ ++ def generate_matches(self, nodes): ++ if self.content is None: ++ # Return a match if there is an empty sequence ++ if len(nodes) == 0: ++ yield 0, {} ++ else: ++ # Return a match if the argument pattern has no matches ++ for c, r in self.content.generate_matches(nodes): ++ return ++ yield 0, {} ++ ++ ++def generate_matches(patterns, nodes): ++ """ ++ Generator yielding matches for a sequence of patterns and nodes. ++ ++ Args: ++ patterns: a sequence of patterns ++ nodes: a sequence of nodes ++ ++ Yields: ++ (count, results) tuples where: ++ count: the entire sequence of patterns matches nodes[:count]; ++ results: dict containing named submatches. ++ """ ++ if not patterns: ++ yield 0, {} ++ else: ++ p, rest = patterns[0], patterns[1:] ++ for c0, r0 in p.generate_matches(nodes): ++ if not rest: ++ yield c0, r0 ++ else: ++ for c1, r1 in generate_matches(rest, nodes[c0:]): ++ r = {} ++ r.update(r0) ++ r.update(r1) ++ yield c0 + c1, r +diff -r 531f2e948299 refactor/.svn/text-base/refactor.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/.svn/text-base/refactor.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,515 @@ ++#!/usr/bin/env python2.5 ++# Copyright 2006 Google, Inc. All Rights Reserved. ++# Licensed to PSF under a Contributor Agreement. ++ ++"""Refactoring framework. ++ ++Used as a main program, this can refactor any number of files and/or ++recursively descend down directories. Imported as a module, this ++provides infrastructure to write your own refactoring tool. ++""" ++ ++__author__ = "Guido van Rossum " ++ ++ ++# Python imports ++import os ++import sys ++import difflib ++import logging ++import operator ++from collections import defaultdict ++from itertools import chain ++ ++# Local imports ++from .pgen2 import driver ++from .pgen2 import tokenize ++ ++from . import pytree ++from . import patcomp ++from . import fixes ++from . import pygram ++ ++ ++def get_all_fix_names(fixer_pkg, remove_prefix=True): ++ """Return a sorted list of all available fix names in the given package.""" ++ pkg = __import__(fixer_pkg, [], [], ["*"]) ++ fixer_dir = os.path.dirname(pkg.__file__) ++ fix_names = [] ++ for name in sorted(os.listdir(fixer_dir)): ++ if name.startswith("fix_") and name.endswith(".py"): ++ if remove_prefix: ++ name = name[4:] ++ fix_names.append(name[:-3]) ++ return fix_names ++ ++def get_head_types(pat): ++ """ Accepts a pytree Pattern Node and returns a set ++ of the pattern types which will match first. """ ++ ++ if isinstance(pat, (pytree.NodePattern, pytree.LeafPattern)): ++ # NodePatters must either have no type and no content ++ # or a type and content -- so they don't get any farther ++ # Always return leafs ++ return set([pat.type]) ++ ++ if isinstance(pat, pytree.NegatedPattern): ++ if pat.content: ++ return get_head_types(pat.content) ++ return set([None]) # Negated Patterns don't have a type ++ ++ if isinstance(pat, pytree.WildcardPattern): ++ # Recurse on each node in content ++ r = set() ++ for p in pat.content: ++ for x in p: ++ r.update(get_head_types(x)) ++ return r ++ ++ raise Exception("Oh no! 
++
++def get_headnode_dict(fixer_list):
++    """ Accepts a list of fixers and returns a dictionary
++        of head node type --> fixer list.  """
++    head_nodes = defaultdict(list)
++    for fixer in fixer_list:
++        if not fixer.pattern:
++            head_nodes[None].append(fixer)
++            continue
++        for t in get_head_types(fixer.pattern):
++            head_nodes[t].append(fixer)
++    return head_nodes
++
++def get_fixers_from_package(pkg_name):
++    """
++    Return the fully qualified names for fixers in the package pkg_name.
++    """
++    return [pkg_name + "." + fix_name
++            for fix_name in get_all_fix_names(pkg_name, False)]
++
++
++class FixerError(Exception):
++    """A fixer could not be loaded."""
++
++
++class RefactoringTool(object):
++
++    _default_options = {"print_function": False}
++
++    CLASS_PREFIX = "Fix"   # The prefix for fixer classes
++    FILE_PREFIX = "fix_"   # The prefix for modules with a fixer within
++
++    def __init__(self, fixer_names, options=None, explicit=None):
++        """Initializer.
++
++        Args:
++            fixer_names: a list of fixers to import
++            options: a dict with configuration.
++            explicit: a list of fixers to run even if they are marked
++                explicit (i.e. normally skipped).
++        """
++        self.fixers = fixer_names
++        self.explicit = explicit or []
++        self.options = self._default_options.copy()
++        if options is not None:
++            self.options.update(options)
++        self.errors = []
++        self.logger = logging.getLogger("RefactoringTool")
++        self.fixer_log = []
++        self.wrote = False
++        if self.options["print_function"]:
++            del pygram.python_grammar.keywords["print"]
++        self.driver = driver.Driver(pygram.python_grammar,
++                                    convert=pytree.convert,
++                                    logger=self.logger)
++        self.pre_order, self.post_order = self.get_fixers()
++
++        self.pre_order_heads = get_headnode_dict(self.pre_order)
++        self.post_order_heads = get_headnode_dict(self.post_order)
++
++        self.files = []  # List of files that were or should be modified
++
++    def get_fixers(self):
++        """Inspects the options to load the requested patterns and handlers.
++
++        Returns:
++          (pre_order, post_order), where pre_order is the list of fixers that
++          want a pre-order AST traversal, and post_order is the list that want
++          post-order traversal.
++ """ ++ pre_order_fixers = [] ++ post_order_fixers = [] ++ for fix_mod_path in self.fixers: ++ mod = __import__(fix_mod_path, {}, {}, ["*"]) ++ fix_name = fix_mod_path.rsplit(".", 1)[-1] ++ if fix_name.startswith(self.FILE_PREFIX): ++ fix_name = fix_name[len(self.FILE_PREFIX):] ++ parts = fix_name.split("_") ++ class_name = self.CLASS_PREFIX + "".join([p.title() for p in parts]) ++ try: ++ fix_class = getattr(mod, class_name) ++ except AttributeError: ++ raise FixerError("Can't find %s.%s" % (fix_name, class_name)) ++ fixer = fix_class(self.options, self.fixer_log) ++ if fixer.explicit and self.explicit is not True and \ ++ fix_mod_path not in self.explicit: ++ self.log_message("Skipping implicit fixer: %s", fix_name) ++ continue ++ ++ self.log_debug("Adding transformation: %s", fix_name) ++ if fixer.order == "pre": ++ pre_order_fixers.append(fixer) ++ elif fixer.order == "post": ++ post_order_fixers.append(fixer) ++ else: ++ raise FixerError("Illegal fixer order: %r" % fixer.order) ++ ++ key_func = operator.attrgetter("run_order") ++ pre_order_fixers.sort(key=key_func) ++ post_order_fixers.sort(key=key_func) ++ return (pre_order_fixers, post_order_fixers) ++ ++ def log_error(self, msg, *args, **kwds): ++ """Called when an error occurs.""" ++ raise ++ ++ def log_message(self, msg, *args): ++ """Hook to log a message.""" ++ if args: ++ msg = msg % args ++ self.logger.info(msg) ++ ++ def log_debug(self, msg, *args): ++ if args: ++ msg = msg % args ++ self.logger.debug(msg) ++ ++ def print_output(self, lines): ++ """Called with lines of output to give to the user.""" ++ pass ++ ++ def refactor(self, items, write=False, doctests_only=False): ++ """Refactor a list of files and directories.""" ++ for dir_or_file in items: ++ if os.path.isdir(dir_or_file): ++ self.refactor_dir(dir_or_file, write, doctests_only) ++ else: ++ self.refactor_file(dir_or_file, write, doctests_only) ++ ++ def refactor_dir(self, dir_name, write=False, doctests_only=False): ++ """Descends down a directory and refactor every Python file found. ++ ++ Python files are assumed to have a .py extension. ++ ++ Files and subdirectories starting with '.' are skipped. 
++ """ ++ for dirpath, dirnames, filenames in os.walk(dir_name): ++ self.log_debug("Descending into %s", dirpath) ++ dirnames.sort() ++ filenames.sort() ++ for name in filenames: ++ if not name.startswith(".") and name.endswith("py"): ++ fullname = os.path.join(dirpath, name) ++ self.refactor_file(fullname, write, doctests_only) ++ # Modify dirnames in-place to remove subdirs with leading dots ++ dirnames[:] = [dn for dn in dirnames if not dn.startswith(".")] ++ ++ def refactor_file(self, filename, write=False, doctests_only=False): ++ """Refactors a file.""" ++ try: ++ f = open(filename) ++ except IOError, err: ++ self.log_error("Can't open %s: %s", filename, err) ++ return ++ try: ++ input = f.read() + "\n" # Silence certain parse errors ++ finally: ++ f.close() ++ if doctests_only: ++ self.log_debug("Refactoring doctests in %s", filename) ++ output = self.refactor_docstring(input, filename) ++ if output != input: ++ self.processed_file(output, filename, input, write=write) ++ else: ++ self.log_debug("No doctest changes in %s", filename) ++ else: ++ tree = self.refactor_string(input, filename) ++ if tree and tree.was_changed: ++ # The [:-1] is to take off the \n we added earlier ++ self.processed_file(str(tree)[:-1], filename, write=write) ++ else: ++ self.log_debug("No changes in %s", filename) ++ ++ def refactor_string(self, data, name): ++ """Refactor a given input string. ++ ++ Args: ++ data: a string holding the code to be refactored. ++ name: a human-readable name for use in error/log messages. ++ ++ Returns: ++ An AST corresponding to the refactored input stream; None if ++ there were errors during the parse. ++ """ ++ try: ++ tree = self.driver.parse_string(data) ++ except Exception, err: ++ self.log_error("Can't parse %s: %s: %s", ++ name, err.__class__.__name__, err) ++ return ++ self.log_debug("Refactoring %s", name) ++ self.refactor_tree(tree, name) ++ return tree ++ ++ def refactor_stdin(self, doctests_only=False): ++ input = sys.stdin.read() ++ if doctests_only: ++ self.log_debug("Refactoring doctests in stdin") ++ output = self.refactor_docstring(input, "") ++ if output != input: ++ self.processed_file(output, "", input) ++ else: ++ self.log_debug("No doctest changes in stdin") ++ else: ++ tree = self.refactor_string(input, "") ++ if tree and tree.was_changed: ++ self.processed_file(str(tree), "", input) ++ else: ++ self.log_debug("No changes in stdin") ++ ++ def refactor_tree(self, tree, name): ++ """Refactors a parse tree (modifying the tree in place). ++ ++ Args: ++ tree: a pytree.Node instance representing the root of the tree ++ to be refactored. ++ name: a human-readable name for this tree. ++ ++ Returns: ++ True if the tree was modified, False otherwise. ++ """ ++ for fixer in chain(self.pre_order, self.post_order): ++ fixer.start_tree(tree, name) ++ ++ self.traverse_by(self.pre_order_heads, tree.pre_order()) ++ self.traverse_by(self.post_order_heads, tree.post_order()) ++ ++ for fixer in chain(self.pre_order, self.post_order): ++ fixer.finish_tree(tree, name) ++ return tree.was_changed ++ ++ def traverse_by(self, fixers, traversal): ++ """Traverse an AST, applying a set of fixers to each node. ++ ++ This is a helper method for refactor_tree(). ++ ++ Args: ++ fixers: a list of fixer instances. ++ traversal: a generator that yields AST nodes. 
++
++        Returns:
++            None
++        """
++        if not fixers:
++            return
++        for node in traversal:
++            for fixer in fixers[node.type] + fixers[None]:
++                results = fixer.match(node)
++                if results:
++                    new = fixer.transform(node, results)
++                    if new is not None and (new != node or
++                                            str(new) != str(node)):
++                        node.replace(new)
++                        node = new
++
++    def processed_file(self, new_text, filename, old_text=None, write=False):
++        """
++        Called when a file has been refactored, and there are changes.
++        """
++        self.files.append(filename)
++        if old_text is None:
++            try:
++                f = open(filename, "r")
++            except IOError, err:
++                self.log_error("Can't read %s: %s", filename, err)
++                return
++            try:
++                old_text = f.read()
++            finally:
++                f.close()
++        if old_text == new_text:
++            self.log_debug("No changes to %s", filename)
++            return
++        self.print_output(diff_texts(old_text, new_text, filename))
++        if write:
++            self.write_file(new_text, filename, old_text)
++        else:
++            self.log_debug("Not writing changes to %s", filename)
++
++    def write_file(self, new_text, filename, old_text):
++        """Writes a string to a file.
++
++        The unified diff between old_text and new_text has already been
++        shown by processed_file(), which only calls this method when the
++        write option is set.
++        """
++        try:
++            f = open(filename, "w")
++        except os.error, err:
++            self.log_error("Can't create %s: %s", filename, err)
++            return
++        try:
++            f.write(new_text)
++        except os.error, err:
++            self.log_error("Can't write %s: %s", filename, err)
++        finally:
++            f.close()
++        self.log_debug("Wrote changes to %s", filename)
++        self.wrote = True
++
++    PS1 = ">>> "
++    PS2 = "... "
++
++    def refactor_docstring(self, input, filename):
++        """Refactors a docstring, looking for doctests.
++
++        This returns a modified version of the input string.  It looks
++        for doctests, which start with a ">>>" prompt, and may be
++        continued with "..." prompts, as long as the "..." is indented
++        the same as the ">>>".
++
++        (Unfortunately we can't use the doctest module's parser,
++        since, like most parsers, it is not geared towards preserving
++        the original source.)
++        """
++        result = []
++        block = None
++        block_lineno = None
++        indent = None
++        lineno = 0
++        for line in input.splitlines(True):
++            lineno += 1
++            if line.lstrip().startswith(self.PS1):
++                if block is not None:
++                    result.extend(self.refactor_doctest(block, block_lineno,
++                                                        indent, filename))
++                block_lineno = lineno
++                block = [line]
++                i = line.find(self.PS1)
++                indent = line[:i]
++            elif (indent is not None and
++                  (line.startswith(indent + self.PS2) or
++                   line == indent + self.PS2.rstrip() + "\n")):
++                block.append(line)
++            else:
++                if block is not None:
++                    result.extend(self.refactor_doctest(block, block_lineno,
++                                                        indent, filename))
++                block = None
++                indent = None
++                result.append(line)
++        if block is not None:
++            result.extend(self.refactor_doctest(block, block_lineno,
++                                                indent, filename))
++        return "".join(result)
++
++    def refactor_doctest(self, block, lineno, indent, filename):
++        """Refactors one doctest.
++
++        A doctest is given as a block of lines, the first of which starts
++        with ">>>" (possibly indented), while the remaining lines start
++        with "..." (identically indented).
++ ++ """ ++ try: ++ tree = self.parse_block(block, lineno, indent) ++ except Exception, err: ++ if self.log.isEnabledFor(logging.DEBUG): ++ for line in block: ++ self.log_debug("Source: %s", line.rstrip("\n")) ++ self.log_error("Can't parse docstring in %s line %s: %s: %s", ++ filename, lineno, err.__class__.__name__, err) ++ return block ++ if self.refactor_tree(tree, filename): ++ new = str(tree).splitlines(True) ++ # Undo the adjustment of the line numbers in wrap_toks() below. ++ clipped, new = new[:lineno-1], new[lineno-1:] ++ assert clipped == ["\n"] * (lineno-1), clipped ++ if not new[-1].endswith("\n"): ++ new[-1] += "\n" ++ block = [indent + self.PS1 + new.pop(0)] ++ if new: ++ block += [indent + self.PS2 + line for line in new] ++ return block ++ ++ def summarize(self): ++ if self.wrote: ++ were = "were" ++ else: ++ were = "need to be" ++ if not self.files: ++ self.log_message("No files %s modified.", were) ++ else: ++ self.log_message("Files that %s modified:", were) ++ for file in self.files: ++ self.log_message(file) ++ if self.fixer_log: ++ self.log_message("Warnings/messages while refactoring:") ++ for message in self.fixer_log: ++ self.log_message(message) ++ if self.errors: ++ if len(self.errors) == 1: ++ self.log_message("There was 1 error:") ++ else: ++ self.log_message("There were %d errors:", len(self.errors)) ++ for msg, args, kwds in self.errors: ++ self.log_message(msg, *args, **kwds) ++ ++ def parse_block(self, block, lineno, indent): ++ """Parses a block into a tree. ++ ++ This is necessary to get correct line number / offset information ++ in the parser diagnostics and embedded into the parse tree. ++ """ ++ return self.driver.parse_tokens(self.wrap_toks(block, lineno, indent)) ++ ++ def wrap_toks(self, block, lineno, indent): ++ """Wraps a tokenize stream to systematically modify start/end.""" ++ tokens = tokenize.generate_tokens(self.gen_lines(block, indent).next) ++ for type, value, (line0, col0), (line1, col1), line_text in tokens: ++ line0 += lineno - 1 ++ line1 += lineno - 1 ++ # Don't bother updating the columns; this is too complicated ++ # since line_text would also have to be updated and it would ++ # still break for tokens spanning lines. Let the user guess ++ # that the column numbers for doctests are relative to the ++ # end of the prompt string (PS1 or PS2). ++ yield type, value, (line0, col0), (line1, col1), line_text ++ ++ ++ def gen_lines(self, block, indent): ++ """Generates lines as expected by tokenize from a list of lines. ++ ++ This strips the first len(indent + self.PS1) characters off each line. ++ """ ++ prefix1 = indent + self.PS1 ++ prefix2 = indent + self.PS2 ++ prefix = prefix1 ++ for line in block: ++ if line.startswith(prefix): ++ yield line[len(prefix):] ++ elif line == prefix.rstrip() + "\n": ++ yield "\n" ++ else: ++ raise AssertionError("line=%r, prefix=%r" % (line, prefix)) ++ prefix = prefix2 ++ while True: ++ yield "" ++ ++ ++def diff_texts(a, b, filename): ++ """Return a unified diff of two strings.""" ++ a = a.splitlines() ++ b = b.splitlines() ++ return difflib.unified_diff(a, b, filename, filename, ++ "(original)", "(refactored)", ++ lineterm="") +diff -r 531f2e948299 refactor/Grammar.txt +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/Grammar.txt Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,155 @@ ++# Grammar for Python ++ ++# Note: Changing the grammar specified in this file will most likely ++# require corresponding changes in the parser module ++# (../Modules/parsermodule.c). 
If you can't make the changes to ++# that module yourself, please co-ordinate the required changes ++# with someone who can; ask around on python-dev for help. Fred ++# Drake will probably be listening there. ++ ++# NOTE WELL: You should also follow all the steps listed in PEP 306, ++# "How to Change Python's Grammar" ++ ++# Commands for Kees Blom's railroad program ++#diagram:token NAME ++#diagram:token NUMBER ++#diagram:token STRING ++#diagram:token NEWLINE ++#diagram:token ENDMARKER ++#diagram:token INDENT ++#diagram:output\input python.bla ++#diagram:token DEDENT ++#diagram:output\textwidth 20.04cm\oddsidemargin 0.0cm\evensidemargin 0.0cm ++#diagram:rules ++ ++# Start symbols for the grammar: ++# file_input is a module or sequence of commands read from an input file; ++# single_input is a single interactive statement; ++# eval_input is the input for the eval() and input() functions. ++# NB: compound_stmt in single_input is followed by extra NEWLINE! ++file_input: (NEWLINE | stmt)* ENDMARKER ++single_input: NEWLINE | simple_stmt | compound_stmt NEWLINE ++eval_input: testlist NEWLINE* ENDMARKER ++ ++decorator: '@' dotted_name [ '(' [arglist] ')' ] NEWLINE ++decorators: decorator+ ++decorated: decorators (classdef | funcdef) ++funcdef: 'def' NAME parameters ['->' test] ':' suite ++parameters: '(' [typedargslist] ')' ++typedargslist: ((tfpdef ['=' test] ',')* ++ ('*' [tname] (',' tname ['=' test])* [',' '**' tname] | '**' tname) ++ | tfpdef ['=' test] (',' tfpdef ['=' test])* [',']) ++tname: NAME [':' test] ++tfpdef: tname | '(' tfplist ')' ++tfplist: tfpdef (',' tfpdef)* [','] ++varargslist: ((vfpdef ['=' test] ',')* ++ ('*' [vname] (',' vname ['=' test])* [',' '**' vname] | '**' vname) ++ | vfpdef ['=' test] (',' vfpdef ['=' test])* [',']) ++vname: NAME ++vfpdef: vname | '(' vfplist ')' ++vfplist: vfpdef (',' vfpdef)* [','] ++ ++stmt: simple_stmt | compound_stmt ++simple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE ++small_stmt: (expr_stmt | print_stmt | del_stmt | pass_stmt | flow_stmt | ++ import_stmt | global_stmt | exec_stmt | assert_stmt) ++expr_stmt: testlist (augassign (yield_expr|testlist) | ++ ('=' (yield_expr|testlist))*) ++augassign: ('+=' | '-=' | '*=' | '/=' | '%=' | '&=' | '|=' | '^=' | ++ '<<=' | '>>=' | '**=' | '//=') ++# For normal assignments, additional restrictions enforced by the interpreter ++print_stmt: 'print' ( [ test (',' test)* [','] ] | ++ '>>' test [ (',' test)+ [','] ] ) ++del_stmt: 'del' exprlist ++pass_stmt: 'pass' ++flow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt | yield_stmt ++break_stmt: 'break' ++continue_stmt: 'continue' ++return_stmt: 'return' [testlist] ++yield_stmt: yield_expr ++raise_stmt: 'raise' [test ['from' test | ',' test [',' test]]] ++import_stmt: import_name | import_from ++import_name: 'import' dotted_as_names ++import_from: ('from' ('.'* dotted_name | '.'+) ++ 'import' ('*' | '(' import_as_names ')' | import_as_names)) ++import_as_name: NAME ['as' NAME] ++dotted_as_name: dotted_name ['as' NAME] ++import_as_names: import_as_name (',' import_as_name)* [','] ++dotted_as_names: dotted_as_name (',' dotted_as_name)* ++dotted_name: NAME ('.' 
NAME)* ++global_stmt: ('global' | 'nonlocal') NAME (',' NAME)* ++exec_stmt: 'exec' expr ['in' test [',' test]] ++assert_stmt: 'assert' test [',' test] ++ ++compound_stmt: if_stmt | while_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef | decorated ++if_stmt: 'if' test ':' suite ('elif' test ':' suite)* ['else' ':' suite] ++while_stmt: 'while' test ':' suite ['else' ':' suite] ++for_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite] ++try_stmt: ('try' ':' suite ++ ((except_clause ':' suite)+ ++ ['else' ':' suite] ++ ['finally' ':' suite] | ++ 'finally' ':' suite)) ++with_stmt: 'with' test [ with_var ] ':' suite ++with_var: 'as' expr ++# NB compile.c makes sure that the default except clause is last ++except_clause: 'except' [test [(',' | 'as') test]] ++suite: simple_stmt | NEWLINE INDENT stmt+ DEDENT ++ ++# Backward compatibility cruft to support: ++# [ x for x in lambda: True, lambda: False if x() ] ++# even while also allowing: ++# lambda x: 5 if x else 2 ++# (But not a mix of the two) ++testlist_safe: old_test [(',' old_test)+ [',']] ++old_test: or_test | old_lambdef ++old_lambdef: 'lambda' [varargslist] ':' old_test ++ ++test: or_test ['if' or_test 'else' test] | lambdef ++or_test: and_test ('or' and_test)* ++and_test: not_test ('and' not_test)* ++not_test: 'not' not_test | comparison ++comparison: expr (comp_op expr)* ++comp_op: '<'|'>'|'=='|'>='|'<='|'<>'|'!='|'in'|'not' 'in'|'is'|'is' 'not' ++expr: xor_expr ('|' xor_expr)* ++xor_expr: and_expr ('^' and_expr)* ++and_expr: shift_expr ('&' shift_expr)* ++shift_expr: arith_expr (('<<'|'>>') arith_expr)* ++arith_expr: term (('+'|'-') term)* ++term: factor (('*'|'/'|'%'|'//') factor)* ++factor: ('+'|'-'|'~') factor | power ++power: atom trailer* ['**' factor] ++atom: ('(' [yield_expr|testlist_gexp] ')' | ++ '[' [listmaker] ']' | ++ '{' [dictsetmaker] '}' | ++ '`' testlist1 '`' | ++ NAME | NUMBER | STRING+ | '.' '.' '.') ++listmaker: test ( comp_for | (',' test)* [','] ) ++testlist_gexp: test ( comp_for | (',' test)* [','] ) ++lambdef: 'lambda' [varargslist] ':' test ++trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME ++subscriptlist: subscript (',' subscript)* [','] ++subscript: test | [test] ':' [test] [sliceop] ++sliceop: ':' [test] ++exprlist: expr (',' expr)* [','] ++testlist: test (',' test)* [','] ++dictsetmaker: ( (test ':' test (comp_for | (',' test ':' test)* [','])) | ++ (test (comp_for | (',' test)* [','])) ) ++ ++classdef: 'class' NAME ['(' [arglist] ')'] ':' suite ++ ++arglist: (argument ',')* (argument [','] ++ |'*' test (',' argument)* [',' '**' test] ++ |'**' test) ++argument: test [comp_for] | test '=' test # Really [keyword '='] test ++ ++comp_iter: comp_for | comp_if ++comp_for: 'for' exprlist 'in' testlist_safe [comp_iter] ++comp_if: 'if' old_test [comp_iter] ++ ++testlist1: test (',' test)* ++ ++# not used in grammar, but may appear in "node" passed from Parser to Compiler ++encoding_decl: NAME ++ ++yield_expr: 'yield' [testlist] +diff -r 531f2e948299 refactor/PatternGrammar.txt +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/PatternGrammar.txt Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,28 @@ ++# Copyright 2006 Google, Inc. All Rights Reserved. ++# Licensed to PSF under a Contributor Agreement. ++ ++# A grammar to describe tree matching patterns. ++# Not shown here: ++# - 'TOKEN' stands for any token (leaf node) ++# - 'any' stands for any node (leaf or interior) ++# With 'any' we can still specify the sub-structure. ++ ++# The start symbol is 'Matcher'. 
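As an illustration of this pattern grammar (a sketch, not part of the patch): a pattern string is compiled with patcomp's PatternCompiler, as fixer_base.py does below, and then matched against pytree nodes. Here "node" stands for an arbitrary pytree node and the importable package name "refactor" is assumed:

    from refactor.patcomp import PatternCompiler

    # "power< ... >" is a Unit (a NAME with Details); "'eval'" is a STRING
    # Unit; "obj=any" binds the matched subtree under the name "obj".
    pattern = PatternCompiler().compile_pattern(
        "power< 'eval' trailer< '(' obj=any ')' > >")

    results = {}
    if pattern.match(node, results):
        print results["obj"]    # the argument subtree of the eval() call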
++ ++Matcher: Alternatives ENDMARKER ++ ++Alternatives: Alternative ('|' Alternative)* ++ ++Alternative: (Unit | NegatedUnit)+ ++ ++Unit: [NAME '='] ( STRING [Repeater] ++ | NAME [Details] [Repeater] ++ | '(' Alternatives ')' [Repeater] ++ | '[' Alternatives ']' ++ ) ++ ++NegatedUnit: 'not' (STRING | NAME [Details] | '(' Alternatives ')') ++ ++Repeater: '*' | '+' | '{' NUMBER [',' NUMBER] '}' ++ ++Details: '<' Alternatives '>' +diff -r 531f2e948299 refactor/__init__.py +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/__init__.py Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,8 @@ ++from . import fixer_base ++from . import fixer_util ++from . import main ++from . import patcomp ++from . import pgen2 ++from . import pygram ++from . import pytree ++from . import refactor +diff -r 531f2e948299 refactor/fixer_base.py +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixer_base.py Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,178 @@ ++# Copyright 2006 Google, Inc. All Rights Reserved. ++# Licensed to PSF under a Contributor Agreement. ++ ++"""Base class for fixers (optional, but recommended).""" ++ ++# Python imports ++import logging ++import itertools ++ ++# Local imports ++from .patcomp import PatternCompiler ++from . import pygram ++from .fixer_util import does_tree_import ++ ++class BaseFix(object): ++ ++ """Optional base class for fixers. ++ ++ The subclass name must be FixFooBar where FooBar is the result of ++ removing underscores and capitalizing the words of the fix name. ++ For example, the class name for a fixer named 'has_key' should be ++ FixHasKey. ++ """ ++ ++ PATTERN = None # Most subclasses should override with a string literal ++ pattern = None # Compiled pattern, set by compile_pattern() ++ options = None # Options object passed to initializer ++ filename = None # The filename (set by set_filename) ++ logger = None # A logger (set by set_filename) ++ numbers = itertools.count(1) # For new_name() ++ used_names = set() # A set of all used NAMEs ++ order = "post" # Does the fixer prefer pre- or post-order traversal ++ explicit = False # Is this ignored by refactor.py -f all? ++ run_order = 5 # Fixers will be sorted by run order before execution ++ # Lower numbers will be run first. ++ ++ # Shortcut for access to Python grammar symbols ++ syms = pygram.python_symbols ++ ++ def __init__(self, options, log): ++ """Initializer. Subclass may override. ++ ++ Args: ++ options: an dict containing the options passed to RefactoringTool ++ that could be used to customize the fixer through the command line. ++ log: a list to append warnings and other messages to. ++ """ ++ self.options = options ++ self.log = log ++ self.compile_pattern() ++ ++ def compile_pattern(self): ++ """Compiles self.PATTERN into self.pattern. ++ ++ Subclass may override if it doesn't want to use ++ self.{pattern,PATTERN} in .match(). ++ """ ++ if self.PATTERN is not None: ++ self.pattern = PatternCompiler().compile_pattern(self.PATTERN) ++ ++ def set_filename(self, filename): ++ """Set the filename, and a logger derived from it. ++ ++ The main refactoring tool should call this. ++ """ ++ self.filename = filename ++ self.logger = logging.getLogger(filename) ++ ++ def match(self, node): ++ """Returns match for a given parse tree node. ++ ++ Should return a true or false object (not necessarily a bool). ++ It may return a non-empty dict of matching sub-nodes as ++ returned by a matching pattern. ++ ++ Subclass may override. 
++ """ ++ results = {"node": node} ++ return self.pattern.match(node, results) and results ++ ++ def transform(self, node, results): ++ """Returns the transformation for a given parse tree node. ++ ++ Args: ++ node: the root of the parse tree that matched the fixer. ++ results: a dict mapping symbolic names to part of the match. ++ ++ Returns: ++ None, or a node that is a modified copy of the ++ argument node. The node argument may also be modified in-place to ++ effect the same change. ++ ++ Subclass *must* override. ++ """ ++ raise NotImplementedError() ++ ++ def new_name(self, template="xxx_todo_changeme"): ++ """Return a string suitable for use as an identifier ++ ++ The new name is guaranteed not to conflict with other identifiers. ++ """ ++ name = template ++ while name in self.used_names: ++ name = template + str(self.numbers.next()) ++ self.used_names.add(name) ++ return name ++ ++ def log_message(self, message): ++ if self.first_log: ++ self.first_log = False ++ self.log.append("### In file %s ###" % self.filename) ++ self.log.append(message) ++ ++ def cannot_convert(self, node, reason=None): ++ """Warn the user that a given chunk of code is not valid Python 3, ++ but that it cannot be converted automatically. ++ ++ First argument is the top-level node for the code in question. ++ Optional second argument is why it can't be converted. ++ """ ++ lineno = node.get_lineno() ++ for_output = node.clone() ++ for_output.set_prefix("") ++ msg = "Line %d: could not convert: %s" ++ self.log_message(msg % (lineno, for_output)) ++ if reason: ++ self.log_message(reason) ++ ++ def warning(self, node, reason): ++ """Used for warning the user about possible uncertainty in the ++ translation. ++ ++ First argument is the top-level node for the code in question. ++ Optional second argument is why it can't be converted. ++ """ ++ lineno = node.get_lineno() ++ self.log_message("Line %d: %s" % (lineno, reason)) ++ ++ def start_tree(self, tree, filename): ++ """Some fixers need to maintain tree-wide state. ++ This method is called once, at the start of tree fix-up. ++ ++ tree - the root node of the tree to be processed. ++ filename - the name of the file the tree came from. ++ """ ++ self.used_names = tree.used_names ++ self.set_filename(filename) ++ self.numbers = itertools.count(1) ++ self.first_log = True ++ ++ def finish_tree(self, tree, filename): ++ """Some fixers need to maintain tree-wide state. ++ This method is called once, at the conclusion of tree fix-up. ++ ++ tree - the root node of the tree to be processed. ++ filename - the name of the file the tree came from. ++ """ ++ pass ++ ++ ++class ConditionalFix(BaseFix): ++ """ Base class for fixers which not execute if an import is found. 
""" ++ ++ # This is the name of the import which, if found, will cause the test to be skipped ++ skip_on = None ++ ++ def start_tree(self, *args): ++ super(ConditionalFix, self).start_tree(*args) ++ self._should_skip = None ++ ++ def should_skip(self, node): ++ if self._should_skip is not None: ++ return self._should_skip ++ pkg = self.skip_on.split(".") ++ name = pkg[-1] ++ pkg = ".".join(pkg[:-1]) ++ self._should_skip = does_tree_import(pkg, name, node) ++ return self._should_skip +diff -r 531f2e948299 refactor/fixer_util.py +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixer_util.py Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,425 @@ ++"""Utility functions, node construction macros, etc.""" ++# Author: Collin Winter ++ ++# Local imports ++from .pgen2 import token ++from .pytree import Leaf, Node ++from .pygram import python_symbols as syms ++from . import patcomp ++ ++ ++########################################################### ++### Common node-construction "macros" ++########################################################### ++ ++def KeywordArg(keyword, value): ++ return Node(syms.argument, ++ [keyword, Leaf(token.EQUAL, '='), value]) ++ ++def LParen(): ++ return Leaf(token.LPAR, "(") ++ ++def RParen(): ++ return Leaf(token.RPAR, ")") ++ ++def Assign(target, source): ++ """Build an assignment statement""" ++ if not isinstance(target, list): ++ target = [target] ++ if not isinstance(source, list): ++ source.set_prefix(" ") ++ source = [source] ++ ++ return Node(syms.atom, ++ target + [Leaf(token.EQUAL, "=", prefix=" ")] + source) ++ ++def Name(name, prefix=None): ++ """Return a NAME leaf""" ++ return Leaf(token.NAME, name, prefix=prefix) ++ ++def Attr(obj, attr): ++ """A node tuple for obj.attr""" ++ return [obj, Node(syms.trailer, [Dot(), attr])] ++ ++def Comma(): ++ """A comma leaf""" ++ return Leaf(token.COMMA, ",") ++ ++def Dot(): ++ """A period (.) leaf""" ++ return Leaf(token.DOT, ".") ++ ++def ArgList(args, lparen=LParen(), rparen=RParen()): ++ """A parenthesised argument list, used by Call()""" ++ node = Node(syms.trailer, [lparen.clone(), rparen.clone()]) ++ if args: ++ node.insert_child(1, Node(syms.arglist, args)) ++ return node ++ ++def Call(func_name, args=None, prefix=None): ++ """A function call""" ++ node = Node(syms.power, [func_name, ArgList(args)]) ++ if prefix is not None: ++ node.set_prefix(prefix) ++ return node ++ ++def Newline(): ++ """A newline literal""" ++ return Leaf(token.NEWLINE, "\n") ++ ++def BlankLine(): ++ """A blank line""" ++ return Leaf(token.NEWLINE, "") ++ ++def Number(n, prefix=None): ++ return Leaf(token.NUMBER, n, prefix=prefix) ++ ++def Subscript(index_node): ++ """A numeric or string subscript""" ++ return Node(syms.trailer, [Leaf(token.LBRACE, '['), ++ index_node, ++ Leaf(token.RBRACE, ']')]) ++ ++def String(string, prefix=None): ++ """A string leaf""" ++ return Leaf(token.STRING, string, prefix=prefix) ++ ++def ListComp(xp, fp, it, test=None): ++ """A list comprehension of the form [xp for fp in it if test]. ++ ++ If test is None, the "if test" part is omitted. 
++ """ ++ xp.set_prefix("") ++ fp.set_prefix(" ") ++ it.set_prefix(" ") ++ for_leaf = Leaf(token.NAME, "for") ++ for_leaf.set_prefix(" ") ++ in_leaf = Leaf(token.NAME, "in") ++ in_leaf.set_prefix(" ") ++ inner_args = [for_leaf, fp, in_leaf, it] ++ if test: ++ test.set_prefix(" ") ++ if_leaf = Leaf(token.NAME, "if") ++ if_leaf.set_prefix(" ") ++ inner_args.append(Node(syms.comp_if, [if_leaf, test])) ++ inner = Node(syms.listmaker, [xp, Node(syms.comp_for, inner_args)]) ++ return Node(syms.atom, ++ [Leaf(token.LBRACE, "["), ++ inner, ++ Leaf(token.RBRACE, "]")]) ++ ++def FromImport(package_name, name_leafs): ++ """ Return an import statement in the form: ++ from package import name_leafs""" ++ # XXX: May not handle dotted imports properly (eg, package_name='foo.bar') ++ #assert package_name == '.' or '.' not in package_name, "FromImport has "\ ++ # "not been tested with dotted package names -- use at your own "\ ++ # "peril!" ++ ++ for leaf in name_leafs: ++ # Pull the leaves out of their old tree ++ leaf.remove() ++ ++ children = [Leaf(token.NAME, 'from'), ++ Leaf(token.NAME, package_name, prefix=" "), ++ Leaf(token.NAME, 'import', prefix=" "), ++ Node(syms.import_as_names, name_leafs)] ++ imp = Node(syms.import_from, children) ++ return imp ++ ++ ++########################################################### ++### Determine whether a node represents a given literal ++########################################################### ++ ++def is_tuple(node): ++ """Does the node represent a tuple literal?""" ++ if isinstance(node, Node) and node.children == [LParen(), RParen()]: ++ return True ++ return (isinstance(node, Node) ++ and len(node.children) == 3 ++ and isinstance(node.children[0], Leaf) ++ and isinstance(node.children[1], Node) ++ and isinstance(node.children[2], Leaf) ++ and node.children[0].value == "(" ++ and node.children[2].value == ")") ++ ++def is_list(node): ++ """Does the node represent a list literal?""" ++ return (isinstance(node, Node) ++ and len(node.children) > 1 ++ and isinstance(node.children[0], Leaf) ++ and isinstance(node.children[-1], Leaf) ++ and node.children[0].value == "[" ++ and node.children[-1].value == "]") ++ ++ ++########################################################### ++### Misc ++########################################################### ++ ++def parenthesize(node): ++ return Node(syms.atom, [LParen(), node, RParen()]) ++ ++ ++consuming_calls = set(["sorted", "list", "set", "any", "all", "tuple", "sum", ++ "min", "max"]) ++ ++def attr_chain(obj, attr): ++ """Follow an attribute chain. ++ ++ If you have a chain of objects where a.foo -> b, b.foo-> c, etc, ++ use this to iterate over all objects in the chain. Iteration is ++ terminated by getattr(x, attr) is None. ++ ++ Args: ++ obj: the starting object ++ attr: the name of the chaining attribute ++ ++ Yields: ++ Each successive object in the chain. ++ """ ++ next = getattr(obj, attr) ++ while next: ++ yield next ++ next = getattr(next, attr) ++ ++p0 = """for_stmt< 'for' any 'in' node=any ':' any* > ++ | comp_for< 'for' any 'in' node=any any* > ++ """ ++p1 = """ ++power< ++ ( 'iter' | 'list' | 'tuple' | 'sorted' | 'set' | 'sum' | ++ 'any' | 'all' | (any* trailer< '.' 
'join' >) )
++          trailer< '(' node=any ')' >
++          any*
++>
++"""
++p2 = """
++power<
++    'sorted'
++    trailer< '(' arglist ')' >
++    any*
++>
++"""
++pats_built = False
++def in_special_context(node):
++    """ Returns true if node is in an environment where all that is required
++        of it is being iterable (i.e., it doesn't matter if it returns a list
++        or an iterator).
++        See test_map_nochange in test_fixers.py for some examples and tests.
++    """
++    global p0, p1, p2, pats_built
++    if not pats_built:
++        p1 = patcomp.compile_pattern(p1)
++        p0 = patcomp.compile_pattern(p0)
++        p2 = patcomp.compile_pattern(p2)
++        pats_built = True
++    patterns = [p0, p1, p2]
++    for pattern, parent in zip(patterns, attr_chain(node, "parent")):
++        results = {}
++        if pattern.match(parent, results) and results["node"] is node:
++            return True
++    return False
++
++def is_probably_builtin(node):
++    """
++    Check that something isn't an attribute or function name etc.
++    """
++    prev = node.prev_sibling
++    if prev is not None and prev.type == token.DOT:
++        # Attribute lookup.
++        return False
++    parent = node.parent
++    if parent.type in (syms.funcdef, syms.classdef):
++        return False
++    if parent.type == syms.expr_stmt and parent.children[0] is node:
++        # Assignment.
++        return False
++    if parent.type == syms.parameters or \
++       (parent.type == syms.typedargslist and (
++           (prev is not None and prev.type == token.COMMA) or
++           parent.children[0] is node
++       )):
++        # The name of an argument.
++        return False
++    return True
++
++###########################################################
++### The following functions are to find bindings in a suite
++###########################################################
++
++def make_suite(node):
++    if node.type == syms.suite:
++        return node
++    node = node.clone()
++    parent, node.parent = node.parent, None
++    suite = Node(syms.suite, [node])
++    suite.parent = parent
++    return suite
++
++def find_root(node):
++    """Find the top level namespace."""
++    # Scamper up to the top level namespace
++    while node.type != syms.file_input:
++        assert node.parent, "Tree is insane! root found before "\
++                            "file_input node was found."
++        node = node.parent
++    return node
++
++def does_tree_import(package, name, node):
++    """ Returns true if name is imported from package at the
++        top level of the tree which node belongs to.
++        To cover the case of an import like 'import foo', use
++        None for the package and 'foo' for the name. """
++    binding = find_binding(name, find_root(node), package)
++    return bool(binding)
++
++def is_import(node):
++    """Returns true if the node is an import statement."""
++    return node.type in (syms.import_name, syms.import_from)
++
++def touch_import(package, name, node):
++    """ Works like `does_tree_import` but adds an import statement
++        if it was not imported. """
++    def is_import_stmt(node):
++        return node.type == syms.simple_stmt and node.children and \
++               is_import(node.children[0])
++
++    root = find_root(node)
++
++    if does_tree_import(package, name, root):
++        return
++
++    add_newline_before = False
++
++    # figure out where to insert the new import.  First try to find
++    # the first import and then skip to the last one.
++    insert_pos = offset = 0
++    for idx, node in enumerate(root.children):
++        if not is_import_stmt(node):
++            continue
++        for offset, node2 in enumerate(root.children[idx:]):
++            if not is_import_stmt(node2):
++                break
++        insert_pos = idx + offset
++        break
++
++    # if there are no imports where we can insert, find the docstring.
++    # if that also fails, we stick to the beginning of the file
++    if insert_pos == 0:
++        for idx, node in enumerate(root.children):
++            if node.type == syms.simple_stmt and node.children and \
++               node.children[0].type == token.STRING:
++                insert_pos = idx + 1
++                add_newline_before = True
++                break
++
++    if package is None:
++        import_ = Node(syms.import_name, [
++            Leaf(token.NAME, 'import'),
++            Leaf(token.NAME, name, prefix=' ')
++        ])
++    else:
++        import_ = FromImport(package, [Leaf(token.NAME, name, prefix=' ')])
++
++    children = [import_, Newline()]
++    if add_newline_before:
++        children.insert(0, Newline())
++    root.insert_child(insert_pos, Node(syms.simple_stmt, children))
++
++
++_def_syms = set([syms.classdef, syms.funcdef])
++def find_binding(name, node, package=None):
++    """ Returns the node which binds variable name, otherwise None.
++    If optional argument package is supplied, only imports will
++    be returned.
++    See test cases for examples."""
++    for child in node.children:
++        ret = None
++        if child.type == syms.for_stmt:
++            if _find(name, child.children[1]):
++                return child
++            n = find_binding(name, make_suite(child.children[-1]), package)
++            if n: ret = n
++        elif child.type in (syms.if_stmt, syms.while_stmt):
++            n = find_binding(name, make_suite(child.children[-1]), package)
++            if n: ret = n
++        elif child.type == syms.try_stmt:
++            n = find_binding(name, make_suite(child.children[2]), package)
++            if n:
++                ret = n
++            else:
++                for i, kid in enumerate(child.children[3:]):
++                    if kid.type == token.COLON and kid.value == ":":
++                        # i+3 is the colon, i+4 is the suite
++                        n = find_binding(name, make_suite(child.children[i+4]), package)
++                        if n: ret = n
++        elif child.type in _def_syms and child.children[1].value == name:
++            ret = child
++        elif _is_import_binding(child, name, package):
++            ret = child
++        elif child.type == syms.simple_stmt:
++            ret = find_binding(name, child, package)
++        elif child.type == syms.expr_stmt:
++            if _find(name, child.children[0]):
++                ret = child
++
++        if ret:
++            if not package:
++                return ret
++            if is_import(ret):
++                return ret
++    return None
++
++_block_syms = set([syms.funcdef, syms.classdef, syms.trailer])
++def _find(name, node):
++    nodes = [node]
++    while nodes:
++        node = nodes.pop()
++        if node.type > 256 and node.type not in _block_syms:
++            nodes.extend(node.children)
++        elif node.type == token.NAME and node.value == name:
++            return node
++    return None
++
++def _is_import_binding(node, name, package=None):
++    """ Will return node if node imports name, or if node
++    imports * from package.  None is returned otherwise.
++    See test cases for examples. """
++
++    if node.type == syms.import_name and not package:
++        imp = node.children[1]
++        if imp.type == syms.dotted_as_names:
++            for child in imp.children:
++                if child.type == syms.dotted_as_name:
++                    if child.children[2].value == name:
++                        return node
++                elif child.type == token.NAME and child.value == name:
++                    return node
++        elif imp.type == syms.dotted_as_name:
++            last = imp.children[-1]
++            if last.type == token.NAME and last.value == name:
++                return node
++        elif imp.type == token.NAME and imp.value == name:
++            return node
++    elif node.type == syms.import_from:
++        # unicode(...) is used to make life easier here, because
++        # from a.b import parses to ['import', ['a', '.', 'b'], ...]
++        if package and unicode(node.children[1]).strip() != package:
++            return None
++        n = node.children[3]
++        if package and _find('as', n):
++            # See test_from_import_as for explanation
++            return None
++        elif n.type == syms.import_as_names and _find(name, n):
++            return node
++        elif n.type == syms.import_as_name:
++            child = n.children[2]
++            if child.type == token.NAME and child.value == name:
++                return node
++        elif n.type == token.NAME and n.value == name:
++            return node
++        elif package and n.type == token.STAR:
++            return node
++    return None
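To make the conventions above concrete, here is a minimal fixer sketch (illustrative only, not part of the patch). It follows the FILE_PREFIX / CLASS_PREFIX naming rules from refactor.py, so it would live in a module named fix_eval.py; the pattern and the replacement name "safe_eval" are made up for the example:

    from refactor import fixer_base
    from refactor.fixer_util import Name

    class FixEval(fixer_base.BaseFix):
        # Match calls of the bare name eval, binding the NAME leaf.
        PATTERN = "power< name='eval' trailer< '(' any* ')' > any* >"

        def transform(self, node, results):
            # transform() may modify the tree in place and return None,
            # as BaseFix.transform()'s docstring allows.
            name = results["name"]
            name.replace(Name("safe_eval", prefix=name.get_prefix()))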
+diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_apply.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixes/.svn/text-base/fix_apply.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,58 @@ ++# Copyright 2006 Google, Inc. All Rights Reserved. ++# Licensed to PSF under a Contributor Agreement. ++ ++"""Fixer for apply(). ++ ++This converts apply(func, v, k) into (func)(*v, **k).""" ++ ++# Local imports ++from .. import pytree ++from ..pgen2 import token ++from .. import fixer_base ++from ..fixer_util import Call, Comma, parenthesize ++ ++class FixApply(fixer_base.BaseFix): ++ ++ PATTERN = """ ++ power< 'apply' ++ trailer< ++ '(' ++ arglist< ++ (not argument ++ ')' ++ > ++ > ++ """ ++ ++ def transform(self, node, results): ++ syms = self.syms ++ assert results ++ func = results["func"] ++ args = results["args"] ++ kwds = results.get("kwds") ++ prefix = node.get_prefix() ++ func = func.clone() ++ if (func.type not in (token.NAME, syms.atom) and ++ (func.type != syms.power or ++ func.children[-2].type == token.DOUBLESTAR)): ++ # Need to parenthesize ++ func = parenthesize(func) ++ func.set_prefix("") ++ args = args.clone() ++ args.set_prefix("") ++ if kwds is not None: ++ kwds = kwds.clone() ++ kwds.set_prefix("") ++ l_newargs = [pytree.Leaf(token.STAR, "*"), args] ++ if kwds is not None: ++ l_newargs.extend([Comma(), ++ pytree.Leaf(token.DOUBLESTAR, "**"), ++ kwds]) ++ l_newargs[-2].set_prefix(" ") # that's the ** token ++ # XXX Sometimes we could be cleverer, e.g. apply(f, (x, y) + t) ++ # can be translated into f(x, y, *t) instead of f(*(x, y) + t) ++ #new = pytree.Node(syms.power, (func, ArgList(l_newargs))) ++ return Call(func, l_newargs, prefix=prefix) +diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_basestring.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixes/.svn/text-base/fix_basestring.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,13 @@ ++"""Fixer for basestring -> str.""" ++# Author: Christian Heimes ++ ++# Local imports ++from .. import fixer_base ++from ..fixer_util import Name ++ ++class FixBasestring(fixer_base.BaseFix): ++ ++ PATTERN = "'basestring'" ++ ++ def transform(self, node, results): ++ return Name("str", prefix=node.get_prefix()) +diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_buffer.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixes/.svn/text-base/fix_buffer.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,21 @@ ++# Copyright 2007 Google, Inc. All Rights Reserved. ++# Licensed to PSF under a Contributor Agreement. ++ ++"""Fixer that changes buffer(...) into memoryview(...).""" ++ ++# Local imports ++from .. import fixer_base ++from ..fixer_util import Name ++ ++ ++class FixBuffer(fixer_base.BaseFix): ++ ++ explicit = True # The user must ask for this fixer ++ ++ PATTERN = """ ++ power< name='buffer' trailer< '(' [any] ')' > > ++ """ ++ ++ def transform(self, node, results): ++ name = results["name"] ++ name.replace(Name("memoryview", prefix=name.get_prefix())) +diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_callable.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixes/.svn/text-base/fix_callable.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,31 @@ ++# Copyright 2007 Google, Inc. All Rights Reserved. ++# Licensed to PSF under a Contributor Agreement. ++ ++"""Fixer for callable(). ++ ++This converts callable(obj) into hasattr(obj, '__call__').""" ++ ++# Local imports ++from .. import pytree ++from .. 
import fixer_base ++from ..fixer_util import Call, Name, String ++ ++class FixCallable(fixer_base.BaseFix): ++ ++ # Ignore callable(*args) or use of keywords. ++ # Either could be a hint that the builtin callable() is not being used. ++ PATTERN = """ ++ power< 'callable' ++ trailer< lpar='(' ++ ( not(arglist | argument) any ','> ) ++ rpar=')' > ++ after=any* ++ > ++ """ ++ ++ def transform(self, node, results): ++ func = results["func"] ++ ++ args = [func.clone(), String(', '), String("'__call__'")] ++ return Call(Name("hasattr"), args, prefix=node.get_prefix()) +diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_dict.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixes/.svn/text-base/fix_dict.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,99 @@ ++# Copyright 2007 Google, Inc. All Rights Reserved. ++# Licensed to PSF under a Contributor Agreement. ++ ++"""Fixer for dict methods. ++ ++d.keys() -> list(d.keys()) ++d.items() -> list(d.items()) ++d.values() -> list(d.values()) ++ ++d.iterkeys() -> iter(d.keys()) ++d.iteritems() -> iter(d.items()) ++d.itervalues() -> iter(d.values()) ++ ++Except in certain very specific contexts: the iter() can be dropped ++when the context is list(), sorted(), iter() or for...in; the list() ++can be dropped when the context is list() or sorted() (but not iter() ++or for...in!). Special contexts that apply to both: list(), sorted(), tuple() ++set(), any(), all(), sum(). ++ ++Note: iter(d.keys()) could be written as iter(d) but since the ++original d.iterkeys() was also redundant we don't fix this. And there ++are (rare) contexts where it makes a difference (e.g. when passing it ++as an argument to a function that introspects the argument). ++""" ++ ++# Local imports ++from .. import pytree ++from .. import patcomp ++from ..pgen2 import token ++from .. import fixer_base ++from ..fixer_util import Name, Call, LParen, RParen, ArgList, Dot ++from .. import fixer_util ++ ++ ++iter_exempt = fixer_util.consuming_calls | set(["iter"]) ++ ++ ++class FixDict(fixer_base.BaseFix): ++ PATTERN = """ ++ power< head=any+ ++ trailer< '.' 
method=('keys'|'items'|'values'| ++ 'iterkeys'|'iteritems'|'itervalues') > ++ parens=trailer< '(' ')' > ++ tail=any* ++ > ++ """ ++ ++ def transform(self, node, results): ++ head = results["head"] ++ method = results["method"][0] # Extract node for method name ++ tail = results["tail"] ++ syms = self.syms ++ method_name = method.value ++ isiter = method_name.startswith("iter") ++ if isiter: ++ method_name = method_name[4:] ++ assert method_name in ("keys", "items", "values"), repr(method) ++ head = [n.clone() for n in head] ++ tail = [n.clone() for n in tail] ++ special = not tail and self.in_special_context(node, isiter) ++ args = head + [pytree.Node(syms.trailer, ++ [Dot(), ++ Name(method_name, ++ prefix=method.get_prefix())]), ++ results["parens"].clone()] ++ new = pytree.Node(syms.power, args) ++ if not special: ++ new.set_prefix("") ++ new = Call(Name(isiter and "iter" or "list"), [new]) ++ if tail: ++ new = pytree.Node(syms.power, [new] + tail) ++ new.set_prefix(node.get_prefix()) ++ return new ++ ++ P1 = "power< func=NAME trailer< '(' node=any ')' > any* >" ++ p1 = patcomp.compile_pattern(P1) ++ ++ P2 = """for_stmt< 'for' any 'in' node=any ':' any* > ++ | comp_for< 'for' any 'in' node=any any* > ++ """ ++ p2 = patcomp.compile_pattern(P2) ++ ++ def in_special_context(self, node, isiter): ++ if node.parent is None: ++ return False ++ results = {} ++ if (node.parent.parent is not None and ++ self.p1.match(node.parent.parent, results) and ++ results["node"] is node): ++ if isiter: ++ # iter(d.iterkeys()) -> iter(d.keys()), etc. ++ return results["func"].value in iter_exempt ++ else: ++ # list(d.keys()) -> list(d.keys()), etc. ++ return results["func"].value in fixer_util.consuming_calls ++ if not isiter: ++ return False ++ # for ... in d.iterkeys() -> for ... in d.keys(), etc. ++ return self.p2.match(node.parent, results) and results["node"] is node +diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_except.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixes/.svn/text-base/fix_except.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,92 @@ ++"""Fixer for except statements with named exceptions. ++ ++The following cases will be converted: ++ ++- "except E, T:" where T is a name: ++ ++ except E as T: ++ ++- "except E, T:" where T is not a name, tuple or list: ++ ++ except E as t: ++ T = t ++ ++ This is done because the target of an "except" clause must be a ++ name. ++ ++- "except E, T:" where T is a tuple or list literal: ++ ++ except E as t: ++ T = t.args ++""" ++# Author: Collin Winter ++ ++# Local imports ++from .. import pytree ++from ..pgen2 import token ++from .. 
import fixer_base ++from ..fixer_util import Assign, Attr, Name, is_tuple, is_list, syms ++ ++def find_excepts(nodes): ++ for i, n in enumerate(nodes): ++ if n.type == syms.except_clause: ++ if n.children[0].value == 'except': ++ yield (n, nodes[i+2]) ++ ++class FixExcept(fixer_base.BaseFix): ++ ++ PATTERN = """ ++ try_stmt< 'try' ':' suite ++ cleanup=(except_clause ':' suite)+ ++ tail=(['except' ':' suite] ++ ['else' ':' suite] ++ ['finally' ':' suite]) > ++ """ ++ ++ def transform(self, node, results): ++ syms = self.syms ++ ++ tail = [n.clone() for n in results["tail"]] ++ ++ try_cleanup = [ch.clone() for ch in results["cleanup"]] ++ for except_clause, e_suite in find_excepts(try_cleanup): ++ if len(except_clause.children) == 4: ++ (E, comma, N) = except_clause.children[1:4] ++ comma.replace(Name("as", prefix=" ")) ++ ++ if N.type != token.NAME: ++ # Generate a new N for the except clause ++ new_N = Name(self.new_name(), prefix=" ") ++ target = N.clone() ++ target.set_prefix("") ++ N.replace(new_N) ++ new_N = new_N.clone() ++ ++ # Insert "old_N = new_N" as the first statement in ++ # the except body. This loop skips leading whitespace ++ # and indents ++ #TODO(cwinter) suite-cleanup ++ suite_stmts = e_suite.children ++ for i, stmt in enumerate(suite_stmts): ++ if isinstance(stmt, pytree.Node): ++ break ++ ++ # The assignment is different if old_N is a tuple or list ++ # In that case, the assignment is old_N = new_N.args ++ if is_tuple(N) or is_list(N): ++ assign = Assign(target, Attr(new_N, Name('args'))) ++ else: ++ assign = Assign(target, new_N) ++ ++ #TODO(cwinter) stopgap until children becomes a smart list ++ for child in reversed(suite_stmts[:i]): ++ e_suite.insert_child(0, child) ++ e_suite.insert_child(i, assign) ++ elif N.get_prefix() == "": ++ # No space after a comma is legal; no space after "as", ++ # not so much. ++ N.set_prefix(" ") ++ ++ #TODO(cwinter) fix this when children becomes a smart list ++ children = [c.clone() for c in node.children[:3]] + try_cleanup + tail ++ return pytree.Node(node.type, children) +diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_exec.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixes/.svn/text-base/fix_exec.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,39 @@ ++# Copyright 2006 Google, Inc. All Rights Reserved. ++# Licensed to PSF under a Contributor Agreement. ++ ++"""Fixer for exec. ++ ++This converts usages of the exec statement into calls to a built-in ++exec() function. ++ ++exec code in ns1, ns2 -> exec(code, ns1, ns2) ++""" ++ ++# Local imports ++from .. import pytree ++from .. import fixer_base ++from ..fixer_util import Comma, Name, Call ++ ++ ++class FixExec(fixer_base.BaseFix): ++ ++ PATTERN = """ ++ exec_stmt< 'exec' a=any 'in' b=any [',' c=any] > ++ | ++ exec_stmt< 'exec' (not atom<'(' [any] ')'>) a=any > ++ """ ++ ++ def transform(self, node, results): ++ assert results ++ syms = self.syms ++ a = results["a"] ++ b = results.get("b") ++ c = results.get("c") ++ args = [a.clone()] ++ args[0].set_prefix("") ++ if b is not None: ++ args.extend([Comma(), b.clone()]) ++ if c is not None: ++ args.extend([Comma(), c.clone()]) ++ ++ return Call(Name("exec"), args, prefix=node.get_prefix()) +diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_execfile.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixes/.svn/text-base/fix_execfile.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,51 @@ ++# Copyright 2006 Google, Inc. All Rights Reserved. 
++# Licensed to PSF under a Contributor Agreement. ++ ++"""Fixer for execfile. ++ ++This converts usages of the execfile function into calls to the built-in ++exec() function. ++""" ++ ++from .. import fixer_base ++from ..fixer_util import (Comma, Name, Call, LParen, RParen, Dot, Node, ++ ArgList, String, syms) ++ ++ ++class FixExecfile(fixer_base.BaseFix): ++ ++ PATTERN = """ ++ power< 'execfile' trailer< '(' arglist< filename=any [',' globals=any [',' locals=any ] ] > ')' > > ++ | ++ power< 'execfile' trailer< '(' filename=any ')' > > ++ """ ++ ++ def transform(self, node, results): ++ assert results ++ filename = results["filename"] ++ globals = results.get("globals") ++ locals = results.get("locals") ++ ++ # Copy over the prefix from the right parentheses end of the execfile ++ # call. ++ execfile_paren = node.children[-1].children[-1].clone() ++ # Construct open().read(). ++ open_args = ArgList([filename.clone()], rparen=execfile_paren) ++ open_call = Node(syms.power, [Name("open"), open_args]) ++ read = [Node(syms.trailer, [Dot(), Name('read')]), ++ Node(syms.trailer, [LParen(), RParen()])] ++ open_expr = [open_call] + read ++ # Wrap the open call in a compile call. This is so the filename will be ++ # preserved in the execed code. ++ filename_arg = filename.clone() ++ filename_arg.set_prefix(" ") ++ exec_str = String("'exec'", " ") ++ compile_args = open_expr + [Comma(), filename_arg, Comma(), exec_str] ++ compile_call = Call(Name("compile"), compile_args, "") ++ # Finally, replace the execfile call with an exec call. ++ args = [compile_call] ++ if globals is not None: ++ args.extend([Comma(), globals.clone()]) ++ if locals is not None: ++ args.extend([Comma(), locals.clone()]) ++ return Call(Name("exec"), args, prefix=node.get_prefix()) +diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_filter.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixes/.svn/text-base/fix_filter.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,75 @@ ++# Copyright 2007 Google, Inc. All Rights Reserved. ++# Licensed to PSF under a Contributor Agreement. ++ ++"""Fixer that changes filter(F, X) into list(filter(F, X)). ++ ++We avoid the transformation if the filter() call is directly contained ++in iter(<>), list(<>), tuple(<>), sorted(<>), ...join(<>), or ++for V in <>:. ++ ++NOTE: This is still not correct if the original code was depending on ++filter(F, X) to return a string if X is a string and a tuple if X is a ++tuple. That would require type inference, which we don't do. Let ++Python 2.6 figure it out. ++""" ++ ++# Local imports ++from ..pgen2 import token ++from .. 
import fixer_base ++from ..fixer_util import Name, Call, ListComp, in_special_context ++ ++class FixFilter(fixer_base.ConditionalFix): ++ ++ PATTERN = """ ++ filter_lambda=power< ++ 'filter' ++ trailer< ++ '(' ++ arglist< ++ lambdef< 'lambda' ++ (fp=NAME | vfpdef< '(' fp=NAME ')'> ) ':' xp=any ++ > ++ ',' ++ it=any ++ > ++ ')' ++ > ++ > ++ | ++ power< ++ 'filter' ++ trailer< '(' arglist< none='None' ',' seq=any > ')' > ++ > ++ | ++ power< ++ 'filter' ++ args=trailer< '(' [any] ')' > ++ > ++ """ ++ ++ skip_on = "future_builtins.filter" ++ ++ def transform(self, node, results): ++ if self.should_skip(node): ++ return ++ ++ if "filter_lambda" in results: ++ new = ListComp(results.get("fp").clone(), ++ results.get("fp").clone(), ++ results.get("it").clone(), ++ results.get("xp").clone()) ++ ++ elif "none" in results: ++ new = ListComp(Name("_f"), ++ Name("_f"), ++ results["seq"].clone(), ++ Name("_f")) ++ ++ else: ++ if in_special_context(node): ++ return None ++ new = node.clone() ++ new.set_prefix("") ++ new = Call(Name("list"), [new]) ++ new.set_prefix(node.get_prefix()) ++ return new +diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_funcattrs.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixes/.svn/text-base/fix_funcattrs.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,19 @@ ++"""Fix function attribute names (f.func_x -> f.__x__).""" ++# Author: Collin Winter ++ ++# Local imports ++from .. import fixer_base ++from ..fixer_util import Name ++ ++ ++class FixFuncattrs(fixer_base.BaseFix): ++ PATTERN = """ ++ power< any+ trailer< '.' attr=('func_closure' | 'func_doc' | 'func_globals' ++ | 'func_name' | 'func_defaults' | 'func_code' ++ | 'func_dict') > any* > ++ """ ++ ++ def transform(self, node, results): ++ attr = results["attr"][0] ++ attr.replace(Name(("__%s__" % attr.value[5:]), ++ prefix=attr.get_prefix())) +diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_future.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixes/.svn/text-base/fix_future.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,20 @@ ++"""Remove __future__ imports ++ ++from __future__ import foo is replaced with an empty line. ++""" ++# Author: Christian Heimes ++ ++# Local imports ++from .. import fixer_base ++from ..fixer_util import BlankLine ++ ++class FixFuture(fixer_base.BaseFix): ++ PATTERN = """import_from< 'from' module_name="__future__" 'import' any >""" ++ ++ # This should be run last -- some things check for the import ++ run_order = 10 ++ ++ def transform(self, node, results): ++ new = BlankLine() ++ new.prefix = node.get_prefix() ++ return new +diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_getcwdu.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixes/.svn/text-base/fix_getcwdu.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,18 @@ ++""" ++Fixer that changes os.getcwdu() to os.getcwd(). ++""" ++# Author: Victor Stinner ++ ++# Local imports ++from .. import fixer_base ++from ..fixer_util import Name ++ ++class FixGetcwdu(fixer_base.BaseFix): ++ ++ PATTERN = """ ++ power< 'os' trailer< dot='.' 
name='getcwdu' > any* > ++ """ ++ ++ def transform(self, node, results): ++ name = results["name"] ++ name.replace(Name("getcwd", prefix=name.get_prefix())) +diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_has_key.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixes/.svn/text-base/fix_has_key.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,109 @@ ++# Copyright 2006 Google, Inc. All Rights Reserved. ++# Licensed to PSF under a Contributor Agreement. ++ ++"""Fixer for has_key(). ++ ++Calls to .has_key() methods are expressed in terms of the 'in' ++operator: ++ ++ d.has_key(k) -> k in d ++ ++CAVEATS: ++1) While the primary target of this fixer is dict.has_key(), the ++ fixer will change any has_key() method call, regardless of its ++ class. ++ ++2) Cases like this will not be converted: ++ ++ m = d.has_key ++ if m(k): ++ ... ++ ++ Only *calls* to has_key() are converted. While it is possible to ++ convert the above to something like ++ ++ m = d.__contains__ ++ if m(k): ++ ... ++ ++ this is currently not done. ++""" ++ ++# Local imports ++from .. import pytree ++from ..pgen2 import token ++from .. import fixer_base ++from ..fixer_util import Name, parenthesize ++ ++ ++class FixHasKey(fixer_base.BaseFix): ++ ++ PATTERN = """ ++ anchor=power< ++ before=any+ ++ trailer< '.' 'has_key' > ++ trailer< ++ '(' ++ ( not(arglist | argument) arg=any ','> ++ ) ++ ')' ++ > ++ after=any* ++ > ++ | ++ negation=not_test< ++ 'not' ++ anchor=power< ++ before=any+ ++ trailer< '.' 'has_key' > ++ trailer< ++ '(' ++ ( not(arglist | argument) arg=any ','> ++ ) ++ ')' ++ > ++ > ++ > ++ """ ++ ++ def transform(self, node, results): ++ assert results ++ syms = self.syms ++ if (node.parent.type == syms.not_test and ++ self.pattern.match(node.parent)): ++ # Don't transform a node matching the first alternative of the ++ # pattern when its parent matches the second alternative ++ return None ++ negation = results.get("negation") ++ anchor = results["anchor"] ++ prefix = node.get_prefix() ++ before = [n.clone() for n in results["before"]] ++ arg = results["arg"].clone() ++ after = results.get("after") ++ if after: ++ after = [n.clone() for n in after] ++ if arg.type in (syms.comparison, syms.not_test, syms.and_test, ++ syms.or_test, syms.test, syms.lambdef, syms.argument): ++ arg = parenthesize(arg) ++ if len(before) == 1: ++ before = before[0] ++ else: ++ before = pytree.Node(syms.power, before) ++ before.set_prefix(" ") ++ n_op = Name("in", prefix=" ") ++ if negation: ++ n_not = Name("not", prefix=" ") ++ n_op = pytree.Node(syms.comp_op, (n_not, n_op)) ++ new = pytree.Node(syms.comparison, (arg, n_op, before)) ++ if after: ++ new = parenthesize(new) ++ new = pytree.Node(syms.power, (new,) + tuple(after)) ++ if node.parent.type in (syms.comparison, syms.expr, syms.xor_expr, ++ syms.and_expr, syms.shift_expr, ++ syms.arith_expr, syms.term, ++ syms.factor, syms.power): ++ new = parenthesize(new) ++ new.set_prefix(prefix) ++ return new +diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_idioms.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixes/.svn/text-base/fix_idioms.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,134 @@ ++"""Adjust some old Python 2 idioms to their modern counterparts. ++ ++* Change some type comparisons to isinstance() calls: ++ type(x) == T -> isinstance(x, T) ++ type(x) is T -> isinstance(x, T) ++ type(x) != T -> not isinstance(x, T) ++ type(x) is not T -> not isinstance(x, T) ++ ++* Change "while 1:" into "while True:". 
++ ++* Change both ++ ++ v = list(EXPR) ++ v.sort() ++ foo(v) ++ ++and the more general ++ ++ v = EXPR ++ v.sort() ++ foo(v) ++ ++into ++ ++ v = sorted(EXPR) ++ foo(v) ++""" ++# Author: Jacques Frechet, Collin Winter ++ ++# Local imports ++from .. import fixer_base ++from ..fixer_util import Call, Comma, Name, Node, syms ++ ++CMP = "(n='!=' | '==' | 'is' | n=comp_op< 'is' 'not' >)" ++TYPE = "power< 'type' trailer< '(' x=any ')' > >" ++ ++class FixIdioms(fixer_base.BaseFix): ++ ++ explicit = True # The user must ask for this fixer ++ ++ PATTERN = r""" ++ isinstance=comparison< %s %s T=any > ++ | ++ isinstance=comparison< T=any %s %s > ++ | ++ while_stmt< 'while' while='1' ':' any+ > ++ | ++ sorted=any< ++ any* ++ simple_stmt< ++ expr_stmt< id1=any '=' ++ power< list='list' trailer< '(' (not arglist) any ')' > > ++ > ++ '\n' ++ > ++ sort= ++ simple_stmt< ++ power< id2=any ++ trailer< '.' 'sort' > trailer< '(' ')' > ++ > ++ '\n' ++ > ++ next=any* ++ > ++ | ++ sorted=any< ++ any* ++ simple_stmt< expr_stmt< id1=any '=' expr=any > '\n' > ++ sort= ++ simple_stmt< ++ power< id2=any ++ trailer< '.' 'sort' > trailer< '(' ')' > ++ > ++ '\n' ++ > ++ next=any* ++ > ++ """ % (TYPE, CMP, CMP, TYPE) ++ ++ def match(self, node): ++ r = super(FixIdioms, self).match(node) ++ # If we've matched one of the sort/sorted subpatterns above, we ++ # want to reject matches where the initial assignment and the ++ # subsequent .sort() call involve different identifiers. ++ if r and "sorted" in r: ++ if r["id1"] == r["id2"]: ++ return r ++ return None ++ return r ++ ++ def transform(self, node, results): ++ if "isinstance" in results: ++ return self.transform_isinstance(node, results) ++ elif "while" in results: ++ return self.transform_while(node, results) ++ elif "sorted" in results: ++ return self.transform_sort(node, results) ++ else: ++ raise RuntimeError("Invalid match") ++ ++ def transform_isinstance(self, node, results): ++ x = results["x"].clone() # The thing inside of type() ++ T = results["T"].clone() # The type being compared against ++ x.set_prefix("") ++ T.set_prefix(" ") ++ test = Call(Name("isinstance"), [x, Comma(), T]) ++ if "n" in results: ++ test.set_prefix(" ") ++ test = Node(syms.not_test, [Name("not"), test]) ++ test.set_prefix(node.get_prefix()) ++ return test ++ ++ def transform_while(self, node, results): ++ one = results["while"] ++ one.replace(Name("True", prefix=one.get_prefix())) ++ ++ def transform_sort(self, node, results): ++ sort_stmt = results["sort"] ++ next_stmt = results["next"] ++ list_call = results.get("list") ++ simple_expr = results.get("expr") ++ ++ if list_call: ++ list_call.replace(Name("sorted", prefix=list_call.get_prefix())) ++ elif simple_expr: ++ new = simple_expr.clone() ++ new.set_prefix("") ++ simple_expr.replace(Call(Name("sorted"), [new], ++ prefix=simple_expr.get_prefix())) ++ else: ++ raise RuntimeError("should not have reached here") ++ sort_stmt.remove() ++ if next_stmt: ++ next_stmt[0].set_prefix(sort_stmt.get_prefix()) +diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_import.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixes/.svn/text-base/fix_import.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,90 @@ ++"""Fixer for import statements. ++If spam is being imported from the local directory, this import: ++ from spam import eggs ++Becomes: ++ from .spam import eggs ++ ++And this import: ++ import spam ++Becomes: ++ from . import spam ++""" ++ ++# Local imports ++from .. 
import fixer_base ++from os.path import dirname, join, exists, pathsep ++from ..fixer_util import FromImport, syms, token ++ ++ ++def traverse_imports(names): ++ """ ++ Walks over all the names imported in a dotted_as_names node. ++ """ ++ pending = [names] ++ while pending: ++ node = pending.pop() ++ if node.type == token.NAME: ++ yield node.value ++ elif node.type == syms.dotted_name: ++ yield "".join([ch.value for ch in node.children]) ++ elif node.type == syms.dotted_as_name: ++ pending.append(node.children[0]) ++ elif node.type == syms.dotted_as_names: ++ pending.extend(node.children[::-2]) ++ else: ++ raise AssertionError("unkown node type") ++ ++ ++class FixImport(fixer_base.BaseFix): ++ ++ PATTERN = """ ++ import_from< 'from' imp=any 'import' ['('] any [')'] > ++ | ++ import_name< 'import' imp=any > ++ """ ++ ++ def transform(self, node, results): ++ imp = results['imp'] ++ ++ if node.type == syms.import_from: ++ # Some imps are top-level (eg: 'import ham') ++ # some are first level (eg: 'import ham.eggs') ++ # some are third level (eg: 'import ham.eggs as spam') ++ # Hence, the loop ++ while not hasattr(imp, 'value'): ++ imp = imp.children[0] ++ if self.probably_a_local_import(imp.value): ++ imp.value = "." + imp.value ++ imp.changed() ++ return node ++ else: ++ have_local = False ++ have_absolute = False ++ for mod_name in traverse_imports(imp): ++ if self.probably_a_local_import(mod_name): ++ have_local = True ++ else: ++ have_absolute = True ++ if have_absolute: ++ if have_local: ++ # We won't handle both sibling and absolute imports in the ++ # same statement at the moment. ++ self.warning(node, "absolute and local imports together") ++ return ++ ++ new = FromImport('.', [imp]) ++ new.set_prefix(node.get_prefix()) ++ return new ++ ++ def probably_a_local_import(self, imp_name): ++ imp_name = imp_name.split('.', 1)[0] ++ base_path = dirname(self.filename) ++ base_path = join(base_path, imp_name) ++ # If there is no __init__.py next to the file its not in a package ++ # so can't be a relative import. ++ if not exists(join(dirname(base_path), '__init__.py')): ++ return False ++ for ext in ['.py', pathsep, '.pyc', '.so', '.sl', '.pyd']: ++ if exists(base_path + ext): ++ return True ++ return False +diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_imports.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixes/.svn/text-base/fix_imports.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,145 @@ ++"""Fix incompatible imports and module references.""" ++# Authors: Collin Winter, Nick Edds ++ ++# Local imports ++from .. 
import fixer_base ++from ..fixer_util import Name, attr_chain ++ ++MAPPING = {'StringIO': 'io', ++ 'cStringIO': 'io', ++ 'cPickle': 'pickle', ++ '__builtin__' : 'builtins', ++ 'copy_reg': 'copyreg', ++ 'Queue': 'queue', ++ 'SocketServer': 'socketserver', ++ 'ConfigParser': 'configparser', ++ 'repr': 'reprlib', ++ 'FileDialog': 'tkinter.filedialog', ++ 'tkFileDialog': 'tkinter.filedialog', ++ 'SimpleDialog': 'tkinter.simpledialog', ++ 'tkSimpleDialog': 'tkinter.simpledialog', ++ 'tkColorChooser': 'tkinter.colorchooser', ++ 'tkCommonDialog': 'tkinter.commondialog', ++ 'Dialog': 'tkinter.dialog', ++ 'Tkdnd': 'tkinter.dnd', ++ 'tkFont': 'tkinter.font', ++ 'tkMessageBox': 'tkinter.messagebox', ++ 'ScrolledText': 'tkinter.scrolledtext', ++ 'Tkconstants': 'tkinter.constants', ++ 'Tix': 'tkinter.tix', ++ 'ttk': 'tkinter.ttk', ++ 'Tkinter': 'tkinter', ++ 'markupbase': '_markupbase', ++ '_winreg': 'winreg', ++ 'thread': '_thread', ++ 'dummy_thread': '_dummy_thread', ++ # anydbm and whichdb are handled by fix_imports2 ++ 'dbhash': 'dbm.bsd', ++ 'dumbdbm': 'dbm.dumb', ++ 'dbm': 'dbm.ndbm', ++ 'gdbm': 'dbm.gnu', ++ 'xmlrpclib': 'xmlrpc.client', ++ 'DocXMLRPCServer': 'xmlrpc.server', ++ 'SimpleXMLRPCServer': 'xmlrpc.server', ++ 'httplib': 'http.client', ++ 'htmlentitydefs' : 'html.entities', ++ 'HTMLParser' : 'html.parser', ++ 'Cookie': 'http.cookies', ++ 'cookielib': 'http.cookiejar', ++ 'BaseHTTPServer': 'http.server', ++ 'SimpleHTTPServer': 'http.server', ++ 'CGIHTTPServer': 'http.server', ++ #'test.test_support': 'test.support', ++ 'commands': 'subprocess', ++ 'UserString' : 'collections', ++ 'UserList' : 'collections', ++ 'urlparse' : 'urllib.parse', ++ 'robotparser' : 'urllib.robotparser', ++} ++ ++ ++def alternates(members): ++ return "(" + "|".join(map(repr, members)) + ")" ++ ++ ++def build_pattern(mapping=MAPPING): ++ mod_list = ' | '.join(["module_name='%s'" % key for key in mapping]) ++ bare_names = alternates(mapping.keys()) ++ ++ yield """name_import=import_name< 'import' ((%s) | ++ multiple_imports=dotted_as_names< any* (%s) any* >) > ++ """ % (mod_list, mod_list) ++ yield """import_from< 'from' (%s) 'import' ['('] ++ ( any | import_as_name< any 'as' any > | ++ import_as_names< any* >) [')'] > ++ """ % mod_list ++ yield """import_name< 'import' (dotted_as_name< (%s) 'as' any > | ++ multiple_imports=dotted_as_names< ++ any* dotted_as_name< (%s) 'as' any > any* >) > ++ """ % (mod_list, mod_list) ++ ++ # Find usages of module members in code e.g. thread.foo(bar) ++ yield "power< bare_with_attr=(%s) trailer<'.' any > any* >" % bare_names ++ ++ ++class FixImports(fixer_base.BaseFix): ++ ++ order = "pre" # Pre-order tree traversal ++ ++ # This is overridden in fix_imports2. ++ mapping = MAPPING ++ ++ # We want to run this fixer late, so fix_import doesn't try to make stdlib ++ # renames into relative imports. ++ run_order = 6 ++ ++ def build_pattern(self): ++ return "|".join(build_pattern(self.mapping)) ++ ++ def compile_pattern(self): ++ # We override this, so MAPPING can be pragmatically altered and the ++ # changes will be reflected in PATTERN. ++ self.PATTERN = self.build_pattern() ++ super(FixImports, self).compile_pattern() ++ ++ # Don't match the node if it's within another match. ++ def match(self, node): ++ match = super(FixImports, self).match ++ results = match(node) ++ if results: ++ # Module usage could be in the trailer of an attribute lookup, so we ++ # might have nested matches when "bare_with_attr" is present. 
++ if "bare_with_attr" not in results and \ ++ any([match(obj) for obj in attr_chain(node, "parent")]): ++ return False ++ return results ++ return False ++ ++ def start_tree(self, tree, filename): ++ super(FixImports, self).start_tree(tree, filename) ++ self.replace = {} ++ ++ def transform(self, node, results): ++ import_mod = results.get("module_name") ++ if import_mod: ++ mod_name = import_mod.value ++ new_name = self.mapping[mod_name] ++ import_mod.replace(Name(new_name, prefix=import_mod.get_prefix())) ++ if "name_import" in results: ++ # If it's not a "from x import x, y" or "import x as y" import, ++ # marked its usage to be replaced. ++ self.replace[mod_name] = new_name ++ if "multiple_imports" in results: ++ # This is a nasty hack to fix multiple imports on a line (e.g., ++ # "import StringIO, urlparse"). The problem is that I can't ++ # figure out an easy way to make a pattern recognize the keys of ++ # MAPPING randomly sprinkled in an import statement. ++ results = self.match(node) ++ if results: ++ self.transform(node, results) ++ else: ++ # Replace usage of the module. ++ bare_name = results["bare_with_attr"][0] ++ new_name = self.replace.get(bare_name.value) ++ if new_name: ++ bare_name.replace(Name(new_name, prefix=bare_name.get_prefix())) +diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_imports2.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixes/.svn/text-base/fix_imports2.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,16 @@ ++"""Fix incompatible imports and module references that must be fixed after ++fix_imports.""" ++from . import fix_imports ++ ++ ++MAPPING = { ++ 'whichdb': 'dbm', ++ 'anydbm': 'dbm', ++ } ++ ++ ++class FixImports2(fix_imports.FixImports): ++ ++ run_order = 7 ++ ++ mapping = MAPPING +diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_input.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixes/.svn/text-base/fix_input.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,26 @@ ++"""Fixer that changes input(...) into eval(input(...)).""" ++# Author: Andre Roberge ++ ++# Local imports ++from .. import fixer_base ++from ..fixer_util import Call, Name ++from .. import patcomp ++ ++ ++context = patcomp.compile_pattern("power< 'eval' trailer< '(' any ')' > >") ++ ++ ++class FixInput(fixer_base.BaseFix): ++ ++ PATTERN = """ ++ power< 'input' args=trailer< '(' [any] ')' > > ++ """ ++ ++ def transform(self, node, results): ++ # If we're already wrapped in a eval() call, we're done. ++ if context.match(node.parent.parent): ++ return ++ ++ new = node.clone() ++ new.set_prefix("") ++ return Call(Name("eval"), [new], prefix=node.get_prefix()) +diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_intern.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixes/.svn/text-base/fix_intern.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,44 @@ ++# Copyright 2006 Georg Brandl. ++# Licensed to PSF under a Contributor Agreement. ++ ++"""Fixer for intern(). ++ ++intern(s) -> sys.intern(s)""" ++ ++# Local imports ++from .. import pytree ++from .. 
import fixer_base
++from ..fixer_util import Name, Attr, touch_import
++
++
++class FixIntern(fixer_base.BaseFix):
++
++    PATTERN = """
++    power< 'intern'
++           trailer< lpar='('
++                    ( not(arglist | argument<any '=' any>) obj=any
++                      | obj=arglist<(not argument<any '=' any>) any ','> )
++                    rpar=')' >
++           after=any*
++    >
++    """
++
++    def transform(self, node, results):
++        syms = self.syms
++        obj = results["obj"].clone()
++        if obj.type == syms.arglist:
++            newarglist = obj.clone()
++        else:
++            newarglist = pytree.Node(syms.arglist, [obj.clone()])
++        after = results["after"]
++        if after:
++            after = [n.clone() for n in after]
++        new = pytree.Node(syms.power,
++                          Attr(Name("sys"), Name("intern")) +
++                          [pytree.Node(syms.trailer,
++                                       [results["lpar"].clone(),
++                                        newarglist,
++                                        results["rpar"].clone()])] + after)
++        new.set_prefix(node.get_prefix())
++        touch_import(None, 'sys', node)
++        return new
+diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_isinstance.py.svn-base
+--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
++++ b/refactor/fixes/.svn/text-base/fix_isinstance.py.svn-base	Wed Apr 01 13:59:47 2009 -0500
+@@ -0,0 +1,52 @@
++# Copyright 2008 Armin Ronacher.
++# Licensed to PSF under a Contributor Agreement.
++
++"""Fixer that cleans up a tuple argument to isinstance after the tokens
++in it were fixed. This is mainly used to remove double occurrences of
++tokens as a leftover of the long -> int / unicode -> str conversion.
++
++eg. isinstance(x, (int, long)) -> isinstance(x, (int, int))
++                               -> isinstance(x, int)
++"""
++
++from .. import fixer_base
++from ..fixer_util import token
++
++
++class FixIsinstance(fixer_base.BaseFix):
++
++    PATTERN = """
++    power<
++        'isinstance'
++        trailer< '(' arglist< any ',' atom< '('
++            args=testlist_gexp< any+ >
++        ')' > > ')' >
++    >
++    """
++
++    run_order = 6
++
++    def transform(self, node, results):
++        names_inserted = set()
++        testlist = results["args"]
++        args = testlist.children
++        new_args = []
++        iterator = enumerate(args)
++        for idx, arg in iterator:
++            if arg.type == token.NAME and arg.value in names_inserted:
++                if idx < len(args) - 1 and args[idx + 1].type == token.COMMA:
++                    iterator.next()
++                    continue
++            else:
++                new_args.append(arg)
++                if arg.type == token.NAME:
++                    names_inserted.add(arg.value)
++        if new_args and new_args[-1].type == token.COMMA:
++            del new_args[-1]
++        if len(new_args) == 1:
++            atom = testlist.parent
++            new_args[0].set_prefix(atom.get_prefix())
++            atom.replace(new_args[0])
++        else:
++            args[:] = new_args
++            node.changed()
+diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_itertools.py.svn-base
+--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
++++ b/refactor/fixes/.svn/text-base/fix_itertools.py.svn-base	Wed Apr 01 13:59:47 2009 -0500
+@@ -0,0 +1,41 @@
++""" Fixer for itertools.(imap|ifilter|izip) --> (map|filter|zip) and
++    itertools.ifilterfalse --> itertools.filterfalse (bugs 2360-2363)
++
++    imports from itertools are fixed in fix_itertools_imports.py
++
++    If itertools is imported as something else (ie: import itertools as it;
++    it.izip(spam, eggs)) method calls will not get fixed.
++    """
++
++# Local imports
++from .. import fixer_base
++from ..fixer_util import Name
++
++class FixItertools(fixer_base.BaseFix):
++    it_funcs = "('imap'|'ifilter'|'izip'|'ifilterfalse')"
++    PATTERN = """
++    power< it='itertools'
++        trailer<
++            dot='.'
func=%(it_funcs)s > trailer< '(' [any] ')' > > ++ | ++ power< func=%(it_funcs)s trailer< '(' [any] ')' > > ++ """ %(locals()) ++ ++ # Needs to be run after fix_(map|zip|filter) ++ run_order = 6 ++ ++ def transform(self, node, results): ++ prefix = None ++ func = results['func'][0] ++ if 'it' in results and func.value != 'ifilterfalse': ++ dot, it = (results['dot'], results['it']) ++ # Remove the 'itertools' ++ prefix = it.get_prefix() ++ it.remove() ++ # Replace the node wich contains ('.', 'function') with the ++ # function (to be consistant with the second part of the pattern) ++ dot.remove() ++ func.parent.replace(func) ++ ++ prefix = prefix or func.get_prefix() ++ func.replace(Name(func.value[1:], prefix=prefix)) +diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_itertools_imports.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixes/.svn/text-base/fix_itertools_imports.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,52 @@ ++""" Fixer for imports of itertools.(imap|ifilter|izip|ifilterfalse) """ ++ ++# Local imports ++from lib2to3 import fixer_base ++from lib2to3.fixer_util import BlankLine, syms, token ++ ++ ++class FixItertoolsImports(fixer_base.BaseFix): ++ PATTERN = """ ++ import_from< 'from' 'itertools' 'import' imports=any > ++ """ %(locals()) ++ ++ def transform(self, node, results): ++ imports = results['imports'] ++ if imports.type == syms.import_as_name or not imports.children: ++ children = [imports] ++ else: ++ children = imports.children ++ for child in children[::2]: ++ if child.type == token.NAME: ++ member = child.value ++ name_node = child ++ else: ++ assert child.type == syms.import_as_name ++ name_node = child.children[0] ++ member_name = name_node.value ++ if member_name in ('imap', 'izip', 'ifilter'): ++ child.value = None ++ child.remove() ++ elif member_name == 'ifilterfalse': ++ node.changed() ++ name_node.value = 'filterfalse' ++ ++ # Make sure the import statement is still sane ++ children = imports.children[:] or [imports] ++ remove_comma = True ++ for child in children: ++ if remove_comma and child.type == token.COMMA: ++ child.remove() ++ else: ++ remove_comma ^= True ++ ++ if children[-1].type == token.COMMA: ++ children[-1].remove() ++ ++ # If there are no imports left, just get rid of the entire statement ++ if not (imports.children or getattr(imports, 'value', None)) or \ ++ imports.parent is None: ++ p = node.get_prefix() ++ node = BlankLine() ++ node.prefix = p ++ return node +diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_long.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixes/.svn/text-base/fix_long.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,22 @@ ++# Copyright 2006 Google, Inc. All Rights Reserved. ++# Licensed to PSF under a Contributor Agreement. ++ ++"""Fixer that turns 'long' into 'int' everywhere. ++""" ++ ++# Local imports ++from .. import fixer_base ++from ..fixer_util import Name, Number, is_probably_builtin ++ ++ ++class FixLong(fixer_base.BaseFix): ++ ++ PATTERN = "'long'" ++ ++ static_int = Name("int") ++ ++ def transform(self, node, results): ++ if is_probably_builtin(node): ++ new = self.static_int.clone() ++ new.set_prefix(node.get_prefix()) ++ return new +diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_map.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixes/.svn/text-base/fix_map.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,82 @@ ++# Copyright 2007 Google, Inc. All Rights Reserved. 
++# Licensed to PSF under a Contributor Agreement. ++ ++"""Fixer that changes map(F, ...) into list(map(F, ...)) unless there ++exists a 'from future_builtins import map' statement in the top-level ++namespace. ++ ++As a special case, map(None, X) is changed into list(X). (This is ++necessary because the semantics are changed in this case -- the new ++map(None, X) is equivalent to [(x,) for x in X].) ++ ++We avoid the transformation (except for the special case mentioned ++above) if the map() call is directly contained in iter(<>), list(<>), ++tuple(<>), sorted(<>), ...join(<>), or for V in <>:. ++ ++NOTE: This is still not correct if the original code was depending on ++map(F, X, Y, ...) to go on until the longest argument is exhausted, ++substituting None for missing values -- like zip(), it now stops as ++soon as the shortest argument is exhausted. ++""" ++ ++# Local imports ++from ..pgen2 import token ++from .. import fixer_base ++from ..fixer_util import Name, Call, ListComp, in_special_context ++from ..pygram import python_symbols as syms ++ ++class FixMap(fixer_base.ConditionalFix): ++ ++ PATTERN = """ ++ map_none=power< ++ 'map' ++ trailer< '(' arglist< 'None' ',' arg=any [','] > ')' > ++ > ++ | ++ map_lambda=power< ++ 'map' ++ trailer< ++ '(' ++ arglist< ++ lambdef< 'lambda' ++ (fp=NAME | vfpdef< '(' fp=NAME ')'> ) ':' xp=any ++ > ++ ',' ++ it=any ++ > ++ ')' ++ > ++ > ++ | ++ power< ++ 'map' ++ args=trailer< '(' [any] ')' > ++ > ++ """ ++ ++ skip_on = 'future_builtins.map' ++ ++ def transform(self, node, results): ++ if self.should_skip(node): ++ return ++ ++ if node.parent.type == syms.simple_stmt: ++ self.warning(node, "You should use a for loop here") ++ new = node.clone() ++ new.set_prefix("") ++ new = Call(Name("list"), [new]) ++ elif "map_lambda" in results: ++ new = ListComp(results.get("xp").clone(), ++ results.get("fp").clone(), ++ results.get("it").clone()) ++ else: ++ if "map_none" in results: ++ new = results["arg"].clone() ++ else: ++ if in_special_context(node): ++ return None ++ new = node.clone() ++ new.set_prefix("") ++ new = Call(Name("list"), [new]) ++ new.set_prefix(node.get_prefix()) ++ return new +diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_metaclass.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixes/.svn/text-base/fix_metaclass.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,227 @@ ++"""Fixer for __metaclass__ = X -> (metaclass=X) methods. ++ ++ The various forms of classef (inherits nothing, inherits once, inherints ++ many) don't parse the same in the CST so we look at ALL classes for ++ a __metaclass__ and if we find one normalize the inherits to all be ++ an arglist. ++ ++ For one-liner classes ('class X: pass') there is no indent/dedent so ++ we normalize those into having a suite. ++ ++ Moving the __metaclass__ into the classdef can also cause the class ++ body to be empty so there is some special casing for that as well. ++ ++ This fixer also tries very hard to keep original indenting and spacing ++ in all those corner cases. ++ ++""" ++# Author: Jack Diederich ++ ++# Local imports ++from .. import fixer_base ++from ..pygram import token ++from ..fixer_util import Name, syms, Node, Leaf ++ ++ ++def has_metaclass(parent): ++ """ we have to check the cls_node without changing it. 
++    There are two possibilities:
++      1) clsdef => suite => simple_stmt => expr_stmt => Leaf('__meta')
++      2) clsdef => simple_stmt => expr_stmt => Leaf('__meta')
++    """
++    for node in parent.children:
++        if node.type == syms.suite:
++            return has_metaclass(node)
++        elif node.type == syms.simple_stmt and node.children:
++            expr_node = node.children[0]
++            if expr_node.type == syms.expr_stmt and expr_node.children:
++                left_side = expr_node.children[0]
++                if isinstance(left_side, Leaf) and \
++                   left_side.value == '__metaclass__':
++                    return True
++    return False
++
++
++def fixup_parse_tree(cls_node):
++    """ one-line classes don't get a suite in the parse tree so we add
++        one to normalize the tree
++    """
++    for node in cls_node.children:
++        if node.type == syms.suite:
++            # already in the preferred format, do nothing
++            return
++
++    # !%@#! oneliners have no suite node, we have to fake one up
++    for i, node in enumerate(cls_node.children):
++        if node.type == token.COLON:
++            break
++    else:
++        raise ValueError("No class suite and no ':'!")
++
++    # move everything into a suite node
++    suite = Node(syms.suite, [])
++    while cls_node.children[i+1:]:
++        move_node = cls_node.children[i+1]
++        suite.append_child(move_node.clone())
++        move_node.remove()
++    cls_node.append_child(suite)
++    node = suite
++
++
++def fixup_simple_stmt(parent, i, stmt_node):
++    """ if there is a semi-colon, all the parts count as part of the same
++        simple_stmt. We just want the __metaclass__ part, so we move
++        everything after the semi-colon into its own simple_stmt node.
++    """
++    for semi_ind, node in enumerate(stmt_node.children):
++        if node.type == token.SEMI: # *sigh*
++            break
++    else:
++        return
++
++    node.remove() # kill the semicolon
++    new_expr = Node(syms.expr_stmt, [])
++    new_stmt = Node(syms.simple_stmt, [new_expr])
++    while stmt_node.children[semi_ind:]:
++        move_node = stmt_node.children[semi_ind]
++        new_expr.append_child(move_node.clone())
++        move_node.remove()
++    parent.insert_child(i, new_stmt)
++    new_leaf1 = new_stmt.children[0].children[0]
++    old_leaf1 = stmt_node.children[0].children[0]
++    new_leaf1.set_prefix(old_leaf1.get_prefix())
++
++
++def remove_trailing_newline(node):
++    if node.children and node.children[-1].type == token.NEWLINE:
++        node.children[-1].remove()
++
++
++def find_metas(cls_node):
++    # find the suite node (Mmm, sweet nodes)
++    for node in cls_node.children:
++        if node.type == syms.suite:
++            break
++    else:
++        raise ValueError("No class suite!")
++
++    # look for simple_stmt[ expr_stmt[ Leaf('__metaclass__') ] ]
++    for i, simple_node in list(enumerate(node.children)):
++        if simple_node.type == syms.simple_stmt and simple_node.children:
++            expr_node = simple_node.children[0]
++            if expr_node.type == syms.expr_stmt and expr_node.children:
++                # Check if the expr_node is a simple assignment.
++                left_node = expr_node.children[0]
++                if isinstance(left_node, Leaf) and \
++                   left_node.value == '__metaclass__':
++                    # We found an assignment to __metaclass__.
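++                    # e.g. for "class C: __metaclass__ = Meta; x = 1" the
++                    # semicolon part is split off first, then only the
++                    # __metaclass__ statement is yielded for removal.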
++ fixup_simple_stmt(node, i, simple_node) ++ remove_trailing_newline(simple_node) ++ yield (node, i, simple_node) ++ ++ ++def fixup_indent(suite): ++ """ If an INDENT is followed by a thing with a prefix then nuke the prefix ++ Otherwise we get in trouble when removing __metaclass__ at suite start ++ """ ++ kids = suite.children[::-1] ++ # find the first indent ++ while kids: ++ node = kids.pop() ++ if node.type == token.INDENT: ++ break ++ ++ # find the first Leaf ++ while kids: ++ node = kids.pop() ++ if isinstance(node, Leaf) and node.type != token.DEDENT: ++ if node.prefix: ++ node.set_prefix('') ++ return ++ else: ++ kids.extend(node.children[::-1]) ++ ++ ++class FixMetaclass(fixer_base.BaseFix): ++ ++ PATTERN = """ ++ classdef ++ """ ++ ++ def transform(self, node, results): ++ if not has_metaclass(node): ++ return node ++ ++ fixup_parse_tree(node) ++ ++ # find metaclasses, keep the last one ++ last_metaclass = None ++ for suite, i, stmt in find_metas(node): ++ last_metaclass = stmt ++ stmt.remove() ++ ++ text_type = node.children[0].type # always Leaf(nnn, 'class') ++ ++ # figure out what kind of classdef we have ++ if len(node.children) == 7: ++ # Node(classdef, ['class', 'name', '(', arglist, ')', ':', suite]) ++ # 0 1 2 3 4 5 6 ++ if node.children[3].type == syms.arglist: ++ arglist = node.children[3] ++ # Node(classdef, ['class', 'name', '(', 'Parent', ')', ':', suite]) ++ else: ++ parent = node.children[3].clone() ++ arglist = Node(syms.arglist, [parent]) ++ node.set_child(3, arglist) ++ elif len(node.children) == 6: ++ # Node(classdef, ['class', 'name', '(', ')', ':', suite]) ++ # 0 1 2 3 4 5 ++ arglist = Node(syms.arglist, []) ++ node.insert_child(3, arglist) ++ elif len(node.children) == 4: ++ # Node(classdef, ['class', 'name', ':', suite]) ++ # 0 1 2 3 ++ arglist = Node(syms.arglist, []) ++ node.insert_child(2, Leaf(token.RPAR, ')')) ++ node.insert_child(2, arglist) ++ node.insert_child(2, Leaf(token.LPAR, '(')) ++ else: ++ raise ValueError("Unexpected class definition") ++ ++ # now stick the metaclass in the arglist ++ meta_txt = last_metaclass.children[0].children[0] ++ meta_txt.value = 'metaclass' ++ orig_meta_prefix = meta_txt.get_prefix() ++ ++ if arglist.children: ++ arglist.append_child(Leaf(token.COMMA, ',')) ++ meta_txt.set_prefix(' ') ++ else: ++ meta_txt.set_prefix('') ++ ++ # compact the expression "metaclass = Meta" -> "metaclass=Meta" ++ expr_stmt = last_metaclass.children[0] ++ assert expr_stmt.type == syms.expr_stmt ++ expr_stmt.children[1].set_prefix('') ++ expr_stmt.children[2].set_prefix('') ++ ++ arglist.append_child(last_metaclass) ++ ++ fixup_indent(suite) ++ ++ # check for empty suite ++ if not suite.children: ++ # one-liner that was just __metaclass_ ++ suite.remove() ++ pass_leaf = Leaf(text_type, 'pass') ++ pass_leaf.set_prefix(orig_meta_prefix) ++ node.append_child(pass_leaf) ++ node.append_child(Leaf(token.NEWLINE, '\n')) ++ ++ elif len(suite.children) > 1 and \ ++ (suite.children[-2].type == token.INDENT and ++ suite.children[-1].type == token.DEDENT): ++ # there was only one line in the class body and it was __metaclass__ ++ pass_leaf = Leaf(text_type, 'pass') ++ suite.insert_child(-1, pass_leaf) ++ suite.insert_child(-1, Leaf(token.NEWLINE, '\n')) +diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_methodattrs.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixes/.svn/text-base/fix_methodattrs.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,23 @@ ++"""Fix bound method attributes (method.im_? 
-> method.__?__). ++""" ++# Author: Christian Heimes ++ ++# Local imports ++from .. import fixer_base ++from ..fixer_util import Name ++ ++MAP = { ++ "im_func" : "__func__", ++ "im_self" : "__self__", ++ "im_class" : "__self__.__class__" ++ } ++ ++class FixMethodattrs(fixer_base.BaseFix): ++ PATTERN = """ ++ power< any+ trailer< '.' attr=('im_func' | 'im_self' | 'im_class') > any* > ++ """ ++ ++ def transform(self, node, results): ++ attr = results["attr"][0] ++ new = MAP[attr.value] ++ attr.replace(Name(new, prefix=attr.get_prefix())) +diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_ne.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixes/.svn/text-base/fix_ne.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,22 @@ ++# Copyright 2006 Google, Inc. All Rights Reserved. ++# Licensed to PSF under a Contributor Agreement. ++ ++"""Fixer that turns <> into !=.""" ++ ++# Local imports ++from .. import pytree ++from ..pgen2 import token ++from .. import fixer_base ++ ++ ++class FixNe(fixer_base.BaseFix): ++ # This is so simple that we don't need the pattern compiler. ++ ++ def match(self, node): ++ # Override ++ return node.type == token.NOTEQUAL and node.value == "<>" ++ ++ def transform(self, node, results): ++ new = pytree.Leaf(token.NOTEQUAL, "!=") ++ new.set_prefix(node.get_prefix()) ++ return new +diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_next.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixes/.svn/text-base/fix_next.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,103 @@ ++"""Fixer for it.next() -> next(it), per PEP 3114.""" ++# Author: Collin Winter ++ ++# Things that currently aren't covered: ++# - listcomp "next" names aren't warned ++# - "with" statement targets aren't checked ++ ++# Local imports ++from ..pgen2 import token ++from ..pygram import python_symbols as syms ++from .. import fixer_base ++from ..fixer_util import Name, Call, find_binding ++ ++bind_warning = "Calls to builtin next() possibly shadowed by global binding" ++ ++ ++class FixNext(fixer_base.BaseFix): ++ PATTERN = """ ++ power< base=any+ trailer< '.' attr='next' > trailer< '(' ')' > > ++ | ++ power< head=any+ trailer< '.' attr='next' > not trailer< '(' ')' > > ++ | ++ classdef< 'class' any+ ':' ++ suite< any* ++ funcdef< 'def' ++ name='next' ++ parameters< '(' NAME ')' > any+ > ++ any* > > ++ | ++ global=global_stmt< 'global' any* 'next' any* > ++ """ ++ ++ order = "pre" # Pre-order tree traversal ++ ++ def start_tree(self, tree, filename): ++ super(FixNext, self).start_tree(tree, filename) ++ ++ n = find_binding('next', tree) ++ if n: ++ self.warning(n, bind_warning) ++ self.shadowed_next = True ++ else: ++ self.shadowed_next = False ++ ++ def transform(self, node, results): ++ assert results ++ ++ base = results.get("base") ++ attr = results.get("attr") ++ name = results.get("name") ++ mod = results.get("mod") ++ ++ if base: ++ if self.shadowed_next: ++ attr.replace(Name("__next__", prefix=attr.get_prefix())) ++ else: ++ base = [n.clone() for n in base] ++ base[0].set_prefix("") ++ node.replace(Call(Name("next", prefix=node.get_prefix()), base)) ++ elif name: ++ n = Name("__next__", prefix=name.get_prefix()) ++ name.replace(n) ++ elif attr: ++ # We don't do this transformation if we're assigning to "x.next". ++ # Unfortunately, it doesn't seem possible to do this in PATTERN, ++ # so it's being done here. 
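++            # e.g. "foo.next = 42" is left alone; only a plain reference
++            # like "foo.next" becomes "foo.__next__".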
++ if is_assign_target(node): ++ head = results["head"] ++ if "".join([str(n) for n in head]).strip() == '__builtin__': ++ self.warning(node, bind_warning) ++ return ++ attr.replace(Name("__next__")) ++ elif "global" in results: ++ self.warning(node, bind_warning) ++ self.shadowed_next = True ++ ++ ++### The following functions help test if node is part of an assignment ++### target. ++ ++def is_assign_target(node): ++ assign = find_assign(node) ++ if assign is None: ++ return False ++ ++ for child in assign.children: ++ if child.type == token.EQUAL: ++ return False ++ elif is_subtree(child, node): ++ return True ++ return False ++ ++def find_assign(node): ++ if node.type == syms.expr_stmt: ++ return node ++ if node.type == syms.simple_stmt or node.parent is None: ++ return None ++ return find_assign(node.parent) ++ ++def is_subtree(root, node): ++ if root == node: ++ return True ++ return any([is_subtree(c, node) for c in root.children]) +diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_nonzero.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixes/.svn/text-base/fix_nonzero.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,20 @@ ++"""Fixer for __nonzero__ -> __bool__ methods.""" ++# Author: Collin Winter ++ ++# Local imports ++from .. import fixer_base ++from ..fixer_util import Name, syms ++ ++class FixNonzero(fixer_base.BaseFix): ++ PATTERN = """ ++ classdef< 'class' any+ ':' ++ suite< any* ++ funcdef< 'def' name='__nonzero__' ++ parameters< '(' NAME ')' > any+ > ++ any* > > ++ """ ++ ++ def transform(self, node, results): ++ name = results["name"] ++ new = Name("__bool__", prefix=name.get_prefix()) ++ name.replace(new) +diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_numliterals.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixes/.svn/text-base/fix_numliterals.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,27 @@ ++"""Fixer that turns 1L into 1, 0755 into 0o755. ++""" ++# Copyright 2007 Georg Brandl. ++# Licensed to PSF under a Contributor Agreement. ++ ++# Local imports ++from ..pgen2 import token ++from .. import fixer_base ++from ..fixer_util import Number ++ ++ ++class FixNumliterals(fixer_base.BaseFix): ++ # This is so simple that we don't need the pattern compiler. ++ ++ def match(self, node): ++ # Override ++ return (node.type == token.NUMBER and ++ (node.value.startswith("0") or node.value[-1] in "Ll")) ++ ++ def transform(self, node, results): ++ val = node.value ++ if val[-1] in 'Ll': ++ val = val[:-1] ++ elif val.startswith('0') and val.isdigit() and len(set(val)) > 1: ++ val = "0o" + val[1:] ++ ++ return Number(val, prefix=node.get_prefix()) +diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_paren.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixes/.svn/text-base/fix_paren.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,42 @@ ++"""Fixer that addes parentheses where they are required ++ ++This converts ``[x for x in 1, 2]`` to ``[x for x in (1, 2)]``.""" ++ ++# By Taek Joo Kim and Benjamin Peterson ++ ++# Local imports ++from .. 
import fixer_base ++from ..fixer_util import LParen, RParen ++ ++# XXX This doesn't support nested for loops like [x for x in 1, 2 for x in 1, 2] ++class FixParen(fixer_base.BaseFix): ++ PATTERN = """ ++ atom< ('[' | '(') ++ (listmaker< any ++ comp_for< ++ 'for' NAME 'in' ++ target=testlist_safe< any (',' any)+ [','] ++ > ++ [any] ++ > ++ > ++ | ++ testlist_gexp< any ++ comp_for< ++ 'for' NAME 'in' ++ target=testlist_safe< any (',' any)+ [','] ++ > ++ [any] ++ > ++ >) ++ (']' | ')') > ++ """ ++ ++ def transform(self, node, results): ++ target = results["target"] ++ ++ lparen = LParen() ++ lparen.set_prefix(target.get_prefix()) ++ target.set_prefix("") # Make it hug the parentheses ++ target.insert_child(0, lparen) ++ target.append_child(RParen()) +diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_print.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixes/.svn/text-base/fix_print.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,90 @@ ++# Copyright 2006 Google, Inc. All Rights Reserved. ++# Licensed to PSF under a Contributor Agreement. ++ ++"""Fixer for print. ++ ++Change: ++ 'print' into 'print()' ++ 'print ...' into 'print(...)' ++ 'print ... ,' into 'print(..., end=" ")' ++ 'print >>x, ...' into 'print(..., file=x)' ++ ++No changes are applied if print_function is imported from __future__ ++ ++""" ++ ++# Local imports ++from .. import patcomp ++from .. import pytree ++from ..pgen2 import token ++from .. import fixer_base ++from ..fixer_util import Name, Call, Comma, String, is_tuple ++ ++ ++parend_expr = patcomp.compile_pattern( ++ """atom< '(' [atom|STRING|NAME] ')' >""" ++ ) ++ ++ ++class FixPrint(fixer_base.ConditionalFix): ++ ++ PATTERN = """ ++ simple_stmt< any* bare='print' any* > | print_stmt ++ """ ++ ++ skip_on = '__future__.print_function' ++ ++ def transform(self, node, results): ++ assert results ++ ++ if self.should_skip(node): ++ return ++ ++ bare_print = results.get("bare") ++ ++ if bare_print: ++ # Special-case print all by itself ++ bare_print.replace(Call(Name("print"), [], ++ prefix=bare_print.get_prefix())) ++ return ++ assert node.children[0] == Name("print") ++ args = node.children[1:] ++ if len(args) == 1 and parend_expr.match(args[0]): ++ # We don't want to keep sticking parens around an ++ # already-parenthesised expression. ++ return ++ ++ sep = end = file = None ++ if args and args[-1] == Comma(): ++ args = args[:-1] ++ end = " " ++ if args and args[0] == pytree.Leaf(token.RIGHTSHIFT, ">>"): ++ assert len(args) >= 2 ++ file = args[1].clone() ++ args = args[3:] # Strip a possible comma after the file expression ++ # Now synthesize a print(args, sep=..., end=..., file=...) node. 
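++        # e.g. 'print >>sys.stderr, "x",' reaches this point with
++        # file=sys.stderr and end=" ", and is rebuilt as
++        # print("x", end=" ", file=sys.stderr).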
++ l_args = [arg.clone() for arg in args] ++ if l_args: ++ l_args[0].set_prefix("") ++ if sep is not None or end is not None or file is not None: ++ if sep is not None: ++ self.add_kwarg(l_args, "sep", String(repr(sep))) ++ if end is not None: ++ self.add_kwarg(l_args, "end", String(repr(end))) ++ if file is not None: ++ self.add_kwarg(l_args, "file", file) ++ n_stmt = Call(Name("print"), l_args) ++ n_stmt.set_prefix(node.get_prefix()) ++ return n_stmt ++ ++ def add_kwarg(self, l_nodes, s_kwd, n_expr): ++ # XXX All this prefix-setting may lose comments (though rarely) ++ n_expr.set_prefix("") ++ n_argument = pytree.Node(self.syms.argument, ++ (Name(s_kwd), ++ pytree.Leaf(token.EQUAL, "="), ++ n_expr)) ++ if l_nodes: ++ l_nodes.append(Comma()) ++ n_argument.set_prefix(" ") ++ l_nodes.append(n_argument) +diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_raise.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixes/.svn/text-base/fix_raise.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,82 @@ ++"""Fixer for 'raise E, V, T' ++ ++raise -> raise ++raise E -> raise E ++raise E, V -> raise E(V) ++raise E, V, T -> raise E(V).with_traceback(T) ++ ++raise (((E, E'), E''), E'''), V -> raise E(V) ++raise "foo", V, T -> warns about string exceptions ++ ++ ++CAVEATS: ++1) "raise E, V" will be incorrectly translated if V is an exception ++ instance. The correct Python 3 idiom is ++ ++ raise E from V ++ ++ but since we can't detect instance-hood by syntax alone and since ++ any client code would have to be changed as well, we don't automate ++ this. ++""" ++# Author: Collin Winter ++ ++# Local imports ++from .. import pytree ++from ..pgen2 import token ++from .. import fixer_base ++from ..fixer_util import Name, Call, Attr, ArgList, is_tuple ++ ++class FixRaise(fixer_base.BaseFix): ++ ++ PATTERN = """ ++ raise_stmt< 'raise' exc=any [',' val=any [',' tb=any]] > ++ """ ++ ++ def transform(self, node, results): ++ syms = self.syms ++ ++ exc = results["exc"].clone() ++ if exc.type is token.STRING: ++ self.cannot_convert(node, "Python 3 does not support string exceptions") ++ return ++ ++ # Python 2 supports ++ # raise ((((E1, E2), E3), E4), E5), V ++ # as a synonym for ++ # raise E1, V ++ # Since Python 3 will not support this, we recurse down any tuple ++ # literals, always taking the first element. 
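++        # e.g. "raise (((E1, E2), E3), E4), V" collapses to "raise E1(V)".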
++        if is_tuple(exc):
++            while is_tuple(exc):
++                # exc.children[1:-1] is the unparenthesized tuple
++                # exc.children[1].children[0] is the first element of the tuple
++                exc = exc.children[1].children[0].clone()
++            exc.set_prefix(" ")
++
++        if "val" not in results:
++            # One-argument raise
++            new = pytree.Node(syms.raise_stmt, [Name("raise"), exc])
++            new.set_prefix(node.get_prefix())
++            return new
++
++        val = results["val"].clone()
++        if is_tuple(val):
++            args = [c.clone() for c in val.children[1:-1]]
++        else:
++            val.set_prefix("")
++            args = [val]
++
++        if "tb" in results:
++            tb = results["tb"].clone()
++            tb.set_prefix("")
++
++            e = Call(exc, args)
++            with_tb = Attr(e, Name('with_traceback')) + [ArgList([tb])]
++            new = pytree.Node(syms.simple_stmt, [Name("raise")] + with_tb)
++            new.set_prefix(node.get_prefix())
++            return new
++        else:
++            return pytree.Node(syms.raise_stmt,
++                               [Name("raise"), Call(exc, args)],
++                               prefix=node.get_prefix())
+diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_raw_input.py.svn-base
+--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
++++ b/refactor/fixes/.svn/text-base/fix_raw_input.py.svn-base	Wed Apr 01 13:59:47 2009 -0500
+@@ -0,0 +1,16 @@
++"""Fixer that changes raw_input(...) into input(...)."""
++# Author: Andre Roberge
++
++# Local imports
++from .. import fixer_base
++from ..fixer_util import Name
++
++class FixRawInput(fixer_base.BaseFix):
++
++    PATTERN = """
++              power< name='raw_input' trailer< '(' [any] ')' > any* >
++              """
++
++    def transform(self, node, results):
++        name = results["name"]
++        name.replace(Name("input", prefix=name.get_prefix()))
+diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_reduce.py.svn-base
+--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
++++ b/refactor/fixes/.svn/text-base/fix_reduce.py.svn-base	Wed Apr 01 13:59:47 2009 -0500
+@@ -0,0 +1,33 @@
++# Copyright 2008 Armin Ronacher.
++# Licensed to PSF under a Contributor Agreement.
++
++"""Fixer for reduce().
++
++Makes sure reduce() is imported from the functools module if reduce is
++used in that module.
++"""
++
++from .. import pytree
++from .. import fixer_base
++from ..fixer_util import Name, Attr, touch_import
++
++
++
++class FixReduce(fixer_base.BaseFix):
++
++    PATTERN = """
++    power< 'reduce'
++        trailer< '('
++            arglist< (
++                (not(argument<any '=' any>) any ','
++                 not(argument<any '=' any>) any) |
++                (not(argument<any '=' any>) any ','
++                 not(argument<any '=' any>) any ','
++                 not(argument<any '=' any>) any)
++            ) >
++        ')' >
++    >
++    """
++
++    def transform(self, node, results):
++        touch_import('functools', 'reduce', node)
+diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_renames.py.svn-base
+--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
++++ b/refactor/fixes/.svn/text-base/fix_renames.py.svn-base	Wed Apr 01 13:59:47 2009 -0500
+@@ -0,0 +1,69 @@
++"""Fix incompatible renames
++
++Fixes:
++  * sys.maxint -> sys.maxsize
++"""
++# Author: Christian Heimes
++# based on Collin Winter's fix_import
++
++# Local imports
++from ..
import fixer_base ++from ..fixer_util import Name, attr_chain ++ ++MAPPING = {"sys": {"maxint" : "maxsize"}, ++ } ++LOOKUP = {} ++ ++def alternates(members): ++ return "(" + "|".join(map(repr, members)) + ")" ++ ++ ++def build_pattern(): ++ #bare = set() ++ for module, replace in MAPPING.items(): ++ for old_attr, new_attr in replace.items(): ++ LOOKUP[(module, old_attr)] = new_attr ++ #bare.add(module) ++ #bare.add(old_attr) ++ #yield """ ++ # import_name< 'import' (module=%r ++ # | dotted_as_names< any* module=%r any* >) > ++ # """ % (module, module) ++ yield """ ++ import_from< 'from' module_name=%r 'import' ++ ( attr_name=%r | import_as_name< attr_name=%r 'as' any >) > ++ """ % (module, old_attr, old_attr) ++ yield """ ++ power< module_name=%r trailer< '.' attr_name=%r > any* > ++ """ % (module, old_attr) ++ #yield """bare_name=%s""" % alternates(bare) ++ ++ ++class FixRenames(fixer_base.BaseFix): ++ PATTERN = "|".join(build_pattern()) ++ ++ order = "pre" # Pre-order tree traversal ++ ++ # Don't match the node if it's within another match ++ def match(self, node): ++ match = super(FixRenames, self).match ++ results = match(node) ++ if results: ++ if any([match(obj) for obj in attr_chain(node, "parent")]): ++ return False ++ return results ++ return False ++ ++ #def start_tree(self, tree, filename): ++ # super(FixRenames, self).start_tree(tree, filename) ++ # self.replace = {} ++ ++ def transform(self, node, results): ++ mod_name = results.get("module_name") ++ attr_name = results.get("attr_name") ++ #bare_name = results.get("bare_name") ++ #import_mod = results.get("module") ++ ++ if mod_name and attr_name: ++ new_attr = LOOKUP[(mod_name.value, attr_name.value)] ++ attr_name.replace(Name(new_attr, prefix=attr_name.get_prefix())) +diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_repr.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixes/.svn/text-base/fix_repr.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,22 @@ ++# Copyright 2006 Google, Inc. All Rights Reserved. ++# Licensed to PSF under a Contributor Agreement. ++ ++"""Fixer that transforms `xyzzy` into repr(xyzzy).""" ++ ++# Local imports ++from .. import fixer_base ++from ..fixer_util import Call, Name, parenthesize ++ ++ ++class FixRepr(fixer_base.BaseFix): ++ ++ PATTERN = """ ++ atom < '`' expr=any '`' > ++ """ ++ ++ def transform(self, node, results): ++ expr = results["expr"].clone() ++ ++ if expr.type == self.syms.testlist1: ++ expr = parenthesize(expr) ++ return Call(Name("repr"), [expr], prefix=node.get_prefix()) +diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_set_literal.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixes/.svn/text-base/fix_set_literal.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,52 @@ ++""" ++Optional fixer to transform set() calls to set literals. 
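++
++e.g. set([1, 2, 3]) becomes {1, 2, 3} and set((a, b)) becomes {a, b}.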
++""" ++ ++# Author: Benjamin Peterson ++ ++from lib2to3 import fixer_base, pytree ++from lib2to3.fixer_util import token, syms ++ ++ ++ ++class FixSetLiteral(fixer_base.BaseFix): ++ ++ explicit = True ++ ++ PATTERN = """power< 'set' trailer< '(' ++ (atom=atom< '[' (items=listmaker< any ((',' any)* [',']) > ++ | ++ single=any) ']' > ++ | ++ atom< '(' items=testlist_gexp< any ((',' any)* [',']) > ')' > ++ ) ++ ')' > > ++ """ ++ ++ def transform(self, node, results): ++ single = results.get("single") ++ if single: ++ # Make a fake listmaker ++ fake = pytree.Node(syms.listmaker, [single.clone()]) ++ single.replace(fake) ++ items = fake ++ else: ++ items = results["items"] ++ ++ # Build the contents of the literal ++ literal = [pytree.Leaf(token.LBRACE, "{")] ++ literal.extend(n.clone() for n in items.children) ++ literal.append(pytree.Leaf(token.RBRACE, "}")) ++ # Set the prefix of the right brace to that of the ')' or ']' ++ literal[-1].set_prefix(items.next_sibling.get_prefix()) ++ maker = pytree.Node(syms.dictsetmaker, literal) ++ maker.set_prefix(node.get_prefix()) ++ ++ # If the original was a one tuple, we need to remove the extra comma. ++ if len(maker.children) == 4: ++ n = maker.children[2] ++ n.remove() ++ maker.children[-1].set_prefix(n.get_prefix()) ++ ++ # Finally, replace the set call with our shiny new literal. ++ return maker +diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_standarderror.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixes/.svn/text-base/fix_standarderror.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,18 @@ ++# Copyright 2007 Google, Inc. All Rights Reserved. ++# Licensed to PSF under a Contributor Agreement. ++ ++"""Fixer for StandardError -> Exception.""" ++ ++# Local imports ++from .. import fixer_base ++from ..fixer_util import Name ++ ++ ++class FixStandarderror(fixer_base.BaseFix): ++ ++ PATTERN = """ ++ 'StandardError' ++ """ ++ ++ def transform(self, node, results): ++ return Name("Exception", prefix=node.get_prefix()) +diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_sys_exc.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixes/.svn/text-base/fix_sys_exc.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,29 @@ ++"""Fixer for sys.exc_{type, value, traceback} ++ ++sys.exc_type -> sys.exc_info()[0] ++sys.exc_value -> sys.exc_info()[1] ++sys.exc_traceback -> sys.exc_info()[2] ++""" ++ ++# By Jeff Balogh and Benjamin Peterson ++ ++# Local imports ++from .. import fixer_base ++from ..fixer_util import Attr, Call, Name, Number, Subscript, Node, syms ++ ++class FixSysExc(fixer_base.BaseFix): ++ # This order matches the ordering of sys.exc_info(). ++ exc_info = ["exc_type", "exc_value", "exc_traceback"] ++ PATTERN = """ ++ power< 'sys' trailer< dot='.' attribute=(%s) > > ++ """ % '|'.join("'%s'" % e for e in exc_info) ++ ++ def transform(self, node, results): ++ sys_attr = results["attribute"][0] ++ index = Number(self.exc_info.index(sys_attr.value)) ++ ++ call = Call(Name("exc_info"), prefix=sys_attr.get_prefix()) ++ attr = Attr(Name("sys"), call) ++ attr[1].children[0].set_prefix(results["dot"].get_prefix()) ++ attr.append(Subscript(index)) ++ return Node(syms.power, attr, prefix=node.get_prefix()) +diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_throw.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixes/.svn/text-base/fix_throw.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,56 @@ ++"""Fixer for generator.throw(E, V, T). 
++ ++g.throw(E) -> g.throw(E) ++g.throw(E, V) -> g.throw(E(V)) ++g.throw(E, V, T) -> g.throw(E(V).with_traceback(T)) ++ ++g.throw("foo"[, V[, T]]) will warn about string exceptions.""" ++# Author: Collin Winter ++ ++# Local imports ++from .. import pytree ++from ..pgen2 import token ++from .. import fixer_base ++from ..fixer_util import Name, Call, ArgList, Attr, is_tuple ++ ++class FixThrow(fixer_base.BaseFix): ++ ++ PATTERN = """ ++ power< any trailer< '.' 'throw' > ++ trailer< '(' args=arglist< exc=any ',' val=any [',' tb=any] > ')' > ++ > ++ | ++ power< any trailer< '.' 'throw' > trailer< '(' exc=any ')' > > ++ """ ++ ++ def transform(self, node, results): ++ syms = self.syms ++ ++ exc = results["exc"].clone() ++ if exc.type is token.STRING: ++ self.cannot_convert(node, "Python 3 does not support string exceptions") ++ return ++ ++ # Leave "g.throw(E)" alone ++ val = results.get("val") ++ if val is None: ++ return ++ ++ val = val.clone() ++ if is_tuple(val): ++ args = [c.clone() for c in val.children[1:-1]] ++ else: ++ val.set_prefix("") ++ args = [val] ++ ++ throw_args = results["args"] ++ ++ if "tb" in results: ++ tb = results["tb"].clone() ++ tb.set_prefix("") ++ ++ e = Call(exc, args) ++ with_tb = Attr(e, Name('with_traceback')) + [ArgList([tb])] ++ throw_args.replace(pytree.Node(syms.power, with_tb)) ++ else: ++ throw_args.replace(Call(exc, args)) +diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_tuple_params.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixes/.svn/text-base/fix_tuple_params.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,169 @@ ++"""Fixer for function definitions with tuple parameters. ++ ++def func(((a, b), c), d): ++ ... ++ ++ -> ++ ++def func(x, d): ++ ((a, b), c) = x ++ ... ++ ++It will also support lambdas: ++ ++ lambda (x, y): x + y -> lambda t: t[0] + t[1] ++ ++ # The parens are a syntax error in Python 3 ++ lambda (x): x + y -> lambda x: x + y ++""" ++# Author: Collin Winter ++ ++# Local imports ++from .. import pytree ++from ..pgen2 import token ++from .. import fixer_base ++from ..fixer_util import Assign, Name, Newline, Number, Subscript, syms ++ ++def is_docstring(stmt): ++ return isinstance(stmt, pytree.Node) and \ ++ stmt.children[0].type == token.STRING ++ ++class FixTupleParams(fixer_base.BaseFix): ++ PATTERN = """ ++ funcdef< 'def' any parameters< '(' args=any ')' > ++ ['->' any] ':' suite=any+ > ++ | ++ lambda= ++ lambdef< 'lambda' args=vfpdef< '(' inner=any ')' > ++ ':' body=any ++ > ++ """ ++ ++ def transform(self, node, results): ++ if "lambda" in results: ++ return self.transform_lambda(node, results) ++ ++ new_lines = [] ++ suite = results["suite"] ++ args = results["args"] ++ # This crap is so "def foo(...): x = 5; y = 7" is handled correctly. ++ # TODO(cwinter): suite-cleanup ++ if suite[0].children[1].type == token.INDENT: ++ start = 2 ++ indent = suite[0].children[1].value ++ end = Newline() ++ else: ++ start = 0 ++ indent = "; " ++ end = pytree.Leaf(token.INDENT, "") ++ ++ # We need access to self for new_name(), and making this a method ++ # doesn't feel right. Closing over self and new_lines makes the ++ # code below cleaner. 
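++        # e.g. "def f((a, b), c): ..." becomes "def f(xxx_todo_changeme, c):"
++        # with "(a, b) = xxx_todo_changeme" prepended to the body; the
++        # placeholder name is whatever self.new_name() returns.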
++ def handle_tuple(tuple_arg, add_prefix=False): ++ n = Name(self.new_name()) ++ arg = tuple_arg.clone() ++ arg.set_prefix("") ++ stmt = Assign(arg, n.clone()) ++ if add_prefix: ++ n.set_prefix(" ") ++ tuple_arg.replace(n) ++ new_lines.append(pytree.Node(syms.simple_stmt, ++ [stmt, end.clone()])) ++ ++ if args.type == syms.tfpdef: ++ handle_tuple(args) ++ elif args.type == syms.typedargslist: ++ for i, arg in enumerate(args.children): ++ if arg.type == syms.tfpdef: ++ # Without add_prefix, the emitted code is correct, ++ # just ugly. ++ handle_tuple(arg, add_prefix=(i > 0)) ++ ++ if not new_lines: ++ return node ++ ++ # This isn't strictly necessary, but it plays nicely with other fixers. ++ # TODO(cwinter) get rid of this when children becomes a smart list ++ for line in new_lines: ++ line.parent = suite[0] ++ ++ # TODO(cwinter) suite-cleanup ++ after = start ++ if start == 0: ++ new_lines[0].set_prefix(" ") ++ elif is_docstring(suite[0].children[start]): ++ new_lines[0].set_prefix(indent) ++ after = start + 1 ++ ++ suite[0].children[after:after] = new_lines ++ for i in range(after+1, after+len(new_lines)+1): ++ suite[0].children[i].set_prefix(indent) ++ suite[0].changed() ++ ++ def transform_lambda(self, node, results): ++ args = results["args"] ++ body = results["body"] ++ inner = simplify_args(results["inner"]) ++ ++ # Replace lambda ((((x)))): x with lambda x: x ++ if inner.type == token.NAME: ++ inner = inner.clone() ++ inner.set_prefix(" ") ++ args.replace(inner) ++ return ++ ++ params = find_params(args) ++ to_index = map_to_index(params) ++ tup_name = self.new_name(tuple_name(params)) ++ ++ new_param = Name(tup_name, prefix=" ") ++ args.replace(new_param.clone()) ++ for n in body.post_order(): ++ if n.type == token.NAME and n.value in to_index: ++ subscripts = [c.clone() for c in to_index[n.value]] ++ new = pytree.Node(syms.power, ++ [new_param.clone()] + subscripts) ++ new.set_prefix(n.get_prefix()) ++ n.replace(new) ++ ++ ++### Helper functions for transform_lambda() ++ ++def simplify_args(node): ++ if node.type in (syms.vfplist, token.NAME): ++ return node ++ elif node.type == syms.vfpdef: ++ # These look like vfpdef< '(' x ')' > where x is NAME ++ # or another vfpdef instance (leading to recursion). ++ while node.type == syms.vfpdef: ++ node = node.children[1] ++ return node ++ raise RuntimeError("Received unexpected node %s" % node) ++ ++def find_params(node): ++ if node.type == syms.vfpdef: ++ return find_params(node.children[1]) ++ elif node.type == token.NAME: ++ return node.value ++ return [find_params(c) for c in node.children if c.type != token.COMMA] ++ ++def map_to_index(param_list, prefix=[], d=None): ++ if d is None: ++ d = {} ++ for i, obj in enumerate(param_list): ++ trailer = [Subscript(Number(i))] ++ if isinstance(obj, list): ++ map_to_index(obj, trailer, d=d) ++ else: ++ d[obj] = prefix + trailer ++ return d ++ ++def tuple_name(param_list): ++ l = [] ++ for obj in param_list: ++ if isinstance(obj, list): ++ l.append(tuple_name(obj)) ++ else: ++ l.append(obj) ++ return "_".join(l) +diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_types.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixes/.svn/text-base/fix_types.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,62 @@ ++# Copyright 2007 Google, Inc. All Rights Reserved. ++# Licensed to PSF under a Contributor Agreement. ++ ++"""Fixer for removing uses of the types module. ++ ++These work for only the known names in the types module. The forms above ++can include types. 
or not. ie, It is assumed the module is imported either as: ++ ++ import types ++ from types import ... # either * or specific types ++ ++The import statements are not modified. ++ ++There should be another fixer that handles at least the following constants: ++ ++ type([]) -> list ++ type(()) -> tuple ++ type('') -> str ++ ++""" ++ ++# Local imports ++from ..pgen2 import token ++from .. import fixer_base ++from ..fixer_util import Name ++ ++_TYPE_MAPPING = { ++ 'BooleanType' : 'bool', ++ 'BufferType' : 'memoryview', ++ 'ClassType' : 'type', ++ 'ComplexType' : 'complex', ++ 'DictType': 'dict', ++ 'DictionaryType' : 'dict', ++ 'EllipsisType' : 'type(Ellipsis)', ++ #'FileType' : 'io.IOBase', ++ 'FloatType': 'float', ++ 'IntType': 'int', ++ 'ListType': 'list', ++ 'LongType': 'int', ++ 'ObjectType' : 'object', ++ 'NoneType': 'type(None)', ++ 'NotImplementedType' : 'type(NotImplemented)', ++ 'SliceType' : 'slice', ++ 'StringType': 'bytes', # XXX ? ++ 'StringTypes' : 'str', # XXX ? ++ 'TupleType': 'tuple', ++ 'TypeType' : 'type', ++ 'UnicodeType': 'str', ++ 'XRangeType' : 'range', ++ } ++ ++_pats = ["power< 'types' trailer< '.' name='%s' > >" % t for t in _TYPE_MAPPING] ++ ++class FixTypes(fixer_base.BaseFix): ++ ++ PATTERN = '|'.join(_pats) ++ ++ def transform(self, node, results): ++ new_value = _TYPE_MAPPING.get(results["name"].value) ++ if new_value: ++ return Name(new_value, prefix=node.get_prefix()) ++ return None +diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_unicode.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixes/.svn/text-base/fix_unicode.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,28 @@ ++"""Fixer that changes unicode to str, unichr to chr, and u"..." into "...". ++ ++""" ++ ++import re ++from ..pgen2 import token ++from .. import fixer_base ++ ++class FixUnicode(fixer_base.BaseFix): ++ ++ PATTERN = "STRING | NAME<'unicode' | 'unichr'>" ++ ++ def transform(self, node, results): ++ if node.type == token.NAME: ++ if node.value == "unicode": ++ new = node.clone() ++ new.value = "str" ++ return new ++ if node.value == "unichr": ++ new = node.clone() ++ new.value = "chr" ++ return new ++ # XXX Warn when __unicode__ found? ++ elif node.type == token.STRING: ++ if re.match(r"[uU][rR]?[\'\"]", node.value): ++ new = node.clone() ++ new.value = new.value[1:] ++ return new +diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_urllib.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixes/.svn/text-base/fix_urllib.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,180 @@ ++"""Fix changes imports of urllib which are now incompatible. ++ This is rather similar to fix_imports, but because of the more ++ complex nature of the fixing for urllib, it has its own fixer. ++""" ++# Author: Nick Edds ++ ++# Local imports ++from .fix_imports import alternates, FixImports ++from .. 
import fixer_base ++from ..fixer_util import Name, Comma, FromImport, Newline, attr_chain ++ ++MAPPING = {'urllib': [ ++ ('urllib.request', ++ ['URLOpener', 'FancyURLOpener', 'urlretrieve', ++ '_urlopener', 'urlcleanup']), ++ ('urllib.parse', ++ ['quote', 'quote_plus', 'unquote', 'unquote_plus', ++ 'urlencode', 'pathname2url', 'url2pathname', 'splitattr', ++ 'splithost', 'splitnport', 'splitpasswd', 'splitport', ++ 'splitquery', 'splittag', 'splittype', 'splituser', ++ 'splitvalue', ]), ++ ('urllib.error', ++ ['ContentTooShortError'])], ++ 'urllib2' : [ ++ ('urllib.request', ++ ['urlopen', 'install_opener', 'build_opener', ++ 'Request', 'OpenerDirector', 'BaseHandler', ++ 'HTTPDefaultErrorHandler', 'HTTPRedirectHandler', ++ 'HTTPCookieProcessor', 'ProxyHandler', ++ 'HTTPPasswordMgr', ++ 'HTTPPasswordMgrWithDefaultRealm', ++ 'AbstractBasicAuthHandler', ++ 'HTTPBasicAuthHandler', 'ProxyBasicAuthHandler', ++ 'AbstractDigestAuthHandler', ++ 'HTTPDigestAuthHandler', 'ProxyDigestAuthHandler', ++ 'HTTPHandler', 'HTTPSHandler', 'FileHandler', ++ 'FTPHandler', 'CacheFTPHandler', ++ 'UnknownHandler']), ++ ('urllib.error', ++ ['URLError', 'HTTPError']), ++ ] ++} ++ ++# Duplicate the url parsing functions for urllib2. ++MAPPING["urllib2"].append(MAPPING["urllib"][1]) ++ ++ ++def build_pattern(): ++ bare = set() ++ for old_module, changes in MAPPING.items(): ++ for change in changes: ++ new_module, members = change ++ members = alternates(members) ++ yield """import_name< 'import' (module=%r ++ | dotted_as_names< any* module=%r any* >) > ++ """ % (old_module, old_module) ++ yield """import_from< 'from' mod_member=%r 'import' ++ ( member=%s | import_as_name< member=%s 'as' any > | ++ import_as_names< members=any* >) > ++ """ % (old_module, members, members) ++ yield """import_from< 'from' module_star=%r 'import' star='*' > ++ """ % old_module ++ yield """import_name< 'import' ++ dotted_as_name< module_as=%r 'as' any > > ++ """ % old_module ++ yield """power< module_dot=%r trailer< '.' member=%s > any* > ++ """ % (old_module, members) ++ ++ ++class FixUrllib(FixImports): ++ ++ def build_pattern(self): ++ return "|".join(build_pattern()) ++ ++ def transform_import(self, node, results): ++ """Transform for the basic import case. Replaces the old ++ import name with a comma separated list of its ++ replacements. ++ """ ++ import_mod = results.get('module') ++ pref = import_mod.get_prefix() ++ ++ names = [] ++ ++ # create a Node list of the replacement modules ++ for name in MAPPING[import_mod.value][:-1]: ++ names.extend([Name(name[0], prefix=pref), Comma()]) ++ names.append(Name(MAPPING[import_mod.value][-1][0], prefix=pref)) ++ import_mod.replace(names) ++ ++ def transform_member(self, node, results): ++ """Transform for imports of specific module elements. Replaces ++ the module to be imported from with the appropriate new ++ module. 
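++
++        e.g. "from urllib2 import urlopen, URLError" becomes
++        "from urllib.request import urlopen" followed by
++        "from urllib.error import URLError".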
++ """ ++ mod_member = results.get('mod_member') ++ pref = mod_member.get_prefix() ++ member = results.get('member') ++ ++ # Simple case with only a single member being imported ++ if member: ++ # this may be a list of length one, or just a node ++ if isinstance(member, list): ++ member = member[0] ++ new_name = None ++ for change in MAPPING[mod_member.value]: ++ if member.value in change[1]: ++ new_name = change[0] ++ break ++ if new_name: ++ mod_member.replace(Name(new_name, prefix=pref)) ++ else: ++ self.cannot_convert(node, ++ 'This is an invalid module element') ++ ++ # Multiple members being imported ++ else: ++ # a dictionary for replacements, order matters ++ modules = [] ++ mod_dict = {} ++ members = results.get('members') ++ for member in members: ++ member = member.value ++ # we only care about the actual members ++ if member != ',': ++ for change in MAPPING[mod_member.value]: ++ if member in change[1]: ++ if change[0] in mod_dict: ++ mod_dict[change[0]].append(member) ++ else: ++ mod_dict[change[0]] = [member] ++ modules.append(change[0]) ++ ++ new_nodes = [] ++ for module in modules: ++ elts = mod_dict[module] ++ names = [] ++ for elt in elts[:-1]: ++ names.extend([Name(elt, prefix=pref), Comma()]) ++ names.append(Name(elts[-1], prefix=pref)) ++ new_nodes.append(FromImport(module, names)) ++ if new_nodes: ++ nodes = [] ++ for new_node in new_nodes[:-1]: ++ nodes.extend([new_node, Newline()]) ++ nodes.append(new_nodes[-1]) ++ node.replace(nodes) ++ else: ++ self.cannot_convert(node, 'All module elements are invalid') ++ ++ def transform_dot(self, node, results): ++ """Transform for calls to module members in code.""" ++ module_dot = results.get('module_dot') ++ member = results.get('member') ++ # this may be a list of length one, or just a node ++ if isinstance(member, list): ++ member = member[0] ++ new_name = None ++ for change in MAPPING[module_dot.value]: ++ if member.value in change[1]: ++ new_name = change[0] ++ break ++ if new_name: ++ module_dot.replace(Name(new_name, ++ prefix=module_dot.get_prefix())) ++ else: ++ self.cannot_convert(node, 'This is an invalid module element') ++ ++ def transform(self, node, results): ++ if results.get('module'): ++ self.transform_import(node, results) ++ elif results.get('mod_member'): ++ self.transform_member(node, results) ++ elif results.get('module_dot'): ++ self.transform_dot(node, results) ++ # Renaming and star imports are not supported for these modules. ++ elif results.get('module_star'): ++ self.cannot_convert(node, 'Cannot handle star imports.') ++ elif results.get('module_as'): ++ self.cannot_convert(node, 'This module is now multiple modules') +diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_ws_comma.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixes/.svn/text-base/fix_ws_comma.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,39 @@ ++"""Fixer that changes 'a ,b' into 'a, b'. ++ ++This also changes '{a :b}' into '{a: b}', but does not touch other ++uses of colons. It does not touch other uses of whitespace. ++ ++""" ++ ++from .. import pytree ++from ..pgen2 import token ++from .. 
import fixer_base ++ ++class FixWsComma(fixer_base.BaseFix): ++ ++ explicit = True # The user must ask for this fixers ++ ++ PATTERN = """ ++ any<(not(',') any)+ ',' ((not(',') any)+ ',')* [not(',') any]> ++ """ ++ ++ COMMA = pytree.Leaf(token.COMMA, ",") ++ COLON = pytree.Leaf(token.COLON, ":") ++ SEPS = (COMMA, COLON) ++ ++ def transform(self, node, results): ++ new = node.clone() ++ comma = False ++ for child in new.children: ++ if child in self.SEPS: ++ prefix = child.get_prefix() ++ if prefix.isspace() and "\n" not in prefix: ++ child.set_prefix("") ++ comma = True ++ else: ++ if comma: ++ prefix = child.get_prefix() ++ if not prefix: ++ child.set_prefix(" ") ++ comma = False ++ return new +diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_xrange.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixes/.svn/text-base/fix_xrange.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,64 @@ ++# Copyright 2007 Google, Inc. All Rights Reserved. ++# Licensed to PSF under a Contributor Agreement. ++ ++"""Fixer that changes xrange(...) into range(...).""" ++ ++# Local imports ++from .. import fixer_base ++from ..fixer_util import Name, Call, consuming_calls ++from .. import patcomp ++ ++ ++class FixXrange(fixer_base.BaseFix): ++ ++ PATTERN = """ ++ power< ++ (name='range'|name='xrange') trailer< '(' args=any ')' > ++ rest=any* > ++ """ ++ ++ def transform(self, node, results): ++ name = results["name"] ++ if name.value == "xrange": ++ return self.transform_xrange(node, results) ++ elif name.value == "range": ++ return self.transform_range(node, results) ++ else: ++ raise ValueError(repr(name)) ++ ++ def transform_xrange(self, node, results): ++ name = results["name"] ++ name.replace(Name("range", prefix=name.get_prefix())) ++ ++ def transform_range(self, node, results): ++ if not self.in_special_context(node): ++ range_call = Call(Name("range"), [results["args"].clone()]) ++ # Encase the range call in list(). ++ list_call = Call(Name("list"), [range_call], ++ prefix=node.get_prefix()) ++ # Put things that were after the range() call after the list call. ++ for n in results["rest"]: ++ list_call.append_child(n) ++ return list_call ++ return node ++ ++ P1 = "power< func=NAME trailer< '(' node=any ')' > any* >" ++ p1 = patcomp.compile_pattern(P1) ++ ++ P2 = """for_stmt< 'for' any 'in' node=any ':' any* > ++ | comp_for< 'for' any 'in' node=any any* > ++ | comparison< any 'in' node=any any*> ++ """ ++ p2 = patcomp.compile_pattern(P2) ++ ++ def in_special_context(self, node): ++ if node.parent is None: ++ return False ++ results = {} ++ if (node.parent.parent is not None and ++ self.p1.match(node.parent.parent, results) and ++ results["node"] is node): ++ # list(d.keys()) -> list(d.keys()), etc. ++ return results["func"].value in consuming_calls ++ # for ... in d.iterkeys() -> for ... in d.keys(), etc. ++ return self.p2.match(node.parent, results) and results["node"] is node +diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_xreadlines.py.svn-base +--- /dev/null Thu Jan 01 00:00:00 1970 +0000 ++++ b/refactor/fixes/.svn/text-base/fix_xreadlines.py.svn-base Wed Apr 01 13:59:47 2009 -0500 +@@ -0,0 +1,24 @@ ++"""Fix "for x in f.xreadlines()" -> "for x in f". ++ ++This fixer will also convert g(f.xreadlines) into g(f.__iter__).""" ++# Author: Collin Winter ++ ++# Local imports ++from .. import fixer_base ++from ..fixer_util import Name ++ ++ ++class FixXreadlines(fixer_base.BaseFix): ++ PATTERN = """ ++ power< call=any+ trailer< '.' 
+diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_xreadlines.py.svn-base
+--- /dev/null Thu Jan 01 00:00:00 1970 +0000
++++ b/refactor/fixes/.svn/text-base/fix_xreadlines.py.svn-base Wed Apr 01 13:59:47 2009 -0500
+@@ -0,0 +1,24 @@
++"""Fix "for x in f.xreadlines()" -> "for x in f".
++
++This fixer will also convert g(f.xreadlines) into g(f.__iter__)."""
++# Author: Collin Winter
++
++# Local imports
++from .. import fixer_base
++from ..fixer_util import Name
++
++
++class FixXreadlines(fixer_base.BaseFix):
++    PATTERN = """
++    power< call=any+ trailer< '.' 'xreadlines' > trailer< '(' ')' > >
++    |
++    power< any+ trailer< '.' no_call='xreadlines' > >
++    """
++
++    def transform(self, node, results):
++        no_call = results.get("no_call")
++
++        if no_call:
++            no_call.replace(Name("__iter__", prefix=no_call.get_prefix()))
++        else:
++            node.replace([x.clone() for x in results["call"]])
+diff -r 531f2e948299 refactor/fixes/.svn/text-base/fix_zip.py.svn-base
+--- /dev/null Thu Jan 01 00:00:00 1970 +0000
++++ b/refactor/fixes/.svn/text-base/fix_zip.py.svn-base Wed Apr 01 13:59:47 2009 -0500
+@@ -0,0 +1,34 @@
++"""
++Fixer that changes zip(seq0, seq1, ...) into list(zip(seq0, seq1, ...))
++unless there exists a 'from future_builtins import zip' statement in the
++top-level namespace.
++
++We avoid the transformation if the zip() call is directly contained in
++iter(<>), list(<>), tuple(<>), sorted(<>), ...join(<>), or for V in <>:.
++"""
++
++# Local imports
++from .. import fixer_base
++from ..fixer_util import Name, Call, in_special_context
++
++class FixZip(fixer_base.ConditionalFix):
++
++    PATTERN = """
++    power< 'zip' args=trailer< '(' [any] ')' >
++    >
++    """
++
++    skip_on = "future_builtins.zip"
++
++    def transform(self, node, results):
++        if self.should_skip(node):
++            return
++
++        if in_special_context(node):
++            return None
++
++        new = node.clone()
++        new.set_prefix("")
++        new = Call(Name("list"), [new])
++        new.set_prefix(node.get_prefix())
++        return new
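
Illustrative before/after for FixZip, assuming no 'from future_builtins
import zip' is present (inputs are mine):

    pairs = zip(xs, ys)           ->  pairs = list(zip(xs, ys))
    for x, y in zip(xs, ys): ...  ->  unchanged (special context)
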
+diff -r 531f2e948299 refactor/fixes/__init__.py
+--- /dev/null Thu Jan 01 00:00:00 1970 +0000
++++ b/refactor/fixes/__init__.py Wed Apr 01 13:59:47 2009 -0500
+@@ -0,0 +1,3 @@
++from . import from2
++from . import from3
++from .from2 import *
+diff -r 531f2e948299 refactor/fixes/fixer_common.py
+--- /dev/null Thu Jan 01 00:00:00 1970 +0000
++++ b/refactor/fixes/fixer_common.py Wed Apr 01 13:59:47 2009 -0500
+@@ -0,0 +1,4 @@
++# Common fixer imports
++from .. import fixer_base
++from ..fixer_util import Name, Call, consuming_calls, attr_chain
++from .. import patcomp
+diff -r 531f2e948299 refactor/fixes/from2/__init__.py
+--- /dev/null Thu Jan 01 00:00:00 1970 +0000
++++ b/refactor/fixes/from2/__init__.py Wed Apr 01 13:59:47 2009 -0500
+@@ -0,0 +1,49 @@
++from . import fix_apply
++from . import fix_basestring
++from . import fix_buffer
++from . import fix_callable
++from . import fix_dict
++from . import fix_except
++from . import fix_exec
++from . import fix_execfile
++from . import fix_filter
++from . import fix_funcattrs
++from . import fix_future
++from . import fix_getcwdu
++from . import fix_has_key
++from . import fix_idioms
++from . import fix_import
++from . import fix_imports
++from . import fix_imports2
++from . import fix_input
++from . import fix_intern
++from . import fix_isinstance
++from . import fix_itertools
++from . import fix_itertools_imports
++from . import fix_long
++from . import fix_map
++from . import fix_metaclass
++from . import fix_methodattrs
++from . import fix_ne
++from . import fix_next
++from . import fix_nonzero
++from . import fix_numliterals
++from . import fix_paren
++from . import fix_print
++from . import fix_raise
++from . import fix_raw_input
++from . import fix_reduce
++from . import fix_renames
++from . import fix_repr
++from . import fix_set_literal
++from . import fix_standarderror
++from . import fix_sys_exc
++from . import fix_throw
++from . import fix_tuple_params
++from . import fix_types
++from . import fix_unicode
++from . import fix_urllib
++from . import fix_ws_comma
++from . import fix_xrange
++from . import fix_xreadlines
++from . import fix_zip
+diff -r 531f2e948299 refactor/fixes/from2/fix_apply.py
+--- /dev/null Thu Jan 01 00:00:00 1970 +0000
++++ b/refactor/fixes/from2/fix_apply.py Wed Apr 01 13:59:47 2009 -0500
+@@ -0,0 +1,58 @@
++# Copyright 2006 Google, Inc. All Rights Reserved.
++# Licensed to PSF under a Contributor Agreement.
++
++"""Fixer for apply().
++
++This converts apply(func, v, k) into (func)(*v, **k)."""
++
++# Local imports
++from ... import pytree
++from ...pgen2 import token
++from ... import fixer_base
++from ...fixer_util import Call, Comma, parenthesize
++
++class FixApply(fixer_base.BaseFix):
++
++    PATTERN = """
++    power< 'apply'
++        trailer<
++            '('
++            arglist<
++                (not argument<NAME '=' any>) func=any ','
++                (not argument<NAME '=' any>) args=any [','
++                (not argument<NAME '=' any>) kwds=any]
++            >
++            ')'
++        >
++    >
++    """
++
++    def transform(self, node, results):
++        syms = self.syms
++        assert results
++        func = results["func"]
++        args = results["args"]
++        kwds = results.get("kwds")
++        prefix = node.get_prefix()
++        func = func.clone()
++        if (func.type not in (token.NAME, syms.atom) and
++            (func.type != syms.power or
++             func.children[-2].type == token.DOUBLESTAR)):
++            # Need to parenthesize
++            func = parenthesize(func)
++        func.set_prefix("")
++        args = args.clone()
++        args.set_prefix("")
++        if kwds is not None:
++            kwds = kwds.clone()
++            kwds.set_prefix("")
++        l_newargs = [pytree.Leaf(token.STAR, "*"), args]
++        if kwds is not None:
++            l_newargs.extend([Comma(),
++                              pytree.Leaf(token.DOUBLESTAR, "**"),
++                              kwds])
++            l_newargs[-2].set_prefix(" ")  # that's the ** token
++        # XXX Sometimes we could be cleverer, e.g. apply(f, (x, y) + t)
++        # can be translated into f(x, y, *t) instead of f(*(x, y) + t)
++        #new = pytree.Node(syms.power, (func, ArgList(l_newargs)))
++        return Call(func, l_newargs, prefix=prefix)
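
Illustrative before/after for FixApply (inputs are mine):

    apply(f, args)        ->  f(*args)
    apply(f, args, kwds)  ->  f(*args, **kwds)
    apply(a.b.c, args)    ->  a.b.c(*args)  (no parenthesization needed here)
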
+diff -r 531f2e948299 refactor/fixes/from2/fix_basestring.py
+--- /dev/null Thu Jan 01 00:00:00 1970 +0000
++++ b/refactor/fixes/from2/fix_basestring.py Wed Apr 01 13:59:47 2009 -0500
+@@ -0,0 +1,13 @@
++"""Fixer for basestring -> str."""
++# Author: Christian Heimes
++
++# Local imports
++from ... import fixer_base
++from ...fixer_util import Name
++
++class FixBasestring(fixer_base.BaseFix):
++
++    PATTERN = "'basestring'"
++
++    def transform(self, node, results):
++        return Name("str", prefix=node.get_prefix())
+diff -r 531f2e948299 refactor/fixes/from2/fix_buffer.py
+--- /dev/null Thu Jan 01 00:00:00 1970 +0000
++++ b/refactor/fixes/from2/fix_buffer.py Wed Apr 01 13:59:47 2009 -0500
+@@ -0,0 +1,21 @@
++# Copyright 2007 Google, Inc. All Rights Reserved.
++# Licensed to PSF under a Contributor Agreement.
++
++"""Fixer that changes buffer(...) into memoryview(...)."""
++
++# Local imports
++from ... import fixer_base
++from ...fixer_util import Name
++
++
++class FixBuffer(fixer_base.BaseFix):
++
++    explicit = True  # The user must ask for this fixer
++
++    PATTERN = """
++    power< name='buffer' trailer< '(' [any] ')' > >
++    """
++
++    def transform(self, node, results):
++        name = results["name"]
++        name.replace(Name("memoryview", prefix=name.get_prefix()))
+diff -r 531f2e948299 refactor/fixes/from2/fix_callable.py
+--- /dev/null Thu Jan 01 00:00:00 1970 +0000
++++ b/refactor/fixes/from2/fix_callable.py Wed Apr 01 13:59:47 2009 -0500
+@@ -0,0 +1,31 @@
++# Copyright 2007 Google, Inc. All Rights Reserved.
++# Licensed to PSF under a Contributor Agreement.
++
++"""Fixer for callable().
++
++This converts callable(obj) into hasattr(obj, '__call__')."""
++
++# Local imports
++from ... import pytree
++from ... import fixer_base
++from ...fixer_util import Call, Name, String
++
++class FixCallable(fixer_base.BaseFix):
++
++    # Ignore callable(*args) or use of keywords.
++    # Either could be a hint that the builtin callable() is not being used.
++    PATTERN = """
++    power< 'callable'
++           trailer< lpar='('
++                    ( not(arglist | argument<any '=' any>) func=any
++                      | func=arglist<(not argument<any '=' any>) any ','> )
++                    rpar=')' >
++           after=any*
++    >
++    """
++
++    def transform(self, node, results):
++        func = results["func"]
++
++        args = [func.clone(), String(', '), String("'__call__'")]
++        return Call(Name("hasattr"), args, prefix=node.get_prefix())
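
Illustrative before/after for FixCallable (input is mine):

    callable(obj)  ->  hasattr(obj, '__call__')

Calls such as callable(*args) or callable(f, keyword=1) are left alone, as
the pattern comment above notes.
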
+diff -r 531f2e948299 refactor/fixes/from2/fix_dict.py
+--- /dev/null Thu Jan 01 00:00:00 1970 +0000
++++ b/refactor/fixes/from2/fix_dict.py Wed Apr 01 13:59:47 2009 -0500
+@@ -0,0 +1,99 @@
++# Copyright 2007 Google, Inc. All Rights Reserved.
++# Licensed to PSF under a Contributor Agreement.
++
++"""Fixer for dict methods.
++
++d.keys() -> list(d.keys())
++d.items() -> list(d.items())
++d.values() -> list(d.values())
++
++d.iterkeys() -> iter(d.keys())
++d.iteritems() -> iter(d.items())
++d.itervalues() -> iter(d.values())
++
++Except in certain very specific contexts: the iter() can be dropped
++when the context is list(), sorted(), iter() or for...in; the list()
++can be dropped when the context is list() or sorted() (but not iter()
++or for...in!). Special contexts that apply to both: list(), sorted(),
++tuple(), set(), any(), all(), sum().
++
++Note: iter(d.keys()) could be written as iter(d) but since the
++original d.iterkeys() was also redundant we don't fix this. And there
++are (rare) contexts where it makes a difference (e.g. when passing it
++as an argument to a function that introspects the argument).
++"""
++
++# Local imports
++from ... import pytree
++from ... import patcomp
++from ...pgen2 import token
++from ... import fixer_base
++from ...fixer_util import Name, Call, LParen, RParen, ArgList, Dot
++from ... import fixer_util
++
++
++iter_exempt = fixer_util.consuming_calls | set(["iter"])
++
++
++class FixDict(fixer_base.BaseFix):
++    PATTERN = """
++    power< head=any+
++         trailer< '.' method=('keys'|'items'|'values'|
++                              'iterkeys'|'iteritems'|'itervalues') >
++         parens=trailer< '(' ')' >
++         tail=any*
++    >
++    """
++
++    def transform(self, node, results):
++        head = results["head"]
++        method = results["method"][0]  # Extract node for method name
++        tail = results["tail"]
++        syms = self.syms
++        method_name = method.value
++        isiter = method_name.startswith("iter")
++        if isiter:
++            method_name = method_name[4:]
++        assert method_name in ("keys", "items", "values"), repr(method)
++        head = [n.clone() for n in head]
++        tail = [n.clone() for n in tail]
++        special = not tail and self.in_special_context(node, isiter)
++        args = head + [pytree.Node(syms.trailer,
++                                   [Dot(),
++                                    Name(method_name,
++                                         prefix=method.get_prefix())]),
++                       results["parens"].clone()]
++        new = pytree.Node(syms.power, args)
++        if not special:
++            new.set_prefix("")
++            new = Call(Name(isiter and "iter" or "list"), [new])
++        if tail:
++            new = pytree.Node(syms.power, [new] + tail)
++        new.set_prefix(node.get_prefix())
++        return new
++
++    P1 = "power< func=NAME trailer< '(' node=any ')' > any* >"
++    p1 = patcomp.compile_pattern(P1)
++
++    P2 = """for_stmt< 'for' any 'in' node=any ':' any* >
++            | comp_for< 'for' any 'in' node=any any* >
++         """
++    p2 = patcomp.compile_pattern(P2)
++
++    def in_special_context(self, node, isiter):
++        if node.parent is None:
++            return False
++        results = {}
++        if (node.parent.parent is not None and
++            self.p1.match(node.parent.parent, results) and
++            results["node"] is node):
++            if isiter:
++                # iter(d.iterkeys()) -> iter(d.keys()), etc.
++                return results["func"].value in iter_exempt
++            else:
++                # list(d.keys()) -> list(d.keys()), etc.
++                return results["func"].value in fixer_util.consuming_calls
++        if not isiter:
++            return False
++        # for ... in d.iterkeys() -> for ... in d.keys(), etc.
++        return self.p2.match(node.parent, results) and results["node"] is node
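
Illustrative before/after for FixDict, following the docstring's rules
(inputs are mine):

    k = d.keys()                ->  k = list(d.keys())
    it = d.iteritems()          ->  it = iter(d.items())
    for k in d.iterkeys(): ...  ->  for k in d.keys(): ...  (iter() dropped)
    sorted(d.keys())            ->  sorted(d.keys())  (list() dropped)
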
+diff -r 531f2e948299 refactor/fixes/from2/fix_except.py
+--- /dev/null Thu Jan 01 00:00:00 1970 +0000
++++ b/refactor/fixes/from2/fix_except.py Wed Apr 01 13:59:47 2009 -0500
+@@ -0,0 +1,92 @@
++"""Fixer for except statements with named exceptions.
++
++The following cases will be converted:
++
++- "except E, T:" where T is a name:
++
++    except E as T:
++
++- "except E, T:" where T is not a name, tuple or list:
++
++    except E as t:
++        T = t
++
++    This is done because the target of an "except" clause must be a
++    name.
++
++- "except E, T:" where T is a tuple or list literal:
++
++    except E as t:
++        T = t.args
++"""
++# Author: Collin Winter
++
++# Local imports
++from ... import pytree
++from ...pgen2 import token
++from ... import fixer_base
++from ...fixer_util import Assign, Attr, Name, is_tuple, is_list, syms
++
++def find_excepts(nodes):
++    for i, n in enumerate(nodes):
++        if n.type == syms.except_clause:
++            if n.children[0].value == 'except':
++                yield (n, nodes[i+2])
++
++class FixExcept(fixer_base.BaseFix):
++
++    PATTERN = """
++    try_stmt< 'try' ':' suite
++                  cleanup=(except_clause ':' suite)+
++                  tail=(['except' ':' suite]
++                        ['else' ':' suite]
++                        ['finally' ':' suite]) >
++    """
++
++    def transform(self, node, results):
++        syms = self.syms
++
++        tail = [n.clone() for n in results["tail"]]
++
++        try_cleanup = [ch.clone() for ch in results["cleanup"]]
++        for except_clause, e_suite in find_excepts(try_cleanup):
++            if len(except_clause.children) == 4:
++                (E, comma, N) = except_clause.children[1:4]
++                comma.replace(Name("as", prefix=" "))
++
++                if N.type != token.NAME:
++                    # Generate a new N for the except clause
++                    new_N = Name(self.new_name(), prefix=" ")
++                    target = N.clone()
++                    target.set_prefix("")
++                    N.replace(new_N)
++                    new_N = new_N.clone()
++
++                    # Insert "old_N = new_N" as the first statement in
++                    # the except body. This loop skips leading whitespace
++                    # and indents
++                    #TODO(cwinter) suite-cleanup
++                    suite_stmts = e_suite.children
++                    for i, stmt in enumerate(suite_stmts):
++                        if isinstance(stmt, pytree.Node):
++                            break
++
++                    # The assignment is different if old_N is a tuple or list
++                    # In that case, the assignment is old_N = new_N.args
++                    if is_tuple(N) or is_list(N):
++                        assign = Assign(target, Attr(new_N, Name('args')))
++                    else:
++                        assign = Assign(target, new_N)
++
++                    #TODO(cwinter) stopgap until children becomes a smart list
++                    for child in reversed(suite_stmts[:i]):
++                        e_suite.insert_child(0, child)
++                    e_suite.insert_child(i, assign)
++                elif N.get_prefix() == "":
++                    # No space after a comma is legal; no space after "as",
++                    # not so much.
++                    N.set_prefix(" ")
++
++        #TODO(cwinter) fix this when children becomes a smart list
++        children = [c.clone() for c in node.children[:3]] + try_cleanup + tail
++        return pytree.Node(node.type, children)
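
Illustrative before/after for FixExcept, matching the docstring's cases
(the temporary name is whatever self.new_name() generates; 't' is just the
docstring's placeholder):

    except ValueError, e:       ->  except ValueError as e:
    except ValueError, (a, b):  ->  except ValueError as t:
                                        (a, b) = t.args
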
+diff -r 531f2e948299 refactor/fixes/from2/fix_exec.py
+--- /dev/null Thu Jan 01 00:00:00 1970 +0000
++++ b/refactor/fixes/from2/fix_exec.py Wed Apr 01 13:59:47 2009 -0500
+@@ -0,0 +1,39 @@
++# Copyright 2006 Google, Inc. All Rights Reserved.
++# Licensed to PSF under a Contributor Agreement.
++
++"""Fixer for exec.
++
++This converts usages of the exec statement into calls to a built-in
++exec() function.
++
++exec code in ns1, ns2 -> exec(code, ns1, ns2)
++"""
++
++# Local imports
++from ... import pytree
++from ... import fixer_base
++from ...fixer_util import Comma, Name, Call
++
++
++class FixExec(fixer_base.BaseFix):
++
++    PATTERN = """
++    exec_stmt< 'exec' a=any 'in' b=any [',' c=any] >
++    |
++    exec_stmt< 'exec' (not atom<'(' [any] ')'>) a=any >
++    """
++
++    def transform(self, node, results):
++        assert results
++        syms = self.syms
++        a = results["a"]
++        b = results.get("b")
++        c = results.get("c")
++        args = [a.clone()]
++        args[0].set_prefix("")
++        if b is not None:
++            args.extend([Comma(), b.clone()])
++        if c is not None:
++            args.extend([Comma(), c.clone()])
++
++        return Call(Name("exec"), args, prefix=node.get_prefix())
+diff -r 531f2e948299 refactor/fixes/from2/fix_execfile.py
+--- /dev/null Thu Jan 01 00:00:00 1970 +0000
++++ b/refactor/fixes/from2/fix_execfile.py Wed Apr 01 13:59:47 2009 -0500
+@@ -0,0 +1,51 @@
++# Copyright 2006 Google, Inc. All Rights Reserved.
++# Licensed to PSF under a Contributor Agreement.
++
++"""Fixer for execfile.
++
++This converts usages of the execfile function into calls to the built-in
++exec() function.
++"""
++
++from ... import fixer_base
++from ...fixer_util import (Comma, Name, Call, LParen, RParen, Dot, Node,
++                           ArgList, String, syms)
++
++
++class FixExecfile(fixer_base.BaseFix):
++
++    PATTERN = """
++    power< 'execfile' trailer< '(' arglist< filename=any [',' globals=any [',' locals=any ] ] > ')' > >
++    |
++    power< 'execfile' trailer< '(' filename=any ')' > >
++    """
++
++    def transform(self, node, results):
++        assert results
++        filename = results["filename"]
++        globals = results.get("globals")
++