[Python-checkins] r61428 - in sandbox/trunk/2to3: 2to3 Grammar.txt PatternGrammar.txt find_pattern.py fixes lib2to3 lib2to3/Grammar.txt lib2to3/PatternGrammar.txt lib2to3/__init__.py lib2to3/fixes lib2to3/fixes/basefix.py lib2to3/fixes/fix_apply.py lib2to3/fixes/fix_basestring.py lib2to3/fixes/fix_buffer.py lib2to3/fixes/fix_callable.py lib2to3/fixes/fix_dict.py lib2to3/fixes/fix_except.py lib2to3/fixes/fix_exec.py lib2to3/fixes/fix_execfile.py lib2to3/fixes/fix_filter.py lib2to3/fixes/fix_funcattrs.py lib2to3/fixes/fix_future.py lib2to3/fixes/fix_has_key.py lib2to3/fixes/fix_idioms.py lib2to3/fixes/fix_imports.py lib2to3/fixes/fix_input.py lib2to3/fixes/fix_intern.py lib2to3/fixes/fix_long.py lib2to3/fixes/fix_map.py lib2to3/fixes/fix_methodattrs.py lib2to3/fixes/fix_ne.py lib2to3/fixes/fix_next.py lib2to3/fixes/fix_nonzero.py lib2to3/fixes/fix_numliterals.py lib2to3/fixes/fix_print.py lib2to3/fixes/fix_raise.py lib2to3/fixes/fix_raw_input.py lib2to3/fixes/fix_renames.py lib2to3/fixes/fix_repr.py lib2to3/fixes/fix_standarderror.py lib2to3/fixes/fix_throw.py lib2to3/fixes/fix_tuple_params.py lib2to3/fixes/fix_types.py lib2to3/fixes/fix_unicode.py lib2to3/fixes/fix_ws_comma.py lib2to3/fixes/fix_xrange.py lib2to3/fixes/fix_xreadlines.py lib2to3/fixes/util.py lib2to3/patcomp.py lib2to3/pygram.py lib2to3/pytree.py lib2to3/refactor.py lib2to3/tests lib2to3/tests/__init__.py lib2to3/tests/benchmark.py lib2to3/tests/pytree_idempotency.py lib2to3/tests/support.py lib2to3/tests/test_all_fixers.py lib2to3/tests/test_fixers.py lib2to3/tests/test_parser.py lib2to3/tests/test_pytree.py lib2to3/tests/test_util.py patcomp.py pygram.py pytree.py refactor.py setup.py test.py tests

martin.v.loewis python-checkins at python.org
Sun Mar 16 20:36:17 CET 2008


Author: martin.v.loewis
Date: Sun Mar 16 20:36:15 2008
New Revision: 61428

Added:
   sandbox/trunk/2to3/2to3
   sandbox/trunk/2to3/lib2to3/
   sandbox/trunk/2to3/lib2to3/Grammar.txt
      - copied unchanged from r61376, sandbox/trunk/2to3/Grammar.txt
   sandbox/trunk/2to3/lib2to3/PatternGrammar.txt
      - copied unchanged from r61376, sandbox/trunk/2to3/PatternGrammar.txt
   sandbox/trunk/2to3/lib2to3/__init__.py
   sandbox/trunk/2to3/lib2to3/fixes/
      - copied from r61376, sandbox/trunk/2to3/fixes/
   sandbox/trunk/2to3/lib2to3/fixes/fix_methodattrs.py
      - copied, changed from r61427, sandbox/trunk/2to3/fixes/fix_methodattrs.py
   sandbox/trunk/2to3/lib2to3/patcomp.py
      - copied unchanged from r61376, sandbox/trunk/2to3/patcomp.py
   sandbox/trunk/2to3/lib2to3/pygram.py
      - copied unchanged from r61376, sandbox/trunk/2to3/pygram.py
   sandbox/trunk/2to3/lib2to3/pytree.py
      - copied unchanged from r61376, sandbox/trunk/2to3/pytree.py
   sandbox/trunk/2to3/lib2to3/refactor.py
      - copied, changed from r61376, sandbox/trunk/2to3/refactor.py
   sandbox/trunk/2to3/lib2to3/tests/
      - copied from r61376, sandbox/trunk/2to3/tests/
   sandbox/trunk/2to3/lib2to3/tests/test_fixers.py
      - copied, changed from r61427, sandbox/trunk/2to3/tests/test_fixers.py
   sandbox/trunk/2to3/setup.py
Removed:
   sandbox/trunk/2to3/Grammar.txt
   sandbox/trunk/2to3/PatternGrammar.txt
   sandbox/trunk/2to3/fixes/
   sandbox/trunk/2to3/patcomp.py
   sandbox/trunk/2to3/pygram.py
   sandbox/trunk/2to3/pytree.py
   sandbox/trunk/2to3/refactor.py
   sandbox/trunk/2to3/tests/
Modified:
   sandbox/trunk/2to3/find_pattern.py
   sandbox/trunk/2to3/lib2to3/fixes/basefix.py
   sandbox/trunk/2to3/lib2to3/fixes/fix_apply.py
   sandbox/trunk/2to3/lib2to3/fixes/fix_basestring.py
   sandbox/trunk/2to3/lib2to3/fixes/fix_buffer.py
   sandbox/trunk/2to3/lib2to3/fixes/fix_callable.py
   sandbox/trunk/2to3/lib2to3/fixes/fix_dict.py
   sandbox/trunk/2to3/lib2to3/fixes/fix_except.py
   sandbox/trunk/2to3/lib2to3/fixes/fix_exec.py
   sandbox/trunk/2to3/lib2to3/fixes/fix_execfile.py
   sandbox/trunk/2to3/lib2to3/fixes/fix_filter.py
   sandbox/trunk/2to3/lib2to3/fixes/fix_funcattrs.py
   sandbox/trunk/2to3/lib2to3/fixes/fix_future.py
   sandbox/trunk/2to3/lib2to3/fixes/fix_has_key.py
   sandbox/trunk/2to3/lib2to3/fixes/fix_idioms.py
   sandbox/trunk/2to3/lib2to3/fixes/fix_imports.py
   sandbox/trunk/2to3/lib2to3/fixes/fix_input.py
   sandbox/trunk/2to3/lib2to3/fixes/fix_intern.py
   sandbox/trunk/2to3/lib2to3/fixes/fix_long.py
   sandbox/trunk/2to3/lib2to3/fixes/fix_map.py
   sandbox/trunk/2to3/lib2to3/fixes/fix_ne.py
   sandbox/trunk/2to3/lib2to3/fixes/fix_next.py
   sandbox/trunk/2to3/lib2to3/fixes/fix_nonzero.py
   sandbox/trunk/2to3/lib2to3/fixes/fix_numliterals.py
   sandbox/trunk/2to3/lib2to3/fixes/fix_print.py
   sandbox/trunk/2to3/lib2to3/fixes/fix_raise.py
   sandbox/trunk/2to3/lib2to3/fixes/fix_raw_input.py
   sandbox/trunk/2to3/lib2to3/fixes/fix_renames.py
   sandbox/trunk/2to3/lib2to3/fixes/fix_repr.py
   sandbox/trunk/2to3/lib2to3/fixes/fix_standarderror.py
   sandbox/trunk/2to3/lib2to3/fixes/fix_throw.py
   sandbox/trunk/2to3/lib2to3/fixes/fix_tuple_params.py
   sandbox/trunk/2to3/lib2to3/fixes/fix_types.py
   sandbox/trunk/2to3/lib2to3/fixes/fix_unicode.py
   sandbox/trunk/2to3/lib2to3/fixes/fix_ws_comma.py
   sandbox/trunk/2to3/lib2to3/fixes/fix_xrange.py
   sandbox/trunk/2to3/lib2to3/fixes/fix_xreadlines.py
   sandbox/trunk/2to3/lib2to3/fixes/util.py
   sandbox/trunk/2to3/lib2to3/tests/__init__.py
   sandbox/trunk/2to3/lib2to3/tests/benchmark.py
   sandbox/trunk/2to3/lib2to3/tests/pytree_idempotency.py
   sandbox/trunk/2to3/lib2to3/tests/support.py
   sandbox/trunk/2to3/lib2to3/tests/test_all_fixers.py
   sandbox/trunk/2to3/lib2to3/tests/test_parser.py
   sandbox/trunk/2to3/lib2to3/tests/test_pytree.py
   sandbox/trunk/2to3/lib2to3/tests/test_util.py
   sandbox/trunk/2to3/test.py
Log:
Create lib2to3; move fixes and tests to this package.
Add setup.py.
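The bulk of this change is mechanical: once the modules move under the lib2to3 package, sibling modules are imported with `from . import ...` and parent-level modules with `from .. import ...`. A self-contained sketch of that import style (all names here — mypkg, util, fixes, fix_demo — are illustrative, not from the commit):

```python
import os
import sys
import tempfile
import textwrap

# Build a throwaway package on disk with the same shape as lib2to3:
# a top-level helper module plus a "fixes" subpackage whose modules
# reach the parent package via a relative import.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "mypkg", "fixes"))

def write(path, text):
    with open(os.path.join(root, path), "w") as f:
        f.write(textwrap.dedent(text))

write("mypkg/__init__.py", "")
write("mypkg/util.py", "def greet():\n    return 'hello'\n")
write("mypkg/fixes/__init__.py", "")
# Same pattern as the rewritten fixers ("from ..pygram import ..."):
write("mypkg/fixes/fix_demo.py", "from ..util import greet\n")

sys.path.insert(0, root)
from mypkg.fixes import fix_demo

print(fix_demo.greet())  # hello
```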


Added: sandbox/trunk/2to3/2to3
==============================================================================
--- (empty file)
+++ sandbox/trunk/2to3/2to3	Sun Mar 16 20:36:15 2008
@@ -0,0 +1,5 @@
+#!/usr/bin/python
+from lib2to3 import refactor
+import sys
+
+sys.exit(refactor.main())
\ No newline at end of file
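The new `2to3` script above is a thin launcher: it imports the package's driver module and passes the return value of `refactor.main()` to `sys.exit()`. The same pattern in a generic, self-contained form (the `main` body here is a stand-in, not lib2to3's):

```python
import sys

def main(args=None):
    # Stand-in for lib2to3.refactor.main(): do the work, then return
    # an integer exit status (0 = success) for sys.exit() to use.
    if args is None:
        args = sys.argv[1:]
    return 0

if __name__ == "__main__":
    # sys.exit() with an int sets the process exit code, exactly as
    # the launcher does with refactor.main()'s return value.
    sys.exit(main())
```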

Deleted: /sandbox/trunk/2to3/Grammar.txt
==============================================================================
--- /sandbox/trunk/2to3/Grammar.txt	Sun Mar 16 20:36:15 2008
+++ (empty file)
@@ -1,153 +0,0 @@
-# Grammar for Python
-
-# Note:  Changing the grammar specified in this file will most likely
-#        require corresponding changes in the parser module
-#        (../Modules/parsermodule.c).  If you can't make the changes to
-#        that module yourself, please co-ordinate the required changes
-#        with someone who can; ask around on python-dev for help.  Fred
-#        Drake <fdrake at acm.org> will probably be listening there.
-
-# NOTE WELL: You should also follow all the steps listed in PEP 306,
-# "How to Change Python's Grammar"
-
-# Commands for Kees Blom's railroad program
-#diagram:token NAME
-#diagram:token NUMBER
-#diagram:token STRING
-#diagram:token NEWLINE
-#diagram:token ENDMARKER
-#diagram:token INDENT
-#diagram:output\input python.bla
-#diagram:token DEDENT
-#diagram:output\textwidth 20.04cm\oddsidemargin  0.0cm\evensidemargin 0.0cm
-#diagram:rules
-
-# Start symbols for the grammar:
-#	file_input is a module or sequence of commands read from an input file;
-#	single_input is a single interactive statement;
-#	eval_input is the input for the eval() and input() functions.
-# NB: compound_stmt in single_input is followed by extra NEWLINE!
-file_input: (NEWLINE | stmt)* ENDMARKER
-single_input: NEWLINE | simple_stmt | compound_stmt NEWLINE
-eval_input: testlist NEWLINE* ENDMARKER
-
-decorator: '@' dotted_name [ '(' [arglist] ')' ] NEWLINE
-decorators: decorator+
-decorated: decorators (classdef | funcdef)
-funcdef: 'def' NAME parameters ['->' test] ':' suite
-parameters: '(' [typedargslist] ')'
-typedargslist: ((tfpdef ['=' test] ',')*
-                ('*' [tname] (',' tname ['=' test])* [',' '**' tname] | '**' tname)
-                | tfpdef ['=' test] (',' tfpdef ['=' test])* [','])
-tname: NAME [':' test]
-tfpdef: tname | '(' tfplist ')'
-tfplist: tfpdef (',' tfpdef)* [',']
-varargslist: ((vfpdef ['=' test] ',')*
-              ('*' [vname] (',' vname ['=' test])*  [',' '**' vname] | '**' vname)
-              | vfpdef ['=' test] (',' vfpdef ['=' test])* [','])
-vname: NAME
-vfpdef: vname | '(' vfplist ')'
-vfplist: vfpdef (',' vfpdef)* [',']
-
-stmt: simple_stmt | compound_stmt
-simple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE
-small_stmt: (expr_stmt | print_stmt  | del_stmt | pass_stmt | flow_stmt |
-             import_stmt | global_stmt | exec_stmt | assert_stmt)
-expr_stmt: testlist (augassign (yield_expr|testlist) |
-                     ('=' (yield_expr|testlist))*)
-augassign: ('+=' | '-=' | '*=' | '/=' | '%=' | '&=' | '|=' | '^=' |
-            '<<=' | '>>=' | '**=' | '//=')
-# For normal assignments, additional restrictions enforced by the interpreter
-print_stmt: 'print' ( [ test (',' test)* [','] ] |
-                      '>>' test [ (',' test)+ [','] ] )
-del_stmt: 'del' exprlist
-pass_stmt: 'pass'
-flow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt | yield_stmt
-break_stmt: 'break'
-continue_stmt: 'continue'
-return_stmt: 'return' [testlist]
-yield_stmt: yield_expr
-raise_stmt: 'raise' [test ['from' test | ',' test [',' test]]]
-import_stmt: import_name | import_from
-import_name: 'import' dotted_as_names
-import_from: ('from' ('.'* dotted_name | '.'+)
-              'import' ('*' | '(' import_as_names ')' | import_as_names))
-import_as_name: NAME ['as' NAME]
-dotted_as_name: dotted_name ['as' NAME]
-import_as_names: import_as_name (',' import_as_name)* [',']
-dotted_as_names: dotted_as_name (',' dotted_as_name)*
-dotted_name: NAME ('.' NAME)*
-global_stmt: ('global' | 'nonlocal') NAME (',' NAME)*
-exec_stmt: 'exec' expr ['in' test [',' test]]
-assert_stmt: 'assert' test [',' test]
-
-compound_stmt: if_stmt | while_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef | decorated
-if_stmt: 'if' test ':' suite ('elif' test ':' suite)* ['else' ':' suite]
-while_stmt: 'while' test ':' suite ['else' ':' suite]
-for_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite]
-try_stmt: ('try' ':' suite
-           ((except_clause ':' suite)+
-	    ['else' ':' suite]
-	    ['finally' ':' suite] |
-	   'finally' ':' suite))
-with_stmt: 'with' test [ with_var ] ':' suite
-with_var: 'as' expr
-# NB compile.c makes sure that the default except clause is last
-except_clause: 'except' [test [(',' | 'as') test]]
-suite: simple_stmt | NEWLINE INDENT stmt+ DEDENT
-
-# Backward compatibility cruft to support:
-# [ x for x in lambda: True, lambda: False if x() ]
-# even while also allowing:
-# lambda x: 5 if x else 2
-# (But not a mix of the two)
-testlist_safe: old_test [(',' old_test)+ [',']]
-old_test: or_test | old_lambdef
-old_lambdef: 'lambda' [varargslist] ':' old_test
-
-test: or_test ['if' or_test 'else' test] | lambdef
-or_test: and_test ('or' and_test)*
-and_test: not_test ('and' not_test)*
-not_test: 'not' not_test | comparison
-comparison: expr (comp_op expr)*
-comp_op: '<'|'>'|'=='|'>='|'<='|'<>'|'!='|'in'|'not' 'in'|'is'|'is' 'not'
-expr: xor_expr ('|' xor_expr)*
-xor_expr: and_expr ('^' and_expr)*
-and_expr: shift_expr ('&' shift_expr)*
-shift_expr: arith_expr (('<<'|'>>') arith_expr)*
-arith_expr: term (('+'|'-') term)*
-term: factor (('*'|'/'|'%'|'//') factor)*
-factor: ('+'|'-'|'~') factor | power
-power: atom trailer* ['**' factor]
-atom: ('(' [yield_expr|testlist_gexp] ')' |
-       '[' [listmaker] ']' |
-       '{' [dictsetmaker] '}' |
-       '`' testlist1 '`' |
-       NAME | NUMBER | STRING+ | '.' '.' '.')
-listmaker: test ( comp_for | (',' test)* [','] )
-testlist_gexp: test ( comp_for | (',' test)* [','] )
-lambdef: 'lambda' [varargslist] ':' test
-trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME
-subscriptlist: subscript (',' subscript)* [',']
-subscript: test | [test] ':' [test] [sliceop]
-sliceop: ':' [test]
-exprlist: expr (',' expr)* [',']
-testlist: test (',' test)* [',']
-dictsetmaker: ( (test ':' test (comp_for | (',' test ':' test)* [','])) |
-                (test (comp_for | (',' test)* [','])) )
-
-classdef: 'class' NAME ['(' [arglist] ')'] ':' suite
-
-arglist: (argument ',')* (argument [',']| '*' test [',' '**' test] | '**' test)
-argument: test [comp_for] | test '=' test  # Really [keyword '='] test
-
-comp_iter: comp_for | comp_if
-comp_for: 'for' exprlist 'in' testlist_safe [comp_iter]
-comp_if: 'if' old_test [comp_iter]
-
-testlist1: test (',' test)*
-
-# not used in grammar, but may appear in "node" passed from Parser to Compiler
-encoding_decl: NAME
-
-yield_expr: 'yield' [testlist]

Deleted: /sandbox/trunk/2to3/PatternGrammar.txt
==============================================================================
--- /sandbox/trunk/2to3/PatternGrammar.txt	Sun Mar 16 20:36:15 2008
+++ (empty file)
@@ -1,28 +0,0 @@
-# Copyright 2006 Google, Inc. All Rights Reserved.
-# Licensed to PSF under a Contributor Agreement.
-
-# A grammar to describe tree matching patterns.
-# Not shown here:
-# - 'TOKEN' stands for any token (leaf node)
-# - 'any' stands for any node (leaf or interior)
-# With 'any' we can still specify the sub-structure.
-
-# The start symbol is 'Matcher'.
-
-Matcher: Alternatives ENDMARKER
-
-Alternatives: Alternative ('|' Alternative)*
-
-Alternative: (Unit | NegatedUnit)+
-
-Unit: [NAME '='] ( STRING [Repeater]
-                 | NAME [Details] [Repeater]
-                 | '(' Alternatives ')' [Repeater]
-                 | '[' Alternatives ']'
-		 )
-
-NegatedUnit: 'not' (STRING | NAME [Details] | '(' Alternatives ')')
-
-Repeater: '*' | '+' | '{' NUMBER [',' NUMBER] '}'
-
-Details: '<' Alternatives '>'

Modified: sandbox/trunk/2to3/find_pattern.py
==============================================================================
--- sandbox/trunk/2to3/find_pattern.py	(original)
+++ sandbox/trunk/2to3/find_pattern.py	Sun Mar 16 20:36:15 2008
@@ -47,9 +47,9 @@
 from StringIO import StringIO
 
 # Local imports
-import pytree
+from lib2to3 import pytree
 from pgen2 import driver
-from pygram import python_symbols, python_grammar
+from lib2to3.pygram import python_symbols, python_grammar
 
 driver = driver.Driver(python_grammar, convert=pytree.convert)
 

Added: sandbox/trunk/2to3/lib2to3/__init__.py
==============================================================================
--- (empty file)
+++ sandbox/trunk/2to3/lib2to3/__init__.py	Sun Mar 16 20:36:15 2008
@@ -0,0 +1 @@
+#empty

Modified: sandbox/trunk/2to3/lib2to3/fixes/basefix.py
==============================================================================
--- sandbox/trunk/2to3/fixes/basefix.py	(original)
+++ sandbox/trunk/2to3/lib2to3/fixes/basefix.py	Sun Mar 16 20:36:15 2008
@@ -14,8 +14,8 @@
     from sets import Set as set
 
 # Local imports
-from patcomp import PatternCompiler
-import pygram
+from ..patcomp import PatternCompiler
+from .. import pygram
 
 class BaseFix(object):
 

Modified: sandbox/trunk/2to3/lib2to3/fixes/fix_apply.py
==============================================================================
--- sandbox/trunk/2to3/fixes/fix_apply.py	(original)
+++ sandbox/trunk/2to3/lib2to3/fixes/fix_apply.py	Sun Mar 16 20:36:15 2008
@@ -6,10 +6,10 @@
 This converts apply(func, v, k) into (func)(*v, **k)."""
 
 # Local imports
-import pytree
+from .. import pytree
 from pgen2 import token
-from fixes import basefix
-from fixes.util import Call, Comma
+from . import basefix
+from .util import Call, Comma
 
 class FixApply(basefix.BaseFix):
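As the docstring above says, this fixer converts `apply(func, v, k)` into `func(*v, **k)`. A minimal before/after illustration (the function and values are made up):

```python
def scale_sum(a, b, factor=1):
    return (a + b) * factor

args = (2, 3)
kwargs = {"factor": 10}

# Python 2:        result = apply(scale_sum, args, kwargs)
# After fix_apply: argument unpacking replaces the apply() builtin.
result = scale_sum(*args, **kwargs)
print(result)  # 50
```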
 

Modified: sandbox/trunk/2to3/lib2to3/fixes/fix_basestring.py
==============================================================================
--- sandbox/trunk/2to3/fixes/fix_basestring.py	(original)
+++ sandbox/trunk/2to3/lib2to3/fixes/fix_basestring.py	Sun Mar 16 20:36:15 2008
@@ -2,8 +2,8 @@
 # Author: Christian Heimes
 
 # Local imports
-from fixes import basefix
-from fixes.util import Name
+from . import basefix
+from .util import Name
 
 class FixBasestring(basefix.BaseFix):
 

Modified: sandbox/trunk/2to3/lib2to3/fixes/fix_buffer.py
==============================================================================
--- sandbox/trunk/2to3/fixes/fix_buffer.py	(original)
+++ sandbox/trunk/2to3/lib2to3/fixes/fix_buffer.py	Sun Mar 16 20:36:15 2008
@@ -4,8 +4,8 @@
 """Fixer that changes buffer(...) into memoryview(...)."""
 
 # Local imports
-from fixes import basefix
-from fixes.util import Name
+from . import basefix
+from .util import Name
 
 
 class FixBuffer(basefix.BaseFix):

Modified: sandbox/trunk/2to3/lib2to3/fixes/fix_callable.py
==============================================================================
--- sandbox/trunk/2to3/fixes/fix_callable.py	(original)
+++ sandbox/trunk/2to3/lib2to3/fixes/fix_callable.py	Sun Mar 16 20:36:15 2008
@@ -6,9 +6,9 @@
 This converts callable(obj) into hasattr(obj, '__call__')."""
 
 # Local imports
-import pytree
-from fixes import basefix
-from fixes.util import Call, Name, String
+from .. import pytree
+from . import basefix
+from .util import Call, Name, String
 
 class FixCallable(basefix.BaseFix):
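Per the docstring above, `callable(obj)` becomes `hasattr(obj, '__call__')`. A quick check of the rewritten form (example class is illustrative):

```python
# fix_callable's rewrite: callable(obj) -> hasattr(obj, '__call__').
class Greeter:
    def __call__(self):
        return "hi"

assert hasattr(Greeter(), "__call__")   # instances of callable classes
assert hasattr(len, "__call__")         # builtins are callable
assert not hasattr(42, "__call__")      # plain ints are not
```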
 

Modified: sandbox/trunk/2to3/lib2to3/fixes/fix_dict.py
==============================================================================
--- sandbox/trunk/2to3/fixes/fix_dict.py	(original)
+++ sandbox/trunk/2to3/lib2to3/fixes/fix_dict.py	Sun Mar 16 20:36:15 2008
@@ -24,11 +24,11 @@
 """
 
 # Local imports
-import pytree
-import patcomp
+from .. import pytree
+from .. import patcomp
 from pgen2 import token
-from fixes import basefix
-from fixes.util import Name, Call, LParen, RParen, ArgList, Dot, set
+from . import basefix
+from .util import Name, Call, LParen, RParen, ArgList, Dot, set
 
 
 exempt = set(["sorted", "list", "set", "any", "all", "tuple", "sum"])

Modified: sandbox/trunk/2to3/lib2to3/fixes/fix_except.py
==============================================================================
--- sandbox/trunk/2to3/fixes/fix_except.py	(original)
+++ sandbox/trunk/2to3/lib2to3/fixes/fix_except.py	Sun Mar 16 20:36:15 2008
@@ -22,10 +22,10 @@
 # Author: Collin Winter
 
 # Local imports
-import pytree
+from .. import pytree
 from pgen2 import token
-from fixes import basefix
-from fixes.util import Assign, Attr, Name, is_tuple, is_list, reversed
+from . import basefix
+from .util import Assign, Attr, Name, is_tuple, is_list, reversed
 
 def find_excepts(nodes):
     for i, n in enumerate(nodes):

Modified: sandbox/trunk/2to3/lib2to3/fixes/fix_exec.py
==============================================================================
--- sandbox/trunk/2to3/fixes/fix_exec.py	(original)
+++ sandbox/trunk/2to3/lib2to3/fixes/fix_exec.py	Sun Mar 16 20:36:15 2008
@@ -10,9 +10,9 @@
 """
 
 # Local imports
-import pytree
-from fixes import basefix
-from fixes.util import Comma, Name, Call
+from .. import pytree
+from . import basefix
+from .util import Comma, Name, Call
 
 
 class FixExec(basefix.BaseFix):

Modified: sandbox/trunk/2to3/lib2to3/fixes/fix_execfile.py
==============================================================================
--- sandbox/trunk/2to3/fixes/fix_execfile.py	(original)
+++ sandbox/trunk/2to3/lib2to3/fixes/fix_execfile.py	Sun Mar 16 20:36:15 2008
@@ -7,9 +7,9 @@
 exec() function.
 """
 
-import pytree
-from fixes import basefix
-from fixes.util import Comma, Name, Call, LParen, RParen, Dot
+from .. import pytree
+from . import basefix
+from .util import Comma, Name, Call, LParen, RParen, Dot
 
 
 class FixExecfile(basefix.BaseFix):

Modified: sandbox/trunk/2to3/lib2to3/fixes/fix_filter.py
==============================================================================
--- sandbox/trunk/2to3/fixes/fix_filter.py	(original)
+++ sandbox/trunk/2to3/lib2to3/fixes/fix_filter.py	Sun Mar 16 20:36:15 2008
@@ -14,11 +14,11 @@
 """
 
 # Local imports
-import pytree
-import patcomp
+from .. import pytree
+from .. import patcomp
 from pgen2 import token
-from fixes import basefix
-from fixes.util import Name, Call, ListComp, attr_chain
+from . import basefix
+from .util import Name, Call, ListComp, attr_chain
 
 class FixFilter(basefix.BaseFix):
 

Modified: sandbox/trunk/2to3/lib2to3/fixes/fix_funcattrs.py
==============================================================================
--- sandbox/trunk/2to3/fixes/fix_funcattrs.py	(original)
+++ sandbox/trunk/2to3/lib2to3/fixes/fix_funcattrs.py	Sun Mar 16 20:36:15 2008
@@ -2,8 +2,8 @@
 # Author: Collin Winter
 
 # Local imports
-from fixes import basefix
-from fixes.util import Name
+from . import basefix
+from .util import Name
 
 
 class FixFuncattrs(basefix.BaseFix):

Modified: sandbox/trunk/2to3/lib2to3/fixes/fix_future.py
==============================================================================
--- sandbox/trunk/2to3/fixes/fix_future.py	(original)
+++ sandbox/trunk/2to3/lib2to3/fixes/fix_future.py	Sun Mar 16 20:36:15 2008
@@ -5,8 +5,8 @@
 # Author: Christian Heimes 
 
 # Local imports
-from fixes import basefix
-from fixes.util import BlankLine 
+from . import basefix
+from .util import BlankLine 
 
 class FixFuture(basefix.BaseFix):
     PATTERN = """import_from< 'from' module_name="__future__" 'import' any >"""

Modified: sandbox/trunk/2to3/lib2to3/fixes/fix_has_key.py
==============================================================================
--- sandbox/trunk/2to3/fixes/fix_has_key.py	(original)
+++ sandbox/trunk/2to3/lib2to3/fixes/fix_has_key.py	Sun Mar 16 20:36:15 2008
@@ -30,10 +30,10 @@
 """
 
 # Local imports
-import pytree
+from .. import pytree
 from pgen2 import token
-from fixes import basefix
-from fixes.util import Name
+from . import basefix
+from .util import Name
 
 
 class FixHasKey(basefix.BaseFix):
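This fixer rewrites `d.has_key(k)` as `k in d`, the membership test that survives in Python 3. The equivalent spelling (dictionary contents are made up):

```python
d = {"spam": 1}

# Python 2:          d.has_key("spam")
# After fix_has_key: the "in" operator replaces the removed method.
assert "spam" in d
assert "eggs" not in d
```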

Modified: sandbox/trunk/2to3/lib2to3/fixes/fix_idioms.py
==============================================================================
--- sandbox/trunk/2to3/fixes/fix_idioms.py	(original)
+++ sandbox/trunk/2to3/lib2to3/fixes/fix_idioms.py	Sun Mar 16 20:36:15 2008
@@ -28,8 +28,8 @@
 # Author: Jacques Frechet, Collin Winter
 
 # Local imports
-from fixes import basefix
-from fixes.util import Call, Comma, Name, Node, syms
+from . import basefix
+from .util import Call, Comma, Name, Node, syms
 
 CMP = "(n='!=' | '==' | 'is' | n=comp_op< 'is' 'not' >)"
 TYPE = "power< 'type' trailer< '(' x=any ')' > >"

Modified: sandbox/trunk/2to3/lib2to3/fixes/fix_imports.py
==============================================================================
--- sandbox/trunk/2to3/fixes/fix_imports.py	(original)
+++ sandbox/trunk/2to3/lib2to3/fixes/fix_imports.py	Sun Mar 16 20:36:15 2008
@@ -8,8 +8,8 @@
 # Author: Collin Winter
 
 # Local imports
-from fixes import basefix
-from fixes.util import Name, attr_chain, any, set
+from . import basefix
+from .util import Name, attr_chain, any, set
 import __builtin__
 builtin_names = [name for name in dir(__builtin__)
                  if name not in ("__name__", "__doc__")]

Modified: sandbox/trunk/2to3/lib2to3/fixes/fix_input.py
==============================================================================
--- sandbox/trunk/2to3/fixes/fix_input.py	(original)
+++ sandbox/trunk/2to3/lib2to3/fixes/fix_input.py	Sun Mar 16 20:36:15 2008
@@ -2,9 +2,9 @@
 # Author: Andre Roberge
 
 # Local imports
-from fixes import basefix
-from fixes.util import Call, Name
-import patcomp
+from . import basefix
+from .util import Call, Name
+from .. import patcomp
 
 
 context = patcomp.compile_pattern("power< 'eval' trailer< '(' any ')' > >")

Modified: sandbox/trunk/2to3/lib2to3/fixes/fix_intern.py
==============================================================================
--- sandbox/trunk/2to3/fixes/fix_intern.py	(original)
+++ sandbox/trunk/2to3/lib2to3/fixes/fix_intern.py	Sun Mar 16 20:36:15 2008
@@ -6,9 +6,9 @@
 intern(s) -> sys.intern(s)"""
 
 # Local imports
-import pytree
-from fixes import basefix
-from fixes.util import Name, Attr
+from .. import pytree
+from . import basefix
+from .util import Name, Attr
 
 
 class FixIntern(basefix.BaseFix):
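As noted above, `intern(s)` becomes `sys.intern(s)`; the builtin moved into the sys module in 3.0:

```python
import sys

# Python 2:         s = intern("example")   (a builtin)
# After fix_intern: the function lives on the sys module.
s = sys.intern("example")
assert s == "example"
# Interning the same value again yields the very same object.
assert sys.intern("example") is s
```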

Modified: sandbox/trunk/2to3/lib2to3/fixes/fix_long.py
==============================================================================
--- sandbox/trunk/2to3/fixes/fix_long.py	(original)
+++ sandbox/trunk/2to3/lib2to3/fixes/fix_long.py	Sun Mar 16 20:36:15 2008
@@ -7,9 +7,9 @@
 """
 
 # Local imports
-import pytree
-from fixes import basefix
-from fixes.util import Name, Number
+from .. import pytree
+from . import basefix
+from .util import Name, Number
 
 
 class FixLong(basefix.BaseFix):

Modified: sandbox/trunk/2to3/lib2to3/fixes/fix_map.py
==============================================================================
--- sandbox/trunk/2to3/fixes/fix_map.py	(original)
+++ sandbox/trunk/2to3/lib2to3/fixes/fix_map.py	Sun Mar 16 20:36:15 2008
@@ -18,12 +18,12 @@
 """
 
 # Local imports
-import pytree
-import patcomp
+from .. import pytree
+from .. import patcomp
 from pgen2 import token
-from fixes import basefix
-from fixes.util import Name, Call, ListComp, attr_chain
-from pygram import python_symbols as syms
+from . import basefix
+from .util import Name, Call, ListComp, attr_chain
+from ..pygram import python_symbols as syms
 
 class FixMap(basefix.BaseFix):
 

Copied: sandbox/trunk/2to3/lib2to3/fixes/fix_methodattrs.py (from r61427, sandbox/trunk/2to3/fixes/fix_methodattrs.py)
==============================================================================
--- sandbox/trunk/2to3/fixes/fix_methodattrs.py	(original)
+++ sandbox/trunk/2to3/lib2to3/fixes/fix_methodattrs.py	Sun Mar 16 20:36:15 2008
@@ -3,8 +3,8 @@
 # Author: Christian Heimes
 
 # Local imports
-from fixes import basefix
-from fixes.util import Name
+from . import basefix
+from .util import Name
 
 MAP = {
     "im_func" : "__func__",

Modified: sandbox/trunk/2to3/lib2to3/fixes/fix_ne.py
==============================================================================
--- sandbox/trunk/2to3/fixes/fix_ne.py	(original)
+++ sandbox/trunk/2to3/lib2to3/fixes/fix_ne.py	Sun Mar 16 20:36:15 2008
@@ -4,9 +4,9 @@
 """Fixer that turns <> into !=."""
 
 # Local imports
-import pytree
+from .. import pytree
 from pgen2 import token
-from fixes import basefix
+from . import basefix
 
 
 class FixNe(basefix.BaseFix):

Modified: sandbox/trunk/2to3/lib2to3/fixes/fix_next.py
==============================================================================
--- sandbox/trunk/2to3/fixes/fix_next.py	(original)
+++ sandbox/trunk/2to3/lib2to3/fixes/fix_next.py	Sun Mar 16 20:36:15 2008
@@ -7,9 +7,9 @@
 
 # Local imports
 from pgen2 import token
-from pygram import python_symbols as syms
-from fixes import basefix
-from fixes.util import Name, Call, find_binding, any
+from ..pygram import python_symbols as syms
+from . import basefix
+from .util import Name, Call, find_binding, any
 
 bind_warning = "Calls to builtin next() possibly shadowed by global binding"
 

Modified: sandbox/trunk/2to3/lib2to3/fixes/fix_nonzero.py
==============================================================================
--- sandbox/trunk/2to3/fixes/fix_nonzero.py	(original)
+++ sandbox/trunk/2to3/lib2to3/fixes/fix_nonzero.py	Sun Mar 16 20:36:15 2008
@@ -2,8 +2,8 @@
 # Author: Collin Winter
 
 # Local imports
-from fixes import basefix
-from fixes.util import Name, syms
+from .import basefix
+from .util import Name, syms
 
 class FixNonzero(basefix.BaseFix):
     PATTERN = """

Modified: sandbox/trunk/2to3/lib2to3/fixes/fix_numliterals.py
==============================================================================
--- sandbox/trunk/2to3/fixes/fix_numliterals.py	(original)
+++ sandbox/trunk/2to3/lib2to3/fixes/fix_numliterals.py	Sun Mar 16 20:36:15 2008
@@ -5,8 +5,8 @@
 
 # Local imports
 from pgen2 import token
-from fixes import basefix
-from fixes.util import Number, set
+from .import basefix
+from .util import Number, set
 
 
 class FixNumliterals(basefix.BaseFix):

Modified: sandbox/trunk/2to3/lib2to3/fixes/fix_print.py
==============================================================================
--- sandbox/trunk/2to3/fixes/fix_print.py	(original)
+++ sandbox/trunk/2to3/lib2to3/fixes/fix_print.py	Sun Mar 16 20:36:15 2008
@@ -11,11 +11,11 @@
 """
 
 # Local imports
-import patcomp
-import pytree
+from .. import patcomp
+from .. import pytree
 from pgen2 import token
-from fixes import basefix
-from fixes.util import Name, Call, Comma, String, is_tuple
+from .import basefix
+from .util import Name, Call, Comma, String, is_tuple
 
 
 parend_expr = patcomp.compile_pattern(

Modified: sandbox/trunk/2to3/lib2to3/fixes/fix_raise.py
==============================================================================
--- sandbox/trunk/2to3/fixes/fix_raise.py	(original)
+++ sandbox/trunk/2to3/lib2to3/fixes/fix_raise.py	Sun Mar 16 20:36:15 2008
@@ -22,10 +22,10 @@
 # Author: Collin Winter
 
 # Local imports
-import pytree
+from .. import pytree
 from pgen2 import token
-from fixes import basefix
-from fixes.util import Name, Call, Attr, ArgList, is_tuple
+from .import basefix
+from .util import Name, Call, Attr, ArgList, is_tuple
 
 class FixRaise(basefix.BaseFix):
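fix_raise converts the 2.x statement `raise E, V` into the call form `raise E(V)`. Equivalent behavior in 3.x syntax:

```python
# Python 2:        raise ValueError, "bad value"
# After fix_raise: the exception is instantiated explicitly.
try:
    raise ValueError("bad value")
except ValueError as exc:
    message = str(exc)

assert message == "bad value"
```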
 

Modified: sandbox/trunk/2to3/lib2to3/fixes/fix_raw_input.py
==============================================================================
--- sandbox/trunk/2to3/fixes/fix_raw_input.py	(original)
+++ sandbox/trunk/2to3/lib2to3/fixes/fix_raw_input.py	Sun Mar 16 20:36:15 2008
@@ -2,8 +2,8 @@
 # Author: Andre Roberge
 
 # Local imports
-from fixes import basefix
-from fixes.util import Name
+from .import basefix
+from .util import Name
 
 class FixRawInput(basefix.BaseFix):
 

Modified: sandbox/trunk/2to3/lib2to3/fixes/fix_renames.py
==============================================================================
--- sandbox/trunk/2to3/fixes/fix_renames.py	(original)
+++ sandbox/trunk/2to3/lib2to3/fixes/fix_renames.py	Sun Mar 16 20:36:15 2008
@@ -7,8 +7,8 @@
 # based on Collin Winter's fix_import
 
 # Local imports
-from fixes import basefix
-from fixes.util import Name, attr_chain, any, set
+from .import basefix
+from .util import Name, attr_chain, any, set
 
 MAPPING = {"sys":  {"maxint" : "maxsize"},
           }

Modified: sandbox/trunk/2to3/lib2to3/fixes/fix_repr.py
==============================================================================
--- sandbox/trunk/2to3/fixes/fix_repr.py	(original)
+++ sandbox/trunk/2to3/lib2to3/fixes/fix_repr.py	Sun Mar 16 20:36:15 2008
@@ -4,8 +4,8 @@
 """Fixer that transforms `xyzzy` into repr(xyzzy)."""
 
 # Local imports
-from fixes import basefix
-from fixes.util import Call, Name
+from .import basefix
+from .util import Call, Name
 
 
 class FixRepr(basefix.BaseFix):

Modified: sandbox/trunk/2to3/lib2to3/fixes/fix_standarderror.py
==============================================================================
--- sandbox/trunk/2to3/fixes/fix_standarderror.py	(original)
+++ sandbox/trunk/2to3/lib2to3/fixes/fix_standarderror.py	Sun Mar 16 20:36:15 2008
@@ -4,8 +4,8 @@
 """Fixer for StandardError -> Exception."""
 
 # Local imports
-from fixes import basefix
-from fixes.util import Name
+from .import basefix
+from .util import Name
 
 
 class FixStandarderror(basefix.BaseFix):

Modified: sandbox/trunk/2to3/lib2to3/fixes/fix_throw.py
==============================================================================
--- sandbox/trunk/2to3/fixes/fix_throw.py	(original)
+++ sandbox/trunk/2to3/lib2to3/fixes/fix_throw.py	Sun Mar 16 20:36:15 2008
@@ -8,10 +8,10 @@
 # Author: Collin Winter
 
 # Local imports
-import pytree
+from .. import pytree
 from pgen2 import token
-from fixes import basefix
-from fixes.util import Name, Call, ArgList, Attr, is_tuple
+from .import basefix
+from .util import Name, Call, ArgList, Attr, is_tuple
 
 class FixThrow(basefix.BaseFix):
 

Modified: sandbox/trunk/2to3/lib2to3/fixes/fix_tuple_params.py
==============================================================================
--- sandbox/trunk/2to3/fixes/fix_tuple_params.py	(original)
+++ sandbox/trunk/2to3/lib2to3/fixes/fix_tuple_params.py	Sun Mar 16 20:36:15 2008
@@ -19,10 +19,10 @@
 # Author: Collin Winter
 
 # Local imports
-import pytree
+from .. import pytree
 from pgen2 import token
-from fixes import basefix
-from fixes.util import Assign, Name, Newline, Number, Subscript, syms
+from .import basefix
+from .util import Assign, Name, Newline, Number, Subscript, syms
 
 def is_docstring(stmt):
     return isinstance(stmt, pytree.Node) and \

Modified: sandbox/trunk/2to3/lib2to3/fixes/fix_types.py
==============================================================================
--- sandbox/trunk/2to3/fixes/fix_types.py	(original)
+++ sandbox/trunk/2to3/lib2to3/fixes/fix_types.py	Sun Mar 16 20:36:15 2008
@@ -21,8 +21,8 @@
 
 # Local imports
 from pgen2 import token
-from fixes import basefix
-from fixes.util import Name
+from .import basefix
+from .util import Name
 
 _TYPE_MAPPING = {
         'BooleanType' : 'bool',

Modified: sandbox/trunk/2to3/lib2to3/fixes/fix_unicode.py
==============================================================================
--- sandbox/trunk/2to3/fixes/fix_unicode.py	(original)
+++ sandbox/trunk/2to3/lib2to3/fixes/fix_unicode.py	Sun Mar 16 20:36:15 2008
@@ -4,7 +4,7 @@
 
 import re
 from pgen2 import token
-from fixes import basefix
+from .import basefix
 
 class FixUnicode(basefix.BaseFix):
 

Modified: sandbox/trunk/2to3/lib2to3/fixes/fix_ws_comma.py
==============================================================================
--- sandbox/trunk/2to3/fixes/fix_ws_comma.py	(original)
+++ sandbox/trunk/2to3/lib2to3/fixes/fix_ws_comma.py	Sun Mar 16 20:36:15 2008
@@ -5,9 +5,9 @@
 
 """
 
-import pytree
+from .. import pytree
 from pgen2 import token
-from fixes import basefix
+from .import basefix
 
 class FixWsComma(basefix.BaseFix):
 

Modified: sandbox/trunk/2to3/lib2to3/fixes/fix_xrange.py
==============================================================================
--- sandbox/trunk/2to3/fixes/fix_xrange.py	(original)
+++ sandbox/trunk/2to3/lib2to3/fixes/fix_xrange.py	Sun Mar 16 20:36:15 2008
@@ -4,8 +4,8 @@
 """Fixer that changes xrange(...) into range(...)."""
 
 # Local imports
-from fixes import basefix
-from fixes.util import Name
+from .import basefix
+from .util import Name
 
 class FixXrange(basefix.BaseFix):
 

Modified: sandbox/trunk/2to3/lib2to3/fixes/fix_xreadlines.py
==============================================================================
--- sandbox/trunk/2to3/fixes/fix_xreadlines.py	(original)
+++ sandbox/trunk/2to3/lib2to3/fixes/fix_xreadlines.py	Sun Mar 16 20:36:15 2008
@@ -4,8 +4,8 @@
 # Author: Collin Winter
 
 # Local imports
-from fixes import basefix
-from fixes.util import Name
+from .import basefix
+from .util import Name
 
 
 class FixXreadlines(basefix.BaseFix):

Modified: sandbox/trunk/2to3/lib2to3/fixes/util.py
==============================================================================
--- sandbox/trunk/2to3/fixes/util.py	(original)
+++ sandbox/trunk/2to3/lib2to3/fixes/util.py	Sun Mar 16 20:36:15 2008
@@ -3,8 +3,8 @@
 
 # Local imports
 from pgen2 import token
-from pytree import Leaf, Node
-from pygram import python_symbols as syms
+from ..pytree import Leaf, Node
+from ..pygram import python_symbols as syms
 
 
 ###########################################################
@@ -134,7 +134,7 @@
 
 ###########################################################
 ### Common portability code. This allows fixers to do, eg,
-###  "from fixes.util import set" and forget about it.
+###  "from .util import set" and forget about it.
 ###########################################################
 
 try:

Copied: sandbox/trunk/2to3/lib2to3/refactor.py (from r61376, sandbox/trunk/2to3/refactor.py)
==============================================================================
--- sandbox/trunk/2to3/refactor.py	(original)
+++ sandbox/trunk/2to3/lib2to3/refactor.py	Sun Mar 16 20:36:15 2008
@@ -130,7 +130,7 @@
             fix_names = get_all_fix_names()
         for fix_name in fix_names:
             try:
-                mod = __import__("fixes.fix_" + fix_name, {}, {}, ["*"])
+                mod = __import__("lib2to3.fixes.fix_" + fix_name, {}, {}, ["*"])
             except ImportError:
                 self.log_error("Can't find transformation %s", fix_name)
                 continue
@@ -241,7 +241,7 @@
             there were errors during the parse.
         """
         try:
-            tree = self.driver.parse_string(data)
+            tree = self.driver.parse_string(data,1)
         except Exception, err:
             self.log_error("Can't parse %s: %s: %s",
                            name, err.__class__.__name__, err)

Modified: sandbox/trunk/2to3/lib2to3/tests/__init__.py
==============================================================================
--- sandbox/trunk/2to3/tests/__init__.py	(original)
+++ sandbox/trunk/2to3/lib2to3/tests/__init__.py	Sun Mar 16 20:36:15 2008
@@ -8,7 +8,7 @@
 import unittest
 import types
 
-import support
+from . import support
 
 all_tests = unittest.TestSuite()
 
@@ -17,7 +17,8 @@
                         if t.startswith('test_') and t.endswith('.py')]
 
 loader = unittest.TestLoader()
+
 for t in tests:
-    __import__('tests.' + t)
+    __import__("",globals(),locals(),[t],level=1)
     mod = globals()[t]
     all_tests.addTests(loader.loadTestsFromModule(mod))

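For readers puzzling over the new `__import__("",globals(),locals(),[t],level=1)` idiom in the hunk above: with an empty module name, `level=1` tells the import machinery to resolve the names in the fromlist relative to the calling package. A minimal sketch, not part of the commit, using a throwaway package `pkg` with a `helper` submodule (both hypothetical names invented for illustration):

```python
# Sketch: how __import__ with an empty name and level=1 performs a
# relative import, demonstrated with a temporary on-disk package.
import os
import sys
import tempfile

tmp = tempfile.mkdtemp()
pkg_dir = os.path.join(tmp, "pkg")
os.mkdir(pkg_dir)
open(os.path.join(pkg_dir, "__init__.py"), "w").close()
with open(os.path.join(pkg_dir, "helper.py"), "w") as f:
    f.write("VALUE = 42\n")

sys.path.insert(0, tmp)
import pkg

# Equivalent of `from . import helper` executed from inside pkg:
# empty name + level=1 means "one level up from the calling package",
# and the fromlist names become submodule imports. The call returns
# the package itself, with the submodule bound as an attribute.
mod = __import__("", pkg.__dict__, {}, ["helper"], 1)
print(mod.helper.VALUE)  # prints 42
```

This is why the tests/__init__.py loop above can find each test module in `globals()` afterwards: the fromlist import binds the submodule onto the package whose namespace was passed in.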
Modified: sandbox/trunk/2to3/lib2to3/tests/benchmark.py
==============================================================================
--- sandbox/trunk/2to3/tests/benchmark.py	(original)
+++ sandbox/trunk/2to3/lib2to3/tests/benchmark.py	Sun Mar 16 20:36:15 2008
@@ -17,7 +17,7 @@
 adjust_path()
 
 # Local imports
-import refactor
+from .. import refactor
 
 ### Mock code for refactor.py and the fixers
 ###############################################################################

Modified: sandbox/trunk/2to3/lib2to3/tests/pytree_idempotency.py
==============================================================================
--- sandbox/trunk/2to3/tests/pytree_idempotency.py	(original)
+++ sandbox/trunk/2to3/lib2to3/tests/pytree_idempotency.py	Sun Mar 16 20:36:15 2008
@@ -15,7 +15,7 @@
 import logging
 
 # Local imports
-import pytree
+from .. import pytree
 import pgen2
 from pgen2 import driver
 

Modified: sandbox/trunk/2to3/lib2to3/tests/support.py
==============================================================================
--- sandbox/trunk/2to3/tests/support.py	(original)
+++ sandbox/trunk/2to3/lib2to3/tests/support.py	Sun Mar 16 20:36:15 2008
@@ -9,10 +9,10 @@
 import re
 from textwrap import dedent
 
-sys.path.insert(0, os.path.join(os.path.dirname(__file__), ".."))
+#sys.path.insert(0, os.path.join(os.path.dirname(__file__), ".."))
 
 # Local imports
-import pytree
+from .. import pytree
 from pgen2 import driver
 
 test_dir = os.path.dirname(__file__)

Modified: sandbox/trunk/2to3/lib2to3/tests/test_all_fixers.py
==============================================================================
--- sandbox/trunk/2to3/tests/test_all_fixers.py	(original)
+++ sandbox/trunk/2to3/lib2to3/tests/test_all_fixers.py	Sun Mar 16 20:36:15 2008
@@ -8,7 +8,7 @@
 
 # Testing imports
 try:
-    from tests import support
+    from . import support
 except ImportError:
     import support
 
@@ -16,8 +16,8 @@
 import unittest
 
 # Local imports
-import pytree
-import refactor
+from .. import pytree
+from .. import refactor
 
 class Options:
     def __init__(self, **kwargs):

Copied: sandbox/trunk/2to3/lib2to3/tests/test_fixers.py (from r61427, sandbox/trunk/2to3/tests/test_fixers.py)
==============================================================================
--- sandbox/trunk/2to3/tests/test_fixers.py	(original)
+++ sandbox/trunk/2to3/lib2to3/tests/test_fixers.py	Sun Mar 16 20:36:15 2008
@@ -12,9 +12,9 @@
 import unittest
 
 # Local imports
-import pygram
-import pytree
-import refactor
+from .. import pygram
+from .. import pytree
+from .. import refactor
 
 class Options:
     def __init__(self, **kwargs):

Modified: sandbox/trunk/2to3/lib2to3/tests/test_parser.py
==============================================================================
--- sandbox/trunk/2to3/tests/test_parser.py	(original)
+++ sandbox/trunk/2to3/lib2to3/tests/test_parser.py	Sun Mar 16 20:36:15 2008
@@ -9,8 +9,8 @@
 # Author: Collin Winter
 
 # Testing imports
-from tests import support
-from tests.support import driver, test_dir
+from . import support
+from .support import driver, test_dir
 
 # Python imports
 import os

Modified: sandbox/trunk/2to3/lib2to3/tests/test_pytree.py
==============================================================================
--- sandbox/trunk/2to3/tests/test_pytree.py	(original)
+++ sandbox/trunk/2to3/lib2to3/tests/test_pytree.py	Sun Mar 16 20:36:15 2008
@@ -11,10 +11,10 @@
 """
 
 # Testing imports
-from tests import support
+from . import support
 
 # Local imports (XXX should become a package)
-import pytree
+from .. import pytree
 
 try:
     sorted

Modified: sandbox/trunk/2to3/lib2to3/tests/test_util.py
==============================================================================
--- sandbox/trunk/2to3/tests/test_util.py	(original)
+++ sandbox/trunk/2to3/lib2to3/tests/test_util.py	Sun Mar 16 20:36:15 2008
@@ -3,14 +3,14 @@
 # Author: Collin Winter
 
 # Testing imports
-from tests import support
+from . import support
 
 # Python imports
 import os.path
 
 # Local imports
-import pytree
-from fixes import util
+from .. import pytree
+from ..fixes import util
 
 
 def parse(code, strip_levels=0):
@@ -62,14 +62,14 @@
 
 class Test_Attr(MacroTestCase):
     def test(self):
-        from fixes.util import Attr, Name
+        from ..fixes.util import Attr, Name
         call = parse("foo()", strip_levels=2)
 
         self.assertStr(Attr(Name("a"), Name("b")), "a.b")
         self.assertStr(Attr(call, Name("b")), "foo().b")
 
     def test_returns(self):
-        from fixes.util import Attr, Name
+        from ..fixes.util import Attr, Name
 
         attr = Attr(Name("a"), Name("b"))
         self.assertEqual(type(attr), list)
@@ -77,7 +77,7 @@
 
 class Test_Name(MacroTestCase):
     def test(self):
-        from fixes.util import Name
+        from ..fixes.util import Name
 
         self.assertStr(Name("a"), "a")
         self.assertStr(Name("foo.foo().bar"), "foo.foo().bar")

Deleted: /sandbox/trunk/2to3/patcomp.py
==============================================================================
--- /sandbox/trunk/2to3/patcomp.py	Sun Mar 16 20:36:15 2008
+++ (empty file)
@@ -1,186 +0,0 @@
-# Copyright 2006 Google, Inc. All Rights Reserved.
-# Licensed to PSF under a Contributor Agreement.
-
-"""Pattern compiler.
-
-The grammar is taken from PatternGrammar.txt.
-
-The compiler compiles a pattern to a pytree.*Pattern instance.
-"""
-
-__author__ = "Guido van Rossum <guido at python.org>"
-
-# Python imports
-import os
-
-# Fairly local imports
-from pgen2 import driver
-from pgen2 import literals
-from pgen2 import token
-from pgen2 import tokenize
-
-# Really local imports
-import pytree
-import pygram
-
-# The pattern grammar file
-_PATTERN_GRAMMAR_FILE = os.path.join(os.path.dirname(__file__),
-                                     "PatternGrammar.txt")
-
-
-def tokenize_wrapper(input):
-    """Tokenizes a string suppressing significant whitespace."""
-    skip = (token.NEWLINE, token.INDENT, token.DEDENT)
-    tokens = tokenize.generate_tokens(driver.generate_lines(input).next)
-    for quintuple in tokens:
-        type, value, start, end, line_text = quintuple
-        if type not in skip:
-            yield quintuple
-
-
-class PatternCompiler(object):
-
-    def __init__(self, grammar_file=_PATTERN_GRAMMAR_FILE):
-        """Initializer.
-
-        Takes an optional alternative filename for the pattern grammar.
-        """
-        self.grammar = driver.load_grammar(grammar_file)
-        self.syms = pygram.Symbols(self.grammar)
-        self.pygrammar = pygram.python_grammar
-        self.pysyms = pygram.python_symbols
-        self.driver = driver.Driver(self.grammar, convert=pattern_convert)
-
-    def compile_pattern(self, input, debug=False):
-        """Compiles a pattern string to a nested pytree.*Pattern object."""
-        tokens = tokenize_wrapper(input)
-        root = self.driver.parse_tokens(tokens, debug=debug)
-        return self.compile_node(root)
-
-    def compile_node(self, node):
-        """Compiles a node, recursively.
-
-        This is one big switch on the node type.
-        """
-        # XXX Optimize certain Wildcard-containing-Wildcard patterns
-        # that can be merged
-        if node.type == self.syms.Matcher:
-            node = node.children[0] # Avoid unneeded recursion
-
-        if node.type == self.syms.Alternatives:
-            # Skip the odd children since they are just '|' tokens
-            alts = [self.compile_node(ch) for ch in node.children[::2]]
-            if len(alts) == 1:
-                return alts[0]
-            p = pytree.WildcardPattern([[a] for a in alts], min=1, max=1)
-            return p.optimize()
-
-        if node.type == self.syms.Alternative:
-            units = [self.compile_node(ch) for ch in node.children]
-            if len(units) == 1:
-                return units[0]
-            p = pytree.WildcardPattern([units], min=1, max=1)
-            return p.optimize()
-
-        if node.type == self.syms.NegatedUnit:
-            pattern = self.compile_basic(node.children[1:])
-            p = pytree.NegatedPattern(pattern)
-            return p.optimize()
-
-        assert node.type == self.syms.Unit
-
-        name = None
-        nodes = node.children
-        if len(nodes) >= 3 and nodes[1].type == token.EQUAL:
-            name = nodes[0].value
-            nodes = nodes[2:]
-        repeat = None
-        if len(nodes) >= 2 and nodes[-1].type == self.syms.Repeater:
-            repeat = nodes[-1]
-            nodes = nodes[:-1]
-
-        # Now we've reduced it to: STRING | NAME [Details] | (...) | [...]
-        pattern = self.compile_basic(nodes, repeat)
-
-        if repeat is not None:
-            assert repeat.type == self.syms.Repeater
-            children = repeat.children
-            child = children[0]
-            if child.type == token.STAR:
-                min = 0
-                max = pytree.HUGE
-            elif child.type == token.PLUS:
-                min = 1
-                max = pytree.HUGE
-            elif child.type == token.LBRACE:
-                assert children[-1].type == token.RBRACE
-                assert len(children) in (3, 5)
-                min = max = self.get_int(children[1])
-                if len(children) == 5:
-                    max = self.get_int(children[3])
-            else:
-                assert False
-            if min != 1 or max != 1:
-                pattern = pattern.optimize()
-                pattern = pytree.WildcardPattern([[pattern]], min=min, max=max)
-
-        if name is not None:
-            pattern.name = name
-        return pattern.optimize()
-
-    def compile_basic(self, nodes, repeat=None):
-        # Compile STRING | NAME [Details] | (...) | [...]
-        assert len(nodes) >= 1
-        node = nodes[0]
-        if node.type == token.STRING:
-            value = literals.evalString(node.value)
-            return pytree.LeafPattern(content=value)
-        elif node.type == token.NAME:
-            value = node.value
-            if value.isupper():
-                if value not in TOKEN_MAP:
-                    raise SyntaxError("Invalid token: %r" % value)
-                return pytree.LeafPattern(TOKEN_MAP[value])
-            else:
-                if value == "any":
-                    type = None
-                elif not value.startswith("_"):
-                    type = getattr(self.pysyms, value, None)
-                    if type is None:
-                        raise SyntaxError("Invalid symbol: %r" % value)
-                if nodes[1:]: # Details present
-                    content = [self.compile_node(nodes[1].children[1])]
-                else:
-                    content = None
-                return pytree.NodePattern(type, content)
-        elif node.value == "(":
-            return self.compile_node(nodes[1])
-        elif node.value == "[":
-            assert repeat is None
-            subpattern = self.compile_node(nodes[1])
-            return pytree.WildcardPattern([[subpattern]], min=0, max=1)
-        assert False, node
-
-    def get_int(self, node):
-        assert node.type == token.NUMBER
-        return int(node.value)
-
-
-# Map named tokens to the type value for a LeafPattern
-TOKEN_MAP = {"NAME": token.NAME,
-             "STRING": token.STRING,
-             "NUMBER": token.NUMBER,
-             "TOKEN": None}
-
-
-def pattern_convert(grammar, raw_node_info):
-    """Converts raw node information to a Node or Leaf instance."""
-    type, value, context, children = raw_node_info
-    if children or type in grammar.number2symbol:
-        return pytree.Node(type, children, context=context)
-    else:
-        return pytree.Leaf(type, value, context=context)
-
-
-def compile_pattern(pattern):
-    return PatternCompiler().compile_pattern(pattern)

Deleted: /sandbox/trunk/2to3/pygram.py
==============================================================================
--- /sandbox/trunk/2to3/pygram.py	Sun Mar 16 20:36:15 2008
+++ (empty file)
@@ -1,38 +0,0 @@
-# Copyright 2006 Google, Inc. All Rights Reserved.
-# Licensed to PSF under a Contributor Agreement.
-
-"""Export the Python grammar and symbols."""
-
-# Python imports
-import os
-
-# Local imports
-import pytree
-from pgen2 import token
-from pgen2 import driver
-
-# The grammar file
-_GRAMMAR_FILE = os.path.join(os.path.dirname(__file__), "Grammar.txt")
-
-
-class Symbols(object):
-
-    def __init__(self, grammar):
-        """Initializer.
-
-        Creates an attribute for each grammar symbol (nonterminal),
-        whose value is the symbol's type (an int >= 256).
-        """
-        for name, symbol in grammar.symbol2number.iteritems():
-            setattr(self, name, symbol)
-
-
-python_grammar = driver.load_grammar(_GRAMMAR_FILE)
-python_symbols = Symbols(python_grammar)
-
-
-def parenthesize(node):
-    return pytree.Node(python_symbols.atom,
-                       (pytree.Leaf(token.LPAR, "("),
-                        node,
-                        pytree.Leaf(token.RPAR, ")")))

Deleted: /sandbox/trunk/2to3/pytree.py
==============================================================================
--- /sandbox/trunk/2to3/pytree.py	Sun Mar 16 20:36:15 2008
+++ (empty file)
@@ -1,712 +0,0 @@
-# Copyright 2006 Google, Inc. All Rights Reserved.
-# Licensed to PSF under a Contributor Agreement.
-
-"""Python parse tree definitions.
-
-This is a very concrete parse tree; we need to keep every token and
-even the comments and whitespace between tokens.
-
-There's also a pattern matching implementation here.
-"""
-
-__author__ = "Guido van Rossum <guido at python.org>"
-
-
-HUGE = 0x7FFFFFFF  # maximum repeat count, default max
-
-
-class Base(object):
-
-    """Abstract base class for Node and Leaf.
-
-    This provides some default functionality and boilerplate using the
-    template pattern.
-
-    A node may be a subnode of at most one parent.
-    """
-
-    # Default values for instance variables
-    type = None    # int: token number (< 256) or symbol number (>= 256)
-    parent = None  # Parent node pointer, or None
-    children = ()  # Tuple of subnodes
-    was_changed = False
-
-    def __new__(cls, *args, **kwds):
-        """Constructor that prevents Base from being instantiated."""
-        assert cls is not Base, "Cannot instantiate Base"
-        return object.__new__(cls)
-
-    def __eq__(self, other):
-        """Compares two nodes for equality.
-
-        This calls the method _eq().
-        """
-        if self.__class__ is not other.__class__:
-            return NotImplemented
-        return self._eq(other)
-
-    def __ne__(self, other):
-        """Compares two nodes for inequality.
-
-        This calls the method _eq().
-        """
-        if self.__class__ is not other.__class__:
-            return NotImplemented
-        return not self._eq(other)
-
-    def _eq(self, other):
-        """Compares two nodes for equality.
-
-        This is called by __eq__ and __ne__.  It is only called if the
-        two nodes have the same type.  This must be implemented by the
-        concrete subclass.  Nodes should be considered equal if they
-        have the same structure, ignoring the prefix string and other
-        context information.
-        """
-        raise NotImplementedError
-
-    def clone(self):
-        """Returns a cloned (deep) copy of self.
-
-        This must be implemented by the concrete subclass.
-        """
-        raise NotImplementedError
-
-    def post_order(self):
-        """Returns a post-order iterator for the tree.
-
-        This must be implemented by the concrete subclass.
-        """
-        raise NotImplementedError
-
-    def pre_order(self):
-        """Returns a pre-order iterator for the tree.
-
-        This must be implemented by the concrete subclass.
-        """
-        raise NotImplementedError
-
-    def set_prefix(self, prefix):
-        """Sets the prefix for the node (see Leaf class).
-
-        This must be implemented by the concrete subclass.
-        """
-        raise NotImplementedError
-
-    def get_prefix(self):
-        """Returns the prefix for the node (see Leaf class).
-
-        This must be implemented by the concrete subclass.
-        """
-        raise NotImplementedError
-
-    def replace(self, new):
-        """Replaces this node with a new one in the parent."""
-        assert self.parent is not None, str(self)
-        assert new is not None
-        if not isinstance(new, list):
-            new = [new]
-        l_children = []
-        found = False
-        for ch in self.parent.children:
-            if ch is self:
-                assert not found, (self.parent.children, self, new)
-                if new is not None:
-                    l_children.extend(new)
-                found = True
-            else:
-                l_children.append(ch)
-        assert found, (self.children, self, new)
-        self.parent.changed()
-        self.parent.children = l_children
-        for x in new:
-            x.parent = self.parent
-        self.parent = None
-
-    def get_lineno(self):
-        """Returns the line number which generated the invocant node."""
-        node = self
-        while not isinstance(node, Leaf):
-            if not node.children:
-                return
-            node = node.children[0]
-        return node.lineno
-
-    def changed(self):
-        if self.parent:
-            self.parent.changed()
-        self.was_changed = True
-
-    def remove(self):
-        """Remove the node from the tree. Returns the position of the node
-        in its parent's children before it was removed."""
-        if self.parent:
-            for i, node in enumerate(self.parent.children):
-                if node is self:
-                    self.parent.changed()
-                    del self.parent.children[i]
-                    self.parent = None
-                    return i
-
-    def get_next_sibling(self):
-        """Return the node immediately following the invocant in their
-        parent's children list. If the invocant does not have a next
-        sibling, return None."""
-        if self.parent is None:
-            return None
-
-        # Can't use index(); we need to test by identity
-        for i, sibling in enumerate(self.parent.children):
-            if sibling is self:
-                try:
-                    return self.parent.children[i+1]
-                except IndexError:
-                    return None
-
-    def get_suffix(self):
-        """Return the string immediately following the invocant node. This
-        is effectively equivalent to node.get_next_sibling().get_prefix()"""
-        next_sib = self.get_next_sibling()
-        if next_sib is None:
-            return ""
-        return next_sib.get_prefix()
-
-
-class Node(Base):
-
-    """Concrete implementation for interior nodes."""
-
-    def __init__(self, type, children, context=None, prefix=None):
-        """Initializer.
-
-        Takes a type constant (a symbol number >= 256), a sequence of
-        child nodes, and an optional context keyword argument.
-
-        As a side effect, the parent pointers of the children are updated.
-        """
-        assert type >= 256, type
-        self.type = type
-        self.children = list(children)
-        for ch in self.children:
-            assert ch.parent is None, repr(ch)
-            ch.parent = self
-        if prefix is not None:
-            self.set_prefix(prefix)
-
-    def __repr__(self):
-        """Returns a canonical string representation."""
-        return "%s(%r, %r)" % (self.__class__.__name__,
-                               self.type,
-                               self.children)
-
-    def __str__(self):
-        """Returns a pretty string representation.
-
-        This reproduces the input source exactly.
-        """
-        return "".join(map(str, self.children))
-
-    def _eq(self, other):
-        """Compares two nodes for equality."""
-        return (self.type, self.children) == (other.type, other.children)
-
-    def clone(self):
-        """Returns a cloned (deep) copy of self."""
-        return Node(self.type, [ch.clone() for ch in self.children])
-
-    def post_order(self):
-        """Returns a post-order iterator for the tree."""
-        for child in self.children:
-            for node in child.post_order():
-                yield node
-        yield self
-
-    def pre_order(self):
-        """Returns a pre-order iterator for the tree."""
-        yield self
-        for child in self.children:
-            for node in child.pre_order():
-                yield node
-
-    def set_prefix(self, prefix):
-        """Sets the prefix for the node.
-
-        This passes the responsibility on to the first child.
-        """
-        if self.children:
-            self.children[0].set_prefix(prefix)
-
-    def get_prefix(self):
-        """Returns the prefix for the node.
-
-        This passes the call on to the first child.
-        """
-        if not self.children:
-            return ""
-        return self.children[0].get_prefix()
-
-    def set_child(self, i, child):
-        """Equivalent to 'node.children[i] = child'. This method also sets the
-        child's parent attribute appropriately."""
-        child.parent = self
-        self.children[i].parent = None
-        self.children[i] = child
-
-    def insert_child(self, i, child):
-        """Equivalent to 'node.children.insert(i, child)'. This method also
-        sets the child's parent attribute appropriately."""
-        child.parent = self
-        self.children.insert(i, child)
-
-    def append_child(self, child):
-        """Equivalent to 'node.children.append(child)'. This method also
-        sets the child's parent attribute appropriately."""
-        child.parent = self
-        self.children.append(child)
-
-
-class Leaf(Base):
-
-    """Concrete implementation for leaf nodes."""
-
-    # Default values for instance variables
-    prefix = ""  # Whitespace and comments preceding this token in the input
-    lineno = 0   # Line where this token starts in the input
-    column = 0   # Column where this token starts in the input
-
-    def __init__(self, type, value, context=None, prefix=None):
-        """Initializer.
-
-        Takes a type constant (a token number < 256), a string value,
-        and an optional context keyword argument.
-        """
-        assert 0 <= type < 256, type
-        if context is not None:
-            self.prefix, (self.lineno, self.column) = context
-        self.type = type
-        self.value = value
-        if prefix is not None:
-            self.prefix = prefix
-
-    def __repr__(self):
-        """Returns a canonical string representation."""
-        return "%s(%r, %r)" % (self.__class__.__name__,
-                               self.type,
-                               self.value)
-
-    def __str__(self):
-        """Returns a pretty string representation.
-
-        This reproduces the input source exactly.
-        """
-        return self.prefix + str(self.value)
-
-    def _eq(self, other):
-        """Compares two nodes for equality."""
-        return (self.type, self.value) == (other.type, other.value)
-
-    def clone(self):
-        """Returns a cloned (deep) copy of self."""
-        return Leaf(self.type, self.value,
-                    (self.prefix, (self.lineno, self.column)))
-
-    def post_order(self):
-        """Returns a post-order iterator for the tree."""
-        yield self
-
-    def pre_order(self):
-        """Returns a pre-order iterator for the tree."""
-        yield self
-
-    def set_prefix(self, prefix):
-        """Sets the prefix for the node."""
-        self.changed()
-        self.prefix = prefix
-
-    def get_prefix(self):
-        """Returns the prefix for the node."""
-        return self.prefix
-
-
-def convert(gr, raw_node):
-    """Converts raw node information to a Node or Leaf instance.
-
-    This is passed to the parser driver which calls it whenever a
-    reduction of a grammar rule produces a new complete node, so that
-    the tree is built strictly bottom-up.
-    """
-    type, value, context, children = raw_node
-    if children or type in gr.number2symbol:
-        # If there's exactly one child, return that child instead of
-        # creating a new node.
-        if len(children) == 1:
-            return children[0]
-        return Node(type, children, context=context)
-    else:
-        return Leaf(type, value, context=context)
-
-
-class BasePattern(object):
-
-    """A pattern is a tree matching pattern.
-
-    It looks for a specific node type (token or symbol), and
-    optionally for a specific content.
-
-    This is an abstract base class.  There are three concrete
-    subclasses:
-
-    - LeafPattern matches a single leaf node;
-    - NodePattern matches a single node (usually non-leaf);
-    - WildcardPattern matches a sequence of nodes of variable length.
-    """
-
-    # Defaults for instance variables
-    type = None     # Node type (token if < 256, symbol if >= 256)
-    content = None  # Optional content matching pattern
-    name = None     # Optional name used to store match in results dict
-
-    def __new__(cls, *args, **kwds):
-        """Constructor that prevents BasePattern from being instantiated."""
-        assert cls is not BasePattern, "Cannot instantiate BasePattern"
-        return object.__new__(cls)
-
-    def __repr__(self):
-        args = [self.type, self.content, self.name]
-        while args and args[-1] is None:
-            del args[-1]
-        return "%s(%s)" % (self.__class__.__name__, ", ".join(map(repr, args)))
-
-    def optimize(self):
-        """A subclass can define this as a hook for optimizations.
-
-        Returns either self or another node with the same effect.
-        """
-        return self
-
-    def match(self, node, results=None):
-        """Does this pattern exactly match a node?
-
-        Returns True if it matches, False if not.
-
-        If results is not None, it must be a dict which will be
-        updated with the nodes matching named subpatterns.
-
-        Default implementation for non-wildcard patterns.
-        """
-        if self.type is not None and node.type != self.type:
-            return False
-        if self.content is not None:
-            r = None
-            if results is not None:
-                r = {}
-            if not self._submatch(node, r):
-                return False
-            if r:
-                results.update(r)
-        if results is not None and self.name:
-            results[self.name] = node
-        return True
-
-    def match_seq(self, nodes, results=None):
-        """Does this pattern exactly match a sequence of nodes?
-
-        Default implementation for non-wildcard patterns.
-        """
-        if len(nodes) != 1:
-            return False
-        return self.match(nodes[0], results)
-
-    def generate_matches(self, nodes):
-        """Generator yielding all matches for this pattern.
-
-        Default implementation for non-wildcard patterns.
-        """
-        r = {}
-        if nodes and self.match(nodes[0], r):
-            yield 1, r
-
-
-class LeafPattern(BasePattern):
-
-    def __init__(self, type=None, content=None, name=None):
-        """Initializer.  Takes optional type, content, and name.
-
-        The type, if given, must be a token type (< 256).  If not given,
-        this matches any *leaf* node; the content may still be required.
-
-        The content, if given, must be a string.
-
-        If a name is given, the matching node is stored in the results
-        dict under that key.
-        """
-        if type is not None:
-            assert 0 <= type < 256, type
-        if content is not None:
-            assert isinstance(content, basestring), repr(content)
-        self.type = type
-        self.content = content
-        self.name = name
-
-    def match(self, node, results=None):
-        """Override match() to insist on a leaf node."""
-        if not isinstance(node, Leaf):
-            return False
-        return BasePattern.match(self, node, results)
-
-    def _submatch(self, node, results=None):
-        """Match the pattern's content to the node's children.
-
-        This assumes the node type matches and self.content is not None.
-
-        Returns True if it matches, False if not.
-
-        If results is not None, it must be a dict which will be
-        updated with the nodes matching named subpatterns.
-
-        When returning False, the results dict may still be updated.
-        """
-        return self.content == node.value
-
-
-class NodePattern(BasePattern):
-
-    wildcards = False
-
-    def __init__(self, type=None, content=None, name=None):
-        """Initializer.  Takes optional type, content, and name.
-
-        The type, if given, must be a symbol type (>= 256).  If the
-        type is None this matches *any* single node (leaf or not),
-        except if content is not None, in which case it only matches
-        non-leaf nodes that also match the content pattern.
-
-        The content, if not None, must be a sequence of Patterns that
-        must match the node's children exactly.  If the content is
-        given, the type must not be None.
-
-        If a name is given, the matching node is stored in the results
-        dict under that key.
-        """
-        if type is not None:
-            assert type >= 256, type
-        if content is not None:
-            assert not isinstance(content, basestring), repr(content)
-            content = list(content)
-            for i, item in enumerate(content):
-                assert isinstance(item, BasePattern), (i, item)
-                if isinstance(item, WildcardPattern):
-                    self.wildcards = True
-        self.type = type
-        self.content = content
-        self.name = name
-
-    def _submatch(self, node, results=None):
-        """Match the pattern's content to the node's children.
-
-        This assumes the node type matches and self.content is not None.
-
-        Returns True if it matches, False if not.
-
-        If results is not None, it must be a dict which will be
-        updated with the nodes matching named subpatterns.
-
-        When returning False, the results dict may still be updated.
-        """
-        if self.wildcards:
-            for c, r in generate_matches(self.content, node.children):
-                if c == len(node.children):
-                    if results is not None:
-                        results.update(r)
-                    return True
-            return False
-        if len(self.content) != len(node.children):
-            return False
-        for subpattern, child in zip(self.content, node.children):
-            if not subpattern.match(child, results):
-                return False
-        return True
-
-
-class WildcardPattern(BasePattern):
-
-    """A wildcard pattern can match zero or more nodes.
-
-    This has all the flexibility needed to implement patterns like:
-
-    .*      .+      .?      .{m,n}
-    (a b c | d e | f)
-    (...)*  (...)+  (...)?  (...){m,n}
-
-    except it always uses non-greedy matching.
-    """
-
-    def __init__(self, content=None, min=0, max=HUGE, name=None):
-        """Initializer.
-
-        Args:
-            content: optional sequence of subsequences of patterns;
-                     if absent, matches one node;
-                     if present, each subsequence is an alternative [*]
-            min: optional minimum number of times to match, default 0
-            max: optional maximum number of times to match, default HUGE
-            name: optional name assigned to this match
-
-        [*] Thus, if content is [[a, b, c], [d, e], [f, g, h]] this is
-            equivalent to (a b c | d e | f g h); if content is None,
-            this is equivalent to '.' in regular expression terms.
-            The min and max parameters work as follows:
-                min=0, max=maxint: .*
-                min=1, max=maxint: .+
-                min=0, max=1: .?
-                min=1, max=1: .
-            If content is not None, replace the dot with the parenthesized
-            list of alternatives, e.g. (a b c | d e | f g h)*
-        """
-        assert 0 <= min <= max <= HUGE, (min, max)
-        if content is not None:
-            content = tuple(map(tuple, content))  # Protect against alterations
-            # Check sanity of alternatives
-            assert len(content), repr(content)  # Can't have zero alternatives
-            for alt in content:
-                assert len(alt), repr(alt)  # Can't have empty alternatives
-        self.content = content
-        self.min = min
-        self.max = max
-        self.name = name
-
-    def optimize(self):
-        """Optimize certain stacked wildcard patterns."""
-        subpattern = None
-        if (self.content is not None and
-            len(self.content) == 1 and len(self.content[0]) == 1):
-            subpattern = self.content[0][0]
-        if self.min == 1 and self.max == 1:
-            if self.content is None:
-                return NodePattern(name=self.name)
-            if subpattern is not None and self.name == subpattern.name:
-                return subpattern.optimize()
-        if (self.min <= 1 and isinstance(subpattern, WildcardPattern) and
-            subpattern.min <= 1 and self.name == subpattern.name):
-            return WildcardPattern(subpattern.content,
-                                   self.min*subpattern.min,
-                                   self.max*subpattern.max,
-                                   subpattern.name)
-        return self
-
-    def match(self, node, results=None):
-        """Does this pattern exactly match a node?"""
-        return self.match_seq([node], results)
-
-    def match_seq(self, nodes, results=None):
-        """Does this pattern exactly match a sequence of nodes?"""
-        for c, r in self.generate_matches(nodes):
-            if c == len(nodes):
-                if results is not None:
-                    results.update(r)
-                    if self.name:
-                        results[self.name] = list(nodes)
-                return True
-        return False
-
-    def generate_matches(self, nodes):
-        """Generator yielding matches for a sequence of nodes.
-
-        Args:
-            nodes: sequence of nodes
-
-        Yields:
-            (count, results) tuples where:
-            count: the match comprises nodes[:count];
-            results: dict containing named submatches.
-        """
-        if self.content is None:
-            # Shortcut for special case (see __init__.__doc__)
-            for count in xrange(self.min, 1 + min(len(nodes), self.max)):
-                r = {}
-                if self.name:
-                    r[self.name] = nodes[:count]
-                yield count, r
-        else:
-            for count, r in self._recursive_matches(nodes, 0):
-                if self.name:
-                    r[self.name] = nodes[:count]
-                yield count, r
-
-    def _recursive_matches(self, nodes, count):
-        """Helper to recursively yield the matches."""
-        assert self.content is not None
-        if count >= self.min:
-            r = {}
-            if self.name:
-                r[self.name] = nodes[:0]
-            yield 0, r
-        if count < self.max:
-            for alt in self.content:
-                for c0, r0 in generate_matches(alt, nodes):
-                    for c1, r1 in self._recursive_matches(nodes[c0:], count+1):
-                        r = {}
-                        r.update(r0)
-                        r.update(r1)
-                        yield c0 + c1, r
-
-
-class NegatedPattern(BasePattern):
-
-    def __init__(self, content=None):
-        """Initializer.
-
-        The argument is either a pattern or None.  If it is None, this
-        only matches an empty sequence (effectively '$' in regex
-        lingo).  If it is not None, this matches whenever the argument
-        pattern doesn't have any matches.
-        """
-        if content is not None:
-            assert isinstance(content, BasePattern), repr(content)
-        self.content = content
-
-    def match(self, node):
-        # We never match a node in its entirety
-        return False
-
-    def match_seq(self, nodes):
-        # We only match an empty sequence of nodes in its entirety
-        return len(nodes) == 0
-
-    def generate_matches(self, nodes):
-        if self.content is None:
-            # Return a match if there is an empty sequence
-            if len(nodes) == 0:
-                yield 0, {}
-        else:
-            # Return a match if the argument pattern has no matches
-            for c, r in self.content.generate_matches(nodes):
-                return
-            yield 0, {}
-
-
-def generate_matches(patterns, nodes):
-    """Generator yielding matches for a sequence of patterns and nodes.
-
-    Args:
-        patterns: a sequence of patterns
-        nodes: a sequence of nodes
-
-    Yields:
-        (count, results) tuples where:
-        count: the entire sequence of patterns matches nodes[:count];
-        results: dict containing named submatches.
-    """
-    if not patterns:
-        yield 0, {}
-    else:
-        p, rest = patterns[0], patterns[1:]
-        for c0, r0 in p.generate_matches(nodes):
-            if not rest:
-                yield c0, r0
-            else:
-                for c1, r1 in generate_matches(rest, nodes[c0:]):
-                    r = {}
-                    r.update(r0)
-                    r.update(r1)
-                    yield c0 + c1, r
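As a side note, the non-greedy matching semantics of generate_matches() and the
wildcard patterns above can be illustrated with a standalone sketch.  The Lit
and Any classes here are illustrative stand-ins, not part of the lib2to3 API:

```python
# Simplified, self-contained sketch of the generate_matches() semantics.

def generate_matches(patterns, nodes):
    """Yield (count, results) tuples: the whole pattern sequence matches
    nodes[:count], with named submatches collected in results."""
    if not patterns:
        yield 0, {}
        return
    p, rest = patterns[0], patterns[1:]
    for c0, r0 in p.generate_matches(nodes):
        if not rest:
            yield c0, r0
        else:
            for c1, r1 in generate_matches(rest, nodes[c0:]):
                r = dict(r0)
                r.update(r1)
                yield c0 + c1, r

class Lit(object):
    """Matches exactly one node equal to a literal value (a toy LeafPattern)."""
    def __init__(self, value, name=None):
        self.value, self.name = value, name
    def generate_matches(self, nodes):
        if nodes and nodes[0] == self.value:
            yield 1, ({self.name: nodes[0]} if self.name else {})

class Any(object):
    """Matches between min and max arbitrary nodes, shortest match first
    (the non-greedy behavior described in WildcardPattern's docstring)."""
    def __init__(self, min=0, max=10):
        self.min, self.max = min, max
    def generate_matches(self, nodes):
        for count in range(self.min, 1 + min(len(nodes), self.max)):
            yield count, {}

pats = [Lit("a"), Any(), Lit("z", name="end")]
nodes = ["a", "b", "c", "z"]
matches = list(generate_matches(pats, nodes))
# The only full match consumes all four nodes and binds "end" to "z".
```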

Deleted: /sandbox/trunk/2to3/refactor.py
==============================================================================
--- /sandbox/trunk/2to3/refactor.py	Sun Mar 16 20:36:15 2008
+++ (empty file)
@@ -1,525 +0,0 @@
-#!/usr/bin/env python2.5
-# Copyright 2006 Google, Inc. All Rights Reserved.
-# Licensed to PSF under a Contributor Agreement.
-
-"""Refactoring framework.
-
-Used as a main program, this can refactor any number of files and/or
-recursively descend down directories.  Imported as a module, this
-provides infrastructure to write your own refactoring tool.
-"""
-
-__author__ = "Guido van Rossum <guido at python.org>"
-
-
-# Python imports
-import os
-import sys
-import difflib
-import optparse
-import logging
-
-# Local imports
-import pytree
-import patcomp
-from pgen2 import driver
-from pgen2 import tokenize
-import fixes
-import pygram
-
-if sys.version_info < (2, 4):
-    hdlr = logging.StreamHandler()
-    fmt = logging.Formatter('%(name)s: %(message)s')
-    hdlr.setFormatter(fmt)
-    logging.root.addHandler(hdlr)
-else:
-    logging.basicConfig(format='%(name)s: %(message)s', level=logging.INFO)
-
-
-def main(args=None):
-    """Main program.
-
-    Call without arguments to use sys.argv[1:] as the arguments; or
-    call with a list of arguments (excluding sys.argv[0]).
-
-    Returns a suggested exit status (0, 1, 2).
-    """
-    # Set up option parser
-    parser = optparse.OptionParser(usage="refactor.py [options] file|dir ...")
-    parser.add_option("-d", "--doctests_only", action="store_true",
-                      help="Fix up doctests only")
-    parser.add_option("-f", "--fix", action="append", default=[],
-                      help="Each FIX specifies a transformation; default all")
-    parser.add_option("-l", "--list-fixes", action="store_true",
-                      help="List available transformations (fixes/fix_*.py)")
-    parser.add_option("-p", "--print-function", action="store_true",
-                      help="Modify the grammar so that print() is a function")
-    parser.add_option("-v", "--verbose", action="store_true",
-                      help="More verbose logging")
-    parser.add_option("-w", "--write", action="store_true",
-                      help="Write back modified files")
-
-    # Parse command line arguments
-    options, args = parser.parse_args(args)
-    if options.list_fixes:
-        print "Available transformations for the -f/--fix option:"
-        for fixname in get_all_fix_names():
-            print fixname
-        if not args:
-            return 0
-    if not args:
-        print >>sys.stderr, "At least one file or directory argument required."
-        print >>sys.stderr, "Use --help to show usage."
-        return 2
-
-    # Initialize the refactoring tool
-    rt = RefactoringTool(options)
-
-    # Refactor all files and directories passed as arguments
-    if not rt.errors:
-        rt.refactor_args(args)
-        rt.summarize()
-
-    # Return error status (0 if rt.errors is zero)
-    return int(bool(rt.errors))
-
-
-def get_all_fix_names():
-    """Return a sorted list of all available fix names."""
-    fix_names = []
-    names = os.listdir(os.path.dirname(fixes.__file__))
-    names.sort()
-    for name in names:
-        if name.startswith("fix_") and name.endswith(".py"):
-            fix_names.append(name[4:-3])
-    fix_names.sort()
-    return fix_names
-
-
-class RefactoringTool(object):
-
-    def __init__(self, options):
-        """Initializer.
-
-        The argument is an optparse.Values instance.
-        """
-        self.options = options
-        self.errors = []
-        self.logger = logging.getLogger("RefactoringTool")
-        self.fixer_log = []
-        if self.options.print_function:
-            del pygram.python_grammar.keywords["print"]
-        self.driver = driver.Driver(pygram.python_grammar,
-                                    convert=pytree.convert,
-                                    logger=self.logger)
-        self.pre_order, self.post_order = self.get_fixers()
-        self.files = []  # List of files that were or should be modified
-
-    def get_fixers(self):
-        """Inspects the options to load the requested patterns and handlers.
-        
-        Returns:
-          (pre_order, post_order), where pre_order is the list of fixers that
-          want a pre-order AST traversal, and post_order is the list that want
-          post-order traversal.
-        """
-        pre_order_fixers = []
-        post_order_fixers = []
-        fix_names = self.options.fix
-        if not fix_names or "all" in fix_names:
-            fix_names = get_all_fix_names()
-        for fix_name in fix_names:
-            try:
-                mod = __import__("fixes.fix_" + fix_name, {}, {}, ["*"])
-            except ImportError:
-                self.log_error("Can't find transformation %s", fix_name)
-                continue
-            parts = fix_name.split("_")
-            class_name = "Fix" + "".join([p.title() for p in parts])
-            try:
-                fix_class = getattr(mod, class_name)
-            except AttributeError:
-                self.log_error("Can't find fixes.fix_%s.%s",
-                               fix_name, class_name)
-                continue
-            try:
-                fixer = fix_class(self.options, self.fixer_log)
-            except Exception, err:
-                self.log_error("Can't instantiate fixes.fix_%s.%s()",
-                               fix_name, class_name, exc_info=True)
-                continue
-            if fixer.explicit and fix_name not in self.options.fix:
-                self.log_message("Skipping implicit fixer: %s", fix_name)
-                continue
-
-            if self.options.verbose:
-                self.log_message("Adding transformation: %s", fix_name)
-            if fixer.order == "pre":
-                pre_order_fixers.append(fixer)
-            elif fixer.order == "post":
-                post_order_fixers.append(fixer)
-            else:
-                raise ValueError("Illegal fixer order: %r" % fixer.order)
-        return (pre_order_fixers, post_order_fixers)
-
-    def log_error(self, msg, *args, **kwds):
-        """Increments the error count and logs a message."""
-        self.errors.append((msg, args, kwds))
-        self.logger.error(msg, *args, **kwds)
-
-    def log_message(self, msg, *args):
-        """Hook to log a message."""
-        if args:
-            msg = msg % args
-        self.logger.info(msg)
-
-    def refactor_args(self, args):
-        """Refactors files and directories from an argument list."""
-        for arg in args:
-            if arg == "-":
-                self.refactor_stdin()
-            elif os.path.isdir(arg):
-                self.refactor_dir(arg)
-            else:
-                self.refactor_file(arg)
-
-    def refactor_dir(self, arg):
-        """Descends into a directory and refactors every Python file found.
-
-        Python files are assumed to have a .py extension.
-
-        Files and subdirectories starting with '.' are skipped.
-        """
-        for dirpath, dirnames, filenames in os.walk(arg):
-            if self.options.verbose:
-                self.log_message("Descending into %s", dirpath)
-            dirnames.sort()
-            filenames.sort()
-            for name in filenames:
-                if not name.startswith(".") and name.endswith(".py"):
-                    fullname = os.path.join(dirpath, name)
-                    self.refactor_file(fullname)
-            # Modify dirnames in-place to remove subdirs with leading dots
-            dirnames[:] = [dn for dn in dirnames if not dn.startswith(".")]
-
-    def refactor_file(self, filename):
-        """Refactors a file."""
-        try:
-            f = open(filename)
-        except IOError, err:
-            self.log_error("Can't open %s: %s", filename, err)
-            return
-        try:
-            input = f.read() + "\n" # Silence certain parse errors
-        finally:
-            f.close()
-        if self.options.doctests_only:
-            if self.options.verbose:
-                self.log_message("Refactoring doctests in %s", filename)
-            output = self.refactor_docstring(input, filename)
-            if output != input:
-                self.write_file(output, filename, input)
-            elif self.options.verbose:
-                self.log_message("No doctest changes in %s", filename)
-        else:
-            tree = self.refactor_string(input, filename)
-            if tree and tree.was_changed:
-                # The [:-1] is to take off the \n we added earlier
-                self.write_file(str(tree)[:-1], filename)
-            elif self.options.verbose:
-                self.log_message("No changes in %s", filename)
-
-    def refactor_string(self, data, name):
-        """Refactor a given input string.
-        
-        Args:
-            data: a string holding the code to be refactored.
-            name: a human-readable name for use in error/log messages.
-            
-        Returns:
-            An AST corresponding to the refactored input stream; None if
-            there were errors during the parse.
-        """
-        try:
-            tree = self.driver.parse_string(data)
-        except Exception, err:
-            self.log_error("Can't parse %s: %s: %s",
-                           name, err.__class__.__name__, err)
-            return
-        if self.options.verbose:
-            self.log_message("Refactoring %s", name)
-        self.refactor_tree(tree, name)
-        return tree
-
-    def refactor_stdin(self):
-        if self.options.write:
-            self.log_error("Can't write changes back to stdin")
-            return
-        input = sys.stdin.read()
-        if self.options.doctests_only:
-            if self.options.verbose:
-                self.log_message("Refactoring doctests in stdin")
-            output = self.refactor_docstring(input, "<stdin>")
-            if output != input:
-                self.write_file(output, "<stdin>", input)
-            elif self.options.verbose:
-                self.log_message("No doctest changes in stdin")
-        else:
-            tree = self.refactor_string(input, "<stdin>")
-            if tree and tree.was_changed:
-                self.write_file(str(tree), "<stdin>", input)
-            elif self.options.verbose:
-                self.log_message("No changes in stdin")
-
-    def refactor_tree(self, tree, name):
-        """Refactors a parse tree (modifying the tree in place).
-        
-        Args:
-            tree: a pytree.Node instance representing the root of the tree
-                  to be refactored.
-            name: a human-readable name for this tree.
-        
-        Returns:
-            True if the tree was modified, False otherwise.
-        """
-        all_fixers = self.pre_order + self.post_order
-        for fixer in all_fixers:
-            fixer.start_tree(tree, name)
-
-        self.traverse_by(self.pre_order, tree.pre_order())
-        self.traverse_by(self.post_order, tree.post_order())
-
-        for fixer in all_fixers:
-            fixer.finish_tree(tree, name)
-        return tree.was_changed
-
-    def traverse_by(self, fixers, traversal):
-        """Traverse an AST, applying a set of fixers to each node.
-        
-        This is a helper method for refactor_tree().
-        
-        Args:
-            fixers: a list of fixer instances.
-            traversal: a generator that yields AST nodes.
-        
-        Returns:
-            None
-        """
-        if not fixers:
-            return
-        for node in traversal:
-            for fixer in fixers:
-                results = fixer.match(node)
-                if results:
-                    new = fixer.transform(node, results)
-                    if new is not None and (new != node or
-                                            str(new) != str(node)):
-                        node.replace(new)
-                        node = new
-
-    def write_file(self, new_text, filename, old_text=None):
-        """Writes a string to a file.
-
-        If there are no changes, this is a no-op.
-
-        Otherwise, it first shows a unified diff between the old text
-        and the new text, and then rewrites the file; the latter is
-        only done if the write option is set.
-        """
-        self.files.append(filename)
-        if old_text is None:
-            try:
-                f = open(filename, "r")
-            except IOError, err:
-                self.log_error("Can't read %s: %s", filename, err)
-                return
-            try:
-                old_text = f.read()
-            finally:
-                f.close()
-        if old_text == new_text:
-            if self.options.verbose:
-                self.log_message("No changes to %s", filename)
-            return
-        diff_texts(old_text, new_text, filename)
-        if not self.options.write:
-            if self.options.verbose:
-                self.log_message("Not writing changes to %s", filename)
-            return
-        backup = filename + ".bak"
-        if os.path.lexists(backup):
-            try:
-                os.remove(backup)
-            except os.error, err:
-                self.log_message("Can't remove backup %s", backup)
-        try:
-            os.rename(filename, backup)
-        except os.error, err:
-            self.log_message("Can't rename %s to %s", filename, backup)
-        try:
-            f = open(filename, "w")
-        except os.error, err:
-            self.log_error("Can't create %s: %s", filename, err)
-            return
-        try:
-            try:
-                f.write(new_text)
-            except os.error, err:
-                self.log_error("Can't write %s: %s", filename, err)
-        finally:
-            f.close()
-        if self.options.verbose:
-            self.log_message("Wrote changes to %s", filename)
-
-    PS1 = ">>> "
-    PS2 = "... "
-
-    def refactor_docstring(self, input, filename):
-        """Refactors a docstring, looking for doctests.
-
-        This returns a modified version of the input string.  It looks
-        for doctests, which start with a ">>>" prompt, and may be
-        continued with "..." prompts, as long as the "..." is indented
-        the same as the ">>>".
-
-        (Unfortunately we can't use the doctest module's parser,
-        since, like most parsers, it is not geared towards preserving
-        the original source.)
-        """
-        result = []
-        block = None
-        block_lineno = None
-        indent = None
-        lineno = 0
-        for line in input.splitlines(True):
-            lineno += 1
-            if line.lstrip().startswith(self.PS1):
-                if block is not None:
-                    result.extend(self.refactor_doctest(block, block_lineno,
-                                                        indent, filename))
-                block_lineno = lineno
-                block = [line]
-                i = line.find(self.PS1)
-                indent = line[:i]
-            elif (indent is not None and
-                  (line.startswith(indent + self.PS2) or
-                   line == indent + self.PS2.rstrip() + "\n")):
-                block.append(line)
-            else:
-                if block is not None:
-                    result.extend(self.refactor_doctest(block, block_lineno,
-                                                        indent, filename))
-                block = None
-                indent = None
-                result.append(line)
-        if block is not None:
-            result.extend(self.refactor_doctest(block, block_lineno,
-                                                indent, filename))
-        return "".join(result)
-
-    def refactor_doctest(self, block, lineno, indent, filename):
-        """Refactors one doctest.
-
-        A doctest is given as a block of lines, the first of which starts
-        with ">>>" (possibly indented), while the remaining lines start
-        with "..." (identically indented).
-
-        """
-        try:
-            tree = self.parse_block(block, lineno, indent)
-        except Exception, err:
-            if self.options.verbose:
-                for line in block:
-                    self.log_message("Source: %s", line.rstrip("\n"))
-            self.log_error("Can't parse docstring in %s line %s: %s: %s",
-                           filename, lineno, err.__class__.__name__, err)
-            return block
-        if self.refactor_tree(tree, filename):
-            new = str(tree).splitlines(True)
-            # Undo the adjustment of the line numbers in wrap_toks() below.
-            clipped, new = new[:lineno-1], new[lineno-1:]
-            assert clipped == ["\n"] * (lineno-1), clipped
-            if not new[-1].endswith("\n"):
-                new[-1] += "\n"
-            block = [indent + self.PS1 + new.pop(0)]
-            if new:
-                block += [indent + self.PS2 + line for line in new]
-        return block
-
-    def summarize(self):
-        if self.options.write:
-            were = "were"
-        else:
-            were = "need to be"
-        if not self.files:
-            self.log_message("No files %s modified.", were)
-        else:
-            self.log_message("Files that %s modified:", were)
-            for file in self.files:
-                self.log_message(file)
-        if self.fixer_log:
-            self.log_message("Warnings/messages while refactoring:")
-            for message in self.fixer_log:
-                self.log_message(message)
-        if self.errors:
-            if len(self.errors) == 1:
-                self.log_message("There was 1 error:")
-            else:
-                self.log_message("There were %d errors:", len(self.errors))
-            for msg, args, kwds in self.errors:
-                self.log_message(msg, *args, **kwds)
-
-    def parse_block(self, block, lineno, indent):
-        """Parses a block into a tree.
-
-        This is necessary to get correct line number / offset information
-        in the parser diagnostics and embedded into the parse tree.
-        """
-        return self.driver.parse_tokens(self.wrap_toks(block, lineno, indent))
-
-    def wrap_toks(self, block, lineno, indent):
-        """Wraps a tokenize stream to systematically modify start/end."""
-        tokens = tokenize.generate_tokens(self.gen_lines(block, indent).next)
-        for type, value, (line0, col0), (line1, col1), line_text in tokens:
-            line0 += lineno - 1
-            line1 += lineno - 1
-            # Don't bother updating the columns; this is too complicated
-            # since line_text would also have to be updated and it would
-            # still break for tokens spanning lines.  Let the user guess
-            # that the column numbers for doctests are relative to the
-            # end of the prompt string (PS1 or PS2).
-            yield type, value, (line0, col0), (line1, col1), line_text
-
-
-    def gen_lines(self, block, indent):
-        """Generates lines as expected by tokenize from a list of lines.
-
-        This strips the first len(indent + self.PS1) characters off each line.
-        """
-        prefix1 = indent + self.PS1
-        prefix2 = indent + self.PS2
-        prefix = prefix1
-        for line in block:
-            if line.startswith(prefix):
-                yield line[len(prefix):]
-            elif line == prefix.rstrip() + "\n":
-                yield "\n"
-            else:
-                raise AssertionError("line=%r, prefix=%r" % (line, prefix))
-            prefix = prefix2
-        while True:
-            yield ""
-
-
-def diff_texts(a, b, filename):
-    """Prints a unified diff of two strings."""
-    a = a.splitlines()
-    b = b.splitlines()
-    for line in difflib.unified_diff(a, b, filename, filename,
-                                     "(original)", "(refactored)",
-                                     lineterm=""):
-        print line
-
-
-if __name__ == "__main__":
-    sys.exit(main())
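The diff_texts() helper deleted above is a thin wrapper around
difflib.unified_diff.  A minimal sketch of the same call, returning the diff
lines instead of printing them (the function name and sample strings here are
illustrative):

```python
import difflib

def diff_lines(a, b, filename):
    # Same unified_diff call as diff_texts() above, but returns the lines.
    return list(difflib.unified_diff(a.splitlines(), b.splitlines(),
                                     filename, filename,
                                     "(original)", "(refactored)",
                                     lineterm=""))

old = "print 'hello'\nx = 1\n"
new = "print('hello')\nx = 1\n"
lines = diff_lines(old, new, "example.py")
# lines[0] is the "--- example.py  (original)" header, followed by the
# "+++" header, a "@@" hunk marker, and the changed/context lines.
```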

Added: sandbox/trunk/2to3/setup.py
==============================================================================
--- (empty file)
+++ sandbox/trunk/2to3/setup.py	Sun Mar 16 20:36:15 2008
@@ -0,0 +1,8 @@
+from distutils.core import setup
+
+setup(
+    name="2to3",
+    packages=['lib2to3', 'lib2to3.fixes', 'pgen2'],
+    package_data={'lib2to3': ['Grammar.txt', 'PatternGrammar.txt']},
+    scripts=["2to3"]
+)
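The fixes packaged here follow the naming convention that get_fixers() (in the
refactor.py deleted above) depends on: fixes/fix_<name>.py must define a class
named Fix<CamelName>.  A sketch of that derivation:

```python
def fixer_class_name(fix_name):
    # Mirrors get_fixers(): "has_key" -> "FixHasKey", "print" -> "FixPrint"
    return "Fix" + "".join(p.title() for p in fix_name.split("_"))
```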

Modified: sandbox/trunk/2to3/test.py
==============================================================================
--- sandbox/trunk/2to3/test.py	(original)
+++ sandbox/trunk/2to3/test.py	Sun Mar 16 20:36:15 2008
@@ -6,7 +6,7 @@
 """
 # Author: Collin Winter
 
-import tests
-import tests.support
+from lib2to3 import tests
+import lib2to3.tests.support
 
 tests.support.run_all_tests(tests=tests.all_tests)


More information about the Python-checkins mailing list