[issue12691] tokenize.untokenize is broken
Gareth Rees
report at bugs.python.org
Fri Aug 5 00:21:29 CEST 2011
New submission from Gareth Rees <gdr at garethrees.org>:
tokenize.untokenize is completely broken when passed the full 5-tuple tokens that tokenize.tokenize produces.
Python 3.2.1 (default, Jul 19 2011, 00:09:43)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import tokenize, io
>>> t = list(tokenize.tokenize(io.BytesIO('1+1'.encode('utf8')).readline))
>>> tokenize.untokenize(t)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/local/Library/Frameworks/Python.framework/Versions/3.2/lib/python3.2/tokenize.py", line 250, in untokenize
    out = ut.untokenize(iterable)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/3.2/lib/python3.2/tokenize.py", line 179, in untokenize
    self.add_whitespace(start)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/3.2/lib/python3.2/tokenize.py", line 165, in add_whitespace
    assert row <= self.prev_row
AssertionError
The assertion is simply bogus: the <= should be >=.
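For reference, here is a minimal sketch of the relevant part of tokenize.Untokenizer from the 3.2 source named in the traceback (attribute names follow that source; all other bookkeeping is elided), with only the assertion direction flipped as proposed:

```python
class Untokenizer:
    """Reduced sketch of tokenize.Untokenizer (Python 3.2); only the
    whitespace-reconstruction step is shown, with the fixed assertion."""

    def __init__(self):
        self.tokens = []
        self.prev_row = 1
        self.prev_col = 0

    def add_whitespace(self, start):
        row, col = start
        # Fixed check: token start positions only ever move forward, so
        # the current row must be >= the previously emitted row.  The
        # shipped code asserts `row <= self.prev_row`, which fails as
        # soon as a token starts on a later row than the previous one.
        assert row >= self.prev_row
        col_offset = col - self.prev_col
        if col_offset:
            self.tokens.append(" " * col_offset)
```

With the original `<=` check, any token starting after row 1 trips the assertion; with `>=`, the column padding is emitted as intended.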
The reason no one has spotted this is that the unit tests for the tokenize module only ever exercise untokenize() in "compatibility" mode, passing in 2-tuples instead of 5-tuples.
I propose to fix this, and add unit tests, at the same time as fixing other problems with tokenize.py (issue12675).
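The compatibility-mode path that the tests do cover can be exercised like this (a sketch against the public tokenize API; since the 2-tuples carry no positions, this path never reaches add_whitespace and so never hits the assertion):

```python
import io
import tokenize

source = "1+1"
readline = io.BytesIO(source.encode("utf-8")).readline
tokens = list(tokenize.tokenize(readline))

# Compatibility mode: pass (type, string) 2-tuples instead of the full
# 5-tuples -- the only form the module's unit tests exercise.
compat = [(tok.type, tok.string) for tok in tokens]
result = tokenize.untokenize(compat)

# The stream starts with an ENCODING token, so the result comes back as
# bytes; it tokenizes to the same (type, string) sequence as the input.
```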
----------
components: Library (Lib)
messages: 141634
nosy: Gareth.Rees
priority: normal
severity: normal
status: open
title: tokenize.untokenize is broken
type: behavior
versions: Python 3.2, Python 3.3
_______________________________________
Python tracker <report at bugs.python.org>
<http://bugs.python.org/issue12691>
_______________________________________