[ python-Bugs-1224621 ] tokenize module does not detect inconsistent dedents

SourceForge.net noreply at sourceforge.net
Tue Jun 21 09:15:59 CEST 2005


Bugs item #1224621 was opened at 2005-06-21 01:10
Message generated for change (Settings changed) made by rhettinger
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1224621&group_id=5470

Please note that this message contains a full copy of the comment thread
for this request, including the initial issue submission,
not just the latest update.
Category: Python Library
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Danny Yoo (dyoo)
>Assigned to: Raymond Hettinger (rhettinger)
Summary: tokenize module does not detect inconsistent dedents

Initial Comment:
The attached code snippet 'testcase.py' should produce an 
IndentationError, but does not.  The code in tokenize.py is too 
trusting and needs to check for inconsistent indentation as it 
yields DEDENT tokens.

I'm including a diff against tokenize.py that should at least raise an 
exception on bad indentation like this.
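
The diff itself is attached to the tracker item rather than reproduced in 
this message.  Purely as an illustration of the kind of consistency check 
being asked for, here is a hypothetical stand-alone wrapper (checked_tokens 
is not part of the tokenize module, and the attached patch may well look 
different): it re-tracks the indentation columns while iterating over 
generate_tokens() and raises IndentationError when a DEDENT lands on a 
column that matches no outer level.
------
import tokenize
from StringIO import StringIO

def checked_tokens(readline):
    """Sketch only: re-check indentation while iterating over
    tokenize.generate_tokens().  Columns are character offsets, so this
    assumes space-only indentation."""
    indents = [0]
    for tok in tokenize.generate_tokens(readline):
        toktype, tokstr, start, end, line = tok
        if toktype == tokenize.INDENT:
            indents.append(end[1])        # column of the new block
        elif toktype == tokenize.DEDENT:
            indents.pop()
            if start[1] not in indents:   # e.g. 'baz' at column 2 below
                raise IndentationError(
                    "unindent does not match any outer indentation level",
                    ("<tokenize>", start[0], start[1], line))
        yield tok

# With the bad sample below this raises instead of tokenizing silently:
#     list(checked_tokens(StringIO(sampleBadText).readline))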

Just in case, I'm including testcase.py here too:
------
import tokenize
from StringIO import StringIO
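# 'baz' dedents to column 2, which matches neither enclosing level (0 or 4).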
sampleBadText = """
def foo():
    bar
  baz
"""
print list(tokenize.generate_tokens(
    StringIO(sampleBadText).readline))

----------------------------------------------------------------------
