[Python-Dev] issue2180 and using 'tokenize' with Python 3 'str's
meadori at gmail.com
Tue Sep 28 05:15:48 CEST 2010
I was going through some of the open issues related to 'tokenize' and ran
across 'issue2180'. The reproduction case for this issue is along the lines of:
>>> tokenize.tokenize(io.StringIO("if 1:\n \\\n #hey\n print 1").readline)
but, with 'py3k' I get:
>>> tokenize.tokenize(io.StringIO("if 1:\n \\\n #hey\n print 1").readline)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/minge/Code/python/py3k/Lib/tokenize.py", line 360, in
encoding, consumed = detect_encoding(readline)
File "/Users/minge/Code/python/py3k/Lib/tokenize.py", line 316, in
TypeError: Can't convert 'bytes' object to str implicitly
which, as seen in the traceback, fails because the 'detect_encoding' function
in 'Lib/tokenize.py' searches for 'BOM_UTF8' (a 'bytes' object) in 'first',
the first line of the input to tokenize (a 'str' object). It seems to me that
'str' input should still be tokenizable, but maybe I am missing something.
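For reference, 'detect_encoding' behaves as expected when the readline
callable yields 'bytes', as it would for a source file opened in binary mode.
A minimal sketch (the coding cookie here is just an illustration, not taken
from the issue):

```python
import io
import tokenize

# detect_encoding expects a readline that returns bytes; it reads up to
# two lines looking for a BOM or a PEP 263 coding cookie.
source = b"# -*- coding: utf-8 -*-\nx = 1\n"
encoding, consumed = tokenize.detect_encoding(io.BytesIO(source).readline)
print(encoding)  # the cookie is found and reported as "utf-8"
```

Passing a 'str'-yielding readline, as in the reproduction above, is what
trips the 'bytes'-vs-'str' comparison inside this function.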
Is the implementation of 'detect_encoding' correct in how it attempts to
determine an encoding, or should I open an issue for this?
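In the meantime, two approaches seem to tokenize a 'str' without hitting
this error (a sketch with a simplified input, not the problematic backslash
case from issue2180): encode the string and hand 'bytes' to
'tokenize.tokenize', or pass the 'str' readline to 'tokenize.generate_tokens',
which skips encoding detection entirely.

```python
import io
import tokenize

source = "if 1:\n    print(1)\n"

# Option 1: encode to bytes so detect_encoding gets the type it expects;
# the first token emitted is an ENCODING token.
btoks = list(tokenize.tokenize(io.BytesIO(source.encode("utf-8")).readline))

# Option 2: generate_tokens accepts a str-yielding readline and performs
# no encoding detection, so it starts directly with the NAME token 'if'.
stoks = list(tokenize.generate_tokens(io.StringIO(source).readline))

print(btoks[0].type == tokenize.ENCODING)
print(stoks[0].string)
```

Whether 'generate_tokens' is meant to be the supported entry point for 'str'
input is part of what seems unclear here.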