On Tuesday, July 19, 2016 at 5:06:17 PM UTC+5:30, Neil Girdhar wrote:

On Tue, Jul 19, 2016 at 7:21 AM Steven D'Aprano wrote:
On Mon, Jul 18, 2016 at 10:29:34PM -0700, Rustom Mody wrote:

> IOW
> 1. The lexer is internally (evidently from the error message) so
> ASCII-oriented that any “unicode-junk” just defaults out to identifiers
> (presumably comments are dealt with earlier) and then if that lexing action
> fails it mistakenly pinpoints a wrong *identifier* rather than just an
> impermissible character like python 2

You seem to be jumping to a rather large conclusion here. Even if you
are right that the lexer considers all otherwise-unexpected characters
to be part of an identifier, why is that a problem?

It's a problem because those characters could never be part of an identifier.  So it seems like a bug.
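For concreteness, here is a small repro of the behavior being discussed. The source string and filename are made up, and the exact message text varies by Python 3 version (some releases blame an "identifier", later ones name the character), so no particular wording is asserted here:

```python
# Feed the compiler a line with a stray non-ASCII character
# (U+2192 RIGHTWARDS ARROW) outside any string or comment.
src = "x = 1 \u2192 2\n"
try:
    compile(src, "<repro>", "exec")
except SyntaxError as e:
    # U+2192 can never be part of an identifier, yet some Python 3
    # versions report "invalid character in identifier" here.
    print(e.msg, "at line", e.lineno)
```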

An armchair-design solution would say: give the most appropriate answer for every possible Unicode character category.
That would mean taking the cross-product of all the Unicode character categories with Python's lexical categories: a humongous task for little advantage.

A more practical solution would be to take the best of the current python2 and python3 approaches:
"Invalid character XX in line YY"
and reveal nothing about which lexical category (such as identifier) Python thinks the incoming character belongs to.

The XX is like python2's message and the YY like python3's.
If it can do better than '\xe2', i.e. report an actual codepoint rather than a raw byte, that's a bonus but not strictly necessary.
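The proposed message could be sketched as a pre-scan that names only the offending character and its line, never a lexical category. This is purely an illustration, not CPython's tokenizer: the name check_chars is my own, the identifier test via str.isidentifier is an approximation, and the sketch deliberately ignores that such characters are legal inside string literals and comments:

```python
def check_chars(src):
    """Yield a message of the proposed 'Invalid character XX in line YY'
    form for each character that could not belong to any Python token.
    Sketch only: string literals and comments are not excluded."""
    for lineno, line in enumerate(src.splitlines(), 1):
        for ch in line:
            # ASCII covers operators, digits, punctuation, whitespace;
            # prefixing 'a' tests whether ch may continue an identifier.
            if ord(ch) < 128 or ("a" + ch).isidentifier():
                continue
            yield f"Invalid character {ch!r} (U+{ord(ch):04X}) in line {lineno}"

for msg in check_chars("x = 1 \u2192 2"):
    print(msg)
```

Note that a legitimate non-ASCII identifier such as café passes the isidentifier test and draws no complaint, so the scan only flags characters that, as argued above, could never be part of an identifier.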