On Wed, Sep 20, 2000 at 07:07:06PM +0200, Martin von Loewis wrote:
> Adding error productions to ignore input until stabilization may be feasible on top of the existing parser. Adding tokens in the right place is probably harder - I'd personally go for a pure Python solution that operates on Grammar/Grammar.
Don't forget that there are two kinds of SyntaxErrors in Python: those that are generated by the tokenizer/parser, and those that are actually generated by the (bytecode-)compiler. (Inconsistent indent/dedent errors, incorrect uses of (augmented) assignment, incorrect placement of particular keywords, etc., are all generated while actually compiling the code.) Also, in order to be really useful, the error indicator would have to be pretty intelligent. Imagine something like this:
        forever()
    and_ever()
    <tons more code using 4-space indent>
With the current interpreter, that would generate a single error, reported on the line below the one that is the actual problem. If you then continue searching for errors, you'll get tons and tons of them, all because the first line was indented too far.
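To make that off-by-one concrete, here is a quick check (my own sketch, not code from this thread; `compile()` is just a convenient way to trigger the error without executing anything):

```python
# The over-indented line is line 2, but the interpreter complains
# about line 3, where the dedent fails to match any open indent level.
src = (
    "if 1:\n"
    "        forever()\n"   # indented too far (8 spaces)
    "    and_ever()\n"      # the 4-space code that follows
)
try:
    compile(src, "<example>", "exec")
except IndentationError as err:
    print(err.lineno, err.msg)   # reports line 3, not line 2
```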
An easy way to work around it is probably to treat all tokenizer errors and some of the compiler-generated errors (like the indent/dedent ones) as really-fatal errors, and only handle the errors that are likely to be manageable, skipping over the affected lines or considering them no-ops.
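That policy could look something like this rough sketch (my own illustration, not code from this thread): abort on indentation errors, and for other SyntaxErrors record the error, turn the offending line into a no-op, and retry:

```python
def find_errors(source, max_errors=10):
    """Collect SyntaxErrors from `source`, stopping at really-fatal ones.

    IndentationError stands in here for the really-fatal class; other
    SyntaxErrors are recorded, the offending line is replaced with a
    no-op (keeping its indentation), and compilation is retried.
    """
    lines = source.splitlines()
    errors = []
    for _ in range(max_errors):
        try:
            compile("\n".join(lines) + "\n", "<check>", "exec")
            break                         # no (more) errors found
        except IndentationError as err:   # must precede SyntaxError:
            errors.append(err)            # it is a SyntaxError subclass
            break                         # really-fatal: give up
        except SyntaxError as err:
            errors.append(err)
            if err.lineno is None or err.lineno > len(lines):
                break
            # consider the offending line a no-op, keep its indentation
            bad = lines[err.lineno - 1]
            indent = bad[:len(bad) - len(bad.lstrip())]
            lines[err.lineno - 1] = indent + "pass"
    return errors
```

On input with two independent bad lines this reports both; on an inconsistent-dedent input it stops after the first, fatal error.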