Wouldn't it be nice if there were a collection of programs (perhaps those owned by registered paranoids, but whatever) that you could use to get statistics about breakage under some syntax change?
But I think syntax changes aren't even the tip of the iceberg -- semantic changes are relevant too.
Clearly extending this through to run-time would add a huge testing infrastructure that would need maintaining, but allowing people to add their own code to the syntax-checker base might mollify them a bit about the prospect of future language change.
Call it the PythOnGuard (TM) database. :-) If your programs would break, you'd be mailed before the change is committed to production.
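The syntax-checking half of this is cheap to sketch. Something like the following could walk a corpus of contributed programs and report which ones no longer parse under a given interpreter's grammar (the function name and corpus layout here are made up for illustration, not any real PythOnGuard tool):

```python
import pathlib

def breakage_stats(corpus_dir):
    """Try to byte-compile every .py file under corpus_dir with the
    running interpreter's grammar.  Returns (ok, broken), where broken
    is a list of (path, lineno, message) tuples for SyntaxErrors."""
    ok, broken = [], []
    for path in sorted(pathlib.Path(corpus_dir).rglob("*.py")):
        source = path.read_text(encoding="utf-8", errors="replace")
        try:
            # compile() only parses and compiles -- it never runs the code,
            # so untrusted corpus programs are safe to check this way.
            compile(source, str(path), "exec")
            ok.append(path)
        except SyntaxError as err:
            broken.append((path, err.lineno, err.msg))
    return ok, broken

if __name__ == "__main__":
    ok, broken = breakage_stats("corpus")
    print(f"{len(ok)} ok, {len(broken)} broken")
    for path, lineno, msg in broken:
        print(f"  {path}:{lineno}: {msg}")
```

Run it once under the old interpreter and once under a build with the proposed syntax change, and the difference in the `broken` lists is the breakage report; semantic changes, as noted, are the part this can't catch.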
Sounds like some kind of huge test suite. The problem is, there's always *something* that breaks... :-(
--Guido van Rossum (home page: http://www.python.org/~guido/)