Wouldn't it be nice if there were a collection of programs (perhaps those owned by registered paranoids, but whatever) that you could use to get statistics about breakage under some syntax change?
But I think syntax changes aren't even the tip of the iceberg -- semantic changes are relevant too.
Sorry, this is the syntax department. I think you need to speak to Mr. Tester in room 104.
Clearly extending this through to run-time would add a huge testing infrastructure that would need maintaining, but allowing people to add their own code to the syntax-checker base might mollify them a bit about the prospect of future language change.
Call it the PythOnGuard (TM) database. :-) If your programs would break, you'll be mailed before the change is committed to production.
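The checking half of this is almost trivially cheap, by the way. A minimal sketch of the idea, assuming nothing more than the stdlib `ast` module and made-up names (`check_registry`, `registry` are invented for illustration): parse every registered source under the candidate interpreter's grammar and report what no longer compiles.

```python
import ast

def check_registry(registry):
    """Return {name: error} for every registered source that fails to parse
    under the *current* interpreter's grammar."""
    breakage = {}
    for name, source in registry.items():
        try:
            ast.parse(source)
        except SyntaxError as exc:
            breakage[name] = str(exc)
    return breakage

# Example registry: Python 3 dropped the print statement, so the second
# entry breaks under a Python 3 grammar while the first still parses.
registry = {
    "ok.py": "print('hello')\n",
    "old.py": "print 'hello'\n",  # Python 2 print statement
}
broken = check_registry(registry)
```

Run that under each proposed grammar and you have your breakage statistics; the mailing and the polling buttons are the actual infrastructure cost.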
Sounds like some kind of huge test suite. The problem is, there's always *something* that breaks... :-(
OK. The hugeness of the test suite was precisely what made me stick to syntax. Do you think quality would benefit from enlarging the test suite to cover non-distributed code?
Couple it with a few polling buttons, ranging from "I wouldn't mind fixing this breakage" to "I'ma get my gun and come looking for you", and you might obtain a measure of resistance *from people whose code would actually be broken*. Clearly this should not necessarily be the arbiter of development, but it might allow you to tell people whose code hadn't broken so far to just PythOnGuard it and not complain until something *did* break.
Also good for news releases: "What breaks in this version?" (I can see Paul Rubin loving that one).