Anthony Baxter wrote:
On Friday 14 July 2006 16:39, Neal Norwitz wrote:
Remember I also tried to push for more features to go in early? That would have given more time for external testing. Still features are coming in. Python developers weren't happy about having to get things in earlier. I don't see a practical way to implement what you propose (see Anthony's comments).
Following up on this point: Given the number of "just-this-one-more-thing-please" requests we've _already_ had since the b1 feature freeze, do you really expect that 90 days of feature freeze is feasible? And if there's not going to be a feature freeze, there's hardly any point in people doing testing until there _is_ a feature freeze, is there? Oh, great, my code works with 2.5b1. Oops. 2.5b9 added a new feature that broke my code, but I didn't test with that.
I think this is where release candidates can come into their own - the betas can flush out the "just-one-more-thing" requests, the maintenance branch is forked for rc1, and then any changes on the branch are purely to fix regressions.
As far as the idea of a suite of buildbots running the unit tests of large Python applications against the current SVN trunk goes, one problem is that failures in those buildbots will come from two sources:
- Python itself fails (broken checkin)
- the application unit tests fail (backwards incompatibility)
To weed out the false alarms, the slaves will need to be set up to run the Python unit tests first and then the application unit tests only if the Python test run is successful.
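A minimal sketch of that two-stage gating might look like the following; the function and command names here are illustrative, not actual buildbot configuration:

```python
import subprocess

# Hypothetical sketch: run Python's own test suite first, and run the
# application's tests only if the core run passed, so that an app
# failure can be attributed to a backwards incompatibility rather
# than a broken checkin.

def classify(core_rc, app_rc):
    """Map the two return codes to a verdict."""
    if core_rc != 0:
        return "broken-checkin"            # false alarm for the app
    if app_rc != 0:
        return "backwards-incompatibility"  # needs an app developer
    return "ok"

def run_gated(python_bin, app_test_cmd):
    """Run the core suite, then (conditionally) the app suite."""
    core_rc = subprocess.call([python_bin, "-m", "test"])
    if core_rc != 0:
        return classify(core_rc, None)
    app_rc = subprocess.call(app_test_cmd)
    return classify(core_rc, app_rc)
```

On a slave this might be invoked as `run_gated("/usr/local/bin/python", ["make", "test"])`, with the verdict deciding whether the failure gets routed to the Python developers or to the application's developers.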
When a slave suffers a real failure due to a backwards incompatibility, it will take a developer of that application to figure out what broke the application's tests.
So while I think it's a great idea, I also think it will need significant support from the application developers in debugging any buildbot failures to really make it work.