PEP 380 acceptance (was Re: EuroPython Language Summit report)

On Sun, Jun 26, 2011 at 2:44 AM, Guido van Rossum <guido@python.org> wrote:
Let me cut this short. PEP 380 is pretty much approved. I know there are a few details worth quibbling over, but they are not going to jeopardize acceptance of the PEP. We are waiting for an implementation in Python 3.3. In fact, I wouldn't mind at this point if someone took their best effort at an implementation and checked it into the 3.3 branch, and we can do the quibbling over the details while we have a working implementation to experiment with.
Based on this message, I've bumped PEP 380 to Accepted, and I'm now working on committing Renaud Blanch's forward port of Greg's original patch (see http://bugs.python.org/issue11682).

Cheers, Nick.

-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
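
For context, the feature being accepted is the "yield from" delegation syntax. A minimal sketch of what it enables (runnable on Python 3.3 or later; not taken from the PEP's own examples):

def subgen():
    yield 1
    yield 2
    return "done"  # under PEP 380, this becomes the value of the yield from expression

def delegator():
    # yield from forwards next()/send()/throw() to the subgenerator and
    # evaluates to the subgenerator's return value
    result = yield from subgen()
    print("subgenerator returned:", result)

print(list(delegator()))  # prints the message above, then [1, 2]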

On Sun, Jun 26, 2011 at 1:16 PM, Nick Coghlan <ncoghlan@gmail.com> wrote:
On Sun, Jun 26, 2011 at 2:44 AM, Guido van Rossum <guido@python.org> wrote:
Let me cut this short. PEP 380 is pretty much approved. I know there are a few details worth quibbling over, but they are not going to jeopardize acceptance of the PEP. We are waiting for an implementation in Python 3.3. In fact, I wouldn't mind at this point if someone took their best effort at an implementation and checked it into the 3.3 branch, and we can do the quibbling over the details while we have a working implementation to experiment with.
Based on this message, I've bumped PEP 380 to Accepted, and I'm now working on committing Renaud Blanch's forward port of Greg's original patch (see http://bugs.python.org/issue11682).
I hit a snag with this. The real tests of the PEP 380 functionality aren't currently part of the patch - they're a big set of "golden output" tests in the zipfile hosted on Greg's site [1]. Those need to be refactored into proper unittest- or doctest-based additions to the test suite and incorporated into the patch before I could commit this with a clear conscience. That's not going to be as quick as I first thought.

Renaud's patch mostly applies cleanly at the moment - the only change is that the "#endif" for the Py_LIMITED_API check needs to be moved in pyerrors.h so it also covers the new StopIteration struct definition.

Regards, Nick.

[1] http://www.cosc.canterbury.ac.nz/greg.ewing/python/yield-from/yield_from.htm...

-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia

Nick Coghlan <ncoghlan@gmail.com> writes:
I hit a snag with this. The real tests of the PEP 380 functionality aren't currently part of the patch - they're a big set of "golden output" tests in the zipfile hosted on Greg's site. Those need to be refactored into proper unittest or doctest based additions to the test suite and incorporated into the patch before I could commit this with a clear conscience.
Let me know if I can help.
Renaud's patch mostly applies cleanly at the moment - the only change is that the "#endif" for the Py_LIMITED_API check needs to be moved in pyerrors.h so it also covers the new StopIteration struct definition.
If this helps, I've updated the patch to fix this: https://bitbucket.org/rndblnch/cpython-pep380/changeset/6014d1720625

renaud

On Tue, Jun 28, 2011 at 1:09 AM, renaud <rndblnch@gmail.com> wrote:
Nick Coghlan <ncoghlan@gmail.com> writes:
I hit a snag with this. The real tests of the PEP 380 functionality aren't currently part of the patch - they're a big set of "golden output" tests in the zipfile hosted on Greg's site. Those need to be refactored into proper unittest or doctest based additions to the test suite and incorporated into the patch before I could commit this with a clear conscience.
Let me know if I can help.
It would be good if you could take a look at Greg's original test suite, consider ways of bringing it into the main regression tests and then update the patch queue on bitbucket accordingly. My preference is for something unittest based, essentially taking the "golden output" comparisons and turning them into appropriate self.assert* invocations. Given the number of tests Greg has, it will probably make more sense to do it as a new test subdirectory rather than as a single test file (although that depends on how many tests are in each file - if there are only a few, or if they overlap a lot, then having them as separate test cases within a single file may be a better choice).
Renaud's patch mostly applies cleanly at the moment - the only change is that the "#endif" for the Py_LIMITED_API check needs to be moved in pyerrors.h so it also covers the new StopIteration struct definition.
If this helps, I've updated the patch to fix this: https://bitbucket.org/rndblnch/cpython-pep380/changeset/6014d1720625
Yep, that does help.

Cheers, Nick.

-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
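
As a rough illustration of the conversion Nick describes (the generator and expected values below are invented for the example, not taken from Greg's test files), a golden-output comparison can be rewritten as direct assertions:

import unittest

def delegating_gen():
    # trivial stand-in for one of the delegation scenarios under test
    yield from iter([1, 2, 3])

class DelegationTest(unittest.TestCase):
    def test_values_are_forwarded(self):
        # instead of diffing printed output against a golden file,
        # assert directly on the values the generator produces
        self.assertEqual(list(delegating_gen()), [1, 2, 3])

if __name__ == "__main__":
    unittest.main()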

Nick Coghlan <ncoghlan@gmail.com> writes:
On Tue, Jun 28, 2011 at 1:09 AM, renaud <rndblnch@gmail.com> wrote:
Nick Coghlan <ncoghlan@gmail.com> writes:
I hit a snag with this. The real tests of the PEP 380 functionality aren't currently part of the patch - they're a big set of "golden output" tests in the zipfile hosted on Greg's site. Those need to be refactored into proper unittest or doctest based additions to the test suite and incorporated into the patch before I could commit this with a clear conscience.
let me know if i can help.
It would be good if you could take a look at Greg's original test suite, consider ways of bringing it into the main regression tests and then update the patch queue on bitbucket accordingly.
My preference is for something unittest based, essentially taking the "golden output" comparisons and turning them into appropriate self.assert* invocations.
Given the number of tests Greg has, it will probably make more sense to do it as a new test subdirectory rather than as a single test file (although that depends on how many tests are in each file - if there are only a few, or if they overlap a lot, then having them as separate test cases within a single file may be a better choice).
OK, I've generated a single test_pep380.py using Greg's tests wrapped to be run by unittest. It's ugly, but it does the job. Some things that may not be desirable:
- line numbers in the tracebacks rely on the position of the tests in the file, so any editing before the last test case will probably break everything;
- the tests touch sys.stdout and sys.stderr.

The whole thing is available here: <https://bitbucket.org/rndblnch/cpython-pep380/src/tip/pep380-tests>

renaud

PS. I had to edit the test24 ("Test parser module") expected output to make it match the actual output; I guess the parser module has changed since Greg wrote the tests.
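
A minimal sketch of the stdout-capturing approach renaud describes (the traced generator and expected trace here are made up; they are not from the real test_pep380.py):

import io
import sys
import unittest

def traced_gen():
    print("starting")
    yield from iter("ab")
    print("finishing")

class GoldenOutputTest(unittest.TestCase):
    def run_and_capture(self, gen_func):
        # temporarily replace sys.stdout so the printed trace can be
        # compared against the expected ("golden") text
        captured = io.StringIO()
        old_stdout = sys.stdout
        sys.stdout = captured
        try:
            values = list(gen_func())
        finally:
            sys.stdout = old_stdout
        return values, captured.getvalue()

    def test_trace_matches_golden_output(self):
        values, trace = self.run_and_capture(traced_gen)
        self.assertEqual(values, ["a", "b"])
        self.assertEqual(trace, "starting\nfinishing\n")

if __name__ == "__main__":
    unittest.main()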