[Python-Dev] PEP 399: Pure Python/C Accelerator Module Compatibility Requirements

Raymond Hettinger raymond.hettinger at gmail.com
Sun Apr 17 04:19:32 CEST 2011


On Apr 16, 2011, at 2:45 PM, Brett Cannon wrote:

> 
> 
> On Sat, Apr 16, 2011 at 14:23, Stefan Krah <stefan at bytereef.org> wrote:
> Brett Cannon <brett at python.org> wrote:
> > In the grand python-dev tradition of "silence means acceptance", I consider
> > this PEP finalized and implicitly accepted.

I haven't seen any responses that said, yes, this is a well-thought-out proposal that will actually benefit any of the various implementations.

Almost none of the concerns that have been raised have been addressed.  Does the PEP apply only to purely algorithmic modules such as heapq, or does it apply to anything written in C (an xz compressor, for example)?  Does testing every branch in a given implementation now guarantee every implementation detail, or do we only promise the published API (historically, we've *always* done the latter)?  Is there going to be any guidance on the commonly encountered semantic differences between C modules and their Python counterparts (thread-safety, argument handling, tracebacks, all possible exceptions, monkey-patchable pure-Python classes versus hard-wired C types, etc.)?
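
To make one of those differences concrete, here is a minimal sketch (my illustration, not from the PEP) of the monkey-patching gap between a pure-Python class and a hard-wired C type:

    # Pure-Python classes can be patched after the fact:
    class PurePoint:
        def __init__(self, x, y):
            self.x, self.y = x, y

    PurePoint.norm = lambda self: (self.x ** 2 + self.y ** 2) ** 0.5
    print(PurePoint(3, 4).norm())   # 5.0

    # The same patch on a built-in (C) type is rejected outright:
    try:
        int.norm = lambda self: abs(self)
    except TypeError as exc:
        print(exc)   # can't set attributes of built-in/extension type 'int'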

The PEP seems to be predicated on the notion that anything written in C is bad and that all testing is good.  AFAICT, it doesn't provide any practical advice to someone pursuing a non-trivial project (such as decimal or threading).  The PEP mostly seems to be about discouraging any further work in C.  If that's the case, it should just come out and say so rather than tangentially introducing ambiguous testing requirements that don't make a lot of sense.

The PEP also makes some unsupported claims about saving labor.  My understanding is that IronPython and Jython tend to re-implement modules using native constructs.  Even with PyPy, the usual pure-Python idioms aren't necessarily what is best for PyPy, so I expect some rewriting there as well.  The lion's share of the work in building the other implementations seems to lie in interpreter details and the like -- I would be surprised if the likes of bisect or heapq took even one-tenth of one percent of the total development time for any of the other implementations.
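
For reference, the pattern under discussion is the import idiom that modules like bisect already use; a minimal sketch (my simplification, not the actual stdlib source):

    # Define everything in pure Python first ...
    def insort_right(a, x):
        """Insert x into sorted list a, keeping a sorted (simplified)."""
        lo, hi = 0, len(a)
        while lo < hi:
            mid = (lo + hi) // 2
            if x < a[mid]:
                hi = mid
            else:
                lo = mid + 1
        a.insert(lo, x)

    # ... then, at the bottom of the module, let the C accelerator
    # silently replace those definitions when it is available:
    try:
        from _bisect import *
    except ImportError:
        pass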


> 
> I did not really see an answer to these concerns:
> 
> http://mail.python.org/pipermail/python-dev/2011-April/110672.html
> 
> Antoine does not seem sold on the 100% branch coverage requirement and views it as pointless. I disagree. =)
> 
> As for the exception Stefan is saying may be granted, that is not in the PEP so I consider it unimportant. If we really feel the desire to grant an exception we can (since we can break any of our own rules that we collectively choose to), but I'm assuming we won't.
>  
> http://mail.python.org/pipermail/python-dev/2011-April/110675.html
> 
> Raymond thinks that having a testing requirement conflates matching implementations with matching APIs.

That is not an accurate restatement of my post.

> Well, as we all know, the stdlib ends up having its implementation details relied upon constantly by people, whether they mean to or not, so making sure that this is properly tested helps deal with this known reality.

If you're saying that all implementation details (including internal branching logic) are now guaranteed behaviors, then I think this PEP has completely lost its way.  I don't know of any implementors asking for this.
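
The distinction is easy to illustrate.  A hypothetical pair of tests (my example, not from the thread):

    import heapq

    # Published-API test: any conforming implementation passes.
    h = []
    for v in [3, 1, 2]:
        heapq.heappush(h, v)
    assert [heapq.heappop(h) for _ in range(3)] == [1, 2, 3]

    # Implementation-detail test: asserts the internal list layout,
    # which only holds for CPython's particular sift strategy; other
    # valid heaps over the same items would fail this assert.
    h = []
    for v in [3, 1, 2]:
        heapq.heappush(h, v)
    assert h == [1, 3, 2]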


> This is a damned-if-you-do-damned-if-you-don't situation. The first draft of this PEP said to be "semantically equivalent w/ divergence where technically required", but I got pushback for being too wishy-washy w/ lack of concrete details. So I introduced a concrete metric that some are accusing of being inaccurate for the goals of the PEP. I'm screwed or I'm screwed. =) So I am choosing to go with the one that has a side benefit of also increasing test coverage.
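
For concreteness, measuring that metric might look something like the following, using the third-party coverage.py tool (a sketch; the tooling choice and the trivial test body are my assumptions, not part of the PEP discussion):

    import coverage                  # third-party: pip install coverage

    cov = coverage.Coverage(branch=True)
    cov.start()
    import heapq                     # note: on CPython this pulls in _heapq,
                                     # so a real PEP 399 suite must block the
                                     # C accelerator to exercise the Python code
    h = []
    for v in [3, 1, 2]:
        heapq.heappush(h, v)
    while h:
        heapq.heappop(h)
    cov.stop()
    cov.report(include="*heapq*")    # per-module line/branch summary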

Maybe that is just an indication that the proposal isn't mature yet.  To me, it doesn't seem well thought out, and it isn't realistic.


> Now if people would actually support simply not accepting any more C modules into the Python stdlib (this does not apply to CPython's stdlib), then I'm all for that.
> I only went with the "accelerator modules are okay" route to help get acceptance for the PEP. But if people are willing to go down a more stringent route and say that any module which uses new C code is considered CPython-specific and thus any acceptance of such modules will be damn hard to accomplish as it will marginalize the value of the code, that's fine by me.

Is that what people want?  For example, do we want to accept a C version of decimal?  Without it, the decimal module is unusable for people with high volumes of data.  Do we want things like an xz compressor to be written in pure Python, and only in Python?  I don't think this benefits our users.

I'm not really clear what it is you're trying to get at.  For PyPy, IronPython, and Jython to succeed, does the CPython project need to come to a halt?  I don't think many people here really believe that to be the case.


Raymond






