[Python-Dev] RFC: Backport ssl.MemoryBIO and ssl.SSLObject to Python 2.7

Donald Stufft donald at stufft.io
Fri Jun 2 12:59:42 EDT 2017

> On Jun 2, 2017, at 12:41 PM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> On Fri, 2 Jun 2017 12:22:06 -0400
> Donald Stufft <donald at stufft.io> wrote:
>> It’s not just bootstrapping that pip has a problem with for C extensions, it also prevents upgrading PyOpenSSL on Windows because having pip import PyOpenSSL locks the .dll, and we can’t delete it or overwrite it until the pip process exits and no longer imports PyOpenSSL. This isn’t a problem on Linux or macOS or the other *nix clients though. We patch requests as it is today to prevent it from importing simplejson and cryptography for this reason.
> Does pip use any advanced features in Requests, at least when it comes
> to downloading packages (which is where the bootstrapping issue lies
> AFAIU)? Because at this point it sounds like you may be better off with
> a simple pure Python HTTP downloader.

It’s hard to fully answer the question because it sort of depends?

Could we switch to just, like, urllib2 or something? Yeah, we could; in fact we used to use it, and we switched to requests because we had to backport security workarounds and fixes ourselves (the big one at the time was hostname matching/verification), and we were really bad at keeping track of which patches needed applying and when. Switching to requests let us offload that work to the requests team, who are doing a phenomenal job at it.
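For context, the hostname verification work mentioned above is handled by the standard library today; a minimal sketch (not pip's actual code, and `open_verified` is a hypothetical helper) of how the modern ssl module enables it by default:

```python
import socket
import ssl

# create_default_context() turns on certificate validation and hostname
# checking out of the box -- the behavior pip once had to backport on
# top of urllib2.
context = ssl.create_default_context()
assert context.check_hostname is True
assert context.verify_mode == ssl.CERT_REQUIRED

def open_verified(host, port=443):
    """Open a TLS connection; the handshake fails if the server's
    certificate does not match `host` (hypothetical helper)."""
    sock = socket.create_connection((host, port), timeout=10)
    return context.wrap_socket(sock, server_hostname=host)
```

Passing `server_hostname=` is what ties the certificate check to the host being contacted; omitting it with `check_hostname` enabled raises an error rather than silently skipping verification.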

Beyond that, getting HTTP right is hard, and pip used to have to implement workarounds for broken or suboptimal urllib2 behavior, whereas requests generally gets it right for us out of the box.

Closer to your specific question about features: we're using requests' Session support for connection pooling to speed up downloads (so we don't need to open a new connection for every download), the adapter API to transparently support file:// URLs, the auth framework to hold authentication for multiple domains at once, and the third-party library cachecontrol to handle our HTTP caching with a browser-style cache.
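A minimal sketch of the adapter API mentioned above: a hypothetical `FileAdapter` mounted on a pooled Session to serve file:// URLs from disk (pip's real adapter lives in its vendored code and differs; the names here are illustrative).

```python
import io
from urllib.request import url2pathname

import requests
from requests.adapters import BaseAdapter

class FileAdapter(BaseAdapter):
    """Hypothetical transport adapter that answers file:// requests
    from the local filesystem instead of the network."""

    def send(self, request, **kwargs):
        resp = requests.Response()
        resp.request = request
        resp.url = request.url
        path = url2pathname(request.path_url)
        try:
            # requests will read the body from resp.raw on demand
            resp.raw = open(path, "rb")
            resp.status_code = 200
        except OSError:
            resp.status_code = 404
            resp.raw = io.BytesIO(b"")
        return resp

    def close(self):
        pass

# One Session reuses connections across downloads (pooling), and the
# mounted adapter makes file:// URLs work through the same interface.
session = requests.Session()
session.mount("file://", FileAdapter())
```

Because everything goes through the Session, callers download index pages and local files with the same `session.get(url)` call, which is the "transparent" part of the adapter approach.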

I suspect (though I'd let him speak for himself) that Cory would rather continue to be sync-only than require pip to go back to not using requests.

Donald Stufft

