Re: [Python-Dev] SSL certificates recommendations for downstream Python packagers
On 1 Feb 2017, at 14:20, Steve Dower <steve.dower@python.org> wrote:
Sorry, I misspoke when I said "certificate validation callback"; I meant the same callback Cory uses below (the name escapes me now, but it's unfortunately similar to what I said). There are two callbacks in OpenSSL: one that allows you to verify each certificate in the chain individually, and one that requires you to validate the entire chain.
I do indeed take the entire chain in one go and pass it to the OS API. Christian also didn't like that I was bypassing *all* of OpenSSL's certificate handling here, but maybe there's a way to make it reliable if Chrome has done it?
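For context: of the two OpenSSL hooks mentioned above, the per-certificate one (set via SSL_CTX_set_verify) is the one pyOpenSSL exposes; the whole-chain one (SSL_CTX_set_cert_verify_callback), which is what the approach described here replaces with an OS call, is not exposed by the stdlib ssl module or by pyOpenSSL. A minimal sketch of the per-certificate form, for illustration only:

    from OpenSSL import SSL

    def verify_cb(conn, cert, errnum, depth, ok):
        # Called once per certificate in the chain; `ok` is OpenSSL's own
        # verdict for this certificate.  Returning a falsey value rejects it.
        print("depth %d: %s (preverify=%s)" % (depth, cert.get_subject(), ok))
        return ok

    ctx = SSL.Context(SSL.TLSv1_2_METHOD)
    ctx.set_verify(SSL.VERIFY_PEER, verify_cb)
    ctx.set_default_verify_paths()

With the whole-chain callback, by contrast, OpenSSL hands over the entire untrusted chain and expects the callback to produce the verdict itself, which is what makes it a natural place to call out to CryptoAPI or SecureTransport instead.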
So, my understanding is that bypassing OpenSSL's cert handling is basically fine. The risks only arise in cases where OpenSSL's cert handling would be a supplement to what the OS provides, which is not very common and which I don't think is a major risk for Python. So in general it is not unreasonable to ask your OS "are these certificates valid for this connection based on your trust DB" and to circumvent OpenSSL entirely there.

Please do bear in mind that you need to ask your OS the right question. For Windows this stuff is actually kinda hard because the API is somewhat opaque: you have to worry about setting correct certificate usages, building up chain policies, and then doing appropriate error handling (AFAIK the crypto API can "fail validation" for some reasons that have nothing to do with validation itself, so that's worth bearing in mind).

The TL;DR is: I understand Christian's concern, but I don't think it's important if you're very, very careful.

Cory
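For context, a rough sketch of the middle ground CPython already offers on Windows: keep OpenSSL doing the chain building, but take the trust anchors from the OS store via ssl.enum_certificates() (roughly what SSLContext.load_default_certs() does internally on Windows). This is not the same as handing the whole chain to CryptoAPI for validation, but it shows how much of the configuration comes from the OS rather than from Python:

    import ssl

    # Windows-only sketch: load the system trusted-root store ("ROOT") into
    # an OpenSSL-backed client context.  ssl.enum_certificates() yields
    # (cert_bytes, encoding, trust) tuples; trust is True (trusted for all
    # purposes) or a set of extended-key-usage OIDs.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    for cert, encoding, trust in ssl.enum_certificates("ROOT"):
        if encoding == "x509_asn" and (
                trust is True
                or (trust and ssl.Purpose.SERVER_AUTH.oid in trust)):
            ctx.load_verify_locations(cadata=cert)

The crucial difference from the approach Steve describes is that here OpenSSL still builds and evaluates the chain; only the anchors come from Windows, so none of the CryptoAPI usage/policy/error-handling subtleties above come into play.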
Cory Benfield writes:
The TL;DR is: I understand Christian’s concern, but I don’t think it’s important if you’re very, very careful.
But AIUI, the "you" above is the end-user or the admin of the end-user's system, no? We know that they aren't very careful (or, perhaps more accurately, this is too fsckin' complicated for anybody but an infosec expert to do very well).

I[1] still agree with you that it's *unlikely* that end-users/admins will need to worry about it. But we need to be really careful about what we say here, or at least where the responsible parties will be looking.

Thanks to all who are contributing so much time and skull sweat on this. This is insanely hard, but important.

Footnotes:
[1] Infosec wannabe; I've thought carefully but don't claim real expertise.
On 2 Feb 2017, at 03:38, Stephen J. Turnbull <turnbull.stephen.fw@u.tsukuba.ac.jp> wrote:
Cory Benfield writes:
The TL;DR is: I understand Christian’s concern, but I don’t think it’s important if you’re very, very careful.
But AIUI, the "you" above is the end-user or the admin of the end-user's system, no? We know that they aren't very careful (or, perhaps more accurately, this is too fsckin' complicated for anybody but an infosec expert to do very well).
I think "you" is the coder of the interface. From a security perspective I think we have to discount the possibility of administrator error from our threat model. A threat model that includes “defend the system against intrusions that the administrator incorrectly allows” is an insanely difficult one to respond to, given that it basically requires psychic powers to determine what the administrator *meant* instead of what they configured. Now, where we allow configuration we have a duty to ensure that it’s as easy as possible to configure correctly, but when using the system trust store most of the configuration is actually provided by the OS tools, rather than by the above-mentioned “you”, so that’s not in our control. The risk, and the need to be very very careful, comes from ensuring that the semantics of the OS configuration are preserved through to the behaviour of the program. This is definitely a place with razor-blades all around, which is why I have tended to defer to the Chrome security team on this issue. In particular, the BoringSSL developers are razor-sharp people who have their heads screwed on when it comes to practical security decisions, and I’ve found that emulating them is usually a safe bet in the face of ambiguity. However, it’s unquestionable that the *safest* route to go down in terms of preserving the expectations of users is to use the platform-native TLS implementation wholesale, rather than do a hybrid model like Chrome does where OpenSSL does the protocol bits and the system does the X509 bits. That way Python ends up behaving basically like Edge or Safari on the relevant platforms, or perhaps more importantly behaving like .NET on Windows and like CoreFoundation on macOS, which is a much better place to be in terms of user and administrator expectations. As a side benefit, that approach helps take Python a bit closer to feeling “platform-native” on many platforms, which can only be a good thing for those of us who want to see more Python on the desktop (or indeed on the mobile device).
I[1] still agree with you that it's *unlikely* that end-users/admins will need to worry about it. But we need to be really careful about what we say here, or at least where the responsible parties will be looking.
I agree. In an ideal world I'd say to Steve that he should shelve his current work and wait for the TLS ABC PEP that is incoming (hopefully I'll send a first draft to python-dev today). However, I'm nothing if not pragmatic, and having Steve continue his current work in parallel to the TLS ABC PEP is probably a good idea, so that we avoid having all our eggs in one basket. Perhaps we can get the TLS ABC stuff in place in time for Steve to just swap over to using SChannel altogether, but if that doesn't work out and Steve can get a halfway house out the door earlier, then that's fine by me.

Cory
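For readers who haven't followed the TLS ABC discussion: the idea is to define abstract base classes that client code can target, with OpenSSL, SChannel, or SecureTransport plugged in behind them. The sketch below is purely illustrative; the class and method names are invented for this note, not taken from the PEP draft:

    import abc

    class ClientContext(abc.ABC):
        """Hypothetical interface sketch; the real names belong to the PEP."""

        @abc.abstractmethod
        def wrap_socket(self, sock, server_hostname):
            """Handshake as a client and return a wrapped socket."""

    class SChannelClientContext(ClientContext):
        """A Windows backend would let SChannel build and validate the
        chain against the OS trust store."""

        def wrap_socket(self, sock, server_hostname):
            raise NotImplementedError("illustrative sketch only")

Any concrete backend would implement the same interface, which is what makes letting the SChannel work proceed in parallel relatively low-risk.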
Cory Benfield writes:
From a security perspective I think we have to discount the possibility of administrator error from our threat model.
I disagree in a certain sense, and in that sense you don't discount it -- see below.
A threat model that includes “defend the system against intrusions that the administrator incorrectly allows”
I agree that child-proof locks don't work. The point of having a category called "administrator error" in the threat model is not to instantiate it, but merely to recognize it:
where we allow configuration we have a duty to ensure that it’s as easy as possible to configure correctly,
and in particular defaults should (1) "deny everything" (well, nearly), and (2) be robust ("forbid what is not explicitly permitted") to configuration changes that allow accesses wherever Python can reasonably achieve that.
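As a concrete reference point, and assuming nothing beyond the current stdlib: Python's default client-side context already takes exactly this "forbid unless verified" posture.

    import ssl

    # ssl.create_default_context() is deny-by-default for clients:
    # peer certificates must verify and hostnames are checked.
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    assert ctx.verify_mode == ssl.CERT_REQUIRED
    assert ctx.check_hostname is True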
but when using the system trust store most of the configuration is actually provided by the OS tools, rather than by the above-mentioned “you”, so that’s not in our control.
OK, up to the problem that OS tools may not be accessible or may be considered unreliable. I trust you guys to do something sane there, and I agree it's covered by the "we can't correct admin mistakes in complex environments" clause that you invoked above. Python cannot take responsibility for guessing what might happen in any given configuration in such environments.
However, it’s unquestionable that the *safest* route to go down in terms of preserving the expectations of users is to use the platform-native TLS implementation wholesale, rather than do a hybrid model like Chrome does where OpenSSL does the protocol bits and the system does the X509 bits. That way Python ends up behaving basically like Edge or Safari on the relevant platforms, or perhaps more importantly behaving like .NET on Windows and like CoreFoundation on macOS, which is a much better place to be in terms of user and administrator expectations.
OK, I can't help you with the details, but I can at least say I feel safer when you say that's where you're going. :-)