Re: [Security-sig] Unified TLS API for Python
On 11 Jan 2017, at 21:44, Wes Turner <wes.turner@gmail.com> wrote:
This may be a bit of a different use case (and possibly worth having in the first version of a new tls module):
"Hitless TLS Certificate Rotation in Go" https://diogomonica.com/2017/01/11/hitless-tls-certificate-rotation-in-go/ <https://diogomonica.com/2017/01/11/hitless-tls-certificate-rotation-in-go/>
- Can/could this be done with only set_sni_callback ?
Yes, it can be. Twisted has an extension module, txsni, that uses the SNI callback to choose which certificate to provide. This is basically identical to the Go GetCertificate callback function.

Cory
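For a flavour of that pattern, a minimal sketch using the stdlib ssl module's SNI hook (hostnames and paths are made up; txsni's own API differs):

    import ssl

    # One preloaded SSLContext per hostname, each with that host's cert chain.
    contexts = {}
    for name in ("example.com", "example.org"):          # hypothetical hosts
        ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
        ctx.load_cert_chain("/etc/tls/%s.pem" % name)    # hypothetical paths
        contexts[name] = ctx

    def choose_certificate(sock, server_name, initial_context):
        # Runs mid-handshake with the SNI name the client sent; swapping
        # sock.context serves the matching certificate, much like Go's
        # GetCertificate callback.
        if server_name in contexts:
            sock.context = contexts[server_name]

    base = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
    base.load_cert_chain("/etc/tls/default.pem")         # hypothetical fallback
    base.set_servername_callback(choose_certificate)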
On Thursday, January 12, 2017, Cory Benfield <cory@lukasa.co.uk> wrote:
On 11 Jan 2017, at 21:44, Wes Turner <wes.turner@gmail.com> wrote:
This may be a bit of a different use case (and possibly worth having in the first version of a new tls module):
"Hitless TLS Certificate Rotation in Go" https://diogomonica.com/2017/01/11/hitless-tls-certificate-rotation-in-go/
- Can/could this be done with only set_sni_callback ?
Yes, it can be. Twisted has an extension module, txsni, that uses the SNI callback to choose which certificate to provide.
https://github.com/glyph/txsni
This is basically identical to the Go GetCertificate callback function.
There's more config than just the cert, though. There are really two requirements mentioned: SNI (Server Name Indication), and "hot" TLS config detection/selection:

"""
The idea is to allow an administrator to force the whole cluster to migrate away from an old root CA transparently, removing its existence from the trust stores of all the nodes participating in the Swarm. This means that we need control over the whole TLS config, instead of controlling only which certificate is currently being served.
"""

"""
We chose to create a MutableTLSCreds <https://github.com/docker/swarmkit/blob/master/ca/transport.go> struct, which implements this TransportCredentials <https://godoc.org/google.golang.org/grpc/credentials> interface and allows the caller to simply change the TLS Config by calling LoadNewTLSConfig.
"""

IIUC, we'd currently have to create a new context to change any config other than the cert?
Cory
On 2017-01-12 18:07, Wes Turner wrote:
On Thursday, January 12, 2017, Cory Benfield <cory@lukasa.co.uk> wrote:
On 11 Jan 2017, at 21:44, Wes Turner <wes.turner@gmail.com> wrote:
This may be a bit of a different use case (and possibly worth having in the first version of a new tls module):
"Hitless TLS Certificate Rotation in Go" https://diogomonica.com/2017/01/11/hitless-tls-certificate-rotation-in-go/ <https://diogomonica.com/2017/01/11/hitless-tls-certificate-rotation-in-go/>
- Can/could this be done with only set_sni_callback ?
Yes, it can be. Twisted has an extension module, txsni, that uses the SNI callback to choose which certificate to provide.
https://github.com/glyph/txsni
This is basically identical to the Go GetCertificate callback function.
There's more config than just the cert, though. There are really two reqs mentioned: SNI (Server Name Indication), and "hot" TLS config detection/selection:
""" The idea is to allow an administrator to force the whole cluster to migrate away from an old root CA transparently, removing its existence from the trust stores of all the nodes participating in the Swarm. This means that we need control over the whole TLS config, instead of controlling only which certificate is currently being served. """ '"" We chose to create a MutableTLSCreds <https://github.com/docker/swarmkit/blob/master/ca/transport.go> struct, which implements this TransportCredentials <https://godoc.org/google.golang.org/grpc/credentials> interface and allows the caller to simply change the TLS Config by calling |LoadNewTLSConfig|. """
IIUC, we'd currently have to create a new context to change any config other than the cert?
Such advanced features are out-of-scope for the PEP. Please remember that every additional feature makes it way more complicated for implementers. Personally I would rather remove half of the PEP than add new things. For pip, requests, and similar libraries only a subset of the features is really required:

* ClientContext.wrap_socket()
* a function like create_default_context() to create a client context that either loads a CA file or the system's trust store
* TLSError
* TLSWrappedSocket with socket.socket API

Cory would like the API to also define the additional features and server-side TLS. I consider that useful, partly advanced and nice-to-have, but not mandatory to solve the immediate issue we are facing now. [1]

Let's keep it simple. We can always define an enhanced superset of the TLS ABC later. But we cannot remove features or change API in an incompatible way later.

Christian

[1] https://mail.python.org/pipermail/distutils-sig/2017-January/029970.html
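A sketch of how small that client-side surface is in use (these are the draft names from the list above, not a shipped API):

    import socket

    # Sketch only: create_default_context, wrap_socket and TLSError are the
    # draft names Christian lists, not an existing module.
    ctx = create_default_context()                 # CA file or system trust store
    sock = socket.create_connection(("example.com", 443))
    try:
        tls = ctx.wrap_socket(sock, server_hostname="example.com")
        tls.send(b"HEAD / HTTP/1.1\r\nHost: example.com\r\n\r\n")
        print(tls.recv(4096))
    except TLSError as exc:
        print("TLS failure:", exc)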
On Jan 12, 2017, at 2:13 PM, Christian Heimes <christian@cheimes.de> wrote:
Let's keep it simple. We can always define an enhanced superset of the TLS ABC later. But we cannot remove features or change API in an incompatible way later.
I think the server side stuff makes sense, it’ll be important for projects like Twisted and such and isn’t really *that* much more effort. Getting too lost in the weeds over advanced features like hot-config-reload I agree is a bad use of resources. — Donald Stufft
On Thursday, January 12, 2017, Christian Heimes <christian@cheimes.de> wrote:
On 2017-01-12 18:07, Wes Turner wrote:
On Thursday, January 12, 2017, Cory Benfield <cory@lukasa.co.uk> wrote:
On 11 Jan 2017, at 21:44, Wes Turner <wes.turner@gmail.com> wrote:
This may be a bit of a different use case (and possibly worth having in the first version of a new tls module):
"Hitless TLS Certificate Rotation in Go" https://diogomonica.com/2017/01/11/hitless-tls-certificate-
rotation-in-go/
<https://diogomonica.com/2017/01/11/hitless-tls-certificate-
rotation-in-go/>
- Can/could this be done with only set_sni_callback ?
Yes, it can be. Twisted has an extension module, txsni, that uses the SNI callback to choose which certificate to provide.
https://github.com/glyph/txsni
This is basically identical to the Go GetCertificate callback function.
There's more config than just the cert, though. There are really two reqs mentioned: SNI (Server Name Indication), and "hot" TLS config detection/selection:
""" The idea is to allow an administrator to force the whole cluster to migrate away from an old root CA transparently, removing its existence from the trust stores of all the nodes participating in the Swarm. This means that we need control over the whole TLS config, instead of controlling only which certificate is currently being served. """ '"" We chose to create a MutableTLSCreds <https://github.com/docker/swarmkit/blob/master/ca/transport.go> struct, which implements this TransportCredentials <https://godoc.org/google.golang.org/grpc/credentials> interface and allows the caller to simply change the TLS Config by calling |LoadNewTLSConfig|. """
IIUC, we'd currently have to create a new context to change any config other than the cert?
Such advanced features are out-of-scope for the PEP. Please remember that every additional feature makes it way more complicated for implementers. Personally I would rather remove half of the PEP than add new things. For pip, requests, and similar libraries only a subset of the features is really required:

* ClientContext.wrap_socket()
* a function like create_default_context() to create a client context that either loads a CA file or the system's trust store
* TLSError
* TLSWrappedSocket with socket.socket API

Cory would like the API to also define the additional features and server-side TLS. I consider that useful, partly advanced and nice-to-have, but not mandatory to solve the immediate issue we are facing now. [1]
Let's keep it simple. We can always define an enhanced superset of the TLS ABC later. But we cannot remove features or change API in an incompatible way later.
I understand your scope concern. In order to accomplish the aforementioned "hot" TLS config selection:

- Is there context-state to pass to TLSWrappedSocket.do_handshake(self, [context/configurator])?
- Or, would it be possible to specify injection so as to make subclassing unnecessary?
- Would it then be preferable to use a unified configuration object with AttributeErrors and ValueErrors for the context (and any potential connection-specific reconfigurations); instead of kwargs here and there?
Christian
https://mail.python.org/pipermail/distutils-sig/2017-January/029970.html
I agree with Christian and Donald (unsurprisingly). The key thing to note is that we can extend this API as time goes on and we get a better understanding of what's happening. And any application that is doing hot TLS config changes is likely not going to be agnostic to the concrete TLS implementation it uses anyway, given that many implementations won't sensibly be able to do it.

I'm not even sure about the specific API we're using for SNI: I might just want to restrict it to emitting new certificates.

Cory
On 12 Jan 2017, at 19:29, Donald Stufft <donald@stufft.io> wrote:
On Jan 12, 2017, at 2:13 PM, Christian Heimes <christian@cheimes.de> wrote:
Let's keep it simple. We can always define an enhanced superset of the TLS ABC later. But we cannot remove features or change API in an incompatible way later.
I think the server side stuff makes sense, it’ll be important for projects like Twisted and such and isn’t really *that* much more effort. Getting too lost in the weeds over advanced features like hot-config-reload I agree is a bad use of resources.
— Donald Stufft
On Jan 12, 2017, at 2:39 PM, Cory Benfield <cory@lukasa.co.uk> wrote:
I'm not even sure about the specific API we're using for SNI: I might just want to restrict it to emitting new certificates.
I am pro restricting the API, can always relax restrictions later.

— Donald Stufft
-----Original Message-----
From: Donald Stufft <donald@stufft.io>
Reply: Donald Stufft <donald@stufft.io>
Date: January 12, 2017 at 13:46:26
To: Cory Benfield <cory@lukasa.co.uk>
Cc: security-sig@python.org, Christian Heimes <christian@cheimes.de>
Subject: Re: [Security-sig] Unified TLS API for Python
On Jan 12, 2017, at 2:39 PM, Cory Benfield wrote:
I'm not even sure about the specific API we're using for SNI: I might just want to restrict it to emitting new certificates.
I am pro restricting the API, can always relax restrictions later.
Expanding APIs is always leagues easier than contracting them. Starting off with the minimum necessary API makes perfect sense. As needs are found that it cannot meet, expanding it slowly and methodically will be easy and painless.

In other words, +1 on keeping it small to start and restricting the API.

-- Ian Cordasco
+1 for start simple and iterate; but expecting a config object is not easy to add later.

    class SSLConfig(dict):
        def validate(self):
            """check types and (implementation-specific) ranges"""

    class _BaseContext(metaclass=ABCMeta):
        # ...
        def validate_config(self, cfg: Union[Dict, SSLConfig]) -> bool:
            """check types and (implementation-specific) ranges"""
            if not hasattr(cfg, 'validate'):
                cfg = SSLConfig(cfg)
            cfg.validate()
            self.cfg = cfg  # or: return cfg

        def set_config(self, cfg: Dict):
            self.register_certificates(cfg[*])
            self.set_ciphers(cfg[*])
            self.set_inner_protocols(cfg[*])
            if cfg.get(*) is not None:
                self.set_sni_callback(cfg[*])

Why a configuration object makes sense here:

- Centralized config validation
- Implementations may have unique parameters
- Convenience: just pass the config params from {JSON, TOML, YAML, configobj, PasteDeploy, pyramid, ...} with sane validation

On Thu, Jan 12, 2017 at 2:36 PM, Ian Cordasco <sigmavirus24@gmail.com> wrote:
-----Original Message-----
From: Donald Stufft <donald@stufft.io>
Reply: Donald Stufft <donald@stufft.io>
Date: January 12, 2017 at 13:46:26
To: Cory Benfield <cory@lukasa.co.uk>
Cc: security-sig@python.org, Christian Heimes <christian@cheimes.de>
Subject: Re: [Security-sig] Unified TLS API for Python
On Jan 12, 2017, at 2:39 PM, Cory Benfield wrote:
I'm not even sure about the specific API we're using for SNI: I might just want to restrict it to emitting new certificates.
I am pro restricting the API, can always relax restrictions later.
Expanding APIs is always leagues easier than contracting them. Starting off with the minimum necessary API makes perfect sense. As needs are found that it cannot meet, then expanding it slowly and methodically will be easy and painless.
In other words, +1 on keeping it small to start and restricting the API.

-- Ian Cordasco
On 13 January 2017 at 07:05, Wes Turner <wes.turner@gmail.com> wrote:
+1 for start simple and iterate; but expecting a config object is not easy to add later.
Yes, it is - all that is necessary is to add a "make_ssl_context" helper function that translates from the declarative configuration format (however defined) to the programmatic API and returns a configured context of the requested type.

The appropriate time to define that lowest-common-denominator configuration format is *after* there is a working programmatic API that covers at least the 3 major implementations of interest (OpenSSL, SecureTransport, SChannel), and hopefully a few other implementations as well (e.g. NSS, BoringSSL).

Cheers,
Nick.

--
Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
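A sketch of such a helper; the setter names are borrowed from Wes's pseudocode above and are illustrative only:

    def make_ssl_context(config, context_factory):
        """Translate a declarative config (e.g. parsed from JSON/TOML/YAML)
        into a configured context via the programmatic API."""
        ctx = context_factory()
        if "certificates" in config:
            ctx.register_certificates(config["certificates"])
        if "ciphers" in config:
            ctx.set_ciphers(config["ciphers"])
        if "inner_protocols" in config:
            ctx.set_inner_protocols(config["inner_protocols"])
        if config.get("sni_callback") is not None:
            ctx.set_sni_callback(config["sni_callback"])
        return ctx

A caller could then write make_ssl_context(json.load(f), SomeBackendClientContext) without caring which backend supplies the methods.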
On Fri, Jan 13, 2017 at 12:09 AM, Nick Coghlan <ncoghlan@gmail.com> wrote:
On 13 January 2017 at 07:05, Wes Turner <wes.turner@gmail.com> wrote:
+1 for start simple and iterate; but expecting a config object is not easy to add later.
Yes, it is - all that is necessary is to add a "make_ssl_context" helper function that translates from the declarative configuration format (however defined) to the programmatic API and returns a configured context of the requested type.
The appropriate time to define that lowest-common-denominator configuration format is *after* there is a working programmatic API that covers at least the 3 major implementations of interest (OpenSSL, SecureTransport, SChannel), and hopefully a few other implementations as well (e.g. NSS, BoringSSL).
So, rather than scattered throughout each implementation, I think it would be wise to provide guidance regarding configuration validation. Enumerating the parameters into a common schema certainly will require actual implementations to define a superset of common dict keys, which I believe would be easier with a standard SSLConfig object.

Another reason I believe there should be a configuration object with a .validate() method for centralized SSL configuration:

- Having one .validate() method (with as many subvalidators as necessary) provides a hook for security-conscious organizations to do SSL/TLS configuration validation in one place. (Otherwise, this type of configuration validation must be frequently re-implemented in an ad-hoc way, which inopportunely admits errors.)
- Examples of SSL/TLS configuration validation criteria:
  - https://en.wikipedia.org/wiki/FIPS_140-2#Cryptographic_Module_Validation_Pro...
  - https://mozilla.github.io/server-side-tls/ssl-config-generator/
  - "CIS Critical Security Controls" https://www.cisecurity.org/critical-controls.cfm
    - CSC 3.[*]
- Least-privilege dictates that this type of config is separate from the code.py files
- "CWE-327: Use of a Broken or Risky Cryptographic Algorithm" (2011 Top 25 #19) https://cwe.mitre.org/top25/#CWE-327 https://cwe.mitre.org/data/definitions/327.html
  - ... "improper configuration" is basically this issue; so, validation should be encouraged where possible.

Again, I think we should encourage validation of SSL/TLS configuration settings; and pointing to SSLConfig.validate() as the method to subclass makes that very easy.
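A sketch of that subclassing hook, assuming the hypothetical SSLConfig object from the earlier pseudocode:

    class OrgPolicyConfig(SSLConfig):
        """One central place for organization-specific TLS checks (illustrative)."""
        def validate(self):
            super().validate()
            # e.g. reject known-broken algorithms (CWE-327)
            for cipher in self.get("ciphers", ()):
                if "RC4" in cipher or "3DES" in cipher:
                    raise ValueError("cipher %r violates org policy" % cipher)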
On 13 January 2017 at 16:32, Wes Turner <wes.turner@gmail.com> wrote:
Another reason I believe there should be a configuration object with a .validate() method for centralized SSL configuration:
- Having one .validate() method (with as many necessary subvalidators) provides a hook for security-conscious organizations to do SSL/TLS configuration validation in one place. (Otherwise, this type of configuration-validation must be frequently-re-implemented in an ad-hoc way; which inopportunely admits errors)
Ah, I see what you mean now - not "Is this a valid configuration for the current TLS implementation?", but rather "Does this TLS context, as configured, comply with a given security policy?"

That also gets back to Christian's observation that:

==========
We have to consider different kinds of min and max version:

1) min and max official SSL/TLS standards
2) min and max supported standards for a TLS implementation
3) min and max version we consider as secure enough
==========

Where points 1 & 2 are descriptive of the standards and particular implementations, point 3 is really about security policy recommendations and enforcement.

I'd still be inclined to keep that concern separate from those of the core TLS context (which is about applying a given security context to sockets and buffers) though, and instead have the specific concept of a TLSPolicy that can be used to create TLS contexts that meet certain criteria:

    class TLSPolicy(abc.ABC):
        @abstractmethod
        def get_context(self, context_factory: Callable[[], TLSContext] = None) -> TLSContext:
            """Create a TLS context instance that complies with the policy"""
            ...

        @abstractmethod
        def validate_context(self, context: TLSContext) -> bool:
            """Returns True if the context complies with the policy, False otherwise"""
            ...

The standard library could then provide a concrete implementation and default policy that was sufficient to enforce minimum TLS versions and the acceptable cipher suites, while still leaving room for more capable third party policy enforcement.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
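A concrete policy along those lines might look like the following sketch (the version attributes and setter are placeholders, not a settled API):

    class MinimumVersionPolicy(TLSPolicy):
        """Enforce a minimum TLS version; all method names are placeholders."""
        def __init__(self, minimum_version):
            self.minimum_version = minimum_version

        def get_context(self, context_factory=None):
            context = (context_factory or TLSContext)()
            context.set_minimum_version(self.minimum_version)  # hypothetical setter
            return context

        def validate_context(self, context):
            return context.minimum_version >= self.minimum_version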
On 13 Jan 2017, at 08:04, Nick Coghlan <ncoghlan@gmail.com> wrote:
I'd still be inclined to keep that concern separate from those of the core TLS context (which is about applying a given security context to sockets and buffers) though, and instead have the specific concept of a TLSPolicy that can be used to create TLS contexts that meet certain criteria:
… snip ...
The standard library could then provide a concrete implementation and default policy that was sufficient to enforce minimum TLS versions and the acceptable cipher suites, while still leaving room for more capable third party policy enforcement.
So, I’d like to rein this discussion in a bit here.

The idea of being able to validate the security and correctness of a collection of TLS settings is a valuable idea. It is also vastly beyond the scope of the problem I’m trying to solve here. I want to very strongly resist the desire that a lot of the community will have to attach additional complexity and responsibility to this module. While it’s very understandable to want to do that (after all, we don’t revisit our standard library TLS functionality often), it will have the effect of chaining a concrete block to this PEP’s ankle and then throwing it into the water.

I want us to develop a culture around TLS functionality where we are willing to continually iterate on it, and where the community feels willing-and-able to provide extensions and new functionality incrementally, rather than all at once. Let’s not have the perfect be the enemy of the good here. I highly recommend that people who are interested in TLS policy workshop a separate PEP that discusses what we should build to resolve that issue. But we should not block solving problem A (how do we use something that isn’t OpenSSL with the standard library) on solving problem B (how do we define and enforce TLS policy).

Cory
On Fri, Jan 13, 2017 at 12:27 AM, Cory Benfield <cory@lukasa.co.uk> wrote:
On 13 Jan 2017, at 08:04, Nick Coghlan <ncoghlan@gmail.com> wrote:
I'd still be inclined to keep that concern separate from those of the core TLS context (which is about applying a given security context to sockets and buffers) though, and instead have the specific concept of a TLSPolicy that can be used to create TLS contexts that meet certain criteria:
… snip ...
The standard library could then provide a concrete implementation and default policy that was sufficient to enforce minimum TLS versions and the acceptable cipher suites, while still leaving room for more capable third party policy enforcement.
So, I’d like to rein this discussion in a bit here.
The idea of being able to validate the security and correctness of a collection of TLS settings is a valuable idea. It is also vastly beyond the scope of the problem I’m trying to solve here. I want to very strongly resist the desire that a lot of the community will have to attach additional complexity and responsibility to this module. While it’s very understandable to want to do that (after all, we don’t revisit our standard library TLS functionality often), it will have the effect of chaining a concrete block to this PEP’s ankle and then throwing it into the water.
I want us to develop a culture around TLS functionality where we are willing to continually iterate on it, and where the community feels willing-and-able to provide extensions and new functionality incrementally, rather than all at once. Let’s not have the perfect be the enemy of the good here. I highly recommend that people who are interested in TLS policy workshop a separate PEP that discusses what we should build to resolve that issue. But we should not block solving problem A (how do we use something that isn’t OpenSSL with the standard library) on solving problem B (how do we define and enforce TLS policy).
The potentially-useful idea I took from this subthread was: right now we have an interface with a bunch of getter/setter methods. Would it make sense to *replace* that with something more declarative, like a single method that takes configuration data in some sort of standard format? Something very non-clever and simple, like, a dict?

That seems like it might both simplify the interface (e.g. the sni callback can return one of these dicts instead of trying to specify OpenSSL's context switcheroo weirdness; if the new dict tries to change options that this particular implementation doesn't allow to be changed, then it's free to error out) and be more flexible (e.g. people could write validation routines that take one of these dicts and raise an error or not -- having a standard format enables this but also makes it not our problem).

-n

--
Nathaniel J. Smith -- https://vorpus.org
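For instance, the SNI callback under this model might be sketched as (keys invented for illustration):

    def sni_callback(server_name):
        # Return a plain dict of config changes rather than mutating an
        # implementation-specific context; the backend applies the changes
        # or errors out if a key can't be changed at this point.
        if server_name == "legacy.example.com":
            return {"certificate_chain": "/etc/tls/legacy.pem"}
        return {}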
On 13 Jan 2017, at 08:48, Nathaniel Smith <njs@pobox.com> wrote:
The potentially-useful idea I took from this subthread was: right now we have an interface with a bunch of getter/setter methods. Would it make sense to *replace* that with something more declarative, like a single method that takes configuration data in some sort of standard format? Something very non-clever and simple, like, a dict? That seems like it might both simplify the interface (e.g. the sni callback can return one of these dicts instead of trying to specify OpenSSL's context switcheroo weirdness; if the new dict tries to change options that this particular implementation doesn't allow to be changed, then it's free to error out) and be more flexible (e.g. people could write validation routines that take one of these dicts and raise an error or not -- having a standard format enables this but also makes it not our problem).
Hmm.

I am extremely unsure. At the risk of sounding facetious, these two solutions are *basically* isomorphic to one another: that is, an object with a bunch of getters/setters is basically a dict, it’s just not something you can shove arbitrary data into.

That’s my main reason for preferring an object to a dict: I want to eliminate a whole class of bugs that comes from accidentally doing `config['ssl_version_minimum'] = TLSv1_1` instead of `config['tls_version_minimum'] = TLSv1_1`. Put another way, the dict approach either requires that we override __setitem__ to validate all keys, or that we allow typo-based errors where people accidentally get the default. The second is terrifying, so we’d have to do the first, and at that point we’re basically just writing an object. ;)

I think the actual key here is that the “Context” object should not be confused with an *actual* Context object from the backing library. That is, a ClientContext() for OpenSSL does not need to compose onto it an SSL_CTX from OpenSSL. It is quite reasonable for the ClientContext to just be a bucket of data that manufactures SSL_CTX objects when wrap_socket is called.

I don’t see why validation routines couldn’t run on Context objects directly. You could even do this:

    class ClientValidationContext(ClientContext):
        def validate(self):
            """Checks internal state and returns whether or not the context meets your criteria"""
            pass

Essentially, because all code operates on the abstract objects, one of the things you can do for validation is insert a *validating* implementation of the abstract object. It’s exactly like a test fake, but for validation.

Does that make any sense, or have I not had enough coffee yet?

Cory
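Usage would then mirror a test fake; in this sketch, configure_application_tls stands in for the application's own setup code:

    ctx = ClientValidationContext()
    configure_application_tls(ctx)  # the app's normal TLS setup, unchanged
    assert ctx.validate()           # flags state that fails your criteria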
On Fri, Jan 13, 2017 at 12:55 AM, Cory Benfield <cory@lukasa.co.uk> wrote:
On 13 Jan 2017, at 08:48, Nathaniel Smith <njs@pobox.com> wrote:
The potentially-useful idea I took from this subthread was: right now we have an interface with a bunch of getter/setter methods. Would it make sense to *replace* that with something more declarative, like a single method that takes configuration data in some sort of standard format? Something very non-clever and simple, like, a dict? That seems like it might both simplify the interface (e.g. the sni callback can return one of these dicts instead of trying to specify OpenSSL's context switcheroo weirdness; if the new dict tries to change options that this particular implementation doesn't allow to be changed, then it's free to error out) and be more flexible (e.g. people could write validation routines that take one of these dicts and raise an error or not -- having a standard format enables this but also makes it not our problem).
Hmm.
I am extremely unsure. At the risk of sounding facetious, these two solutions are *basically* isomorphic to one another: that is, an object with a bunch of getters/setters is basically a dict, it’s just not something you can shove arbitrary data into.
That’s my main reason for preferring an object to a dict: I want to eliminate a whole class of bugs that comes from accidentally doing `config['ssl_version_minimum'] = TLSv1_1` instead of `config['tls_version_minimum'] = TLSv1_1`. Put another way, the dict approach either requires that we override __setitem__ to validate all keys, or that we allow typo-based errors where people accidentally get the default. The second is terrifying, so we’d have to do the first, and at that point we’re basically just writing an object. ;)
I think the actual key here is that the “Context” object should not be confused with an *actual* Context object from the backing library. That is, a ClientContext() for OpenSSL does not need to compose onto it a SSL_CTX from OpenSSL. It is quite reasonable for the ClientContext to just be a bucket of data that manufactures SSL_CTX objects when wrap_socket is called.
I don’t see why validation routines couldn’t run on Context objects directly. You could even do this:
    class ClientValidationContext(ClientContext):
        def validate(self):
            """Checks internal state and returns whether or not the context meets your criteria"""
            pass
Essentially, because all code operates on the abstract objects, one of the things you can do for validation is insert a *validating* implementation of the abstract object. It’s exactly like a test fake, but for validation.
Does that make any sense, or have I not had enough coffee yet?
I was imagining something along the lines of ClientContext({...}).wrap_socket(...) where ClientContext.__init__ would do the validation. Or there are various other ways to slot the pieces together, but in general validation doesn't need to be done incrementally; there's going to be some place where the config dict hits the library and it can/should be validated at that point. And before that point we get all the dict niceties for free.

-n

--
Nathaniel J. Smith -- https://vorpus.org
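That point-of-entry check could be as small as this sketch (key names hypothetical):

    class ClientContext:
        _KNOWN_KEYS = {"ciphers", "certificate_chain",
                       "tls_version_minimum", "validate_certificates"}

        def __init__(self, config):
            unknown = set(config) - self._KNOWN_KEYS
            if unknown:
                # typo'd keys fail loudly instead of silently using defaults
                raise ValueError("unknown config keys: %s"
                                 % ", ".join(sorted(unknown)))
            self._config = dict(config)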
On 13 Jan 2017, at 09:07, Nathaniel Smith <njs@pobox.com> wrote:
I was imagining something along the lines of
ClientContext({...}).wrap_socket(...)
where ClientContext.__init__ would do the validation. Or there are various other ways to slot the pieces together, but in general validation doesn't need to be done incrementally; there's going to be some place where the config dict hits the library and it can/should be validated at that point. And before that point we get all the dict niceties for free.
What are the dict niceties, exactly? I’m still not seeing what the advantage is in throwing away a typed and restricted API for a dict.

Cory
On Fri, Jan 13, 2017 at 1:09 AM, Cory Benfield <cory@lukasa.co.uk> wrote:
On 13 Jan 2017, at 09:07, Nathaniel Smith <njs@pobox.com> wrote:
I was imagining something along the lines of
ClientContext({...}).wrap_socket(...)
where ClientContext.__init__ would do the validation. Or there are various other ways to slot the pieces together, but in general validation doesn't need to be done incrementally; there's going to be some place where the config dict hits the library and it can/should be validated at that point. And before that point we get all the dict niceties for free.
What are the dict niceties, exactly? I’m still not seeing what the advantage is in throwing away a typed and restricted API for a dict.
Getters & setters (right now the spec has setters for everything, but a getter for validate_certificates only -- that's kinda weird?), an obvious serialization story for the parts of the config that use basic types (e.g. to/from JSON/TOML/YAML), an obvious/natural way to represent "here are the config changes I want to make in response to that ClientHello" for the sni callback as a declarative object, a nice __repr__, familiarity, ... obviously these are all things one can put into the spec piece by piece, but you're the one who said you wanted to simplify things :-).

I get what you're saying about a typed API -- basically in the current setup, the way _BaseContext is written as a bunch of Python methods means that the interpreter will automatically catch if someone tries to call set_cihpers, whereas in the dict version each implementation would have to catch this itself. But in practice this difference seems really minimal to me -- either way it's a runtime check, and it's really difficult to write a loop like

    for key, value in config.items():
        do_some_quirky_cffi_thing_based_on_key()

that doesn't implicitly validate the keys anyway. There's also the option of using e.g. JSON-Schema to write down a formal description of what goes in the dict -- this could actually be substantially stricter than what Python gets you, because you can actually statically enforce that validate_certificates is a bool. For whatever that's worth -- probably not much.

If we really want to simplify it down, then as Alex noted in the PR, TLSWrappedSockets are superfluous, so we could make it just:

    # Simple functions, not methods:
    client_wrap_buffers(config, incoming, outgoing) -> TLSWrappedBuffers
    server_wrap_buffers(config, incoming, outgoing) -> TLSWrappedBuffers

    class TLSWrappedBuffers:
        ... read and all that stuff ...

        @property
        @abstractmethod
        def get_config(self) -> dict:
            pass

        def update_config(self, new_settings):
            # Attempt to do the equivalent of self.config.update(new_settings);
            # raises TLSError if any of the settings can't be changed given
            # current backend or current connection state.

Anyway, I'm not actually sure if this is a good idea, but it seemed worth putting the two approaches up where we could look at them and compare :-).

-n

--
Nathaniel J. Smith -- https://vorpus.org
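Hot reconfiguration in that model would then look something like this sketch (the config key is invented):

    buffers = client_wrap_buffers(config, incoming, outgoing)
    # ... later, e.g. after a certificate rotation:
    buffers.update_config({"certificate_chain": "/etc/tls/rotated.pem"})
    # raises TLSError if this backend or connection state can't change it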
On 13 January 2017 at 18:27, Cory Benfield <cory@lukasa.co.uk> wrote:
On 13 Jan 2017, at 08:04, Nick Coghlan <ncoghlan@gmail.com> wrote:
I'd still be inclined to keep that concern separate from those of the core TLS context (which is about applying a given security context to sockets and buffers) though, and instead have the specific concept of a TLSPolicy that can be used to create TLS contexts that meet certain criteria:
… snip ...
The standard library could then provide a concrete implementation and default policy that was sufficient to enforce minimum TLS versions and the acceptable cipher suites, while still leaving room for more capable third party policy enforcement.
So, I’d like to rein this discussion in a bit here.
The idea of being able to validate the security and correctness of a collection of TLS settings is a valuable idea. It is also vastly beyond the scope of the problem I’m trying to solve here. I want to very strongly resist the desire that a lot of the community will have to attach additional complexity and responsibility to this module. While it’s very understandable to want to do that (after all, we don’t revisit our standard library TLS functionality often), it will have the effect of chaining a concrete block to this PEP’s ankle and then throwing it into the water.
Note that I'd be reasonably happy to have my post interpreted as "We can leave policy definition and management until later without really hindering our ability to add it whenever we want to" - the only potential change to TLSContext would be to accept an optional "policy" parameter when creating instances, and that's relatively easy to add to implementations in a backwards compatible way (the only slight awkwardness lies in deciding whether or not to pass it along to base classes during initialization).

However, I'm also aware that one of the awkward points in the current SSL module is the significant difference in behaviour between calling ssl.create_default_context() (good defaults) and instantiating ssl.SSLContext directly (bad defaults). These are two different SSL/TLS security policies represented as distinct SSLContext factories.

For the new module, that can be changed such that tls.TLSContext has inherently reasonable defaults and a priori notice that the defaults may change in maintenance releases, so we only have a single TLSContext factory and a single default TLS configuration policy.

Unfortunately, that would still leave us with the question of how an API user can compare their own default settings to the standard library ones (e.g. to emit a warning or error when testing, even if they've opted out of relying directly on the defaults at runtime). If I'm understanding Wes correctly, this is the gist of the problem he's worried about as well - it doesn't do any of us or the overall ecosystem any good if incrementally evolving the default settings in the standard library leads to a significant number of developers instead defining their own "snapshot in time" TLS settings and then never updating them.

By contrast, if the "default TLS policy" is pulled out as a distinct object that the tls.TLSContext constructor *uses* to configure the instance, then it lets us say "If you don't want to rely on the default settings at runtime, at least validate your own defaults against the standard library's TLS policy as part of your test suite".

So overall, I think separating out "the standard library's default TLS settings" as an importable and introspectable object may actually make the overall design simpler and easier to explain. It just doesn't look that way right now because the initial draft glosses over the question of how to define and manage the default security policy (which is an entirely reasonable approach to have taken!).

Cheers,
Nick.

--
Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On 2017-01-13 12:02, Nick Coghlan wrote:
On 13 January 2017 at 18:27, Cory Benfield <cory@lukasa.co.uk> wrote:
On 13 Jan 2017, at 08:04, Nick Coghlan <ncoghlan@gmail.com> wrote:
I'd still be inclined to keep that concern separate from those of the core TLS context (which is about applying a given security context to sockets and buffers) though, and instead have the specific concept of a TLSPolicy that can be used to create TLS contexts that meet certain criteria:
… snip ...
The standard library could then provide a concrete implementation and default policy that was sufficient to enforce minimum TLS versions and the acceptable cipher suites, while still leaving room for more capable third party policy enforcement.
So, I’d like to rein this discussion in a bit here.
The idea of being able to validate the security and correctness of a collection of TLS settings is a valuable idea. It is also vastly beyond the scope of the problem I’m trying to solve here. I want to very strongly resist the desire that a lot of the community will have to attach additional complexity and responsibility to this module. While it’s very understandable to want to do that (after all, we don’t revisit our standard library TLS functionality often), it will have the effect of chaining a concrete block to this PEP’s ankle and then throwing it into the water.
Note that I'd be reasonably happy to have my post interpreted as "We can leave policy definition and management until later without really hindering our ability to add it whenever we want to" - the only potential change to TLSContext would be to accept an optional "policy" parameter when creating instances, and that's relatively easy to add to implementations in a backwards compatible way (the only slight awkwardness lies in deciding whether or not to pass it along to base classes during initialization).
However, I'm also aware that one of the awkwards points in the current SSL module is the significant difference in behaviour between calling ssl.create_default_context() (good defaults) and instantiating ssl.SSLContext directly (bad defaults). These are two different SSL/TLS security policies represented as distinct SSLContext factories.
I'm addressing this issue with a new PEP (WIP) independently from Cory's PEP. In 3.6 SSLContext got a bit more sane. In the future we are going to have two protocols only (TLS_PROTOCOL_SERVER and TLS_PROTOCOL_CLIENT) with sane default values for both contexts. Client side context will default to cert and hostname verification, good cipher selection, but no root CA certs.

In fact I'm planning to write two PEPs: the first addresses the 'secure by default' issue and the second addresses various issues with hostname verification [1]. The second PEP will also require OpenSSL >= 1.0.2 and LibreSSL >= 2.5.0, because it depends on X509_VERIFY_PARAM_set1_host().

Christian

[1] https://speakerdeck.com/tiran/pyconpl-2016-keynote-tales-from-python-securit...
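For reference, the defaults gap Nick describes is visible directly in the 3.6 ssl module:

    import ssl

    # The defaults split being discussed, as of Python 3.6:
    good = ssl.create_default_context()     # CERT_REQUIRED, check_hostname=True
    bad = ssl.SSLContext(ssl.PROTOCOL_TLS)  # CERT_NONE, no hostname checking

    assert good.verify_mode == ssl.CERT_REQUIRED and good.check_hostname
    assert bad.verify_mode == ssl.CERT_NONE and not bad.check_hostname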
On 13 January 2017 at 19:33, Nathaniel Smith <njs@pobox.com> wrote:
On Fri, Jan 13, 2017 at 1:09 AM, Cory Benfield <cory@lukasa.co.uk> wrote:

I get what you're saying about a typed API -- basically in the current setup, the way _BaseContext is written as a bunch of Python methods means that the interpreter will automatically catch if someone tries to call set_cihpers, whereas in the dict version each implementation would have to catch this itself. But in practice this difference seems really minimal to me -- either way it's a runtime check
The difference isn't minimal at all from the perspective of:

- static type analysis
- runtime introspection
- IDEs (which use a mixture of both)
- documentation

Config dictionary based APIs have *major* problems in all those respects, as they're ultimately just a bunch of opaque-to-the-compiler keys and values. Network security is a sufficiently hard problem that we want to make it as easy as possible for developers to bring additional tools for ensuring code correctness to bear :)
and it's really difficult to write a loop like
    for key, value in config.items():
        do_some_quirky_cffi_thing_based_on_key()
that doesn't implicitly validate the keys anyway. There's also the option of using e.g. JSON-Schema to write down a formal description of what goes in the dict -- this could actually be substantially stricter than what Python gets you, because you can actually statically enforce that validate_certificates is a bool. For whatever that's worth -- probably not much.
I've generally found that it's easier to build a declarative API atop a programmatic API than it is to tackle a problem the other way around:

1. Even if the only public API is declarative, there's going to be a programmatic API that implements the actual processing of the declarative requests
2. Working with the programmatic API provides insight into which features the declarative API should cover, and which it can skip
3. Programmatic APIs are often easier to test, since you can more readily isolate the effects of the individual operations
4. A public programmatic API serves as a natural constraint and test case for any subsequent declarative API

Cheers,
Nick.

--
Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
participants (7):
- Christian Heimes
- Cory Benfield
- Donald Stufft
- Ian Cordasco
- Nathaniel Smith
- Nick Coghlan
- Wes Turner