Unified TLS API for Python: Round 2
All,

Thanks for your feedback on the draft PEP I proposed last week! There was a lot of really enthusiastic and valuable feedback both on this mailing list and on GitHub. I believe I’ve addressed a lot of the concerns that were brought up with the PEP now, so I’d like to ask that interested parties take another look.

Some quick hits on the changes:

- I’ve introduced a TLSConfiguration object and extracted all configuration from the Context onto it. This object is an immutable one (it’s strictly a slightly-fancy namedtuple), which provides some really huge advantages in terms of managing them and allowing concrete implementations to use Configuration objects as dictionary keys.
- I’ve fleshed out the cipher suite section.
- I’ve split out read_into.
- I’ve stopped allowing passwords to be strings: they must now always be bytes.
- I’ve dramatically changed the SNI callback to take advantage of the TLSConfiguration.
- I’ve added support for trust stores.
- Several other smaller changes.

Please let me know what you think. The version of the draft PEP, from commit ce74bc60, is reproduced below.

Thanks!

Cory

—

PEP: XXX
Title: TLS Abstract Base Classes
Version: $Revision$
Last-Modified: $Date$
Author: Cory Benfield <cory@lukasa.co.uk>
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 17-Oct-2016
Python-Version: 3.7
Post-History: 30-Aug-2002

Abstract
========

This PEP would define a standard TLS interface in the form of a collection of abstract base classes. This interface would allow Python implementations and third-party libraries to provide bindings to TLS libraries other than OpenSSL that can be used by tools that expect the interface provided by the Python standard library, with the goal of reducing the dependence of the Python ecosystem on OpenSSL.

Rationale
=========

In the 21st century it has become increasingly clear that robust and user-friendly TLS support is an extremely important part of the ecosystem of any popular programming language. For most of its lifetime, this role in the Python ecosystem has primarily been served by the `ssl module`_, which provides a Python API to the `OpenSSL library`_.

Because the ``ssl`` module is distributed with the Python standard library, it has become the overwhelmingly most-popular method for handling TLS in Python. An extraordinary majority of Python libraries, both in the standard library and on the Python Package Index, rely on the ``ssl`` module for their TLS connectivity.

Unfortunately, the preeminence of the ``ssl`` module has had a number of unforeseen side-effects that have had the effect of tying the entire Python ecosystem tightly to OpenSSL. This has forced Python users to use OpenSSL even in situations where it may provide a worse user experience than alternative TLS implementations, which imposes a cognitive burden and makes it hard to provide "platform-native" experiences.

Problems
--------

The fact that the ``ssl`` module is built into the standard library has meant that all standard-library Python networking libraries are entirely reliant on the OpenSSL that the Python implementation has been linked against. This leads to the following issues:

* It is difficult to take advantage of new, higher-security TLS without recompiling Python to get a new OpenSSL. While there are third-party bindings to OpenSSL (e.g. `pyOpenSSL`_), these need to be shimmed into a format that the standard library understands, forcing projects that want to use them to maintain substantial compatibility layers.
* Windows distributions of Python need to be shipped with a copy of OpenSSL. This puts the CPython development team in the position of being OpenSSL redistributors, potentially needing to ship security updates to the Windows Python distributions when OpenSSL vulnerabilities are released.

* macOS distributions of Python need either to be shipped with a copy of OpenSSL or linked against the system OpenSSL library. Apple has formally deprecated linking against the system OpenSSL library, and even if they had not, that library version has been unsupported by upstream for nearly one year as of the time of writing. The CPython development team has started shipping newer OpenSSLs with the Python available from python.org, but this has the same problem as with Windows.

* Many systems, including but not limited to Windows and macOS, do not make their system certificate stores available to OpenSSL. This forces users either to obtain their trust roots from elsewhere (e.g. `certifi`_) or to attempt to export their system trust stores in some form.

  Relying on `certifi`_ is less than ideal, as most system administrators do not expect to receive security-critical software updates from PyPI. Additionally, it is not easy to extend the `certifi`_ trust bundle to include custom roots, or to centrally manage trust using the `certifi`_ model.

  Even in situations where the system certificate stores are made available to OpenSSL in some form, the experience is still sub-standard, as OpenSSL will perform different validation checks than the platform-native TLS implementation. This can lead to users experiencing different behaviour in their browsers or other platform-native tools than they experience in Python, with little or no recourse to resolve the problem.

* Users may wish to integrate with TLS libraries other than OpenSSL for many other reasons, such as OpenSSL missing features (e.g. TLS 1.3 support), or because OpenSSL is simply too large and unwieldy for the platform (e.g. for embedded Python). Those users are left with the requirement to use third-party networking libraries that can interact with their preferred TLS library, or to shim their preferred library into the OpenSSL-specific ``ssl`` module API.

Additionally, the ``ssl`` module as implemented today limits the ability of CPython itself to add support for alternative TLS backends, or to remove OpenSSL support entirely, should either of these become necessary or useful. The ``ssl`` module exposes too many OpenSSL-specific function calls and features to easily map to an alternative TLS backend.

Proposal
========

This PEP proposes to introduce a few new Abstract Base Classes in Python 3.7 to provide TLS functionality that is not so strongly tied to OpenSSL. It also proposes to update standard library modules to use only the interface exposed by these abstract base classes wherever possible. There are three goals here:

1. To provide a common API surface for both core and third-party developers to target their TLS implementations to. This allows TLS developers to provide interfaces that can be used by most Python code, and allows network developers to have an interface that they can target that will work with a wide range of TLS implementations.

2. To provide an API through which few or no OpenSSL-specific concepts leak. The ``ssl`` module today has a number of warts caused by leaking OpenSSL concepts through to the API: the new ABCs would remove those specific concepts.

3. To provide a path for the core development team to make OpenSSL one of many possible TLS backends, rather than requiring that it be present on a system in order for Python to have TLS support.

The proposed interface is laid out below.

Abstract Base Classes
---------------------

There are several interfaces that require standardisation. Those interfaces are:

1. Configuring TLS, currently implemented by the `SSLContext`_ class in the ``ssl`` module.
2. Wrapping a socket object, currently implemented by the `SSLSocket`_ class in the ``ssl`` module.
3. Providing an in-memory buffer for doing in-memory encryption or decryption with no actual I/O (necessary for asynchronous I/O models), currently implemented by the `SSLObject`_ class in the ``ssl`` module.
4. Applying TLS configuration to the wrapping objects in (2) and (3). Currently this is also implemented by the `SSLContext`_ class in the ``ssl`` module.
5. Specifying TLS cipher suites. There is currently no code for doing this in the standard library: instead, the standard library uses OpenSSL cipher suite strings.
6. Specifying application-layer protocols that can be negotiated during the TLS handshake.
7. Specifying TLS versions.
8. Reporting errors to the caller, currently implemented by the `SSLError`_ class in the ``ssl`` module.
9. Specifying certificates to load, either as client or server certificates.
10. Specifying which trust database should be used to validate certificates presented by a remote peer.

While it is technically possible to define (2) in terms of (3), for the sake of simplicity it is easier to define these as two separate ABCs. Implementations are of course free to implement the concrete subclasses however they see fit.

Obviously, (5) doesn't require an abstract base class: instead, it requires a richer API for configuring supported cipher suites that can be easily updated with supported cipher suites for different implementations.

(9) is a thorny problem, because in an ideal world the private keys associated with these certificates would never end up in-memory in the Python process (that is, the TLS library would collaborate with a Hardware Security Module (HSM) to provide the private key in such a way that it cannot be extracted from process memory). Thus, we need to provide an extensible model of providing certificates that allows concrete implementations the ability to provide this higher level of security, while also allowing a lower bar for those implementations that cannot. This lower bar would be the same as the status quo: that is, the certificate may be loaded from an in-memory buffer or from a file on disk.

(10) also represents an issue because different TLS implementations vary wildly in how they allow users to select trust stores. Some implementations have specific trust store formats that only they can use (such as the OpenSSL CA directory format that is created by ``c_rehash``), and others may not allow you to specify a trust store that does not include their default trust store. For this reason, we need to provide a model that assumes very little about the form that trust stores take. The "Trust Store" section below goes into more detail about how this is achieved.

Finally, this API will split the responsibilities currently assumed by the `SSLContext`_ object: specifically, the responsibility for holding and managing configuration and the responsibility for using that configuration to build wrapper objects. This is necessary primarily for supporting functionality like Server Name Indication (SNI).
In OpenSSL (and thus in the ``ssl`` module), the server has the ability to modify the TLS configuration in response to the client telling the server what hostname it is trying to reach. This is mostly used to change the certificate chain so as to present the correct TLS certificate chain for the given hostname. The specific mechanism by which this is done is by returning a new `SSLContext`_ object with the appropriate configuration.

This is not a model that maps well to other TLS implementations. Instead, we need to make it possible to provide a return value from the SNI callback that can be used to indicate what configuration changes should be made. This means providing an object that can hold TLS configuration. This object needs to be applied to specific TLSWrappedBuffer and TLSWrappedSocket objects.

For this reason, we split the responsibility of `SSLContext`_ into two separate objects. The ``TLSConfiguration`` object is an object that acts as a container for TLS configuration: the ``ClientContext`` and ``ServerContext`` objects are objects that are instantiated with a ``TLSConfiguration`` object. Both objects would be immutable.

Configuration
~~~~~~~~~~~~~

The ``TLSConfiguration`` concrete class defines an object that can hold and manage TLS configuration. The goals of this class are as follows:

1. To provide a method of specifying TLS configuration that avoids the risk of errors in typing (this excludes the use of a simple dictionary).

2. To provide an object that can be safely compared to other configuration objects to detect changes in TLS configuration, for use with the SNI callback.

This class is not an ABC, primarily because it is not expected to have implementation-specific behaviour. The responsibility for transforming a ``TLSConfiguration`` object into a useful set of configuration for a given TLS implementation belongs to the Context objects discussed below.

This class has one other notable property: it is immutable. This is a desirable trait for a few reasons. The most important one is that it allows these objects to be used as dictionary keys, which is potentially extremely valuable for certain TLS backends and their SNI configuration. On top of this, it frees implementations from needing to worry about their configuration objects being changed under their feet, which allows them to avoid needing to carefully synchronize changes between their concrete data structures and the configuration object.

The ``TLSConfiguration`` object would be defined by the following code::

    ServerNameCallback = Callable[[TLSBufferObject, Optional[str], TLSConfiguration], Any]


    _configuration_fields = [
        'validate_certificates',
        'certificate_chain',
        'ciphers',
        'inner_protocols',
        'lowest_supported_version',
        'highest_supported_version',
        'trust_store',
        'sni_callback',
    ]


    _DEFAULT_VALUE = object()


    class TLSConfiguration(namedtuple('TLSConfiguration', _configuration_fields)):
        """
        An immutable TLS Configuration object. This object has the following
        properties:

        :param validate_certificates bool: Whether to validate the TLS
            certificates. This switch operates at a very broad scope: either
            validation is enabled, in which case all forms of validation are
            performed including hostname validation if possible, or validation
            is disabled, in which case no validation is performed.

            Not all backends support having their certificate validation
            disabled. If a backend does not support having their certificate
            validation disabled, attempting to set this property to ``False``
            will throw a ``TLSError`` when this object is passed into a
            context object.

        :param certificate_chain Tuple[Tuple[Certificate], PrivateKey]: The
            certificate, intermediate certificates, and the corresponding
            private key for the leaf certificate. These certificates will be
            offered to the remote peer during the handshake if required.

            The first Certificate in the list must be the leaf certificate.
            All subsequent certificates will be offered as intermediate
            additional certificates.

        :param ciphers Tuple[CipherSuite]: The available ciphers for TLS
            connections created with this configuration, in priority order.

        :param inner_protocols Tuple[Union[NextProtocol, bytes]]: Protocols
            that connections created with this configuration should advertise
            as supported during the TLS handshake. These may be advertised
            using either or both of ALPN or NPN. This list of protocols should
            be ordered by preference.

        :param lowest_supported_version TLSVersion: The minimum version of
            TLS that should be allowed on TLS connections using this
            configuration.

        :param highest_supported_version TLSVersion: The maximum version of
            TLS that should be allowed on TLS connections using this
            configuration.

        :param trust_store TrustStore: The trust store that connections using
            this configuration will use to validate certificates.

        :param sni_callback Optional[ServerNameCallback]: A callback function
            that will be called after the TLS Client Hello handshake message
            has been received by the TLS server when the TLS client specifies
            a server name indication.

            Only one callback can be set per ``TLSConfiguration``. If the
            ``sni_callback`` is ``None`` then the callback is disabled. If the
            ``TLSConfiguration`` is used for a ``ClientContext`` then this
            setting will be ignored.

            The ``callback`` function will be called with three arguments: the
            first will be the ``TLSBufferObject`` for the connection; the
            second will be a string that represents the server name that the
            client is intending to communicate with (or ``None`` if the TLS
            Client Hello does not contain a server name); and the third
            argument will be the original ``Context``. The server name
            argument will be the IDNA *decoded* server name.

            The ``callback`` must return a ``TLSConfiguration`` to allow
            negotiation to continue. Other return values signal errors.
            Attempting to control what error is signaled by the underlying TLS
            implementation is not specified in this API, but is up to the
            concrete implementation to handle.

            The Context will do its best to apply the ``TLSConfiguration``
            changes from its original configuration to the incoming
            connection. This will usually include changing the certificate
            chain, but may also include changes to allowable ciphers or any
            other configuration settings.
        """
        __slots__ = ()

        def __new__(cls,
                    validate_certificates: Optional[bool] = None,
                    certificate_chain: Optional[Tuple[Tuple[Certificate], PrivateKey]] = None,
                    ciphers: Optional[Tuple[CipherSuite]] = None,
                    inner_protocols: Optional[Tuple[Union[NextProtocol, bytes]]] = None,
                    lowest_supported_version: Optional[TLSVersion] = None,
                    highest_supported_version: Optional[TLSVersion] = None,
                    trust_store: Optional[TrustStore] = None,
                    sni_callback: Optional[ServerNameCallback] = None):

            if validate_certificates is None:
                validate_certificates = True

            if ciphers is None:
                ciphers = DEFAULT_CIPHER_LIST

            if inner_protocols is None:
                inner_protocols = []

            if lowest_supported_version is None:
                lowest_supported_version = TLSVersion.TLSv1

            if highest_supported_version is None:
                highest_supported_version = TLSVersion.MAXIMUM_SUPPORTED

            return super().__new__(
                cls, validate_certificates, certificate_chain, ciphers,
                inner_protocols, lowest_supported_version,
                highest_supported_version, trust_store, sni_callback
            )

        def update(self,
                   validate_certificates=_DEFAULT_VALUE,
                   certificate_chain=_DEFAULT_VALUE,
                   ciphers=_DEFAULT_VALUE,
                   inner_protocols=_DEFAULT_VALUE,
                   lowest_supported_version=_DEFAULT_VALUE,
                   highest_supported_version=_DEFAULT_VALUE,
                   trust_store=_DEFAULT_VALUE,
                   sni_callback=_DEFAULT_VALUE):
            """
            Create a new ``TLSConfiguration``, overriding some of the settings
            on the original configuration with the new settings.
            """
            if validate_certificates is _DEFAULT_VALUE:
                validate_certificates = self.validate_certificates

            if certificate_chain is _DEFAULT_VALUE:
                certificate_chain = self.certificate_chain

            if ciphers is _DEFAULT_VALUE:
                ciphers = self.ciphers

            if inner_protocols is _DEFAULT_VALUE:
                inner_protocols = self.inner_protocols

            if lowest_supported_version is _DEFAULT_VALUE:
                lowest_supported_version = self.lowest_supported_version

            if highest_supported_version is _DEFAULT_VALUE:
                highest_supported_version = self.highest_supported_version

            if trust_store is _DEFAULT_VALUE:
                trust_store = self.trust_store

            if sni_callback is _DEFAULT_VALUE:
                sni_callback = self.sni_callback

            return self.__class__(
                validate_certificates, certificate_chain, ciphers,
                inner_protocols, lowest_supported_version,
                highest_supported_version, trust_store, sni_callback
            )

Context
~~~~~~~

The ``Context`` abstract base class defines an object that allows configuration of TLS. It can be thought of as a factory for ``TLSWrappedSocket`` and ``TLSWrappedBuffer`` objects. As much as possible, implementers should aim to make these classes immutable: that is, they should prefer not to allow users to mutate their internal state directly, instead preferring to create new contexts from new TLSConfiguration objects. Obviously, the ABCs cannot enforce this constraint, and so they do not attempt to.

The ``Context`` abstract base class has the following class definition::

    TLSBufferObject = Union[TLSWrappedSocket, TLSWrappedBuffer]


    class _BaseContext(metaclass=ABCMeta):

        @abstractmethod
        def __init__(self, configuration: TLSConfiguration):
            """
            Create a new context object from a given TLS configuration.
            """

        @property
        @abstractmethod
        def configuration(self) -> TLSConfiguration:
            """
            Returns the TLS configuration that was used to create the context.
            """


    class ClientContext(_BaseContext):

        @abstractmethod
        def wrap_socket(self,
                        socket: socket.socket,
                        auto_handshake: bool = True,
                        server_hostname: Optional[str] = None) -> TLSWrappedSocket:
            """
            Wrap an existing Python socket object ``socket`` and return a
            ``TLSWrappedSocket`` object. ``socket`` must be a ``SOCK_STREAM``
            socket: all other socket types are unsupported.
            The returned SSL socket is tied to the context, its settings and
            certificates. The parameter ``auto_handshake`` specifies whether
            to do the SSL handshake automatically after doing a
            ``socket.connect()``, or whether the application program will call
            it explicitly, by invoking the
            ``TLSWrappedSocket.do_handshake()`` method. Calling
            ``TLSWrappedSocket.do_handshake()`` explicitly gives the program
            control over the blocking behavior of the socket I/O involved in
            the handshake.

            The optional parameter ``server_hostname`` specifies the hostname
            of the service which we are connecting to. This allows a single
            server to host multiple SSL-based services with distinct
            certificates, quite similarly to HTTP virtual hosts.
            """

        @abstractmethod
        def wrap_buffers(self, incoming: Any, outgoing: Any,
                         server_hostname: Optional[str] = None) -> TLSWrappedBuffer:
            """
            Wrap a pair of buffer objects (``incoming`` and ``outgoing``) to
            create an in-memory stream for TLS. The SSL routines will read
            data from ``incoming`` and decrypt it, and write encrypted data
            to ``outgoing``.

            The ``server_hostname`` parameter has the same meaning as in
            ``wrap_socket``.
            """


    class ServerContext(_BaseContext):

        @abstractmethod
        def wrap_socket(self, socket: socket.socket,
                        auto_handshake: bool = True) -> TLSWrappedSocket:
            """
            Wrap an existing Python socket object ``socket`` and return a
            ``TLSWrappedSocket`` object. ``socket`` must be a ``SOCK_STREAM``
            socket: all other socket types are unsupported.

            The returned SSL socket is tied to the context, its settings and
            certificates. The parameter ``auto_handshake`` specifies whether
            to do the SSL handshake automatically after doing a
            ``socket.connect()``, or whether the application program will call
            it explicitly, by invoking the
            ``TLSWrappedSocket.do_handshake()`` method. Calling
            ``TLSWrappedSocket.do_handshake()`` explicitly gives the program
            control over the blocking behavior of the socket I/O involved in
            the handshake.
            """

        @abstractmethod
        def wrap_buffers(self, incoming: Any, outgoing: Any) -> TLSWrappedBuffer:
            """
            Wrap a pair of buffer objects (``incoming`` and ``outgoing``) to
            create an in-memory stream for TLS. The SSL routines will read
            data from ``incoming`` and decrypt it, and write encrypted data
            to ``outgoing``.
            """

Socket
~~~~~~

The socket-wrapper ABC will be defined by the ``TLSWrappedSocket`` ABC, which has the following definition::

    class TLSWrappedSocket(metaclass=ABCMeta):
        # The various socket methods all must be implemented. Their
        # definitions have been elided from this class definition in the PEP
        # because they aren't instructive.

        @abstractmethod
        def do_handshake(self) -> None:
            """
            Performs the TLS handshake. Also performs certificate validation
            and hostname verification.
            """

        @abstractmethod
        def cipher(self) -> Optional[CipherSuite]:
            """
            Returns the CipherSuite entry for the cipher that has been
            negotiated on the connection. If no connection has been
            negotiated, returns ``None``.
            """

        @abstractmethod
        def negotiated_protocol(self) -> Optional[Union[NextProtocol, bytes]]:
            """
            Returns the protocol that was selected during the TLS handshake.
            This selection may have been made using ALPN, NPN, or some future
            negotiation mechanism.

            If the negotiated protocol is one of the protocols defined in the
            ``NextProtocol`` enum, the value from that enum will be returned.
            Otherwise, the raw bytestring of the negotiated protocol will be
            returned.

            If ``Context.set_inner_protocols()`` was not called, if the other
            party does not support protocol negotiation, if this socket does
            not support any of the peer's proposed protocols, or if the
            handshake has not happened yet, ``None`` is returned.
            """

        @property
        @abstractmethod
        def context(self) -> Context:
            """
            The ``Context`` object this socket is tied to.
            """

        @abstractproperty
        def negotiated_tls_version(self) -> Optional[TLSVersion]:
            """
            The version of TLS that has been negotiated on this connection.
            """

        @abstractmethod
        def unwrap(self) -> socket.socket:
            """
            Cleanly terminate the TLS connection on this wrapped socket. Once
            called, this ``TLSWrappedSocket`` can no longer be used to
            transmit data. Returns the socket that was wrapped with TLS.
            """

Buffer
~~~~~~

The buffer-wrapper ABC will be defined by the ``TLSWrappedBuffer`` ABC, which has the following definition::

    class TLSWrappedBuffer(metaclass=ABCMeta):

        @abstractmethod
        def read(self, amt: int = None) -> bytes:
            """
            Read up to ``amt`` bytes of data from the input buffer and return
            the result as a ``bytes`` instance. If ``amt`` is ``None``, will
            attempt to read until either EOF is reached or until further
            attempts to read would raise either ``WantReadError`` or
            ``WantWriteError``.

            Raise ``WantReadError`` or ``WantWriteError`` if there is
            insufficient data in either the input or output buffer and the
            operation would have caused data to be written or read.

            As at any time a re-negotiation is possible, a call to ``read()``
            can also cause write operations.
            """

        @abstractmethod
        def read_into(self, buffer: Any, amt: int = None) -> int:
            """
            Read up to ``amt`` bytes of data from the input buffer into
            ``buffer``, which must be an object that implements the buffer
            protocol. Returns the number of bytes read.

            If ``amt`` is ``None``, will attempt to read until either EOF is
            reached or until further attempts to read would raise either
            ``WantReadError`` or ``WantWriteError``, or until the buffer is
            full.

            Raises ``WantReadError`` or ``WantWriteError`` if there is
            insufficient data in either the input or output buffer and the
            operation would have caused data to be written or read.

            As at any time a re-negotiation is possible, a call to
            ``read_into()`` can also cause write operations.
            """

        @abstractmethod
        def write(self, buf: Any) -> int:
            """
            Write ``buf`` in encrypted form to the output buffer and return
            the number of bytes written. The ``buf`` argument must be an
            object supporting the buffer interface.

            Raise ``WantReadError`` or ``WantWriteError`` if there is
            insufficient data in either the input or output buffer and the
            operation would have caused data to be written or read.

            As at any time a re-negotiation is possible, a call to ``write()``
            can also cause read operations.
            """

        @abstractmethod
        def do_handshake(self) -> None:
            """
            Performs the TLS handshake. Also performs certificate validation
            and hostname verification.
            """

        @abstractmethod
        def cipher(self) -> Optional[CipherSuite]:
            """
            Returns the CipherSuite entry for the cipher that has been
            negotiated on the connection. If no connection has been
            negotiated, returns ``None``.
            """

        @abstractmethod
        def negotiated_protocol(self) -> Optional[Union[NextProtocol, bytes]]:
            """
            Returns the protocol that was selected during the TLS handshake.
            This selection may have been made using ALPN, NPN, or some future
            negotiation mechanism.

            If the negotiated protocol is one of the protocols defined in the
            ``NextProtocol`` enum, the value from that enum will be returned.
            Otherwise, the raw bytestring of the negotiated protocol will be
            returned.

            If ``Context.set_inner_protocols()`` was not called, if the other
            party does not support protocol negotiation, if this socket does
            not support any of the peer's proposed protocols, or if the
            handshake has not happened yet, ``None`` is returned.
            """

        @property
        @abstractmethod
        def context(self) -> Context:
            """
            The ``Context`` object this socket is tied to.
            """

        @abstractproperty
        def negotiated_tls_version(self) -> Optional[TLSVersion]:
            """
            The version of TLS that has been negotiated on this connection.
            """

        @abstractmethod
        def shutdown(self) -> None:
            """
            Performs a clean TLS shut down. This should generally be used
            whenever possible to signal to the remote peer that the content is
            finished.
            """

Cipher Suites
~~~~~~~~~~~~~

Supporting cipher suites in a truly library-agnostic fashion is a remarkably difficult undertaking. Different TLS implementations often have *radically* different APIs for specifying cipher suites, but more problematically these APIs frequently differ in capability as well as in style. Some examples are shown below.

OpenSSL
^^^^^^^

OpenSSL uses a well-known cipher string format. This format has been adopted as a configuration language by most products that use OpenSSL, including Python. This format is relatively easy to read, but has a number of downsides: it is a string, which makes it remarkably easy to provide bad inputs; it lacks much detailed validation, meaning that it is possible to configure OpenSSL in a way that doesn't allow it to negotiate any cipher at all; and it allows specifying cipher suites in a number of different ways that make it tricky to parse. The biggest problem with this format is that there is no formal specification for it, meaning that the only way to parse a given string the way OpenSSL would is to get OpenSSL to parse it.

OpenSSL's cipher strings can look like this::

    'ECDH+AESGCM:ECDH+CHACHA20:DH+AESGCM:DH+CHACHA20:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!eNULL:!MD5'

This string demonstrates some of the complexity of the OpenSSL format. For example, it is possible for one entry to specify multiple cipher suites: the entry ``ECDH+AESGCM`` means "all cipher suites that include both elliptic-curve Diffie-Hellman key exchange and AES in Galois Counter Mode". More explicitly, that will expand to four cipher suites::

    "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256"

That makes parsing a complete OpenSSL cipher string extremely tricky. Add to that the fact that there are other meta-characters, such as "!" (exclude all cipher suites that match this criterion, even if they would otherwise be included: "!MD5" means that no cipher suites using the MD5 hash algorithm should be included), "-" (exclude matching ciphers if they were already included, but allow them to be re-added later if they get included again), and "+" (include the matching ciphers, but place them at the end of the list), and you get an *extremely* complex format to parse. On top of this complexity it should be noted that the actual result depends on the OpenSSL version, as an OpenSSL cipher string is valid so long as it contains at least one cipher that OpenSSL recognises.

OpenSSL also uses different names for its ciphers than the names used in the relevant specifications. See the manual page for ``ciphers(1)`` for more details.
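As a quick illustration of that last point, the only reliable way to expand a cipher string is to hand it to OpenSSL itself, which the current OpenSSL-backed ``ssl`` module makes easy to demonstrate (``SSLContext.get_ciphers()`` is available from Python 3.6)::

    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS)
    ctx.set_ciphers("ECDH+AESGCM:!aNULL:!eNULL:!MD5")

    # get_ciphers() reports what the string actually expanded to; the exact
    # list depends on the OpenSSL version Python was linked against.
    for cipher in ctx.get_ciphers():
        print(cipher["name"])

    # A nonsense string only fails once OpenSSL itself rejects it:
    try:
        ctx.set_ciphers("NOT-A-REAL-CIPHER")
    except ssl.SSLError:
        print("OpenSSL could not select any ciphers")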
The actual API inside OpenSSL for the cipher string is simple::

    char *cipher_list = <some cipher list>;
    int rc = SSL_CTX_set_cipher_list(context, cipher_list);

This means that any format that is used by this module must be able to be converted to an OpenSSL cipher string for use with OpenSSL.

SecureTransport
^^^^^^^^^^^^^^^

SecureTransport is the macOS system TLS library. This library is substantially more restricted than OpenSSL in many ways, as it has a much more restricted class of users. One of these substantial restrictions is in controlling supported cipher suites.

Ciphers in SecureTransport are represented by a C ``enum``. This enum has one entry per cipher suite, with no aggregate entries, meaning that it is not possible to reproduce the meaning of an OpenSSL cipher string like "ECDH+AESGCM" without hand-coding which categories each enum member falls into. However, the names of most of the enum members are in line with the formal names of the cipher suites: that is, the cipher suite that OpenSSL calls "ECDHE-ECDSA-AES256-GCM-SHA384" is called "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" in SecureTransport.

The API for configuring cipher suites inside SecureTransport is simple::

    SSLCipherSuite ciphers[] = {TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, ...};
    OSStatus status = SSLSetEnabledCiphers(context, ciphers, sizeof(ciphers));

SChannel
^^^^^^^^

SChannel is the Windows system TLS library. SChannel has extremely restrictive support for controlling available TLS cipher suites, and additionally adopts a third method of expressing what TLS cipher suites are supported.

Specifically, SChannel defines a set of ``ALG_ID`` constants (C unsigned ints). Each of these constants does not refer to an entire cipher suite, but instead to an individual algorithm. Some examples are ``CALG_3DES`` and ``CALG_AES_256``, which refer to the bulk encryption algorithm used in a cipher suite; ``CALG_DH_EPHEM`` and ``CALG_RSA_KEYX``, which refer to part of the key exchange algorithm used in a cipher suite; ``CALG_SHA1`` and ``CALG_MD5``, which refer to the message authentication code used in a cipher suite; and ``CALG_ECDSA`` and ``CALG_RSA_SIGN``, which refer to the signing portions of the key exchange algorithm.

This can be thought of as the half of OpenSSL's functionality that SecureTransport doesn't have: SecureTransport only allows specifying exact cipher suites, while SChannel only allows specifying *parts* of the cipher suite, while OpenSSL allows both.

Determining which cipher suites are allowed on a given connection is done by providing a pointer to an array of these ``ALG_ID`` constants. This means that any suitable API must allow the Python code to determine which ``ALG_ID`` constants must be provided.

Proposed Interface
^^^^^^^^^^^^^^^^^^

The proposed interface for the new module is influenced by the combined set of limitations of the above implementations. Specifically, as every implementation *except* OpenSSL requires that each individual cipher be provided, there is no option but to take that lowest-common-denominator approach.

The simplest approach is to provide an enumerated type that includes all of the cipher suites defined for TLS. The values of the enum members will be their two-octet cipher identifier as used in the TLS handshake, stored as a tuple of integers. The names of the enum members will be their IANA-registered cipher suite names.
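To make that shape concrete, here is a small self-contained sketch (an editorial illustration, not part of the proposed enum; the two-octet identifiers shown are the real IANA-registered ones, and ``wire_bytes`` is an invented helper). The PEP's own sample entries follow below. ::

    from enum import Enum

    class CipherSuite(Enum):
        # Names are the IANA-registered names; values are the two-octet
        # identifiers used on the wire during the TLS handshake.
        TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 = (0xC0, 0x2B)
        TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 = (0xC0, 0x2F)

    def wire_bytes(suites):
        """Serialize a suite list into the handshake's cipher-list bytes."""
        return b"".join(bytes(suite.value) for suite in suites)

    assert wire_bytes(
        [CipherSuite.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256]
    ) == b"\xc0\x2b"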
Rather than populate this enum by hand, it is likely that we'll define a script that can build it from Christian Heimes' `tlsdb JSON file`_ (warning: large file). This also opens up the possibility of extending the API with additional querying functions, such as determining which TLS versions support which ciphers, if that functionality is found to be useful or necessary.

If users find this approach to be onerous, a future extension to this API can provide helpers that can reintroduce OpenSSL's aggregation functionality.

Because this enum would be enormous, the entire enum is not provided here. Instead, a small sample of entries is provided to give a flavor of how it will appear::

    class CipherSuite(Enum):
        ...
        TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA = (0xC0, 0x12)
        ...
        TLS_ECDHE_ECDSA_WITH_AES_128_CCM = (0xC0, 0xAC)
        ...
        TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 = (0xC0, 0x2B)
        ...

Protocol Negotiation
~~~~~~~~~~~~~~~~~~~~

Both NPN and ALPN allow for protocol negotiation as part of the TLS handshake. While NPN and ALPN are, at their fundamental level, built on top of bytestrings, string-based APIs are frequently problematic as they allow for errors in typing that can be hard to detect.

For this reason, this module would define a type that protocol negotiation implementations can pass and be passed. This type would wrap a bytestring to allow for aliases for well-known protocols. This allows us to avoid the problems inherent in typos for well-known protocols, while allowing the full extensibility of the protocol negotiation layer if needed by letting users pass byte strings directly. ::

    class NextProtocol(Enum):
        H2 = b'h2'
        H2C = b'h2c'
        HTTP1 = b'http/1.1'
        WEBRTC = b'webrtc'
        C_WEBRTC = b'c-webrtc'
        FTP = b'ftp'
        STUN = b'stun.nat-discovery'
        TURN = b'stun.turn'

TLS Versions
~~~~~~~~~~~~

It is often useful to be able to restrict the versions of TLS you're willing to support. There are many security advantages in refusing to use old versions of TLS, and some misbehaving servers will mishandle TLS clients advertising support for newer versions.

The following enumerated type can be used to gate TLS versions. Forward-looking applications should almost never set a maximum TLS version unless they absolutely must, as a TLS backend that is newer than the Python that uses it may support TLS versions that are not in this enumerated type.

Additionally, this enumerated type defines two additional flags that can always be used to request either the lowest or highest TLS version supported by an implementation. ::

    class TLSVersion(Enum):
        MINIMUM_SUPPORTED = auto()
        SSLv2 = auto()
        SSLv3 = auto()
        TLSv1 = auto()
        TLSv1_1 = auto()
        TLSv1_2 = auto()
        TLSv1_3 = auto()
        MAXIMUM_SUPPORTED = auto()

Errors
~~~~~~

This module would define three base classes for use with error handling. Unlike the other classes defined here, these classes are not *abstract*, as they have no behaviour. They exist simply to signal certain common behaviours. Backends should subclass these exceptions in their own packages, but needn't define any behaviour for them.

In general, concrete implementations should subclass these exceptions rather than throw them directly. This makes it moderately easier to determine which concrete TLS implementation is in use during debugging of unexpected errors. However, this is not mandatory.

The definitions of the errors are below::

    class TLSError(Exception):
        """
        The base exception for all TLS related errors from any backend.
        Catching this error should be sufficient to catch *all* TLS errors,
        regardless of what backend is used.
        """

    class WantWriteError(TLSError):
        """
        A special signaling exception used only when non-blocking or
        buffer-only I/O is used. This error signals that the requested
        operation cannot complete until more data is written to the network,
        or until the output buffer is drained.
        """

    class WantReadError(TLSError):
        """
        A special signaling exception used only when non-blocking or
        buffer-only I/O is used. This error signals that the requested
        operation cannot complete until more data is read from the network, or
        until more data is available in the input buffer.
        """

Certificates
~~~~~~~~~~~~

This module would define an abstract X509 certificate class. This class would have almost no behaviour, as the goal of this module is not to provide all possible relevant cryptographic functionality that could be provided by X509 certificates. Instead, all we need is the ability to signal the source of a certificate to a concrete implementation.

For that reason, this certificate implementation defines only constructors. In essence, the certificate object in this module could be as abstract as a handle that can be used to locate a specific certificate.

Concrete implementations may choose to provide alternative constructors, e.g. to load certificates from HSMs. If a common interface emerges for doing this, this module may be updated to provide a standard constructor for this use-case as well.

Concrete implementations should aim to have Certificate objects be hashable if at all possible. This will help ensure that TLSConfiguration objects used with an individual concrete implementation are also hashable. ::

    class Certificate(metaclass=ABCMeta):

        @abstractclassmethod
        def from_buffer(cls, buffer: bytes) -> Certificate:
            """
            Creates a Certificate object from a byte buffer. This byte buffer
            may be either PEM-encoded or DER-encoded. If the buffer is PEM
            encoded it *must* begin with the standard PEM preamble (a series
            of dashes followed by the ASCII bytes "BEGIN CERTIFICATE" and
            another series of dashes). In the absence of that preamble, the
            implementation may assume that the certificate is DER-encoded
            instead.
            """

        @abstractclassmethod
        def from_file(cls, path: Union[pathlib.Path, AnyStr]) -> Certificate:
            """
            Creates a Certificate object from a file on disk. This method may
            be a convenience method that wraps ``open`` and ``from_buffer``,
            but some TLS implementations may be able to provide more-secure or
            faster methods of loading certificates that do not involve Python
            code.
            """

Private Keys
~~~~~~~~~~~~

This module would define an abstract private key class. Much like the Certificate class, this class has almost no behaviour in order to give as much freedom as possible to the concrete implementations to treat keys carefully.

This class has all the caveats of the ``Certificate`` class. ::

    class PrivateKey(metaclass=ABCMeta):

        @abstractclassmethod
        def from_buffer(cls,
                        buffer: bytes,
                        password: Optional[Union[Callable[[], Union[bytes, bytearray]], bytes, bytearray]] = None) -> PrivateKey:
            """
            Creates a PrivateKey object from a byte buffer. This byte buffer
            may be either PEM-encoded or DER-encoded. If the buffer is PEM
            encoded it *must* begin with the standard PEM preamble (a series
            of dashes followed by the ASCII bytes "BEGIN", the key type, and
            another series of dashes). In the absence of that preamble, the
            implementation may assume that the private key is DER-encoded
            instead.

            The key may additionally be encrypted. If it is, the ``password``
            argument can be used to decrypt the key. The ``password`` argument
            may be a function to call to get the password for decrypting the
            private key. It will only be called if the private key is
            encrypted and a password is necessary. It will be called with no
            arguments, and it should return either bytes or bytearray
            containing the password. Alternatively a bytes or bytearray value
            may be supplied directly as the password argument. It will be
            ignored if the private key is not encrypted and no password is
            needed.
            """

        @abstractclassmethod
        def from_file(cls,
                      path: Union[pathlib.Path, bytes, str],
                      password: Optional[Union[Callable[[], Union[bytes, bytearray]], bytes, bytearray]] = None) -> PrivateKey:
            """
            Creates a PrivateKey object from a file on disk. This method may
            be a convenience method that wraps ``open`` and ``from_buffer``,
            but some TLS implementations may be able to provide more-secure or
            faster methods of loading private keys that do not involve Python
            code.

            The ``password`` parameter behaves exactly as the equivalent
            parameter on ``from_buffer``.
            """

Trust Store
~~~~~~~~~~~

As discussed above, loading a trust store represents an issue because different TLS implementations vary wildly in how they allow users to select trust stores. For this reason, we need to provide a model that assumes very little about the form that trust stores take.

This problem is the same as the one that the Certificate and PrivateKey types need to solve. For this reason, we use the exact same model, by creating an opaque type that can encapsulate the various means that TLS backends may open a trust store.

A given TLS implementation is not required to implement all of the constructors. However, it is strongly recommended that a given TLS implementation provide the ``system`` constructor if at all possible, as this is the most common validation trust store that is used. Concrete implementations may also add their own constructors.

Concrete implementations should aim to have TrustStore objects be hashable if at all possible. This will help ensure that TLSConfiguration objects used with an individual concrete implementation are also hashable. ::

    class TrustStore(metaclass=ABCMeta):

        @abstractclassmethod
        def system(cls) -> TrustStore:
            """
            Returns a TrustStore object that represents the system trust
            database.
            """

        @abstractclassmethod
        def from_pem_file(cls, path: Union[pathlib.Path, bytes, str]) -> TrustStore:
            """
            Initializes a trust store from a single file full of PEMs.
            """

Changes to the Standard Library
===============================

The portions of the standard library that interact with TLS should be revised to use these ABCs. This will allow them to function with other TLS backends. This includes the following modules:

- asyncio
- ftplib
- http.client
- imaplib
- nntplib
- poplib
- smtplib

Future
======

Major future TLS features may require revisions of these ABCs. These revisions should be made cautiously: many backends may not be able to move forward swiftly, and will be invalidated by changes in these ABCs. This is acceptable, but wherever possible features that are specific to individual implementations should not be added to the ABCs. The ABCs should restrict themselves to high-level descriptions of IETF-specified features.

ToDo
====

* Consider adding a new parameter (``valid_subjects``?) to ``wrap_socket`` and ``wrap_buffers`` that specifies in a *typed* manner what kind of entries in the SAN field are acceptable. This would break the union between SNI and cert validation, which may be a good thing (you can't SNI an IP address, but you can validate a cert with one if you want).
* It's annoying that there's no type signature for fileobj. Do I really have to define one as part of this PEP? Otherwise, how do I define the types of the arguments to ``wrap_buffers``?
* Do we need ways to control hostname validation?
* Do we need to support getpeercert? Should we always return DER instead of the weird semi-structured thing?
* How do we load certs from locations on disk? What about HSMs?
* How do we signal to load certs from the OS? What happens if an implementation doesn't let you *not* load those certs?

References
==========

.. _ssl module: https://docs.python.org/3/library/ssl.html
.. _OpenSSL Library: https://www.openssl.org/
.. _PyOpenSSL: https://pypi.org/project/pyOpenSSL/
.. _certifi: https://pypi.org/project/certifi/
.. _SSLContext: https://docs.python.org/3/library/ssl.html#ssl.SSLContext
.. _SSLSocket: https://docs.python.org/3/library/ssl.html#ssl.SSLSocket
.. _SSLObject: https://docs.python.org/3/library/ssl.html#ssl.SSLObject
.. _SSLError: https://docs.python.org/3/library/ssl.html#ssl.SSLError
.. _MSDN articles: https://msdn.microsoft.com/en-us/library/windows/desktop/mt490158(v=vs.85).a...
.. _tlsdb JSON file: https://github.com/tiran/tlsdb/blob/master/tlsdb.json

Copyright
=========

This document has been placed in the public domain.

..
   Local Variables:
   mode: indented-text
   indent-tabs-mode: nil
   sentence-end-double-space: t
   fill-column: 70
   coding: utf-8
   End:
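[To give a feel for how the proposed pieces fit together, here is a hedged end-to-end client-side sketch. ``some_backend`` is a hypothetical concrete implementation of the ABCs above; ``TLSConfiguration``, ``NextProtocol``, and ``TLSVersion`` are the types proposed in the PEP, whose module name the draft does not fix.]

    import socket

    # Hypothetical backend providing concrete ClientContext/TrustStore types.
    from some_backend import ClientContext, TrustStore

    config = TLSConfiguration(
        trust_store=TrustStore.system(),
        inner_protocols=(NextProtocol.H2, NextProtocol.HTTP1),
        lowest_supported_version=TLSVersion.TLSv1_2,
    )

    sock = socket.create_connection(("example.com", 443))
    tls_sock = ClientContext(config).wrap_socket(
        sock, auto_handshake=False, server_hostname="example.com"
    )
    tls_sock.do_handshake()  # validation and hostname verification happen here
    print(tls_sock.negotiated_protocol())
    tls_sock.unwrap().close()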
On Thursday, January 19, 2017, Cory Benfield <cory@lukasa.co.uk> wrote:
Configuration
~~~~~~~~~~~~~
The ``TLSConfiguration`` concrete class defines an object that can hold and manage TLS configuration. The goals of this class are as follows:
1. To provide a method of specifying TLS configuration that avoids the risk of errors in typing (this excludes the use of a simple dictionary). 2. To provide an object that can be safely compared to other configuration objects to detect changes in TLS configuration, for use with the SNI callback.
This class is not an ABC, primarily because it is not expected to have implementation-specific behaviour. The responsibility for transforming a ``TLSConfiguration`` object into a useful set of configuration for a given TLS implementation belongs to the Context objects discussed below.
This class has one other notable property: it is immutable. This is a desirable trait for a few reasons. The most important one is that it allows these objects to be used as dictionary keys, which is potentially extremely valuable for certain TLS backends and their SNI configuration. On top of this, it frees implementations from needing to worry about their configuration objects being changed under their feet, which allows them to avoid needing to carefully synchronize changes between their concrete data structures and the configuration object.
The ``TLSConfiguration`` object would be defined by the following code:
ServerNameCallback = Callable[[TLSBufferObject, Optional[str], TLSConfiguration], Any]
_configuration_fields = [
    'validate_certificates',
    'certificate_chain',
    'ciphers',
    'inner_protocols',
    'lowest_supported_version',
    'highest_supported_version',
    'trust_store',
    'sni_callback',
]
Thanks! TLSConfiguration looks much easier to review; and should make other implementations easier.

I read a great (illustrated) intro to TLS 1.3 the other day: https://temen.io/resources/tls-gets-an-upgrade:-welcome-1.3/

- 1-RTT and 0-RTT look useful.
- There's a reduced set of cipher suites: https://tlswg.github.io/tls13-spec/#rfc.section.4.3
- Are there additional parameters relevant to TLS 1.3 for the TLSConfiguration object?
- If necessary, how should TLSConfiguration parameter fields be added?
_DEFAULT_VALUE = object()
class TLSConfiguration(namedtuple('TLSConfiguration', _configuration_fields)): """ An immutable TLS Configuration object. This object has the following properties:
:param validate_certificates bool: Whether to validate the TLS certificates. This switch operates at a very broad scope: either validation is enabled, in which case all forms of validation are performed including hostname validation if possible, or validation is disabled, in which case no validation is performed.
Not all backends support having their certificate validation disabled. If a backend does not support having their certificate validation disabled, attempting to set this property to ``False`` will throw a ``TLSError`` when this object is passed into a context object.
:param certificate_chain Tuple[Tuple[Certificate],PrivateKey]: The certificate, intermediate certificates, and the corresponding private key for the leaf certificate. These certificates will be offered to the remote peer during the handshake if required.
The first Certificate in the list must be the leaf certificate. All subsequent certificates will be offered as intermediate additional certificates.
:param ciphers Tuple[CipherSuite]: The available ciphers for TLS connections created with this configuration, in priority order.
:param inner_protocols Tuple[Union[NextProtocol, bytes]]: Protocols that connections created with this configuration should advertise as supported during the TLS handshake. These may be advertised using either or both of ALPN or NPN. This list of protocols should be ordered by preference.
:param lowest_supported_version TLSVersion: The minimum version of TLS that should be allowed on TLS connections using this configuration.
:param highest_supported_version TLSVersion: The maximum version of TLS that should be allowed on TLS connections using this configuration.
:param trust_store TrustStore: The trust store that connections using this configuration will use to validate certificates.
:param sni_callback Optional[ServerNameCallback]: A callback function that will be called after the TLS Client Hello handshake message has been received by the TLS server when the TLS client specifies a server name indication.
Only one callback can be set per ``TLSConfiguration``. If the ``sni_callback`` is ``None`` then the callback is disabled. If the ``TLSConfiguration`` is used for a ``ClientContext`` then this setting will be ignored.
The ``callback`` function will be called with three arguments: the first will be the ``TLSBufferObject`` for the connection; the second will be a string that represents the server name that the client is intending to communicate with (or ``None`` if the TLS Client Hello does not contain a server name); and the third argument will be the original ``Context``. The server name argument will be the IDNA *decoded* server name.
The ``callback`` must return a ``TLSConfiguration`` to allow negotiation to continue. Other return values signal errors. Attempting to control what error is signaled by the underlying TLS implementation is not specified in this API, but is up to the concrete implementation to handle.
The Context will do its best to apply the ``TLSConfiguration`` changes from its original configuration to the incoming connection. This will usually include changing the certificate chain, but may also include changes to allowable ciphers or any other configuration settings.
"""

__slots__ = ()
def __new__(cls,
            validate_certificates: Optional[bool] = None,
            certificate_chain: Optional[Tuple[Tuple[Certificate], PrivateKey]] = None,
            ciphers: Optional[Tuple[CipherSuite]] = None,
            inner_protocols: Optional[Tuple[Union[NextProtocol, bytes]]] = None,
            lowest_supported_version: Optional[TLSVersion] = None,
            highest_supported_version: Optional[TLSVersion] = None,
            trust_store: Optional[TrustStore] = None,
            sni_callback: Optional[ServerNameCallback] = None):

    if validate_certificates is None:
        validate_certificates = True

    if ciphers is None:
        ciphers = DEFAULT_CIPHER_LIST

    if inner_protocols is None:
        inner_protocols = []

    if lowest_supported_version is None:
        lowest_supported_version = TLSVersion.TLSv1

    if highest_supported_version is None:
        highest_supported_version = TLSVersion.MAXIMUM_SUPPORTED

    return super().__new__(
        cls, validate_certificates, certificate_chain, ciphers,
        inner_protocols, lowest_supported_version, highest_supported_version,
        trust_store, sni_callback
    )
def update(self,
           validate_certificates=_DEFAULT_VALUE,
           certificate_chain=_DEFAULT_VALUE,
           ciphers=_DEFAULT_VALUE,
           inner_protocols=_DEFAULT_VALUE,
           lowest_supported_version=_DEFAULT_VALUE,
           highest_supported_version=_DEFAULT_VALUE,
           trust_store=_DEFAULT_VALUE,
           sni_callback=_DEFAULT_VALUE):
    """
    Create a new ``TLSConfiguration``, overriding some of the settings on
    the original configuration with the new settings.
    """
    if validate_certificates is _DEFAULT_VALUE:
        validate_certificates = self.validate_certificates

    if certificate_chain is _DEFAULT_VALUE:
        certificate_chain = self.certificate_chain

    if ciphers is _DEFAULT_VALUE:
        ciphers = self.ciphers

    if inner_protocols is _DEFAULT_VALUE:
        inner_protocols = self.inner_protocols

    if lowest_supported_version is _DEFAULT_VALUE:
        lowest_supported_version = self.lowest_supported_version

    if highest_supported_version is _DEFAULT_VALUE:
        highest_supported_version = self.highest_supported_version

    if trust_store is _DEFAULT_VALUE:
        trust_store = self.trust_store

    if sni_callback is _DEFAULT_VALUE:
        sni_callback = self.sni_callback

    return self.__class__(
        validate_certificates, certificate_chain, ciphers, inner_protocols,
        lowest_supported_version, highest_supported_version, trust_store,
        sni_callback
    )
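[A hedged sketch of the pattern this quoted ``update`` method enables for the SNI callback: the callback derives a new configuration from the original one, and because configurations are immutable and hashable, a backend can cache one concrete context per configuration. ``certs_by_hostname`` and ``default_chain`` are assumed to exist for illustration.]

    certs_by_hostname = {}  # hostname -> certificate_chain tuple, assumed populated

    def sni_callback(connection, server_name, original_config):
        try:
            chain = certs_by_hostname[server_name]
        except KeyError:
            return None  # any non-TLSConfiguration return signals an error
        # Immutable update: a new configuration differing only in the chain.
        return original_config.update(certificate_chain=chain)

    server_config = TLSConfiguration(
        certificate_chain=default_chain,  # assumed loaded elsewhere
        sni_callback=sni_callback,
    )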
[resending from a real computer since the security-sig auto(?)-moderator didn't like me top-posting from my phone :-)] On Thu, Jan 19, 2017 at 9:29 AM, Cory Benfield <cory@lukasa.co.uk> wrote:
All,
Thanks for your feedback on the draft PEP I proposed last week! There was a lot of really enthusiastic and valuable feedback both on this mailing list and on GitHub.
I believe I’ve addressed a lot of the concerns that were brought up with the PEP now, so I’d like to ask that interested parties take another look.
Hi Cory,

Great work! A few quick thoughts:

- given that for next protocol negotiation we have to accept arbitrary bytestrings and the primary value of the NextProtocol class is to protect against typos, I wonder if it would simplify things slightly to make the attributes of this class just *be* bytestrings. I.e. what you have now but without the inheritance from enum.

- what object types *do* you expect to be passed to wrap_buffers? I was assuming bytearrays, but the text at the bottom suggests file-likes?

- a minor thing that I think is very important for usability is that there be some way to specify which library we're using with a single object. For example, curio has a high-level helper that abstracts over a lot of boilerplate in setting up a listening socket:

    await curio.tcp_server(host, port, client_connected_task, *,
                           family=AF_INET, backlog=100, ssl=None,
                           reuse_address=True)

  (Here ssl= is an SSLContext. Actually it's a curio.ssl.SSLContext -- see https://github.com/dabeaz/curio/blob/master/curio/ssl.py)

  Asyncio is similar: https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.AbstractEve...

  It isn't quite clear to me right now how this kind of API should look with your proposal. Obviously they can't just take a TLSConfiguration object, because there's no way to look at a TLSConfiguration object and figure out what backend is in use (even though in general a given TLSConfiguration is only usable with a specific backend, because backends provide the Cert type etc). They could take a ServerContext/ClientContext, I guess? But it would be nice if there were some way to say "give me the default configuration, using SChannel". Or to write a function that sets up a TLSConfiguration while being generic over backends, so like it takes the backend as an argument and then uses that backend's cert type etc.

  Maybe the solution is to require that each implementation provide a namespace where each of its concrete types are given standard names? So like I know schannel.ClientContext is the schannel implementation of the tlsabc.ClientContext interface?

-n

--
Nathaniel J. Smith -- https://vorpus.org
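[A minimal sketch of the backend-namespace convention floated above; ``schannel`` is a hypothetical backend package, and ``make_client_context`` is an invented helper showing how code could stay generic over backends.]

    import schannel  # hypothetical backend exposing the standard names

    def make_client_context(backend):
        """Generic over backends: only the standard names are used."""
        config = TLSConfiguration(trust_store=backend.TrustStore.system())
        return backend.ClientContext(config)

    context = make_client_context(schannel)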
On 20 Jan 2017, at 23:00, Nathaniel Smith <njs@pobox.com> wrote:
- given that for next protocol negotiation we have to accept arbitrary bytestrings and the primary value of the NextProtocol class is to protect against typos, I wonder if it would simplify things slightly to make the attributes of this class just *be* bytestrings. I.e. what you have now but without the inheritance from enum.
While we *could*, I don’t think the value-add of doing this is very high. Basically the only thing it simplifies is the type declaration, and I don’t know that it’s worth doing that. ;)
- what object types *do* you expect to be passed to wrap_buffers? I was assuming bytearrays, but the text at the bottom suggests file-likes?
Yeah, good question. I was assuming file-likes as well, but I don’t see any reason we couldn’t also do bytearrays. What do you think the advantage of that is?
It isn't quite clear to me right now how this kind of API should look with your proposal. Obviously they can't just take a TLSConfiguration object, because there's no way to look at a TLSConfiguration object and figure out what backend is in use (even though in general a given TLSConfiguration is only usable with a specific backend, because backends provide the Cert type etc). They could take a ServerContext/ClientContext, I guess? But it would be nice if there were some way to say "give me the default configuration, using SChannel". Or to write a function that sets up a TLSConfiguration while being generic over backends, so like it takes the backend as an argument and then uses that backend's cert type etc.
Yeah, I was assuming that they’d take a ClientContext/ServerContext object, rather than a configuration plus an instruction on which backend to use. It’s not clear to me that a function that sets up a configuration while generic over backends is actually a meaningful thing to have: are there really going to be users who are insistent on a specific TLS configuration but don’t care what concrete implementation is going to be used, such that their libraries have to pick it for them?

Cory
On Sun, Jan 22, 2017 at 4:01 AM, Cory Benfield <cory@lukasa.co.uk> wrote:
On 20 Jan 2017, at 23:00, Nathaniel Smith <njs@pobox.com> wrote:
- given that for next protocol negotiation we have to accept arbitrary bytestrings and the primary value of the NextProtocol class is to protect against typos, I wonder if it would simplify things slightly to make the attributes of this class just *be* bytestrings. I.e. what you have now but without the inheritance from enum.
While we *could*, I don’t think the value-add of doing this is very high. Basically the only thing it simplifies is the type declaration, and I don’t know that it’s worth doing that. ;)
Well, and the code that receives the values, which currently has to handle both enums and bytestrings. Agreed it's not a big deal, it just seems like the value the enum is adding is all negative.
- what object types *do* you expect to be passed to wrap_buffers? I was assuming bytearrays, but the text at the bottom suggests file-likes?
Yeah, good question. I was assuming file-likes as well, but I don’t see any reason we couldn’t also do bytearrays. What do you think the advantage of that is?
I've just been used to using bytearrays everywhere for things like socket receive buffers. I find them much more convenient to work with than BytesIO b/c you can do things like pass them directly to send/recv/parsing functions, and they have nice properties like O(1) deletion from the front. It doesn't matter so much for this case where we're just shuffling bytes to and from a socket in a lump, but that's why I was assuming bytearrays without thinking about it :-). I guess h2's data_to_send and data_received don't involve any file-like objects either?
It isn't quite clear to me right now how this kind of API should look with your proposal. Obviously they can't just take a TLSConfiguration object, because there's no way to look at a TLSConfiguration object and figure out what backend is in use (even though in general a given TLSConfiguration is only usable with a specific backend, because backends provide the Cert type etc). They could take a ServerContext/ClientContext, I guess? But it would be nice if there were some way to say "give me the default configuration, using SChannel". Or to write a function that sets up a TLSConfiguration while being generic over backends, so like it takes the backend as an argument and then uses that backend's cert type etc.
Yeah, I was assuming that they’d take a ClientContext/ServerContext object, rather than a configuration plus an instruction on which backend to use. It’s not clear to me that a function that sets up a configuration while generic over backends is actually a meaningful thing to have: are there really going to be users who are insistent on a specific TLS configuration but don’t care what concrete implementation is going to be used, such that their libraries have to pick it for them?
I think it's more likely the other way around -- users need to override the default backend, but don't want to specify all the details? And any library that does build configuration should be able to do it using generic code so long as it's sticking to generic features?

    requests.get(..., backend=WeirdEmbeddedTLSLib)
    -> "I need to use this lib, but requests should pick the
        ciphers/TLS version/etc., I'll just screw it up"

    web-server.cfg:

        listen-port = 443
        [ssl]
        enabled = True
        backend = SChannel
        certificate = foo.pem

Like, in the PEP as currently written, it goes to great efforts to make sure that generic code can be used to select cipher suites, but there's no way for a web server to load in a certificate given a PEM filename, except via having a hard-coded table of all possible backends? The PEP says that each backend should somehow somewhere provide a concrete subclass of Certificate, but to find it you need to consult the backend-specific docs.

This is really a usability point that's mostly orthogonal to the semantics/design discussion and should be pretty straightforward to sort out, so it might make sense to defer worrying about it until the other stuff's more settled.

-n

-- Nathaniel J. Smith -- https://vorpus.org
On 01/23/2017 05:59 PM, Nathaniel Smith wrote:
On Sun, Jan 22, 2017 at 4:01 AM, Cory Benfield wrote:
On 20 Jan 2017, at 23:00, Nathaniel Smith wrote:
- given that for next protocol negotiation we have to accept arbitrary bytestrings and the primary value of the NextProtocol class is to protect against typos, I wonder if it would simplify things slightly to make the attributes of this class just *be* bytestrings. I.e. what you have now but without the inheritance from enum.
While we *could*, I don’t think the value-add of doing this is very high. Basically the only thing it simplifies is the type declaration, and I don’t know that it’s worth doing that. ;)
Well, and the code that receives the values, which currently has to handle both enums and bytestrings. Agreed it's not a big deal, it just seems like the value the enum is adding is all negative.
Enum can be mixed with other types:

--- 8< ----------------------------------------
from enum import Enum

class NextProtocol(bytes, Enum):
    H2 = b'h2'
    H2C = b'h2c'
    HTTP1 = b'http/1.1'
    WEBRTC = b'webrtc'
    C_WEBRTC = b'c-webrtc'
    FTP = b'ftp'
    STUN = b'stun.nat-discovery'
    TURN = b'stun.turn'

print(NextProtocol.STUN)                     # NextProtocol.STUN
print(isinstance(NextProtocol.STUN, bytes))  # True
--- 8< ----------------------------------------

--
~Ethan~
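A short follow-on sketch, reusing the class above: mixin members compare and hash like their underlying bytestrings, so receiving code need not special-case enum members versus raw bytes.

    from enum import Enum

    class NextProtocol(bytes, Enum):   # as in the example above
        H2 = b'h2'
        HTTP1 = b'http/1.1'

    # Members behave as bytes for comparison and hashing:
    assert NextProtocol.H2 == b'h2'
    assert b'h2' in {NextProtocol.H2, NextProtocol.HTTP1}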
On 24 Jan 2017, at 01:59, Nathaniel Smith <njs@pobox.com> wrote:
I've just been used to using bytearrays everywhere for things like socket receive buffers. I find them much more convenient to work with than BytesIO b/c you can do things like pass them directly to send/recv/parsing functions, and they have nice properties like O(1) deletion from the front. It doesn't matter so much for this case where we're just shuffling bytes to and from a socket in a lump, but that's why I was assuming bytearrays without thinking about it :-). I guess h2's data_to_send and data_received don't involve any file-like objects either?
Yup, that’s very correct. Ok, let’s update to include bytearray (really it should be any buffer type).
Like, in the PEP as currently written, it goes to great efforts to make sure that generic code can be used to select cipher suites, but there's no way for a web server to load in a certificate given a PEM filename, except via having a hard-coded table of all possible backends? The PEP says that each backend should somehow somewhere provide a concrete subclass of Certificate, but to find it you need to consult the backend-specific docs.
This is really a usability point that's mostly orthogonal to the semantics/design discussion and should be pretty straightforward to sort out, so it might make sense to defer worrying about it until the other stuff's more settled.
Yeah, this is a good catch. I had missed the problems around Certificate and TrustStore objects. Hrm. Let me have a think about it and revise the PEP. Cory
On 20 January 2017 at 04:29, Cory Benfield <cory@lukasa.co.uk> wrote:
Please let me know what you think.
The version of the draft PEP, from commit ce74bc60, is reproduced below.
Thanks Cory, this is looking really good. I don't have anything to add on the security design front, but do have a few comments/questions on the API design and explanation front.
Configuration
~~~~~~~~~~~~~
The ``TLSConfiguration`` concrete class defines an object that can hold and manage TLS configuration. The goals of this class are as follows:
1. To provide a method of specifying TLS configuration that avoids the risk of errors in typing (this excludes the use of a simple dictionary).
2. To provide an object that can be safely compared to other configuration objects to detect changes in TLS configuration, for use with the SNI callback.
This class is not an ABC, primarily because it is not expected to have implementation-specific behaviour. The responsibility for transforming a ``TLSConfiguration`` object into a useful set of configuration for a given TLS implementation belongs to the Context objects discussed below.
This class has one other notable property: it is immutable. This is a desirable trait for a few reasons. The most important one is that it allows these objects to be used as dictionary keys, which is potentially extremely valuable for certain TLS backends and their SNI configuration. On top of this, it frees implementations from needing to worry about their configuration objects being changed under their feet, which allows them to avoid needing to carefully synchronize changes between their concrete data structures and the configuration object.
The ``TLSConfiguration`` object would be defined by the following code:
ServerNameCallback = Callable[[TLSBufferObject, Optional[str], TLSConfiguration], Any]
    _configuration_fields = [
        'validate_certificates', 'certificate_chain', 'ciphers',
        'inner_protocols', 'lowest_supported_version',
        'highest_supported_version', 'trust_store', 'sni_callback',
    ]
_DEFAULT_VALUE = object()
class TLSConfiguration(namedtuple('TLSConfiguration', _configuration_fields)):
I agree with Wes that the backwards compatibility guarantees around adding new configuration fields should be clarified. I think it will suffice to say that "new fields are only appended, existing fields are never removed, renamed, or reordered". That means that:

- tuple unpacking will be forward compatible as long as you use *args at the end
- numeric lookup will be forward compatible

That doesn't make either of them a good idea (vs just using attribute lookups), but it does provide an indication to future maintainers that such code shouldn't be gratuitously broken either.
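A small sketch of the access patterns those guarantees keep working, using an illustrative stand-in for the proposed class:

    from collections import namedtuple

    # Illustrative stand-in using the field layout proposed above.
    TLSConfiguration = namedtuple('TLSConfiguration', [
        'validate_certificates', 'certificate_chain', 'ciphers',
        'inner_protocols', 'lowest_supported_version',
        'highest_supported_version', 'trust_store', 'sni_callback',
    ])
    config = TLSConfiguration(True, None, (), (), None, None, None, None)

    # Attribute access is the intended pattern:
    assert config.validate_certificates is True

    # But append-only field ordering also keeps these forward compatible:
    validate_certificates, *rest = config              # starred unpacking
    assert config[0] == config.validate_certificates   # numeric lookup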
Context
~~~~~~~
The ``Context`` abstract base class defines an object that allows configuration of TLS. It can be thought of as a factory for ``TLSWrappedSocket`` and ``TLSWrappedBuffer`` objects.
As much as possible implementers should aim to make these classes immutable: that is, they should prefer not to allow users to mutate their internal state directly, instead preferring to create new contexts from new TLSConfiguration objects. Obviously, the ABCs cannot enforce this constraint, and so they do not attempt to.
The ``Context`` abstract base class has the following class definition::
This intro section talks about a combined "Context" object, but the implementation has been split into ServerContext and ClientContext. That split could also use some explanation in the background section of the PEP.
Proposed Interface
^^^^^^^^^^^^^^^^^^
The proposed interface for the new module is influenced by the combined set of limitations of the above implementations. Specifically, as every implementation *except* OpenSSL requires that each individual cipher be provided, there is no option but to provide that lowest-common denominator approach.
The second sentence here doesn't match the description of SChannel cipher configuration, so I'm not clear on how the proposed interface would map to an SChannel backend.
Errors
~~~~~~
This module would define three base classes for use with error handling. Unlike the other classes defined here, these classes are not *abstract*, as they have no behaviour. They exist simply to signal certain common behaviours. Backends should subclass these exceptions in their own packages, but needn't define any behaviour for them.
In general, concrete implementations should subclass these exceptions rather than throw them directly. This makes it moderately easier to determine which concrete TLS implementation is in use during debugging of unexpected errors. However, this is not mandatory.
This is the one part of the PEP that I think may need to discuss transition strategies for libraries and frameworks that currently let ssl module exceptions escape to their users: how do they do that in a way that's transparent to API consumers that currently capture the ssl module exceptions? Cheers, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On 21 Jan 2017, at 13:07, Nick Coghlan <ncoghlan@gmail.com> wrote:
I agree with Wes that the backwards compatibility guarantees around adding new configuration fields should be clarified.
I think it will suffice to say that "new fields are only appended, existing fields are never removed, renamed, or reordered". That means that:
- tuple unpacking will be forward compatible as long as you use *args at the end
- numeric lookup will be forward compatible
Good idea.
This intro section talks about a combined "Context" object, but the implementation has been split into ServerContext and ClientContext.
That split could also use some explanation in the background section of the PEP.
Good idea, I’ll expand on that.
Proposed Interface
^^^^^^^^^^^^^^^^^^
The proposed interface for the new module is influenced by the combined set of limitations of the above implementations. Specifically, as every implementation *except* OpenSSL requires that each individual cipher be provided, there is no option but to provide that lowest-common denominator approach.
The second sentence here doesn't match the description of SChannel cipher configuration, so I'm not clear on how the proposed interface would map to an SChannel backend.
Yeah, this is a point I’m struggling with internally. SChannel’s API is frustratingly limited. The way I see it there are two options:

1. Allow the possibility that SChannel may allow ciphers that others do not given the same cipher configuration. This is not a good solution, frankly, because the situation in which this will happen is predominantly that SChannel will allow modes or key/hash sizes that the other implementations would forbid, given the same cipher configuration.
2. Force all other implementations to be as bad as SChannel: that is, to make it impossible to restrict key sizes and cipher modes on all implementations because SChannel can’t.

I don’t really like either of those choices. I *think* 2 is worse than 1, but I’m not sure about that. People with opinions should really weigh in on it.
This is the one part of the PEP that I think may need to discuss transition strategies for libraries and frameworks that currently let ssl module exceptions escape to their users: how do they do that in a way that's transparent to API consumers that currently capture the ssl module exceptions?
The short answer is that users who currently capture the ssl module exceptions need to start catching these exceptions instead when they transition. Cory
On Sunday, January 22, 2017, Cory Benfield <cory@lukasa.co.uk> wrote:
On 21 Jan 2017, at 13:07, Nick Coghlan <ncoghlan@gmail.com> wrote:
I agree with Wes that the backwards compatibility guarantees around adding new configuration fields should be clarified.
I think it will suffice to say that "new fields are only appended, existing fields are never removed, renamed, or reordered". That means that:
- tuple unpacking will be forward compatible as long as you use *args at the end
- numeric lookup will be forward compatible
Good idea.
Looking at the GnuTLS manual [1], I see a number of potential additional configuration parameters:

- session resumption (bool, expiration time)
- Trust on first use (SSH-like)
- DANE [2]

[1] https://gnutls.org/manual/gnutls.html#Selecting-cryptographic-key-sizes
[2] https://en.wikipedia.org/wiki/DNS-based_Authentication_of_Named_Entities

... IDK about *args (and integer namedtuple field indexing). I also (these days) tend to disagree with items-accessible-as-attributes dicts because dashes and consistency of API.
This intro section talks about a combined "Context" object, but the implementation has been split into ServerContext and ClientContext.
That split could also use some explanation in the background section of the PEP.
Good idea, I’ll expand on that.
Proposed Interface
^^^^^^^^^^^^^^^^^^
The proposed interface for the new module is influenced by the combined set of limitations of the above implementations. Specifically, as every implementation *except* OpenSSL requires that each individual cipher be provided, there is no option but to provide that lowest-common denominator approach.
The second sentence here doesn't match the description of SChannel cipher configuration, so I'm not clear on how the proposed interface would map to an SChannel backend.
Yeah, this is a point I’m struggling with internally.
SChannel’s API is frustratingly limited. The way I see it there are two options:
1. Allow the possibility that SChannel may allow ciphers that others do not given the same cipher configuration. This is not a good solution, frankly, because the situation in which this will happen is predominantly that SChannel will allow modes or key/hash sizes that the other implementations would forbid, given the same cipher configuration.
2. Force all other implementations to be as bad as SChannel: that is, to make it impossible to restrict key sizes and cipher modes on all implementations because SChannel can’t.
GCD, LCD.

3. ciphers__.get(SCHANNEL) OR ciphers
I don’t really like either of those choices. I *think* 2 is worse than 1, but I’m not sure about that. People with opinions should really weigh in on it.
This is the one part of the PEP that I think may need to discuss transition strategies for libraries and frameworks that currently let ssl module exceptions escape to their users: how do they do that in a way that's transparent to API consumers that currently capture the ssl module exceptions?
The short answer is that users who currently capture the ssl module exceptions need to start catching these exceptions instead when they transition.
Are these exceptions redundant? Could they derive from the new TLSError as well as the existing comparable exception?
Cory
On 22 Jan 2017, at 16:23, Wes Turner <wes.turner@gmail.com> wrote:
Looking at the GnuTLS manual [1], I see a number of potential additional configuration parameters:
- session resumption (bool, expiration time)
- Trust on first use (SSH-like)
- DANE [2]
Remember that the goal of this API is not to support every configuration option supported by 1 or more concrete implementations. The goal of this API is to support the common superset of the most-used APIs as needed for TLS. TOFU at the TLS level is not in that scope. DANE is not widely supported. So the only question there is session resumption, which I think may well be in-scope, but probably doesn’t need to go in the API in v1.
... IDK about *args (and integer namedtuple field indexing). I also (these days) tend to disagree with items-accessible-as-attributes dicts because dashes and consistency of API.
Can you elaborate on this? I feel like I’m missing some context.
GCD, LCD.
3. ciphers__.get(SCHANNEL) OR ciphers
Can you elaborate on this too?
Are these exceptions redundant? Could they derive from the new TLSError as well as the existing comparable exception?
This module should be entirely unattached to the ssl module, IMO. This is most important because the ssl module doesn’t exist when Python is not linked against OpenSSL: being unable to define the exceptions in a “you don’t need OpenSSL” module because Python isn’t linked against OpenSSL seems like a pretty silly problem.

Cory
On Sunday, January 22, 2017, Cory Benfield <cory@lukasa.co.uk> wrote:
On 22 Jan 2017, at 16:23, Wes Turner <wes.turner@gmail.com> wrote:
Looking at the GnuTLS manual [1], I see a number of potential additional configuration parameters:
- session resumption (bool, expiration time)
- Trust on first use (SSH-like)
- DANE [2]
Remember that the goal of this API is not to support every configuration option supported by 1 or more concrete implementations. The goal of this API is to support the common superset of the most-used APIs as needed for TLS. TOFU at the TLS level is not in that scope. DANE is not widely supported.
- OpenSSL 1.1.0 supports DANE
- GnuTLS supports DANE
- AFAIU, neither SChannel nor Secure Transport yet support DANE

So, in order to support e.g. DANE, where would the additional backend-specific configuration occur?
So the only question there is session resumption, which I think may well be in-scope, but probably doesn’t need to go in the API in v1.
... IDK about *args (and integer namedtuple field indexing). I also (these days) tend to disagree with items-accessible-as-attributes dicts because dashes and consistency of API.
Can you elaborate on this? I feel like I’m missing some context.
A frozen ordered dict would be preferable because:

- Inevitable eventuality of new/additional config parameters
- Consistency and readability of API access with [], .__getitem__(), .get()
- https://www.python.org/dev/peps/pep-0416/#rejection-notice ...
- https://docs.python.org/3.5/library/types.html#types.MappingProxyType

NamedTuple is preferable because:

- Immutability
- Speed
- RAM
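For reference, a sketch of the frozen-dict shape being described here, using types.MappingProxyType (a read-only view, not a deep freeze; field names are illustrative):

    import types

    _cfg = {'validate_certificates': True, 'ciphers': ()}
    frozen_view = types.MappingProxyType(_cfg)

    frozen_view['validate_certificates']   # dict-style access works
    # frozen_view['ciphers'] = ()          -> TypeError: read-only proxy
    # ...but whoever still holds _cfg can mutate it underneath the view,
    # and the proxy is unhashable, so it cannot serve as a dictionary key.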
GCD, LCD.
3. ciphers__.get(SCHANNEL) OR ciphers
Can you elaborate on this too?
If defined, backend-specific configuration settings could take precedence over platform-neutral defaults.
Are these exceptions redundant? Could they derive from the new TLSError as well as the existing comparable exception?
This module should be entirely unattached to the ssl module, IMO. This is most important because the ssl module doesn’t exist when Python is not linked against OpenSSL: being unable to define the exceptions in a “you don’t need OpenSSL” module because Python isn’t linked against OpenSSL seems like a pretty silly problem.
Good call.
Cory
On 22 Jan 2017, at 17:01, Wes Turner <wes.turner@gmail.com> wrote:
- OpenSSL 1.1.0 supports DANE
- GnuTLS supports DANE
- AFAIU, neither SChannel nor Secure Transport yet support DANE
So, in order to support e.g. DANE, where would the additional backend-specific configuration occur?
This would be up to the individual specific backends. They are allowed to extend the API provided here in whatever manner they see fit.
A frozen ordered dict would be preferable because:
- Inevitable eventuality of new/additional config parameters
- Consistency and readability of API access with [], .__getitem__(), .get()
- https://www.python.org/dev/peps/pep-0416/#rejection-notice ...
- https://docs.python.org/3.5/library/types.html#types.MappingProxyType
NamedTuple is preferable because:
- Immutability
- Speed
- RAM
So immutable types are key, and a truly-frozen ordered dict would be fine from that perspective. However, the key reason I want to use a namedtuple instead of a dict is to discourage ad-hoc extension of the configuration type (with the side benefit of reducing the risk of typo-based configuration and implementation errors). Essentially, the dictionary approach encourages adding ad-hoc keys into the configuration, which *reduces* auditability and encourages the proliferation of essentially implementation-specific keys.
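A small sketch of the auditability difference (the field names mirror the TLSConfiguration definition above but are otherwise illustrative):

    from collections import namedtuple

    Config = namedtuple('Config', ['validate_certificates', 'ciphers'])
    cfg = Config(validate_certificates=True, ciphers=())

    # Typos against a namedtuple fail loudly at the point of use:
    #   cfg.validate_certs          -> AttributeError
    #   Config(validate_certs=True) -> TypeError
    # while a dict-based config accepts the same typo silently:
    dict_cfg = {'validate_certificates': True, 'ciphers': ()}
    dict_cfg['validate_certs'] = True   # no error; the stray key proliferates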
If defined, backend-specific configuration settings could take precedence over platform-neutral defaults.
So, that’s definitionally true. What’s very hard is coming up with a general way to express those backend-specific configuration settings, which is why I’m not really trying. In fact, it would probably be possible to argue that doing that is impossible. Cory
On Sunday, January 22, 2017, Cory Benfield <cory@lukasa.co.uk> wrote:
On 22 Jan 2017, at 17:01, Wes Turner <wes.turner@gmail.com> wrote:
- OpenSSL 1.1.0 supports DANE
- GnuTLS supports DANE
- AFAIU, neither SChannel nor Secure Transport yet support DANE
So, in order to support e.g. DANE, where would the additional backend-specific configuration occur?
This would be up to the individual specific backends. They are allowed to extend the API provided here in whatever manner they see fit.
A frozen ordered dict would be preferable because:
- Inevitable eventuality of new/additional config parameters
- Consistency and readability of API access with [], .__getitem__(), .get()
- https://www.python.org/dev/peps/pep-0416/#rejection-notice ...
- https://docs.python.org/3.5/library/types.html#types.MappingProxyType
NamedTuple is preferable because:
- Immutability
- Speed
- RAM
So immutable types are key, and a truly-frozen ordered dict would be fine from that perspective.
However, the key reason I want to use a namedtuple instead of a dict is to discourage ad-hoc extension of the configuration type (with the side benefit of reducing the risk of typo-based configuration and implementation errors). Essentially, the dictionary approach encourages adding ad-hoc keys into the configuration, which *reduces* auditability and encourages the proliferation of essentially implementation-specific keys.
So then there are two ways to preserve the centralized configuration (with a configuration object):

1. extra_config, a TLSConfiguration parameter pointing to e.g. a (mutable) List or an OrderedDict of additional parameters
2. create an additional configuration object for additional configuration parameters

KeyErrors and AttributeErrors are indeed useful for catching configuration errors (in addition to the policy validation opportunity that a configuration object provides)
If defined, backed specific configuration settings could take precedence over platform-neutral defaults.
So, that’s definitionally true. What’s very hard is coming up with a general way to express those backend-specific configuration settings, which is why I’m not really trying. In fact, it would probably be possible to argue that doing that is impossible.
An OrderedDict of these backend-specific configuration parameters - while maybe not standardizable - could be helpful:

    cfg.backend_settings[BACKEND_ENUM] = OrderedDict()

One use case is applications which don't/won't define any actual config code for their app; they just want it to work everywhere (which may mean working with each platform's native TLS library). There's then an external config to be read into TLSConfiguration (and whatever else for backend-specific configuration parameters)
Cory
On 22 Jan 2017, at 18:55, Wes Turner <wes.turner@gmail.com> wrote:
So then there are two ways to preserve the centralized configuration (with a configuration object):
1. extra_config, a TLSConfiguration parameter pointing to e.g. a (mutable) List or an OrderedDict of additional parameters
2. create an additional configuration object for additional configuration parameters
Immutable types that contain user-modifiable mutable types are not immutable. ;) They are immutable *references*. I’m shooting for actual immutability here, which is why most of the API surface that accepted lists has been changed to require tuples instead. That means only (2) could work.

It may be that the sensible idea is to say that the _BaseContext constructor should take one config object and then a Mapping of implementation-specific-constant to implementation-specific-config. But I don’t know that we need to enshrine that in the API: concrete implementations that need to do this can simply accept those as extra arguments without violating the abstract API.
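A minimal sketch of the reference-vs-value immutability distinction being drawn here:

    # Immutable reference vs. immutable value:
    inner = ['TLSv1.2']
    cfg_field = (inner,)     # a tuple, but it holds a mutable list
    inner.append('SSLv3')    # ...so the "immutable" value just changed

    # Tuples of immutable values are immutable all the way down, and
    # hashable, so they work as dictionary keys:
    frozen_field = ('TLSv1.2',)
    sni_map = {frozen_field: 'some context'}   # fine
    # {(['x'],): 'boom'}  would raise TypeError: unhashable type: 'list'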
One use case is applications which don't/won't define any actual config code for their app; they just want it to work everywhere (which may mean working with each platform's native TLS library). There's then an external config to be read into TLSConfiguration (and whatever else for backend-specific configuration parameters)
Sure, but how do you choose which implementation to use? For example, Python-on-macOS may find that there are backends available for SecureTransport, OpenSSL, and GnuTLS (say). There is no general correct answer to “what should I use here”: the answer is up to application developers.

The best we could do is provide a hook that says “what does Python prefer”, and we may well have to do that eventually if we want to include stdlib support for other backends (so that httplib and friends can DTRT). However, we don’t need to put it into this API right now, and I don’t think there’s a good way to do it generically that doesn’t require someone, somewhere to make a choice. My argument is that whoever is choosing should also actually supply the Context as the method to indicate their choice.

For example, Requests would have a function that attempts to import a whole bunch of TLS backends and selects the Context it would prefer to use on a given platform (allowing users to supply just TLSConfiguration if they’re that way inclined), while also allowing users to supply initialized Context objects if they want to.

Cory
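A sketch of the kind of selection function described here; the backend module names are invented for illustration:

    import importlib

    # Hypothetical chooser a library like Requests might ship. Real
    # backends would document their own module and class names.
    def default_client_context(config):
        """Return a ClientContext from the first importable backend."""
        for name in ('tls_securetransport', 'tls_schannel', 'tls_openssl'):
            try:
                backend = importlib.import_module(name)
            except ImportError:
                continue
            return backend.ClientContext(config)
        raise RuntimeError('no TLS backend available')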
On 2017-01-22 18:01, Wes Turner wrote:
On Sunday, January 22, 2017, Cory Benfield <cory@lukasa.co.uk> wrote:
On 22 Jan 2017, at 16:23, Wes Turner <wes.turner@gmail.com> wrote:
Looking at the GnuTLS manual [1], I see a number of potential additional configuration parameters:
- session resumption (bool, expiration time)
- Trust on first use (SSH-like)
- DANE [2]
Remember that the goal of this API is not to support every configuration option supported by 1 or more concrete implementations. The goal of this API is to support the common superset of the most-used APIs as needed for TLS. TOFU at the TLS level is not in that scope. DANE is not widely supported.
- OpenSSL 1.1.0 supports DANE
- GnuTLS supports DANE
- AFAIU, neither SChannel nor Secure Transport yet support DANE
So, in order to support e.g. DANE, where would the additional backend-specific configuration occur?
DANE is irrelevant for PKI and suffers from the same issue as OCSP requests. Melinda is working on an IETF standard for DANE stapling. For TLS API 2.0 we can talk about OCSP stapling, EV and CT. For the first iteration, any advanced feature is out of scope.

Please remember, all features have to be implemented for at least four wrappers (Python ssl, cryptography for PyPy, SChannel and SecureTransport). Let's not get ahead of ourselves.

Christian
On 22 January 2017 at 17:34, Cory Benfield <cory@lukasa.co.uk> wrote:
This module should be entirely unattached to the ssl module, IMO. This is most important because the ssl module doesn’t exist when Python is not linked against OpenSSL: being unable to define the exceptions in a “you don’t need OpenSSL” module because Python isn’t linked against OpenSSL seems like a pretty silly problem.
The way Python handles inheritance means that we have a fair bit of flexibility in how we handle the exception catching transition. To be clear, the specific case where I'm interested in improving the migration strategy is the one where:

- a networking helper library inside a project or app handles ssl API calls on behalf of its clients
- the helper library wants to migrate to use the tls API instead
- the only aspect of the ssl API exposed directly to users of the helper library is the exceptions, with everything else being handled by the library

Option 1: No assistance is provided, so either all SSL exception catching code in clients of the helper library *must* be updated to also catch the corresponding new TLS exceptions in order to be agnostic regarding the use of the old ssl API or the new tls one, or else the library has to define its own wrapper exceptions that inherit from both the old ssl exceptions and the new tls ones

Option 2: ssl.SSLError, ssl.SSLWantReadError, and ssl.SSLWantWriteError are redefined as subclasses of tls.TLSError, tls.WantReadError, tls.WantWriteError respectively

Option 3: the tls API defines tls.LegacySSLError, tls.LegacyWantReadError, tls.LegacyWantWriteError as aliases for ssl.SSLError, ssl.SSLWantReadError, and ssl.SSLWantWriteError *if* the latter are defined, and otherwise defines them as new exception types

Option 4: tls.TLSError, tls.WantReadError, tls.WantWriteError are defined as inheriting from ssl.SSLError, ssl.SSLWantReadError, and ssl.SSLWantWriteError *if* the latter are defined

Option 5: as with Option 4, but the "ssl" module is also changed such that it *always* defines at least ssl.SSLError, ssl.SSLWantReadError, and ssl.SSLWantWriteError (and perhaps some of the other APIs that can be emulated atop the new tls abstraction), even if OpenSSL itself is unavailable

I really don't like Option 1, as it just pushes the decision on how to deal with this migration to library designers, rather than coming up with a standardised approach that minimises the collective hassle for the overall ecosystem.

Option 2 isn't particularly appealing either - it still requires that all code catching these exceptions be modified in order to cope with underlying libraries migrating from using the legacy ssl API to the updated tls API, and it also isn't particularly amenable to being backported to earlier Python versions.

Option 3 starts being slightly more helpful to API consumers, but still doesn't really help library authors hide the backend migration for their API clients.

Option 4 by contrast means that library authors can still hide the migration to the new backend APIs even if they previously allowed the ssl module exceptions to escape as part of their public API - any clients catching those exceptions will still catch the new exceptions, and will only require changing in order to support running in environments that don't provide an OpenSSL backend at all.

Option 5 would cover even that last case: legacy API consumers that only relied on being able to catch the legacy exceptions would tolerate the use of non-OpenSSL backends even in environments where OpenSSL itself wasn't available.

Cheers, Nick.

-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
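For concreteness, a minimal sketch of one possible shape of Option 4 inside the tls module (the OSError fallback when ssl is absent is an assumption, not part of the proposal):

    # Inherit from the ssl exceptions when an OpenSSL-backed ssl module
    # exists; otherwise fall back to plain OSError-based classes.
    try:
        import ssl
        _error_base = ssl.SSLError
        _want_write_base = ssl.SSLWantWriteError
        _want_read_base = ssl.SSLWantReadError
    except ImportError:  # Python built without OpenSSL
        _error_base = OSError
        _want_write_base = _want_read_base = OSError

    class TLSError(_error_base):
        pass

    class WantWriteError(TLSError, _want_write_base):
        pass

    class WantReadError(TLSError, _want_read_base):
        pass

    # Old catch sites like `except ssl.SSLWantWriteError:` then also
    # catch the new tls exceptions whenever the ssl module exists.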
On 2017-01-26 08:49, Nick Coghlan wrote: [...]
Option 5: as with Option 4, but the "ssl" module is also changed such that it *always* defines at least ssl.SSLError, ssl.SSLWantReadError, and ssl.SSLWantWriteError (and perhaps some of the other APIs that can be emulated atop the new tls abstraction), even if OpenSSL itself is unavailable [...] Option 5 would cover even that last case: legacy API consumers that only relied on being able to catch the legacy exceptions would tolerate the use of non-OpenSSL backends even in environments where OpenSSL itself wasn't available
Hi Nick,

I'm a bit worried that option 5 is wasting resources and/or has unwanted side effects. Import of ssl is costly because it also loads and initializes OpenSSL. It's an unnecessary burden for applications that do not wish to use OpenSSL (macOS SecureTransport, Windows SChannel) at all or not the bundled OpenSSL version (static builds of cryptography).

How about we move the exceptions and the base class for the TLSWrappedSocket to the `socket` module instead? In CPython the exceptions would live in _socket and get exported as PyCapsule. The socket module provides:

    class TLSError(OSError):
        """socket.TLSError"""

    class TLSWantWriteError(TLSError):
        """socket.TLSWantWriteError"""

    class TLSWantReadError(TLSError):
        """socket.TLSWantReadError"""

    class AbstractSocket(metaclass=abc.ABCMeta):
        """socket.AbstractSocket"""

The tls module provides:

    import socket
    from socket import TLSError, TLSWantReadError, TLSWantWriteError

    class TLSWrappedSocket(socket.AbstractSocket):
        pass

Christian
On 26 Jan 2017, at 07:49, Nick Coghlan <ncoghlan@gmail.com> wrote:
Option 4: tls.TLSError, tls.WantReadError, tls.WantWriteError are defined as inheriting from ssl.SSLError, ssl.SSLWantReadError, and ssl.SSLWantWriteError *if* the latter are defined
Option 5: as with Option 4, but the "ssl" module is also changed such that it *always* defines at least ssl.SSLError, ssl.SSLWantReadError, and ssl.SSLWantWriteError (and perhaps some of the other APIs that can be emulated atop the new tls abstraction), even if OpenSSL itself is unavailable
Here’s my problem with this:

    try:
        socket.recv(8192)
    except tls.WantWriteError:
        socket.write(some_buffer)

This code does not work with the legacy ssl module, because issubclass(ssl.SSLWantWriteError, tls.WantWriteError) is false. This means that we need to write a shim over the legacy ssl module that wraps *all* API calls, catches all exceptions and then translates them into subclasses of the tls error classes. That seems entirely batty to me.

Cory
On Thu, Jan 26, 2017 at 1:50 AM, Cory Benfield <cory@lukasa.co.uk> wrote:
On 26 Jan 2017, at 07:49, Nick Coghlan <ncoghlan@gmail.com> wrote:
Option 4: tls.TLSError, tls.WantReadError, tls.WantWriteError are defined as inheriting from ssl.SSLError, ssl.SSLWantReadError, and ssl.SSLWantWriteError *if* the latter are defined
Option 5: as with Option 4, but the "ssl" module is also changed such that it *always* defines at least ssl.SSLError, ssl.SSLWantReadError, and ssl.SSLWantWriteError (and perhaps some of the other APIs that can be emulated atop the new tls abstraction), even if OpenSSL itself is unavailable
Here’s my problem with this:
    try:
        socket.recv(8192)
    except tls.WantWriteError:
        socket.write(some_buffer)
This code does not work with the legacy ssl module, because issubclass(ssl.SSLWantWriteError, tls.WantWriteError) is false. This means that we need to write a shim over the legacy ssl module that wraps *all* API calls, catches all exceptions and then translates them into subclasses of the tls error classes. That seems entirely batty to me.
It seems like the simplest effective solution to these problems would be for ssl in 3.7 to do

    # ssl.py
    from tls import (
        TLSError as SSLError,
        WantWriteError as SSLWantWriteError,
        WantReadError as SSLWantReadError,
    )

and then legacy code that catches SSLWant{Write,Read}Error will be automatically ported forward to the new TLS world. And in the backported version of the tls module for older Pythons, we could have it do the reverse to accomplish a similar effect (at the cost of importing ssl -- but this seems unavoidable in old-Python):

    # tls.py
    from ssl import (
        SSLError as TLSError,
        SSLWantWriteError as WantWriteError,
        SSLWantReadError as WantReadError,
    )

There's really no case where it's important to distinguish these, right?

-n

-- Nathaniel J. Smith -- https://vorpus.org
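Under that aliasing, old and new catch sites are interchangeable; a quick sketch of the effect, assuming a hypothetical tls module aliased as above:

    import ssl
    import tls  # the proposed new module; does not exist today

    # The names bind to the *same* class objects, so either spelling
    # catches the other:
    assert ssl.SSLWantWriteError is tls.WantWriteError

    try:
        raise tls.WantWriteError()
    except ssl.SSLWantWriteError:
        pass  # legacy catch site keeps working unmodified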
On 26 January 2017 at 10:50, Cory Benfield <cory@lukasa.co.uk> wrote:
On 26 Jan 2017, at 07:49, Nick Coghlan <ncoghlan@gmail.com> wrote:
Option 4: tls.TLSError, tls.WantReadError, tls.WantWriteError are defined as inheriting from ssl.SSLError, ssl.SSLWantReadError, and ssl.SSLWantWriteError *if* the latter are defined
Option 5: as with Option 4, but the "ssl" module is also changed such that it *always* defines at least ssl.SSLError, ssl.SSLWantReadError, and ssl.SSLWantWriteError (and perhaps some of the other APIs that can be emulated atop the new tls abstraction), even if OpenSSL itself is unavailable
Here’s my problem with this:
    try:
        socket.recv(8192)
    except tls.WantWriteError:
        socket.write(some_buffer)
This code does not work with the legacy ssl module, because issubclass(ssl.SSLWantWriteError, tls.WantWriteError) is false. This means that we need to write a shim over the legacy ssl module that wraps *all* API calls, catches all exceptions and then translates them into subclasses of the tls error classes. That seems entirely batty to me.
OK, so we have two competing problems here:

1. How do we write *new* API client code that is agnostic to whether or not the security implementation is traditional ssl or a new tls backend?
2. How does a library that exports the ssl exceptions migrate to using a tls backend instead, without having to catch and rewrap tls exceptions in the legacy ones?

Meeting both constraints at the same time would require exception *aliases* rather than one set of exceptions inheriting from the other, such that we get:

    assert tls.TLSError is ssl.SSLError
    assert tls.WantWriteError is ssl.SSLWantWriteError
    assert tls.WantReadError is ssl.SSLWantReadError

In an ideal world, that could be handled just by having the ssl module import the new tls module and alias the exceptions accordingly. Talking to Christian about it, things might be a little messier in practice due to the way the _ssl/ssl C/Python split works, but there shouldn't be any insurmountable barriers to going down the exception aliasing path in 3.7+.

Backports to older versions would still need to contend with these being different exception objects, but one possible way of tackling that would be to inject a dummy ssl module into sys.modules before the regular one had a chance to be imported.

Cheers, Nick.

-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On 26 Jan 2017, at 14:23, Nick Coghlan <ncoghlan@gmail.com> wrote:
In an ideal world, that could be handled just by having the ssl module import the new tls module and alias the exceptions accordingly. Talking to Christian about it, things might be a little messier in practice due to the way the _ssl/ssl C/Python split works, but there shouldn't be any insurmountable barriers to going down the exception aliasing path in 3.7+.
I’d be ok with going down the aliasing route. Is this a concern worth noting in the PEP, do you think? Cory
On 26 January 2017 at 16:22, Cory Benfield <cory@lukasa.co.uk> wrote:
On 26 Jan 2017, at 14:23, Nick Coghlan <ncoghlan@gmail.com> wrote:
In an ideal world, that could be handled just by having the ssl module import the new tls module and alias the exceptions accordingly. Talking to Christian about it, things might be a little messier in practice due to the way the _ssl/ssl C/Python split works, but there shouldn't be any insurmountable barriers to going down the exception aliasing path in 3.7+.
I’d be ok with going down the aliasing route. Is this a concern worth noting in the PEP, do you think?
I think so, as the aliasing means that:

- new code can just catch the tls exceptions and automatically catch the old ssl exceptions as well
- old code that just catches the old exceptions will also catch the new exceptions, so helper library authors don't need to worry about catching the new exceptions and re-raising them as the old ones

I don't think you need to explain the technical details of how the aliasing would work though - that really is an implementation detail (although you may want to provide a reference to Christian's email that spells it out in full).

Cheers, Nick.

-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On 2017-01-26 15:23, Nick Coghlan wrote:
On 26 January 2017 at 10:50, Cory Benfield <cory@lukasa.co.uk> wrote:
On 26 Jan 2017, at 07:49, Nick Coghlan <ncoghlan@gmail.com> wrote:
Option 4: tls.TLSError, tls.WantReadError, tls.WantWriteError are defined as inheriting from ssl.SSLError, ssl.SSLWantReadError, and ssl.SSLWantWriteError *if* the latter are defined
Option 5: as with Option 4, but the "ssl" module is also changed such that it *always* defines at least ssl.SSLError, ssl.SSLWantReadError, and ssl.SSLWantWriteError (and perhaps some of the other APIs that can be emulated atop the new tls abstraction), even if OpenSSL itself is unavailable
Here’s my problem with this:
    try:
        socket.recv(8192)
    except tls.WantWriteError:
        socket.write(some_buffer)
This code does not work with the legacy ssl module, because issubclass(ssl.SSLWantWriteError, tls.WantWriteError) is false. This means that we need to write a shim over the legacy ssl module that wraps *all* API calls, catches all exceptions and then translates them into subclasses of the tls error classes. That seems entirely batty to me.
OK, so we have two competing problems here:
1. How do we write *new* API client code that is agnostic to whether or not the security implementation is traditional ssl or a new tls backend?
2. How does a library that exports the ssl exceptions migrate to using a tls backend instead, without having to catch and rewrap tls exceptions in the legacy ones?
Meeting both constraints at the same time would require exception *aliases* rather than one set of exceptions inheriting from the other, such that we get:
    assert tls.TLSError is ssl.SSLError
    assert tls.WantWriteError is ssl.SSLWantWriteError
    assert tls.WantReadError is ssl.SSLWantReadError
In an ideal world, that could be handled just by having the ssl module import the new tls module and alias the exceptions accordingly. Talking to Christian about it, things might be a little messier in practice due to the way the _ssl/ssl C/Python split works, but there shouldn't be any insurmountable barriers to going down the exception aliasing path in 3.7+.
Backports to older versions would still need to contend with these being different exception objects, but one possible way of tackling that would be to inject a dummy ssl module into sys.modules before the regular one had a chance to be imported.
For technical reasons it is beneficial to define SSL exceptions in C code. I don't want to force people to load _ssl to avoid the overhead of OpenSSL loading and initialization. After some brainstorming with Nick I came up with the idea to define the exceptions in _socket. The _ssl module imports a PyCapsule from _socket anyway. I can just piggyback on the PyCapsule.

For maximum compatibility with Python 2, the tls module should subclass from socket.error anyway, too. Python 2:
    >>> import ssl
    >>> ssl.SSLError.__mro__
    (<class 'ssl.SSLError'>, <class 'socket.error'>, <type 'exceptions.IOError'>, <type 'exceptions.EnvironmentError'>, <type 'exceptions.StandardError'>, <type 'exceptions.Exception'>, <type 'exceptions.BaseException'>, <type 'object'>)
Python 3:
    >>> import ssl, socket
    >>> ssl.SSLError.__mro__
    (<class 'ssl.SSLError'>, <class 'OSError'>, <class 'Exception'>, <class 'BaseException'>, <class 'object'>)
    >>> socket.error
    <class 'OSError'>
This import block should (hopefully) cover all bases:

    try:
        # CPython 3.7+
        from _socket import SSLError as TLSError
        from _socket import SSLWantWriteError as WantWriteError
        from _socket import SSLWantReadError as WantReadError
    except ImportError:
        # CPython < 3.7 and other implementations
        try:
            from ssl import SSLError as TLSError
            from ssl import SSLWantWriteError as WantWriteError
            from ssl import SSLWantReadError as WantReadError
        except ImportError:
            # ssl module is not available
            # Python 2: socket.error is a subclass of IOError
            # Python 3: socket.error is just an alias of OSError
            import socket

            class TLSError(socket.error):
                pass

            class WantWriteError(TLSError):
                pass

            class WantReadError(TLSError):
                pass

Christian
On 01/22/2017 04:18 AM, Cory Benfield wrote:
On 21 Jan 2017, at 13:07, Nick Coghlan wrote:
Proposed Interface
^^^^^^^^^^^^^^^^^^
The proposed interface for the new module is influenced by the combined set of limitations of the above implementations. Specifically, as every implementation *except* OpenSSL requires that each individual cipher be provided, there is no option but to provide that lowest-common denominator approach.
The second sentence here doesn't match the description of SChannel cipher configuration, so I'm not clear on how the proposed interface would map to an SChannel backend.
Yeah, this is a point I’m struggling with internally.
SChannel’s API is frustratingly limited. The way I see it there are two options:
1. Allow the possibility that SChannel may allow ciphers that others do not given the same cipher configuration. This is not a good solution, frankly, because the situation in which this will happen is predominantly that SChannel will allow modes or key/hash sizes that the other implementations would forbid, given the same cipher configuration.
2. Force all other implementations to be as bad as SChannel: that is, to make it impossible to restrict key sizes and cipher modes on all implementations because SChannel can’t.
I don’t really like either of those choices. I *think* 2 is worse than 1, but I’m not sure about that. People with opinions should really weigh in on it.
I am mostly woefully ignorant of these things, but I think option 1 is far preferable to option 2 -- especially if we get to choose which back-end will be used. Even if we can't, I don't think crippling the good actors is the way forward. In the cases where SChannel can't apply / ignores restrictions, can we not issue a warning (probably with the logging module) saying so?

--
~Ethan~
On 22 Jan 2017, at 18:11, Ethan Furman <ethan@stoneleaf.us> wrote:
In the cases where SChannel can't apply / ignores restrictions, can we not issue a warning (probably with the logging module) saying so?
That’s a matter for the concrete implementation. If the Python standard library grows an SChannel implementation then certainly it can. Cory
Participants (6): Christian Heimes, Cory Benfield, Ethan Furman, Nathaniel Smith, Nick Coghlan, Wes Turner