[Python-ideas] PEP 504: Using the system RNG by default

Nick Coghlan ncoghlan at gmail.com
Wed Sep 16 07:38:36 CEST 2015

On 16 September 2015 at 14:27, David Mertz <mertz at gnosis.cx> wrote:
> On Sep 15, 2015 7:23 PM, "Stephen J. Turnbull" <stephen at xemacs.org> wrote:
>> A pseudo-randomly selected recent quote:
>>  > It would never occur to me to reach for the random module if I want
>>  > to do cryptography.
>> That doesn't mean that security has to be #1 always and everywhere in
>> designing Python, but I find it pretty distressing that apparently a
>> lot of people either don't understand or don't care about what's at
>> stake in these kinds of decisions *for the rest of the world*.
>> The reality is that security that is not on by default is not
>> secure.  Any break in a dike can flood a whole town.
> This feels somewhere between disingenuous and dishonest. Just like I don't
> use the random module for cryptography, I also don't use the socket module
> or the threading module for cryptography.

That's great that you already know not to use the random module for
cryptography. Unfortunately, this is a lesson that needs to be taught
developer by developer: "don't use the random module for security
sensitive tasks". When they ask "Why not?", they get hit with a wall
of confusing arcana about brute force search spaces and
cryptographically secure random number generators, and come away
dissatisfied with the explanation, because cryptography is one of the
areas of computing where our intuitions break down, so it takes years
to retrain our brains to adopt the relevant mindset. Beginners don't
even get that far, as they have to ask "What's a security sensitive
task?" while they're still at a stage where they're trying to grasp
the basic concept of computer generated random numbers (this is a
concrete problem with the current situation, as a warning that says
"Don't use this for <X>" is equivalent to "Don't use this" if you
don't yet know how to identify "<X>").
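For context, the concrete alternative under discussion is drawing from
the operating system's CSPRNG rather than the seedable Mersenne
Twister - the approach the companion proposal PEP 506 packages up as
the `secrets` module (part of Python since 3.6). A minimal sketch of
the distinction:

```python
import random
import secrets

# Seedable Mersenne Twister: fine for simulations and games, but an
# attacker who observes enough outputs can reconstruct its internal
# state and predict every future value.
simulation_sample = random.random()

# The secrets module draws from the operating system's CSPRNG
# (os.urandom under the hood), so it is suitable for session tokens,
# password reset links, and similar security sensitive values.
session_token = secrets.token_hex(16)    # 32 hex characters
reset_token = secrets.token_urlsafe(16)  # URL-safe text token

print(session_token)
```

The point of the proposal is that a beginner who reaches for "random
text for a token" should land on something like the second form by
default, without first needing to understand why the first is unsafe.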

It's instinctive for humans to avoid additional work when it provides
no immediate benefit to us personally. This is a sensible time
management strategy, but it's proved to be a serious problem in the
context of computer security. An analogy that came up in one of the
earlier threads is this:

* as an individual lottery ticket holder, assuming you're going to win
is a bad assumption
* as a lottery operator, assuming someone, somewhere, is going to win
is a good assumption

Infrastructure security engineers are lottery operators - with
millions of software development projects, millions of businesses
demanding online web presences, and tens of millions of developers
worldwide (with many, many more on the way as computing becomes a
compulsory part of schooling), any potential mistake is going to be
made and exploited eventually - we just have no way of predicting when
or where. Unlike lottery operators (who get to set their prize
levels), we also have no way of predicting the severity of the
consequences when that happens.
The *problem* we have is that individual developers are lottery ticket
holders - the probability of *our* particular component being the one
that gets compromised is vanishingly small, so the incentive to
inflict additional work on ourselves to mitigate security concerns is
similarly small (although some folks do it anyway out of sheer
interest, and some have professional incentives to do so).

So let's assume any given component has a 1 in 10000 chance of being
compromised (0.01%). We only have to get to 100k components before the
aggregate chance of at least one component being compromised rises to
almost 100% (around 99.995%). It's at this point that the sheer scale
of the internet starts working against us - while it's estimated that
there are only around 30 million developers (both professionals and
hobbyists) worldwide, it's further estimated that there are 3
*billion* people with access to the internet. Neither of those numbers
is going to suddenly start getting smaller, so we become interested in
security risks with a lower and lower probability of being exploited.
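The arithmetic behind that aggregate figure is just the complement of
"no component is compromised", using the illustrative 1-in-10000
assumption from above:

```python
# Illustrative figures from the argument above: a 1-in-10000 chance of
# compromise per component, across 100k independent components.
p_single = 1 / 10_000
n_components = 100_000

# P(at least one compromise) = 1 - P(no component is compromised)
p_any = 1 - (1 - p_single) ** n_components
print(f"{p_any:.3%}")  # → 99.995%
```

Doubling the number of components, or halving the per-component risk,
barely moves that figure - which is why risks that are individually
negligible still matter at internet scale.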

Accordingly, we want "don't worry about security" to be the *right
answer* in as many cases as possible - there are always going to be
plenty of unavoidable security risks in any software development
project, so eliminating the avoidable ones by default makes it easier
to focus attention on other areas of potential concern.


Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
