[Python-Dev] BDFL ruling request: should we block forever waiting for high-quality random bits?

Larry Hastings larry at hastings.org
Thu Jun 9 07:25:04 EDT 2016


A problem has surfaced just this week in 3.5.1.  Obviously this is a 
good time to fix it for 3.5.2.  But there's a big argument over what is 
"broken" and what is an appropriate "fix".

As 3.5 Release Manager, I can put my foot down and make rulings, and 
AFAIK the only way to overrule me is with the BDFL.  In two of three 
cases I've put my foot down.  In the third I'm pretty sure I'm right, 
but IIUC literally everyone else with a stated opinion disagrees with 
me.  So I thought it best I escalate it.  Note that 3.5.2 is going to 
wait until the issue is settled and any changes to behavior are written 
and checked in.

(Blanket disclaimer for the below: in some places I'm trying to 
communicate other people's positions.  I apologize if I misrepresented 
yours; please reply and correct my mistake.  Also, sorry for the length 
of this email.  But feel even sorrier for me: this debate has already 
eaten two days this week.)


BACKGROUND

For 3.5, os.urandom() was changed: instead of reading from /dev/urandom, 
it uses the new getrandom() system call where available.  getrandom() is 
a Linux system call (which has since been cloned by Solaris).  As CPython 
uses it, getrandom() reads from the same PRNG that /dev/urandom gets its 
bits from.  But because it's a system call you don't have to mess around 
with file handles, and it always works in chrooted environments.  Sounds 
like a fine idea.
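
For the curious, here's a minimal sketch (very much not CPython's actual 
implementation) of what calling getrandom() directly looks like from 
Python via ctypes.  The syscall number 318 is specific to x86-64, and 
GRND_NONBLOCK is the flag from the kernel headers; note that no file 
handle is involved, which is why it keeps working inside a chroot:

    import ctypes
    import os

    SYS_getrandom = 318       # x86-64; other architectures use different numbers
    GRND_NONBLOCK = 0x0001    # fail with EAGAIN instead of blocking

    _libc = ctypes.CDLL(None, use_errno=True)

    def getrandom(nbytes, flags=0):
        buf = ctypes.create_string_buffer(nbytes)
        got = _libc.syscall(SYS_getrandom, buf, nbytes, flags)
        if got < 0:
            err = ctypes.get_errno()
            raise OSError(err, os.strerror(err))
        return buf.raw[:got]

    # Without GRND_NONBLOCK this may block early in boot, until the kernel's
    # entropy pool has been initialized; after that it returns immediately.
    print(getrandom(16).hex())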

Also for 3.5, several other places where CPython internally needs random 
bits were switched from reading from /dev/urandom to calling 
getrandom().  The two that I know of: choosing the seed for hash 
randomization, and initializing the default Mersenne Twister for the 
random module.

There's one subtle but important difference between /dev/urandom and 
getrandom().  At startup, Linux seeds the urandom PRNG from the entropy 
pool.  If the entropy pool is uninitialized, what happens? CPython's 
calls to getrandom() will block until the entropy pool is initialized, 
which is usually just a few seconds (or less) after startup.  But 
/dev/urandom *guarantees* that reads will *always* work.  If the entropy 
pool hasn't been initialized, it pulls numbers from the PRNG before it's 
been properly seeded.  What this results in depends on various aspects 
of the configuration (do you have ECC RAM? how long was the machine 
powered down? does the system have a correct realtime clock?).  In 
extreme circumstances this may mean the "random" numbers are shockingly 
predictable!
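
To make the contrast concrete, here is essentially all that the 
never-blocking side amounts to; a plain read of /dev/urandom is roughly 
what os.urandom() on Linux did through 3.4:

    # Reading /dev/urandom is an ordinary file read: it returns immediately
    # even right after boot, before the pool has been seeded, at which point
    # the bytes may be of very poor quality.  getrandom() would block here
    # instead (or fail with EAGAIN if called with GRND_NONBLOCK).
    with open("/dev/urandom", "rb") as f:
        data = f.read(16)
    print(data.hex())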

Under normal circumstances this minor difference is irrelevant. After 
all, when would the entropy pool ever be uninitialized?


THE PROBLEM

Issue #26839:

    http://bugs.python.org/issue26839

(warning, the issue is now astonishingly long, and exhausting to read, 
and various bits of it are factually wrong)

A user reported that when starting CPython soon after boot on a fresh 
virtual machine, the process would hang for a long time.  Someone on the 
issue reported observing delays of over 90 seconds.  Later we found out 
it wasn't that CPython became usable after 90 seconds; those 90-second 
delays were how long it took systemd to time out and simply kill the 
process.
It's not clear what the upper bound on the delay might be.

The issue author had already identified the cause: CPython was blocking 
on getrandom() in order to initialize hash randomization. On this fresh 
virtual machine the entropy pool started out uninitialized.  And since 
the only thing running on the machine was CPython, and since CPython was 
blocked on initialization, the entropy pool was initializing very, very 
slowly.

Other posters to the thread pointed out that the same thing would happen 
in "import random", if your code could get that far.  The constructor 
for the Random() object would seed the Mersenne Twister, which would 
call getrandom() and block.
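
Roughly speaking (this is a paraphrase, not the actual code in 
Lib/random.py), the default seeding amounts to something like the 
following, and that urandom-equivalent call is exactly where the block 
happens:

    import os

    # Paraphrased sketch of the default Mersenne Twister seeding: with no
    # explicit seed, pull bytes from the kernel CSPRNG (os.urandom here;
    # the byte count is illustrative) and turn them into a big integer.
    def default_seed(nbytes=32):
        return int.from_bytes(os.urandom(nbytes), 'big')

    seed_value = default_seed()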

Naturally, callers to os.urandom() could also block for an unbounded 
period for the same reason.


MY RULINGS SO FAR

1) The change in 3.5 that means "import random" may block for an 
unbounded period of time on Linux due to the switch to getrandom() must 
be backed out or amended so that it never blocks.

I *think* everyone agrees with this.  The Mersenne Twister is not a 
CPRNG, so seeding it with crypto-quality bits isn't necessary.  And 
unbounded delays are bad.
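
To illustrate the point: a cheap, never-blocking seed along these lines 
is perfectly adequate for the Mersenne Twister (the exact mixing below is 
just an example, not the proposed fix; it's similar in spirit to the 
time-based fallback random.py already has for platforms with no urandom 
source at all):

    import os
    import random
    import time

    # Illustrative only: a never-blocking seed.  It is nowhere near
    # crypto-quality, but the Mersenne Twister isn't a CPRNG anyway.
    cheap_seed = int(time.time() * 256) ^ os.getpid()
    rng = random.Random(cheap_seed)
    print(rng.random())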


2) The change in 3.5 that means hash randomization initialization may 
block for an unbounded period of time on Linux due to the switch to 
getrandom() must be backed out or amended so that it never blocks.

I believe most people agree with me.  The cryptography experts 
disagree.  IIUC both Alex Gaynor and Christian Heimes feel the blocking 
is preferable to non-random hash "randomization".

Yes, the bad random data means the hashing will be predictable. Neither 
choice is exactly what you want.  But most people feel it's simply 
unreasonable that in extreme corner cases CPython can block for an 
unbounded amount of time before running user code.
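
As an aside, code that would rather not block and doesn't care about hash 
randomization already has a knob for this: setting PYTHONHASHSEED means 
the interpreter needs no random bits at all at startup, at the price of 
predictable hashing.  A small sketch, spawning a child interpreter that 
way:

    import os
    import subprocess
    import sys

    # With an explicit PYTHONHASHSEED the child interpreter needs no entropy
    # for hash randomization at startup, so it cannot block on it; the
    # trade-off is that its string hashing is then predictable.
    env = dict(os.environ, PYTHONHASHSEED="12345")
    subprocess.check_call(
        [sys.executable, "-c", "print(hash('spam'))"],
        env=env,
    )

That doesn't help programs that never asked for it, of course, which is 
why the default matters.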


OS.URANDOM()

Here's where it gets complicated--and where everyone else thinks I'm wrong.

os.urandom() is currently the best place for a Python programmer to get 
high-quality random bits.  The one-line summary for os.urandom() reads: 
"Return a string of n random bytes suitable for cryptographic use."

On 3.4 and before, on Linux, os.urandom() would never block, but if the 
entropy pool was uninitialized it could return very-very-poor-quality 
random bits.  On 3.5.0 and 3.5.1, on Linux, when using the getrandom() 
call, it will instead block for an apparently unbounded period before 
returning high-quality random bits.  The question: is this new behavior 
preferable, or should we return to the old behavior?


Since I'm the one writing this email, let me make the case for my 
position: I think that os.urandom() should never block on Linux. Why?


1) Functions in the os module that look like OS functions should behave 
predictably like thin wrappers over those OS functions.

Most of the time this is exactly what they are.  In some cases they're 
more sophisticated; examples include os.popen(), os.scandir(), and the 
byzantine os.utime().  There are also some functions provided by the os 
module that don't resemble any native functionality, but these have 
unique names that don't look like anything provided by the OS.

This makes the behavior of the Python function easy to reason about: it 
always behaves like your local OS function.  Python provides os.stat() 
and it behaves like the local stat().  So if you want to know how any os 
module function behaves, just read your local man page.  Therefore, 
os.urandom() should behave exactly like a thin shell around reading the 
local /dev/urandom.

On Linux, /dev/urandom guarantees that it will never block.  This means 
it has undesirable behavior if read immediately after a fresh boot.  But 
this guarantee is so strong that Theodore Ts'o was unwilling to break it 
to fix the undesirable behavior.  Instead he added the getrandom() system 
call and left /dev/urandom alone.  Therefore, on Linux, os.urandom() 
should behave the same way, and also never block.


2) It's unfair to change the semantics of a well-established function to 
such a radical degree.

os.urandom() has been in Python since at least 2.6--I was too lazy to go 
back any further.  From 2.6 to 3.4, it behaved exactly like 
/dev/urandom, which meant that on Linux it would never block.  As of 
3.5, on Linux, it might now block for an unbounded period of time. Any 
code that calls os.urandom() has had its behavior radically changed in 
this extreme corner case.


3) os.urandom() doesn't actually guarantee it's suitable for cryptography.

The documentation for os.urandom() has contained this sentence, 
untouched, since 2.6:

    The returned data should be unpredictable enough for cryptographic
    applications, though its exact quality depends on the OS
    implementation. On a Unix-like system this will query /dev/urandom,
    and on Windows it will use CryptGenRandom().

Of course, version 3.5 added this:

    On Linux 3.17 and newer, the getrandom() syscall is now used when
    available.

But the waffling about its suitability for cryptography remains 
unchanged.  So, while it's undesirable that os.urandom() might return 
shockingly poor quality random bits, it is *permissible* according to 
the documentation.


4) This really is a rare corner-case we're talking about.

I just want to re-state: this case on Linux where /dev/urandom returns 
totally predictable bytes, and getrandom() will block, only happens when 
the entropy pool for urandom is uninitialized. Although it has been seen 
in the field, it's extremely rare. 99.99999%+ of the time, reading 
/dev/urandom and calling getrandom() will both return the exact same 
high-quality random bits without blocking.


5) This corner-case behavior is fixable externally to CPython.

I don't really understand the subject, but apparently it's entirely 
reasonable to expect sysadmins to directly manage the entropy pools of 
virtual machines.  They should be able to spin up their VMs with a 
pre-filled entropy pool.  So it should be possible to ensure that 
os.urandom() always returns the high-quality random bits we wanted, even 
on freshly-booted VMs.


6) Guido and Tim Peters already decided once that os.urandom() should 
behave like /dev/urandom.

Issue #25003:

    http://bugs.python.org/issue25003


In 2.7.10, os.urandom() was changed to call getentropy() instead of 
reading /dev/urandom when getentropy() was available.  getentropy() was 
"stunningly slow" on Solaris, on the order of 300x slower than reading 
/dev/urandom.  Guido and Tim both participated in the discussion on the 
issue; Guido also apparently discussed it via email with Theo de Raadt.

While it's not quite apples-to-apples, I think this establishes some 
precedent that os.urandom() should
   * behave like /dev/urandom, and
   * be fast.


--

On the other side is... everybody else.  I've already spent an enormous 
amount of time researching and writing and re-writing this email.  
Rather than try (and fail) to accurately present the other sides of this 
debate, I'm just going to end the email here and let the other 
participants reply and voice their views.


Bottom line: Guido, in this extreme corner case on Linux, should 
os.urandom() return bad random data like it used to, or should it block 
forever like it does in 3.5.0 and 3.5.1?


//arry/