On 11 Jun 2016, at 09:24, Larry Hastings <larry@hastings.org> wrote:

> Only Linux and OS X have never-blocking /dev/urandom. On Linux, you can
> choose to block by calling getrandom(). On OS X you have no choice, you
> can only use the never-blocking /dev/urandom. (OS X also has a
> /dev/random but it behaves identically to /dev/urandom.) OS X's man page
> reassuringly claims blocking is never necessary; the blogosphere
> disagrees.
>
> If I were writing the function for the secrets module, I'd write it like
> you have above: call os.getrandom() if it's present, and os.urandom() if
> it isn't. I believe that achieves current-best-practice everywhere: it
> does the right thing on Linux, it does the right thing on Solaris, it
> does the right thing on all the other OSes where reading from
> /dev/urandom can block, and it uses the only facility available to us on
> OS X.
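For reference, the pattern Larry describes might be sketched roughly like this (the function name is mine, purely illustrative; os.getrandom() only exists on Linux builds of Python 3.6+):

```python
import os


def strong_random_bytes(nbytes):
    """Return nbytes of cryptographically strong random bytes.

    Prefer os.getrandom() where it exists (Linux 3.17+, Python 3.6+):
    with the default flags it blocks until the kernel's urandom pool
    has been initialised, then never blocks again. Everywhere else,
    fall back to os.urandom(), which is the only facility available on
    OS X (where /dev/urandom never blocks anyway).
    """
    if hasattr(os, "getrandom"):
        return os.getrandom(nbytes)
    return os.urandom(nbytes)
```

Either branch returns a bytes object of the requested length, so callers don't need to care which path was taken.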
Sorry Larry, but as far as I know this is misleading (it’s not *wrong*, but it suggests that OS X’s /dev/urandom is the same as Linux’s, which is emphatically not true).

I’ve found the discussion around OS X’s random devices to be weirdly abstract, given that the source code for it is public, so I went and took a look. My initial reading of it (and, to be clear, this is a high-level read of a codebase I don’t know well, so please take this with the grain of salt that is intended) is that the operating system literally will not boot without at least 128 bits of entropy to read from the EFI boot loader. Absent those 128 bits, the kernel panics rather than continuing to boot.

Generally speaking, that entropy will come from RDRAND, given the restrictions on where OS X can be run (Intel CPUs for real OS X; VMs must be virtualised on top of OS X, and so also run on top of Intel CPUs), which imposes a baseline on the quality of the entropy you can get. Assuming that OS X is being run in a manner that is acceptable from the perspective of its license agreement (and we can all agree that no-one would violate the terms of OS X’s license agreement, right?), I think it’s reasonable to assume that OS X, virtualised or not, is getting 128 bits of somewhat sensible entropy from the boot loader/CPU before it boots.

That means we can say this about OS X’s /dev/urandom: the reason it never blocks is that “not enough entropy to generate good random numbers” is synonymous with “not enough entropy to boot the OS”. So maybe we can stop casting aspersions on OS X’s RNG now.

Cory