On Sun, Jun 12, 2016 at 01:49:34AM -0400, Random832 wrote:
> > The intention behind getrandom() is that it is intended *only* for cryptographic purposes.
> I'm somewhat confused now because if that's the case it seems to accomplish multiple unrelated things. Why was this implemented as a system call rather than a device (or an ioctl on the existing ones)? If there's a benefit in not going through the non-atomic (and possibly resource limited) procedure of acquiring a file descriptor, reading from it, and closing it, why is that benefit not also extended to non-cryptographic users of urandom via allowing the system call to be used in that way?
This design was taken from OpenBSD, and the goal with getentropy(2) (which is also designed only for cryptographic use cases) was that a denial of service attack (fd exhaustion) could *not* force an application to fall back to a weaker -- in some cases, very weak or non-existent -- source of randomness. Non-cryptographic users don't need to use this interface at all. They can just use srandom(3)/random(3) and be happy.
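For the cryptographic case the whole point is that no file descriptor is involved. A minimal sketch of what that looks like, with an illustrative helper name (this assumes glibc 2.25 or later, which provides the getrandom(3) wrapper in <sys/random.h>; older libcs have to go through syscall(SYS_getrandom, ...)):

    #include <sys/types.h>
    #include <sys/random.h>
    #include <stdio.h>

    /* Illustrative helper: fill key[] with cryptographically strong
     * random bytes.  No file descriptor is opened, so fd exhaustion
     * can't force a fallback to a weaker source of randomness. */
    int get_key(unsigned char *key, size_t len)
    {
        ssize_t n = getrandom(key, len, 0);

        if (n < 0 || (size_t) n != len) {
            perror("getrandom");
            return -1;      /* fail hard; don't silently fall back */
        }
        return 0;
    }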
Anyway, if you don't need cryptographic guarantees, you don't need getrandom(2) or getentropy(2); something like this will do just fine:
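(A rough sketch; seed_prng() is just an illustrative name. The idea is to seed srandom(3) once, from /dev/urandom if it happens to be readable and from the clock otherwise.)

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Seed the non-cryptographic PRNG once at startup.  /dev/urandom
     * is a nicety here, not a requirement; the clock is good enough. */
    void seed_prng(void)
    {
        unsigned int seed;
        FILE *f = fopen("/dev/urandom", "r");

        if (f == NULL || fread(&seed, sizeof(seed), 1, f) != 1)
            seed = (unsigned int) time(NULL);
        if (f)
            fclose(f);
        srandom(seed);
    }

After that, plain random(3) is fine for dice rolls, simulations, load balancing, and so on.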
> Then what's /dev/urandom *for*, anyway?
/dev/urandom is a legacy interface. It was originally intended for cryptographic use cases, but it dates from the days when very few programs needed a secure cryptographic random number generator, and it was assumed that application programmers would be very careful about checking error codes, etc.

It also dates back to a time when the NSA was still pushing very hard for cryptographic export controls (hence the use of SHA-1 instead of an encryption algorithm), and when many people questioned whether the SHA-1 algorithm, as designed by the NSA, had a back door in it. (As it turns out, the NSA did put a back door into DUAL-EC, so in retrospect this concern wasn't that unreasonable.) Because of those concerns, the assumption was that the few applications which really wanted to get security right (e.g., PGP, which still uses /dev/random for long-term key generation) would want to use /dev/random, deal with entropy accounting, and ask the user to type randomness on the keyboard and move the mouse around while generating a random key.

But times change. These days people are much more likely to believe that SHA-1 is in fact cryptographically secure, and newer crypto hash algorithms are designed by teams from all over the world, with NIST/NSA merely reviewing the submissions (along with everyone else). SHA-3, for example, was *not* designed by the NSA, and it was evaluated through a much more open process than SHA-1 was.

Also, we now have a much larger set of people writing code which is sensitive to cryptographic issues. Back when I wrote /dev/random, I had probably met, or at least electronically corresponded with, a large number of the folks who were working on network security protocols, at least in the non-classified world. These days there is much less trust that the people writing code against /dev/[u]random are in fact careful and competent security engineers. Whether or not that is a fair concern, there has been a change in API design ethos, away from the Unix attitude of "let's make things as general as possible, in case someone clever comes up with a use case we didn't think of," and toward "idiots are ingenious, so they will find ways to misuse even an idiot-proof interface; lock it down as much as possible."

OpenBSD's getentropy(2) interface is a strong example of this new attitude towards API design. getrandom(2) is not quite so doctrinaire (I added a flags field, where getentropy(2) didn't even give programmers that option), but it is following in the same tradition.

Cheers,

						- Ted