Guido van Rossum writes:
> This is an attractive nuisance for anyone who *doesn't* need deterministic output from their random numbers and leads to situations where people are incorrectly using MT when they should be using SystemRandom because they don't know any better.
> That feels condescending,
It is, but it's also accurate: there's plenty of anecdotal evidence that this actually happens, specifically that most of the recipes for password generation on SO silently fall back to a deterministic PRNG if SystemRandom is unavailable, and the rest happily start with random.random. Not only are people apparently doing a wrong thing here, they are eagerly teaching others to do the same. (There's also the possibility that the bad guys are seeding SO with backdoors in this way, I guess.)
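To make the anti-pattern concrete (the SO recipes themselves aren't quoted here, so this is a hypothetical reconstruction of the pattern described, not any particular answer):

```python
import random
import string

ALPHABET = string.ascii_letters + string.digits

def password_fragile(length=12):
    """The anti-pattern: silently fall back to the deterministic
    Mersenne Twister if the OS entropy source is unavailable."""
    try:
        rng = random.SystemRandom()
    except NotImplementedError:
        rng = random  # module-level MT -- predictable, and nobody notices
    return "".join(rng.choice(ALPHABET) for _ in range(length))

def password_sound(length=12):
    """The fix: let the absence of an OS entropy source be a hard,
    visible error instead of a silent downgrade."""
    rng = random.SystemRandom()  # raises NotImplementedError if no entropy
    return "".join(rng.choice(ALPHABET) for _ in range(length))
```

On any platform with an entropy source both functions "Just Work", which is exactly why the silent fallback spreads unnoticed.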
> as does the assumption that (almost) every naive use of randomness is somehow a security vulnerability.
This is a strawman. None of the advocates of this change makes that assumption. The advocates proceed from the (basically unimpeachable) assumptions that (1) the attacker only has to win once, and (2) they are out there knocking on a lot of doors. Then the questionable assumption is that (3) the attackers are knocking on *this* door.

RC4 was at one time one of the best crypto algorithms available, but it also induced the WEP fiasco, and a scramble for a new standard. The question is whether we wait for a "Python security fiasco" to do something about this situation.

Waiting *is* an option; the arguments that RNGs won't be a "Python security fiasco" before Python 4 is released are very plausible[1], and the overhead of a compatibility break is not negligible (though Paul Moore himself admits it's probably not huge, either). But the risk of a security fiasco (probably in a scenario not mentioned in this thread) is real. The arguments of the opponents of the change amount to "I have confirmed that the probability it will happen to me is very small, therefore the probability it will happen to anyone is small", which is, of course, a fallacy.
> The concept of secure vs. insecure sources of randomness isn't *that* hard to grasp.
Once one *tries*. Read some of Paul Moore's posts, and you will discover that the very mention of some practice "improving security" immediately induces a non-trivial subset of his colleagues to start thinking about how to avoid doing it. I am almost not kidding; according to his descriptions, the situation in the trenches is very nearly that bad. Security is evidently hated almost as much as spam.

If random.random were to default to an unseedable nondeterministic RNG, the scientific users would very quickly discover that (if not on their own, then when their papers get rejected). Inappropriate uses, on the other hand, are nowhere near so lucky. In the current situation, the programs Just Work Fine (they produce passwords that no human would choose for themselves, for example), and no one is the wiser unless they deliberately seek the information.

It seems to me that, given the "in your face" level of discoverability that removing the state-access methods would provide, backward compatibility with existing programs is the only real reason not to move to "secure" randomness by default. In fact "secure" randomness is *higher*-quality for any purpose, including science.

Footnotes:
[1] Cf. Tim Peters' posts especially; they're few, and where the information content is low, the humor content is high. ;-)
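For what it's worth, CPython's existing random.SystemRandom already sketches what that discoverability looks like: seed() is accepted but is a documented no-op stub, while the state-access methods fail loudly. A minimal demonstration:

```python
import random

rng = random.SystemRandom()

# seed() is silently ignored; reseeding does not reproduce the stream.
rng.seed(42)
a = rng.random()
rng.seed(42)
b = rng.random()
# a == b only by astronomically unlikely coincidence
print(a != b)

# The state-access methods, by contrast, raise immediately:
try:
    rng.getstate()
except NotImplementedError:
    print("SystemRandom has no accessible state")
```

The loud failure of getstate() is the "in your face" behavior; the silent seed() stub is arguably the opposite, which is part of the argument for removing these methods outright rather than stubbing them.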