Re: [stdlib-sig] standardizing the deprecation policy (and how noisy they are)

In a message of Mon, 09 Nov 2009 07:39:30 +0100, Antoine Pitrou writes:
>> Experience has shown that when people get used to seeing 'a bunch of warnings that don't really matter' they either a) turn them off or b) ignore them, even when they are telling them valuable things that they should be paying attention to. So constantly spitting out DeprecationWarnings as soon as something becomes deprecated is a most excellent way to train people to ignore DeprecationWarnings.
> Well at least people get a chance to see them. If some people think the warnings are useless (even though the messages warn about removal of a construct), they won't run a code checker either.
This is not true. Once people get into the habit of 'not seeing' things, they don't see them even when they are important. This is not only true of program warnings -- there have been studies of how cancerous tumours are overlooked. In one particularly frightening study, doctors at the Mayo Clinic went back and checked the previously 'normal' chest X-rays of patients who subsequently developed cancer. What they found was horrifying: 90% of the tumours were visible on the earlier X-rays. (See Lorenz, G.B.A. et al., 1999, "Miss Rate of Lung Cancer on the Chest Radiograph in Clinical Practice", in _Chest_; also Berlin, Leonard, 2000, "Hindsight Bias", in _American Journal of Roentgenology_.)

A test in 2002 indicated that the TSA missed 1 in every 4 guns that testers attempted to smuggle through. In 2004 a test at Newark airport indicated the same thing. In a 2005 test at O'Hare airport, screeners missed 60% of bombs and explosive materials, while security officials in L.A. missed 75%. (I don't have the papers for this; I read it as a clipping from the Wall Street Journal, which didn't list its sources.)

So this 'it's there, but we didn't see it' is a serious problem with wide-ranging implications outside of generating computer warnings. But we know some things about this. The reason we are lousy at detecting such things is in large part because they are rare. Most chest X-rays show no cancer. Most bags do not contain weapons. We're only good at detecting things when a) we are looking for them and b) they actually occur relatively commonly where we are looking. As what we are looking for becomes rarer, we are more and more likely to overlook it when it does occur.
There are lots of cognitive psychology experiments that test this, for instance: Levin and Simons, 1997, "Failure to Detect Changes to Attended Objects in Motion Pictures", _Psychonomic Bulletin and Review_. The conclusion is that 'surprising' people with unexpected warnings is less useful than one would think -- people tend to overlook them, and thus not be surprised. It's better when people get warnings which they have asked to see. Then they tend to notice them. But poorest of all is when people have trained themselves not to see warnings at all.
> If Mercurial users and developers hadn't seen those warnings at all, perhaps Mercurial would have continued using deprecated constructs, and ended up broken when the N+1 Python version was released. If even an established FLOSS project such as Mercurial is vulnerable to this kind of risk, then any in-house or one-man project will be even more vulnerable.
I agree, but I think that the Mercurial developers ought to include a set of people who are interested in looking at warnings precisely to prevent such things. Thus their use case is different from that of the casual programmer, or the user of Mercurial who has no intention of doing anything with any warnings that ever show up. The first group will notice warnings most readily when they get them because they explicitly asked for them. If the warnings show up unasked for, they too can be expected to get into the habit of ignoring them ('oh, they are just warnings'). But I don't know the precise point where producing such warnings becomes more harmful than helpful. As far as I know, nobody has run any experiments to determine this. But I would suspect that blasting out all the DeprecationWarnings for 3 whole releases before something goes away would err on the 'so frequent that it is ignored' side.
> Besides, do we have such a code checker that is able to find deprecated constructs (not talking about 2to3 here)?
I was thinking of something more primitive, such as running your code with all warnings on from time to time.

Laura
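[That "run your code with all warnings on from time to time" check can be sketched as follows. This is a minimal illustration, not code from the thread: `old_api` and `new_api` are hypothetical names standing in for a deprecated stdlib construct, and the `catch_warnings` context is only there so the effect is visible even where the interpreter's default filters would silence DeprecationWarning.]

```python
import warnings

def old_api():
    """A hypothetical deprecated function standing in for a stdlib construct."""
    warnings.warn("old_api() is deprecated and will be removed; use new_api()",
                  DeprecationWarning, stacklevel=2)
    return 42

# Record warnings so we can inspect them; inside the context,
# "always" is roughly equivalent to running: python -W always script.py
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = old_api()

print(result)                       # 42
print(caught[0].category.__name__)  # DeprecationWarning
```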

On Monday, 09 November 2009 at 09:17 +0100, Laura Creighton wrote:
> The conclusion is that 'surprising' people with unexpected warnings is less useful than one would think -- people tend to overlook them, and thus not be surprised.
Even if it's "less useful than one would think", that doesn't make it useless. There are many things which fall under the former predicate and not under the latter: for example, documentation (many people don't read it), or unit tests (they can give a false sense of security)... do you suggest we drop them too?
> It's better when people get warnings which they have asked to see.
Well, I don't see a point of contention here: they already can. If they want to get warnings, they just have to run the program. It's not as though Python writes out lots of warnings, which is why you normally don't /need/ to choose a specific kind of warning to display. (-3 is an exception, because it /can/ output many more warnings than is reasonable, but that's part of why it is opt-in rather than opt-out.)
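[Choosing a specific kind of warning to display can be done with the `warnings` module's filters. A minimal sketch, with a made-up warning message: it promotes DeprecationWarning to an error, the programmatic spelling of running `python -W error::DeprecationWarning`, while leaving every other warning category with its default handling.]

```python
import warnings

# Opt in to exactly the warnings you have asked to see: DeprecationWarning
# becomes a raised exception; other categories keep their default behaviour.
with warnings.catch_warnings():
    warnings.filterwarnings("error", category=DeprecationWarning)
    try:
        warnings.warn("this construct is deprecated", DeprecationWarning)
        caught_message = None
    except DeprecationWarning as exc:
        caught_message = str(exc)

print("caught:", caught_message)  # caught: this construct is deprecated
```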
> But I would suspect that blasting out all the DeprecationWarnings for 3 whole releases before something goes away would err on the 'so frequent that it is ignored' side.
I don't think anybody suggested that.
> I was thinking of something more primitive, such as running your code with all warnings on from time to time.
Which gives far less code coverage than enabling them by default.
participants (2):
- Antoine Pitrou
- Laura Creighton