[stdlib-sig] standardizing the deprecation policy (and how noisy they are)

Laura Creighton lac at openend.se
Mon Nov 9 13:33:35 CET 2009


In a message of Mon, 09 Nov 2009 12:37:05 +0100, Antoine Pitrou writes:
>On Monday, 09 November 2009 at 09:17 +0100, Laura Creighton wrote:
>> The conclusion is that 'surprising' people with unexpected warnings
>> is less useful than one would think -- people tend to overlook them,
>> and thus not be surprised.
>
>Whether or not it's "less useful than one would think" doesn't make it
>useless. There are many things which fall under the former predicate and
>not under the latter. For example documentation (many people don't read
>it), unit tests (they give a false sense of security)... do you suggest
>we drop them too?

OK, I did not state my position strongly enough.

Here is what I believe:

Take a sample set of 2000 programs, all of which use features
that are scheduled for removal.  Divide them into two groups
of 1000.  In group A you issue DeprecationWarnings for one whole
release before the one where this code will break, unless the
programmer explicitly turns them off.  In group B you don't issue
any warnings unless the programmer asks for them.
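
To make the two regimes concrete, here is a minimal Python sketch;
olddiv() and newdiv() are made-up names, but the warnings machinery
is the real stdlib module:

    import warnings

    def olddiv(a, b):
        # A deprecated API warns at its caller's location.
        warnings.warn("olddiv() is deprecated; use newdiv() instead",
                      DeprecationWarning, stacklevel=2)
        return a // b

    # Group A is roughly today's behaviour: the warning prints
    # unless the programmer explicitly silences it, e.g.
    #     python -W ignore::DeprecationWarning myprog.py
    # or, in code:
    #     warnings.simplefilter("ignore", DeprecationWarning)
    #
    # Group B would ship with the filter set to ignore, so the
    # warning appears only when the programmer asks for it:
    #     python -W default::DeprecationWarning myprog.py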

Come the day when /usr/bin/env python resolves to the new release,
which group do you believe will have done a better job of modernising
their programs?  My belief is that it will be group B.  I think that
the number of programmers in group A who stopped seeing the warnings
because they were always there will greatly outnumber the number in
group B who never ran their programs with warnings turned on, and
thus were completely unaware of the problem in the first place.  But
I am making some assumptions here -- one is that these programs will
be run fairly frequently.  The second is that the number of programs
for which the fix will be 'change the first line to
#!/usr/bin/env some-older-version-of-python' is small.  I don't know
whether those are decent assumptions to make.  I don't have anything
approaching real data on how Python programs are used out there in
the world.

The "does providing Deprecation Warnings when unasked for actually
help" experiement would be interesting to run.  This is the sort of
thing we need real numbers for, but I haven't seen any experiments
to test this so far.  All I have seen is indications from cognative
psychology that it is really hard for people to see things when
they are concentrating on something else.

For instance, there is the 'view the basketball video' test here:

http://viscog.beckman.illinois.edu/media/dailytelegraph.html

which is explained here:

http://www.scholarpedia.org/article/Inattentional_blindness

I think that we have two things to worry about here: inattentional
blindness (which is what the video tests) and problems caused by
insufficient attention (as opposed to inattention).  And, as the
links I posted before indicate, people really only see what they
expect, rather than what is there, and our brains are heavily
optimised for 'ignoring stuff that doesn't matter', which gets in
the way of our noticing stuff that does matter, unless we expect it.

Laura



