# Numpy outlier removal

Oscar Benjamin oscar.j.benjamin at gmail.com
Mon Jan 7 16:20:57 CET 2013

```
On 7 January 2013 05:11, Steven D'Aprano
<steve+comp.lang.python at pearwood.info> wrote:
> On Mon, 07 Jan 2013 02:29:27 +0000, Oscar Benjamin wrote:
>
>> On 7 January 2013 01:46, Steven D'Aprano
>> <steve+comp.lang.python at pearwood.info> wrote:
>>> On Sun, 06 Jan 2013 19:44:08 +0000, Joseph L. Casale wrote:
>>>
>>> I'm not sure that this approach is statistically robust. No, let me be
>>> even more assertive: I'm sure that this approach is NOT statistically
>>> robust, and may be scientifically dubious.
>>
>> Whether or not this is "statistically robust" requires more explanation
>
> Not really. Statistical robustness is objectively defined, and the user's
> intention doesn't come into it. The mean is not a robust measure of
> central tendency, the median is, regardless of why you pick one or the
> other.

Okay, I see what you mean. I wasn't thinking of robustness as a
technical term, but now I see that you are correct.

Perhaps what I should have said is that whether or not this matters
depends on the problem at hand (hopefully this isn't an important
medical trial) and the particular type of data that you have; assuming
normality is fine in many cases even if the data is not "really"
normal.
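
The mean/median distinction bears directly on the outlier-removal
question that started this thread, and is easy to demonstrate with
NumPy. The numbers below are invented for illustration, and the 5*MAD
cut-off is just a hypothetical choice: two gross outliers inflate both
the mean and the standard deviation, so a naive mean +/- 2*std clip
keeps them, while a median/MAD clip (one common robust alternative)
rejects them.

import numpy as np

# Invented sample: five well-behaved readings plus two gross outliers.
data = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 45.0, 45.0])

# The outliers drag the mean far from the bulk of the data;
# the median barely moves.
print(data.mean())      # 20.0
print(np.median(data))  # 10.1

# Naive clip: the outliers also inflate data.std() (~15.8), so they
# sit within 2 standard deviations of the mean and survive the cut.
naive = data[np.abs(data - data.mean()) < 2 * data.std()]
print(len(naive))       # 7 -- nothing is removed

# Robust clip: centre and spread come from the median and the median
# absolute deviation (MAD), both of which ignore the outliers.
med = np.median(data)
mad = np.median(np.abs(data - med))
robust = data[np.abs(data - med) < 5 * mad]
print(len(robust))      # 5 -- both 45.0 readings are gone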

>
> There are sometimes good reasons for choosing non-robust statistics or
> techniques over robust ones, but some techniques are so dodgy that there
> is *never* a good reason for doing so. E.g. finding the line of best fit
> by eye, or taking more and more samples until you get a statistically
> significant result. Such techniques are not just non-robust in the
> statistical sense, but non-robust in the general sense, if not outright
> deceitful.

There are sometimes good reasons to get a line of best fit by eye. In
particular, if your data contains clusters that are hard to separate,
it is sometimes useful just to pick out roughly where you think a line
through a subset of the data lies.

Oscar

```