[Numpy-discussion] Testing for close to zero?

Robert Kern robert.kern at gmail.com
Tue Jan 20 01:26:22 EST 2009


On Tue, Jan 20, 2009 at 00:21, Charles R Harris
<charlesr.harris at gmail.com> wrote:
>
> On Mon, Jan 19, 2009 at 10:48 PM, Robert Kern <robert.kern at gmail.com> wrote:
>>
>> On Mon, Jan 19, 2009 at 23:36, Charles R Harris
>> <charlesr.harris at gmail.com> wrote:
>> >
>> > On Mon, Jan 19, 2009 at 9:17 PM, Robert Kern <robert.kern at gmail.com>
>> > wrote:
>> >>
>> >> On Mon, Jan 19, 2009 at 22:09, Charles R Harris
>> >> <charlesr.harris at gmail.com> wrote:
>> >> >
>> >> >
>> >> > On Mon, Jan 19, 2009 at 7:23 PM, Jonathan Taylor
>> >> > <jonathan.taylor at utoronto.ca> wrote:
>> >> >>
>> >> >> Interesting.  That makes sense and I suppose that also explains why
>> >> >> there is no function to do this sort of thing for you.
>> >> >
>> >> > A combination of relative and absolute errors is another common
>> >> > solution, i.e., test against relerr*max(abs(array_of_inputs)) + abserr.
>> >> > In cases like this, relerr is typically eps and abserr tends to be
>> >> > something like 1e-12, which keeps you from descending towards zero
>> >> > any further than you need to.
>> >>
>> >> I don't think the absolute error term is appropriate in this case. If
>> >> all of my inputs are of size 1e-12, I would expect a result of
>> >> 1e-14 to be significantly far from 0.
>> >
>> > Sure, that's why you *choose* constants appropriate to the problem.
>>
>> But that's what eps*max(abs(array_of_inputs)) is supposed to do.
>>
>> In the formulation that you are using (e.g. that of
>> assert_array_almost_equal()), the absolute error comes into play when
>> you are comparing two numbers in ignorance of the processes that
>> created them. The relative error in that formula is being adjusted by
>> the size of the two numbers (*not* the inputs to the algorithm). The
>> two numbers may be close to 0, but the relevant inputs to the
>> algorithm may be ~1, let's say. In that case, you need the absolute
>> error term to provide the scale information that is otherwise not
>> present in the comparison.
>>
>> But if you know what the inputs to the calculation were, you can
>> estimate the scale factor for the relative tolerance directly
>> (rigorously, if you've done the numerical analysis) and the absolute
>> tolerance is superfluous.
>
>
> So you do bisection on an oddball curve, and 512 iterations later you
> hit zero. Or you do numeric integration where there is lots of
> cancellation. These problems aren't new, and the mixed method for
> tolerance has been standard for many years. I don't see why you want to
> argue about it; if you don't like the combined method, set the absolute
> error to zero. Problem solved.

I think we're talking about different things. I'm talking about the
way to estimate a good value for the absolute error. My
array_of_inputs was not the values that you are comparing to zero, but
the inputs to the algorithm that created the value you are comparing
to zero.
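
A minimal sketch of what I mean, assuming the inputs to the algorithm
are at hand (the 1e3 safety factor is a stand-in for whatever bound your
numerical analysis actually gives you):

    import numpy as np

    def effectively_zero(result, algorithm_inputs, safety=1e3):
        # The scale comes from the inputs to the algorithm that
        # produced the result, not from the values being compared,
        # so no separate absolute tolerance term is needed.
        scale = np.max(np.abs(algorithm_inputs))
        return abs(result) <= safety * np.finfo(float).eps * scale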

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco


