[Numpy-discussion] np.gradient

Nathaniel Smith njs at pobox.com
Sat Oct 18 21:35:55 EDT 2014


On Sun, Oct 19, 2014 at 2:23 AM, Matthew Brett <matthew.brett at gmail.com> wrote:
> Hi,
>
> On Sat, Oct 18, 2014 at 6:17 PM, Nathaniel Smith <njs at pobox.com> wrote:
>> Okay! I think I now actually understand what was going on with
>> np.gradient! The discussion has been pretty confused, and I'm worried
>> that what's in master right now is not the right solution, which is a
>> problem because we're supposed to cut 1.9.1 tomorrow.
>>
>> Background:
>>
>> np.gradient computes gradients using a finite difference method.
>> Wikipedia has a good explanation of how this works:
>> https://en.wikipedia.org/wiki/Numerical_differentiation
>> The key point is that there are multiple formulas one can use that all
>> converge to the derivative, e.g.
>>
>>    (f(x + h) - f(x)) / h
>>
>> or
>>
>>    (f(x + h) - f(x - h)) / 2h
>>
>> The first is the textbook definition of the derivative. As h -> 0, the
>> error in that approximation shrinks like O(h). The second formula also
>> converges to the derivative as h -> 0, but it converges like O(h^2),
>> i.e., much faster. And there are many, many formulas like this with
>> different trade-offs:
>>    https://en.wikipedia.org/wiki/Finite_difference_coefficient
>> In practice, given a grid of values of f(x) and a grid stepsize of h,
>> all of these formulas come down to doing a simple convolution of the
>> data with certain coefficients (e.g. [-1/h, 1/h] or [-1/2h, 0, 1/2h]
>> for the two formulas above).
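
[The difference in convergence rates is easy to check numerically. This
is just an illustrative sketch, not np.gradient's actual code; the helper
names `forward_diff` and `central_diff` are made up for the example:]

```python
import numpy as np

def forward_diff(f, x, h):
    # (f(x + h) - f(x)) / h -- the textbook definition; error is O(h)
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    # (f(x + h) - f(x - h)) / 2h -- the centered formula; error is O(h**2)
    return (f(x + h) - f(x - h)) / (2 * h)

x, h = 1.0, 1e-3
exact = np.cos(x)  # derivative of sin at x
err_fwd = abs(forward_diff(np.sin, x, h) - exact)
err_cen = abs(central_diff(np.sin, x, h) - exact)
# At the same step size, the centered formula's error is orders of
# magnitude smaller; halving h roughly halves err_fwd but quarters err_cen.
```

[The coefficient sets [-1/h, 1/h] and [-1/2h, 0, 1/2h] mentioned above
are exactly what these two functions convolve the data with.]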
>>
>> Traditionally np.gradient has used the second formula above, with its
>> quadratically diminishing error term, for interior points in the grid.
>> For points on the boundary of the grid, though, this formula has a
>> problem, because it requires looking at points to both the left and
>> the right of the current point -- but for boundary points one of
>> these doesn't exist. In such situations np.gradient has traditionally used
>> the first formula instead (the "forward (or backward) finite
>> difference approximation with linear error", where
>> "forward"/"backward" means that it works on the boundary). As the web
>> page linked above shows, though, there's an easy alternative formula
>> that works on
>
> Did you lose some text here?

"There's an easy alternative formula that works on edge points and
provides quadratic accuracy."

Not too critical, you probably figured out the gist of it :-)
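
[For the record, a one-sided formula of that kind is the second-order
forward difference from the finite-difference-coefficient page linked
above. A sketch -- again not np.gradient's literal implementation, and
`one_sided_diff` is a made-up name:]

```python
import numpy as np

def one_sided_diff(f, x, h):
    # Second-order *forward* (one-sided) difference: only looks to the
    # right of x, so it works at the left boundary of a grid, yet its
    # error still shrinks like O(h**2). Coefficients: [-3/2h, 2/h, -1/2h].
    return (-3 * f(x) + 4 * f(x + h) - f(x + 2 * h)) / (2 * h)

x, h = 0.0, 1e-3
err = abs(one_sided_diff(np.sin, x, h) - np.cos(x))
# err is comparable to the centered formula's, not the first-order one's
```

[Mirroring the coefficients gives the backward version for the right
boundary.]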

-n

-- 
Nathaniel J. Smith
Postdoctoral researcher - Informatics - University of Edinburgh
http://vorpus.org
