
On Fri, Dec 6, 2019 at 9:04 AM Juan Nunez-Iglesias via scikit-image < scikit-image@python.org> wrote:
On 6 Dec 2019, at 5:20 am, Stefan van der Walt <stefanv@berkeley.edu> wrote:
I’ll submit a PR soon, but the result was clean enough that I wanted to share my n-dimensional Sobel filter with you ahead of time:
https://nbviewer.jupyter.org/github/jni/skimage-tutorials/blob/monash-df2-co...
(search for "def sobel").
That's too smart for me; could you explain the construction of the kernel?
Yep: the 2D Sobel is separable into two convolutions performed along orthogonal axes:

1. a smoothing convolution, [1, 2, 1], and
2. a difference convolution, [1, 0, -1], in the orthogonal direction.
This gives the filter:
    [ 1,  2,  1]
    [ 0,  0,  0]
    [-1, -2, -1]
or its transpose.
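This separability is easy to verify numerically; here is a small sketch using scipy.ndimage (the array names are just illustrative):

```python
import numpy as np
from scipy import ndimage as ndi

smooth = np.array([1, 2, 1])
diff = np.array([1, 0, -1])

# The full 2D Sobel kernel is the outer product of the two 1D kernels.
kernel = np.outer(diff, smooth)  # [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]

rng = np.random.default_rng(0)
img = rng.random((32, 32))

# Smoothing along axis 1, then differencing along axis 0...
sep = ndi.convolve1d(ndi.convolve1d(img, smooth, axis=1), diff, axis=0)
# ...matches a single 2D convolution with the full kernel
# (both use the default 'reflect' boundary mode).
full = ndi.convolve(img, kernel)

print(np.allclose(sep, full))  # True
```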
The generalisation to higher dimensions is to have an edge filter along one axis and the smoothing filter along all the remaining axes. This can be done by reshaping each kernel to have extent 3 along the correct dims, then multiplying — numpy broadcasting takes care of the rest. e.g. a 3D Sobel with the difference along the 0th axis is:
    [[[-1]], [[0]], [[1]]] * [[[1], [2], [1]]] * [[[1, 2, 1]]]
(where the above shapes are (3, 1, 1), (1, 3, 1), and (1, 1, 3)).

_______________________________________________
scikit-image mailing list -- scikit-image@python.org
To unsubscribe send an email to scikit-image-leave@python.org
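The broadcast construction above can be written out concretely (a sketch; only NumPy is needed):

```python
import numpy as np

smooth = np.array([1, 2, 1])
diff = np.array([-1, 0, 1])

# Reshape each 1D kernel to have extent 3 along its own axis; NumPy
# broadcasting multiplies them out into the full (3, 3, 3) kernel.
kernel = (diff.reshape(3, 1, 1)
          * smooth.reshape(1, 3, 1)
          * smooth.reshape(1, 1, 3))

print(kernel.shape)     # (3, 3, 3)
print(kernel[1].max())  # 0 -- the central slice is all zeros
print(kernel[2])        # 2D smoothing kernel [[1, 2, 1], [2, 4, 2], [1, 2, 1]]
```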
Hi Juan,

In your implementation you are forming the full 3x3x3 kernel (in 3D), which should be more computationally expensive than performing three separate convolutions with a 3 x 1 x 1, a 1 x 3 x 1, and a 1 x 1 x 3 kernel. Indeed, scipy.ndimage has its own (separable) n-d Sobel implementation that does exactly that. You can build an equivalent to your sobel implementation using it as follows:

    import numpy as np
    from scipy import ndimage as ndi

    def sobel(img):
        # Accumulate the squared Sobel derivative along each axis...
        edge_img = np.zeros(img.shape, dtype=float)
        for axis in range(img.ndim):
            e1 = ndi.sobel(img, axis)
            e1 *= e1
            edge_img += e1
        # ...then take the square root to get the gradient magnitude.
        np.sqrt(edge_img, out=edge_img)
        return edge_img / np.sqrt(img.ndim)

However, surprisingly, the function above was ~30% slower for an img of shape 256x256x256 when I tried it. I would guess your implementation has a more favorable cache/memory access pattern that improves performance despite the theoretically higher number of FLOPs required.

- Greg
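As a sanity check on the equivalence (not a benchmark), a single dense convolution with the full broadcast kernel should match scipy's separable ndi.sobel exactly. One caveat: scipy applies its derivative kernel by correlation, so the convolution kernel below uses [1, 0, -1] along the derivative axis; the sign convention is an assumption worth verifying:

```python
import numpy as np
from scipy import ndimage as ndi

smooth = np.array([1, 2, 1])
diff = np.array([1, 0, -1])  # flipped sign to match scipy's correlation convention

# Full 3x3x3 kernel with the derivative along axis 0.
kernel = (diff.reshape(3, 1, 1)
          * smooth.reshape(1, 3, 1)
          * smooth.reshape(1, 1, 3))

rng = np.random.default_rng(0)
img = rng.random((16, 16, 16))

full = ndi.convolve(img, kernel)  # one dense 3D convolution
sep = ndi.sobel(img, axis=0)      # scipy's separable implementation

print(np.allclose(full, sep))
```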