n-dimensional Sobel filter
Hi everyone,
To my horror as Guardian of the Dimensions, I discovered yesterday that we have *no* n-dimensional edge filters in scikit-image! I was preparing to give a workshop and update the 3D image processing example when I found this.
I’ll submit a PR soon, but the outcome was clean enough that I wanted to share it with you ahead of time; my n-dimensional Sobel filter is here:
https://nbviewer.jupyter.org/github/jni/skimagetutorials/blob/monashdf2completed/lectures/three_dimensional_image_processing.ipynb
(search for "def sobel").
I want to allow an axis= kwarg and then replace our 2D implementations with calls to this one. =D Many of our other filters are similarly generalisable.
Juan.
Hi Juan,
On Wed, 04 Dec 2019 20:39:06 +1100, Juan Nunez-Iglesias wrote:
I’ll submit a PR soon, but the outcome was clean enough that I wanted to share it with you ahead of time; my n-dimensional Sobel filter is here:
https://nbviewer.jupyter.org/github/jni/skimagetutorials/blob/monashdf2completed/lectures/three_dimensional_image_processing.ipynb
(search for "def sobel").
That's too smart for me; could you explain the construction of the kernel?
Stéfan
On 6 Dec 2019, at 5:20 am, Stefan van der Walt <stefanv@berkeley.edu> wrote:
That's too smart for me; could you explain the construction of the kernel?
Yep: sobel 2D is separable into two convolutions, performed along orthogonal axes:
1. A smoothing convolution, [1, 2, 1], and
2. A difference convolution, [1, 0, -1], along the orthogonal axis.
This gives the filter:
[ 1,  2,  1]
[ 0,  0,  0]
[-1, -2, -1]
or its transpose.
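[A quick numerical check of this separability, an editor's sketch using scipy.ndimage rather than anything from the original messages: convolving with the full 3x3 kernel gives the same result as the two 1D passes.]

```python
import numpy as np
import scipy.ndimage as ndi

rng = np.random.default_rng(42)
img = rng.random((32, 32))

# Full 2D Sobel kernel: difference along axis 0, smoothing along axis 1.
kernel = np.array([[ 1.,  2.,  1.],
                   [ 0.,  0.,  0.],
                   [-1., -2., -1.]])
full = ndi.convolve(img, kernel)

# Same result from two separate 1D convolutions:
step = ndi.convolve1d(img, [1.0, 0.0, -1.0], axis=0)
sep = ndi.convolve1d(step, [1.0, 2.0, 1.0], axis=1)

print(np.allclose(full, sep))  # → True
```

Both calls use scipy.ndimage's default mode='reflect', so the results agree at the boundaries as well as in the interior.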
The generalisation to higher dimensions is to have an edge filter along one axis and the smoothing filter along all the remaining axes. This can be done by reshaping each kernel to have extent 3 along the correct dims, then multiplying — numpy broadcasting takes care of the rest. e.g. a 3D Sobel with the difference along the 0th axis is:
[[[1]], [[0]], [[-1]]] * [[[1], [2], [1]]] * [[[1, 2, 1]]]
(where the above shapes are (3, 1, 1), (1, 3, 1), and (1, 1, 3)).
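[The reshape-and-broadcast construction above can be sketched as follows. This is an editor's illustration; the function name ndsobel_kernel is invented for the example and is not part of scikit-image.]

```python
import numpy as np

def ndsobel_kernel(ndim, axis):
    """Build the n-dimensional Sobel kernel with the difference along `axis`.

    Each 1D factor is reshaped to have extent 3 along its own axis and
    extent 1 along all others, so NumPy broadcasting multiplies them
    into the full 3**ndim kernel.
    """
    diff = np.array([1, 0, -1])    # difference along `axis`
    smooth = np.array([1, 2, 1])   # smoothing along every other axis
    kernel = np.ones((1,) * ndim, dtype=int)
    for ax in range(ndim):
        factor = diff if ax == axis else smooth
        shape = [1] * ndim
        shape[ax] = 3
        kernel = kernel * factor.reshape(shape)
    return kernel

# In 2D with the difference along axis 0, this recovers the classic kernel:
print(ndsobel_kernel(2, 0))
# [[ 1  2  1]
#  [ 0  0  0]
#  [-1 -2 -1]]
```

In 3D, `ndsobel_kernel(3, 0)` is exactly the (3, 1, 1) * (1, 3, 1) * (1, 1, 3) product described above.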
On Fri, Dec 6, 2019 at 9:04 AM Juan Nunez-Iglesias via scikit-image <scikit-image@python.org> wrote:
[...]
Hi Juan,
In your implementation you are forming the full 3x3x3 kernel (in 3D), which should be more computationally expensive than performing three separate convolutions with 3x1x1, 1x3x1, and 1x1x3 kernels. Indeed, scipy.ndimage has its own (separable) nD Sobel implementation that does exactly that. You can build an equivalent of your sobel implementation on top of it as follows:
import numpy as np
import scipy.ndimage as ndi

def sobel(img):
    edge_img = np.zeros(img.shape, dtype=float)
    for axis in range(img.ndim):
        # output=float avoids overflow when squaring integer inputs
        e1 = ndi.sobel(img, axis, output=float)
        e1 *= e1
        edge_img += e1
    np.sqrt(edge_img, out=edge_img)
    return edge_img / np.sqrt(img.ndim)
However, surprisingly, the function above was ~30% slower for an img of shape 256x256x256 when I tried it. I would guess your implementation has a more favorable cache/memory access pattern, which improves performance despite the theoretically higher number of FLOPs required.
 Greg
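[For anyone wanting to reproduce the comparison, a rough timing sketch follows. This is an editor's addition, not code from the thread; absolute numbers will vary with machine and SciPy version, and a smaller volume is used so the check runs quickly.]

```python
import timeit

import numpy as np
import scipy.ndimage as ndi

rng = np.random.default_rng(0)
img = rng.random((64, 64, 64))  # smaller than 256**3 so the timing stays quick

# Full 3x3x3 Sobel kernel (difference along axis 0), built by broadcasting:
kernel = (np.array([1.0, 0.0, -1.0]).reshape(3, 1, 1)
          * np.array([1.0, 2.0, 1.0]).reshape(1, 3, 1)
          * np.array([1.0, 2.0, 1.0]).reshape(1, 1, 3))

# The two approaches compute the same thing (both default to mode='reflect'):
assert np.allclose(ndi.convolve(img, kernel), ndi.sobel(img, axis=0))

t_full = timeit.timeit(lambda: ndi.convolve(img, kernel), number=5)
t_sep = timeit.timeit(lambda: ndi.sobel(img, axis=0), number=5)
print(f"full kernel: {t_full:.3f} s   separable: {t_sep:.3f} s")
```

The assertion confirms the full-kernel and separable paths agree before comparing their speeds; only the relative timing is meaningful.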
However, surprisingly, the function above was ~30% slower for an img of shape 256x256x256 when I tried it. I would guess your implementation has a more favorable cache/memory access pattern, which improves performance despite the theoretically higher number of FLOPs required.
This is quite a common problem: when working with NumPy arrays, moving data is often more expensive than the calculations themselves, because the arrays no longer fit into the cache.
participants (5)
- Gregory Lee
- Juan Nunez-Iglesias
- Juan Nunez-Iglesias
- Jérôme Kieffer
- Stefan van der Walt