Hi,

I am new to Python and image processing, which may be my problem, but I don't understand how to interpret/use the integer mask of segment labels output by the SLIC and Quickshift algorithms.

I have an RGB-D image. Using only the RGB channels, I segment the image into superpixels with the SLIC and Quickshift algorithms provided in scikit-image. I am trying to visit each superpixel and calculate some depth features for it. Specifically, I want to calculate the surface normal of each superpixel and the average angular difference with the neighbouring superpixels. Eventually I plan to combine superpixels based on these depth features.

Could someone explain the segment_mask format/structure and how I should use the mask?

Thanks in advance.

Brickle.
Hi Brickle,

Cool problem. =) IIRC the return type of these algorithms is an M x N integer NumPy array (where the input image is an M x N x 3 NumPy array). Every pixel with the same value belongs to the same superpixel: all pixels with value 1 make up the 1st superpixel, all pixels with value 2 make up the 2nd, and so on up to the nth superpixel.

Does that answer your question?

Juan.
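For concreteness, here is a minimal sketch of the label array described above. The test image and keyword arguments are only examples (the exact slic() signature has changed across scikit-image versions), and the label values may start at 0 or 1 depending on the version:

    import numpy as np
    from skimage import data, segmentation

    # Any M x N x 3 RGB image will do; astronaut() is just a bundled example.
    rgb = data.astronaut()

    # SLIC returns an M x N integer array of superpixel labels.
    labels = segmentation.slic(rgb, n_segments=200)

    print(labels.shape)       # (M, N) -- same height/width as the input image
    print(labels.dtype)       # an integer dtype
    print(np.unique(labels))  # one integer per superpixel

    # All pixels sharing a label value belong to the same superpixel, so a
    # boolean mask for a single superpixel is just an equality test:
    mask = (labels == labels.min())
    print(mask.sum(), "pixels in the first superpixel")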
On Mon, Apr 22, 2013 at 10:02 AM, Brickle Macho <bricklemacho@gmail.com> wrote:
I have an RGB-D image. Using only the RGB channels, I segment the image into superpixels with the SLIC and Quickshift algorithms provided in scikit-image. I am trying to visit each superpixel and calculate some depth features for it. Specifically, I want to calculate the surface normal of each superpixel and the average angular difference with the neighbouring superpixels. Eventually I plan to combine superpixels based on these depth features.
If you had a mask for an individual superpixel, plus coordinate arrays x, y, z for your image, you could find the coordinates of all pixels under that mask with x[mask], y[mask], z[mask]. The mask is typically recovered from the label image, e.g. mask = (labels == 3).

The trickier problem is figuring out where this superpixel is located relative to the other superpixels. For that, it may be better to represent the image as a graph, where each node represents a superpixel and edges represent links to adjacent superpixels (in fact, this is something we should implement in scikit-image to make handling labels easier). Would you be interested in collaborating on such a feature?

Stéfan
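A rough sketch of the per-superpixel workflow described above, assuming labels is the integer label image and depth is an M x N depth map aligned with the RGB image. The helper names superpixel_normal and adjacent_labels are made up for illustration, and a least-squares plane fit is only one possible way to estimate a normal:

    import numpy as np

    def superpixel_normal(labels, depth, label):
        # Estimate a unit surface normal for one superpixel by fitting a plane
        # z = a*x + b*y + c to its (x, y, depth) points with least squares.
        mask = (labels == label)            # boolean mask for this superpixel
        ys, xs = np.nonzero(mask)           # row/column coordinates under the mask
        zs = depth[mask].astype(float)      # depth values under the mask

        # Solve [x y 1] @ [a, b, c] ~= z in the least-squares sense.
        A = np.column_stack([xs, ys, np.ones_like(xs)])
        (a, b, c), *_ = np.linalg.lstsq(A, zs, rcond=None)

        normal = np.array([-a, -b, 1.0])    # normal of the plane a*x + b*y - z + c = 0
        return normal / np.linalg.norm(normal)

    def adjacent_labels(labels, label):
        # Labels of superpixels touching the given one, found by shifting its
        # mask one pixel in each direction and reading the labels underneath.
        mask = (labels == label)
        ring = np.zeros_like(mask)
        ring[1:, :] |= mask[:-1, :]
        ring[:-1, :] |= mask[1:, :]
        ring[:, 1:] |= mask[:, :-1]
        ring[:, :-1] |= mask[:, 1:]
        return np.unique(labels[ring & ~mask])

    # Example: angular difference between superpixel 3 and each neighbour
    # (label 3 is arbitrary; labels and depth are assumed to exist already).
    # n0 = superpixel_normal(labels, depth, 3)
    # for nb in adjacent_labels(labels, 3):
    #     n1 = superpixel_normal(labels, depth, nb)
    #     angle = np.arccos(np.clip(np.dot(n0, n1), -1.0, 1.0))

adjacent_labels is only a crude stand-in for the superpixel graph mentioned above; a proper region-adjacency graph would compute all neighbour pairs in a single pass over the label image.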
participants (3)
- Brickle Macho
- Juan Nunez-Iglesias
- Stéfan van der Walt