Hello

Only if in_place = True. If not, the new node could be something entirely different.

The new_id is the safer bet.
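
For example, a minimal sketch of what that injected logic could look like right after the merge in graph_merge.py (assuming labels, rag and seg_list are in scope at that point, numpy is imported as np, and new_id is the value returned by rag.merge_nodes):

label_map = np.arange(labels.max() + 1)
label_map[:] = 2                        # everything else is region 2

for l in rag.node[new_id]['labels']:    # the freshly merged node
    label_map[l] = 1                    # merged region is region 1

seg_list.append(label_map[labels])      # per-pixel snapshot of this step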


Thanks
Vighnesh
On Sunday, September 20, 2015 at 11:26:12 PM UTC-4, bricklemacho wrote:
Hi Vighnesh,

That's basically what I am after.  If I inject the code at line 130, after the merge, then I only need to use [dst] to create the label map, correct?

I just submitted a [WIP] pull request, but will update the code to reflect these changes.

Thanks for your help.

Michael.
--




On 21/09/2015 11:13 am, Vighnesh Birodkar wrote:
Hello

Sorry for the repost, I accidentally submitted incomplete code in the last one. 

For the demo videos in that post I was traversing the graph at each iteration to obtain the segmentation; see line 118 of graph_merge.py

This was because I needed the entire segmentation at each step to display it. Juan, I think you mean that I traverse it only once in the code on master? For the videos I was indeed traversing it each time.

The nodes are merged here

On line 127 you can inject your logic.

So if I understand correctly, you want [src + dst] (the regions being merged at that point in time) as one region and the rest of the graph as the other?

label_map = np.arange(labels.max() + 1)
label_map[:] = 2  # label the rest of the graph as one region

# Label src as part of the merged region
for l in rag.node[src]['labels']:
    label_map[l] = 1

# Label dst as part of the merged region
for l in rag.node[dst]['labels']:
    label_map[l] = 1

seg_list.append(label_map[labels])  # map each pixel's label through the lookup table
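
As an aside, that last line relies on NumPy fancy indexing: indexing the 1-D lookup table with the 2-D label image returns an array of the same shape, with every label mapped through the table. A toy example with a made-up 2x2 label image:

import numpy as np

labels = np.array([[0, 1],
                   [2, 2]])
label_map = np.array([1, 1, 2])   # labels 0 and 1 merged into region 1
print(label_map[labels])
# [[1 1]
#  [2 2]]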

Thanks
Vighnesh


On Tuesday, September 1, 2015 at 7:24:31 PM UTC-4, bricklemacho wrote:
Hi All,

I am looking at generating some detection proposals; see Hosang, Jan, et al., "What makes for effective detection proposals?", arXiv preprint arXiv:1502.05082 (2015), http://arxiv.org/pdf/1502.05082.pdf. I am starting with the Selective Search algorithm, Section 3 of Uijlings, Jasper R. R., et al., "Selective search for object recognition", International Journal of Computer Vision 104.2 (2013): 154-171, https://staff.fnwi.uva.nl/th.gevers/pub/GeversIJCV2013.pdf

The basic idea is to perform a hierarchical merging of the image, where each new merge gets added to the list of regions suspected to contain an object; this way you can capture objects at all scales.  This reduces the search space significantly compared to, say, a sliding window.  The output is NOT an image segmentation, but rather a list of regions (bounding boxes) of potential objects (detection proposals).

I have looked in the gallery at RAG Merging, http://scikit-image.org/docs/dev/auto_examples/plot_rag_merge.html, and am fairly confident I can set up the callback methods to provide the similarity measure.   I am naively hoping that future.graph.merge_hierarchical(), even though it seems to output a segmentation (labels), can be easily adapted to the task.    What would be the best way to have future.graph.merge_hierarchical() merge regions with the "highest" similarity measure, rather than a threshold?   What would be the best way to have future.graph.merge_hierarchical() save each merged region?   I tried setting "in_place" to False, but didn't notice any difference.

Any help appreciated,

Brickle.
--