Re: Livewire segmentation in scikit-image

Hi Pavlo, I did manage to get your example code working, though I agree with Stéfan that the Matplotlib GUI doesn't really enable great interactivity. I also tried loading a different image (a frontal view X-ray of the hips), but since it was really big (4kx4k) your cost calculation became pretty slow, so there's room for improvement there ;-) Also, the choice of filter and threshold didn't work out at all for my image, so that should probably be more configurable. I was rather surprised how well it worked, given that it only calculates a minimal cost path. I reckon that once the edges aren't as strong as they are in this image, it won't perform as well. Then fixing such segmentations suddenly becomes a lot more painful in my experience, because the algorithm won't help you anymore and you don't want to have to draw the entire path by hand. Perhaps Emmanuelle or Juan has any suggestions on how we can make the calculations more robust? <https://lh3.googleusercontent.com/-L-83NNetFSI/VtWqy-C7Y4I/AAAAAAAFr8E/seCgu...> Ivo

Hello Ivo,

Thanks for taking the time to evaluate my code!

1. Matplotlib GUI. What interaction possibilities are missing, in your opinion? Click and drag are already available; what else do you think would be helpful for a good illustration of the algorithm?

2. Cost computation. That's a good point. I am not sure whether there is a way to compute the shortest path efficiently and exactly on a large image matrix. A good heuristic that limits the search to a local neighbourhood might help here. The bottleneck seems to stem from the dictionary representation of the graph; the conversion takes extremely long for large images.

3. The gradient image. The notebook thresholds a gradient image in order to ignore minor intensity differences along the boundary. The threshold was chosen as a heuristic, and both it and the Gaussian smoothing are optional steps inside the algorithm. You can check the API here: https://github.com/pdyban/livewire/blob/master/livewire/livewiresegmentation...

As a next step - until Juan or Emmanuelle reply - I will integrate the shortest path computation from skimage's graph module.

Thanks,
Pavlo

On Tuesday, 1 March 2016 16:12:27 UTC+1, Ivo Flipse wrote:
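Pavlo's plan above - replacing the dictionary-based graph with the shortest-path search from skimage's graph module - could look roughly like this. This is a sketch, not the package's actual code; the seed points and the gradient-based cost are illustrative choices:

```python
# Sketch: minimal-cost path on a cost image via skimage.graph, which runs
# an efficient compiled search instead of a Python-dict graph.
import numpy as np
from skimage import data, filters
from skimage.graph import route_through_array

image = data.coins().astype(float)

# Build a cost image: strong edges -> low cost, so paths follow boundaries.
gradient = filters.sobel(image)
cost = 1.0 / (gradient + 1e-3)  # small epsilon avoids division by zero

start, end = (50, 60), (120, 200)  # arbitrary (row, col) seed points
path, total_cost = route_through_array(
    cost, start, end, fully_connected=True, geometric=True)
path = np.asarray(path)  # (n_points, 2) row/col coordinates along the path
```

The returned coordinates run from `start` to `end`, so a GUI only has to draw them; recomputing on every mouse move then amounts to calling `route_through_array` with a new end point.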

Hi Pavlo,

Using just the Matplotlib GUI, there's not much you can do, since it's rather limited. But I reckon Stéfan's problems arose from the fact that updates to the window are really slow, so you don't see what the algorithm is doing while you're clicking around. Or at least it didn't for me.

You could make the logic you execute inside the mouse events available as functions, so people can just pass (mouse) coordinates to your functions and get back coordinates to draw (whichever way they like). That way people can easily integrate your code into whatever GUI they use (a Jupyter Notebook widget would be awesome).

If you want to improve your own GUI code, you could figure out how to get it to draw a bit more often, so you get more feedback. If you click, perhaps leave a marker and then regularly draw the resulting contour. I'd expect that if I click another time, it would at least draw the line it calculated between those two points. In my experience with livewire tools such as TurtleSeg, you want to add "checkpoints" where you manually specify points the contour should pass through. Without regular visual feedback, it's hard to judge whether you need one or not.

Still, thanks for sharing the code; I hadn't expected such a simple implementation to already work so well.

Ivo
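Ivo's suggestion - exposing the mouse-event logic as plain functions so any GUI can drive it - could be sketched like this. All names here are hypothetical, not the package's API, and the path computation is a stand-in straight line to keep the sketch self-contained:

```python
# Sketch of a GUI-agnostic livewire controller: feed it clicks, get back
# contour segments to draw, regardless of which toolkit handles the events.
import numpy as np

def compute_path(cost, start, end):
    """Stand-in for a livewire shortest-path call, returning row/col coords.

    A real implementation would run Dijkstra on `cost`; here we just
    interpolate a straight line so the sketch runs on its own."""
    n = int(max(abs(end[0] - start[0]), abs(end[1] - start[1]))) + 1
    rows = np.linspace(start[0], end[0], n).round().astype(int)
    cols = np.linspace(start[1], end[1], n).round().astype(int)
    return np.column_stack([rows, cols])

class LivewireController:
    """Holds checkpoints and committed segments; knows nothing about GUIs."""
    def __init__(self, cost):
        self.cost = cost
        self.checkpoints = []   # user-placed "checkpoints" (Ivo's term)
        self.segments = []      # committed paths between checkpoints

    def on_click(self, row, col):
        # Commit the path from the previous checkpoint to the new click.
        if self.checkpoints:
            self.segments.append(
                compute_path(self.cost, self.checkpoints[-1], (row, col)))
        self.checkpoints.append((row, col))
        return self.segments    # the caller draws these however it likes

cost = np.ones((100, 100))      # dummy cost image for the sketch
ctrl = LivewireController(cost)
ctrl.on_click(10, 10)
segments = ctrl.on_click(40, 80)
```

A Matplotlib GUI would then just forward `event.ydata`/`event.xdata` from its click handler into `on_click`, and a Jupyter widget could do the same with its own event objects.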

Hello Ivo,

On my machine (MacBook Pro 15) the updates for the example images are interactive. You click in the image once, then the algorithm calculates the shortest path to every point under the mouse while you move it around the image. When you click the mouse a second time, the last shortest-path prediction is saved and displayed over the image. You can then continue moving your mouse, and the notebook will keep computing shortest paths until you press the Escape key. I am surprised that this has not worked on your machine. Have you tried the example image first?

The code in the notebook is an illustration of the livewire algorithm *principle*. The Python package I propose for inclusion in scikit-image is located under: https://github.com/pdyban/livewire/tree/master/livewire. A test showing how the package can be called is here: https://github.com/pdyban/livewire/blob/master/test.py. For example:

    from livewire import LiveWireSegmentation
    from skimage import data

    image = data.coins()
    algorithm = LiveWireSegmentation(image)
    path = algorithm.compute_shortest_path(from_, to_)
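The optional preprocessing mentioned earlier in the thread - Gaussian smoothing plus a thresholded gradient image - might look like the following sketch. The sigma and threshold values are arbitrary illustrations, not the package's defaults:

```python
# Sketch of optional cost-image preprocessing: smooth, take the gradient,
# and zero out weak edges so minor intensity variation along a boundary
# does not attract the shortest-path search.
import numpy as np
from skimage import data, filters, img_as_float

image = img_as_float(data.coins())            # scale intensities to [0, 1]

smoothed = filters.gaussian(image, sigma=2)   # optional Gaussian smoothing
gradient = filters.sobel(smoothed)            # edge-strength image
gradient[gradient < 0.05] = 0.0               # illustrative threshold

# Invert so strong edges are cheap to traverse for a cost-based search.
cost = gradient.max() - gradient
```

Making `sigma` and the threshold explicit parameters would address Ivo's point that the defaults did not suit his X-ray image.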
More examples can be found in the documentation: http://livewire-segmentation.readthedocs.org/en/latest/

Please let me know if these links answer your questions! I would be happy to contribute my Python package to skimage!

Best regards,
Pavlo

On Wednesday, 2 March 2016 13:15:20 UTC+1, Ivo Flipse wrote:
participants (2)
- Ivo Flipse
- Pavlo Dyban