[Neuroimaging] SyN registration: Weird effect when updating sigma_diff between resolution levels

Marvin Albert marvin.albert at gmail.com
Sun May 19 13:52:38 EDT 2019


Hi,

First of all, thanks a lot for the great open source tools; I’m using dipy’s image registration extensively.

I have a question regarding using SymmetricDiffeomorphicRegistration in combination with CCMetric.

I’m optimising the registration performance for my application (zebrafish embryos imaged with fluorescence microscopy). More specifically, I am experimenting with the scaling factors of the scale space and the smoothing of the update field (the sigma_diff parameter of CCMetric). To do so, I modified the registration process so that I can manually set the scaling factor and sigma_diff at each level (taking advantage of the fact that dipy is written in accessible Python!).
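
For reference, a stock call looks roughly like this. This is a minimal sketch, with placeholder images; note that the standard API takes a single sigma_diff that is shared by all pyramid levels, which is why the per-level schedule required modifying the internals:

    import numpy as np
    from dipy.align.imwarp import SymmetricDiffeomorphicRegistration
    from dipy.align.metrics import CCMetric

    rng = np.random.default_rng(0)
    static = rng.random((64, 64))        # placeholder images
    moving = np.roll(static, 3, axis=0)  # shifted copy

    # one sigma_diff (update-field smoothing) shared by all three levels
    metric = CCMetric(2, sigma_diff=2.0, radius=4)
    sdr = SymmetricDiffeomorphicRegistration(metric, level_iters=[100, 100, 50])
    mapping = sdr.optimize(static, moving)  # returns a DiffeomorphicMap
    warped = mapping.transform(moving)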

I found parameter sets which achieve significantly better performance for my dataset than the presets (i.e. scaling factors that double at each level and a constant sigma_diff). So far so good.

However, there’s an odd behaviour I don’t understand: often, when I decrease sigma_diff from one level to the next, the metric increases instead of decreasing. Yet when I run the two levels with their different sigma_diffs as separate registration calls, the metric decreases nicely as expected and the registration performance improves. Why is this, and is it supposed to happen?

I tried to come up with a minimal example, shown in this Jupyter notebook: https://github.com/quakberto/dipy_syn/blob/master/test_syn_sigma_diffs.ipynb. The notebook also shows the custom code I use to create the modified scale space.

Here’s a short summary (a code sketch of the setup follows the list):
- I generate a random static image S and rotate it to get a moving image M
- I run registrations with different sigma_diffs:
  - a) one level, constant sigma_diff = 1 (S,M)->M_a
  - b) one level, constant sigma_diff = 2 (S,M)->M_b
  - c) two levels, decreasing sigma_diffs = [2,1] (S,M)->M_c
  - d) one level, sigma_diff = 1, starting from M_b: (S,M_b)->M_d
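
In code, the setup looks roughly like this. This is a sketch rather than the notebook verbatim: the image size, the smoothing of the random image (to give it local structure), the rotation angle and the helper name run are my placeholder choices, and case c) needs the modified per-level scale space from the notebook, so it is only indicated as a comment:

    import numpy as np
    from scipy.ndimage import gaussian_filter, rotate
    from dipy.align.imwarp import SymmetricDiffeomorphicRegistration
    from dipy.align.metrics import CCMetric

    rng = np.random.default_rng(0)
    S = gaussian_filter(rng.random((128, 128)), sigma=2)  # random static image
    M = rotate(S, angle=10, reshape=False)                # rotated copy = moving

    def run(static, moving, sigma_diff, iters=100):
        # one single-level SyN registration with a fixed sigma_diff
        metric = CCMetric(2, sigma_diff=sigma_diff, radius=4)
        sdr = SymmetricDiffeomorphicRegistration(metric, level_iters=[iters])
        return sdr.optimize(static, moving)

    M_a = run(S, M, sigma_diff=1).transform(M)      # case a)
    M_b = run(S, M, sigma_diff=2).transform(M)      # case b)
    # case c): a single call with two levels and sigma_diffs = [2, 1],
    # which needs the modified scale space shown in the notebook
    M_d = run(S, M_b, sigma_diff=1).transform(M_b)  # case d)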

Now, using the normalised cross correlation and also the sum of norms of the differences between the recovered and ground truth displacement fields as evaluation metrics, case c) surprisingly performs worse than b). However, case d) greatly improves on the outcome of b). Here's a figure:

[figure scrubbed with the HTML attachment]
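
For concreteness, the two evaluation metrics could be computed along these lines (a sketch with assumed definitions; ncc and disp_error are placeholder names, gt_disp stands for the known ground-truth displacement field, and mapping is the DiffeomorphicMap returned by optimize()):

    import numpy as np

    def ncc(a, b):
        # normalised cross correlation of two images
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        return (a * b).mean()

    def disp_error(mapping, gt_disp):
        # sum of the per-pixel norms of the difference between the
        # recovered forward displacement field (mapping.forward, shape
        # (X, Y, 2) in 2D) and the ground-truth field gt_disp
        return np.linalg.norm(mapping.forward - gt_disp, axis=-1).sum()
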
My question is: why does the last level in case c) degrade the registration performance? And why does a fresh run behave so differently?

This got a bit lengthy, so thanks a lot if you’re still reading! I’d appreciate any comments.

Best,
Marvin

