Re: Normalised cross correlation
2012/3/24 Stéfan van der Walt <stefan@sun.ac.za>
Hi Mike
On Sat, Mar 24, 2012 at 2:16 PM, Mike Sarahan <msarahan@gmail.com> wrote:
I'm out of time for now, but I hope this helps a little. I will investigate further when time allows, provided you all don't beat me to it.
Thanks for looking into this, and for identifying the boundary issue. The fact that there are differences even with padding is disconcerting; I'll see if I can review this soon.
Stéfan
Ok... so it turns out I made a stupid mistake with the convolution-to-correlation<https://github.com/tonysyu/scikitsimage/commit/ff4d66305c35fc7c749ab0d3ff128d07657c5096> translation. Fixing that gives comparable-*looking* results to Matlab (after Mike's padding patch). BUT, I also get NaN values with the padded output. These NaNs come from taking the sqrt of negative values (see the code<https://github.com/tonysyu/scikitsimage/blob/skimagetemplate/skimage/feature/_template.pyx#L122> which solves the denominator<https://github.com/tonysyu/scikitsimage/blob/skimagetemplate/skimage/feature/_template.pyx#L22>). I *think* the argument of the sqrt should (mathematically) be zero, but roundoff errors are causing it to go negative. If that's the case, then it's an easy fix to check that the argument is positive, but someone should check my math to make sure the code<https://github.com/tonysyu/scikitsimage/blob/skimagetemplate/skimage/feature/_template.pyx#L122> matches the equation<https://github.com/tonysyu/scikitsimage/blob/skimagetemplate/skimage/feature/_template.pyx#L22>.

As for the padding: that was a hack. I noticed the original implementation clipped the image to (M-m+1, N-n+1), so I padded *the result* with zeros to make the output (M, N). Also, the way I padded it (all padding on the bottom and right) means the output has a high value where the "origin" (i.e. top-left corner) of the template matches, as opposed to the center. Center alignment could be achieved by padding the image with half the template width on the left and right, and half the template height on the top and bottom.

Padding *the input* sounds like a good idea, but I think it should be done in such a way that the output is (M, N), i.e. the same size as the input image. I'm not a huge fan of the Matlab output size (but maybe some people expect that output?). Is there a "right" way to do the padding?
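To illustrate the roundoff issue, here is a minimal sketch of the proposed fix: clamp the sqrt argument at zero before taking the root. The function and argument names here are assumptions for illustration, not the actual `_template.pyx` code.

```python
import numpy as np


def ncc_denominator(window_sum, window_sqsum, template_norm, area):
    """Denominator of normalized cross-correlation for one window.

    Hypothetical helper: `window_sum` and `window_sqsum` are the sum and
    sum-of-squares of the image pixels under the template window, `area`
    is the number of pixels in the template, and `template_norm` is
    sqrt(sum((t - mean(t))**2)) for template t.
    """
    # Windowed variance term; mathematically >= 0, but cancellation in
    # the subtraction can push it slightly negative in floating point.
    var_term = window_sqsum - window_sum ** 2 / area
    # Clamp at zero before the sqrt so roundoff cannot produce a NaN.
    var_term = max(var_term, 0.0)
    return np.sqrt(var_term) * template_norm
```

For a constant window (zero variance) the variance term is exactly zero in exact arithmetic, so any negative value can only come from roundoff and is safe to clamp.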
Or maybe "pad_output" should be changed to "mode" (to match scipy.signal.convolve), with values 'valid' (no padding; output (M-m+1, N-n+1)), 'same' (pad the input with the image mean), and 'zeros' (pad the output with zeros)?

Tony
participants (1)

Tony Yu