Hi Brian and Tony. Thanks to you both for your responses. I hope my newbie terminology is not making this more confusing.
Tony's answer is spot on here. Perhaps you expected the HoG image to look like the gradient image? Instead, what the descriptor really aims to capture is the direction that the gradients go (that's the 'oriented' part). So for an image where the left half is black and the right half white, there would be horizontal (or as close to horizontal as possible, depending on the number of bins) lines in the HoG image.
Okay, that makes sense: the direction of the change would be horizontal if the line in the original image was vertical. This is how I originally interpreted the visualisation provided by the skimage hog function. The problem arose when I compared this to the visualisation in the Dalal & Triggs paper. There, the R-HOG descriptor seems to show a different relationship: when visualised, the dominant orientations appear to be parallel to lines in the original image. If you have access to the Dalal & Triggs paper, I am basing my expectations on Figure 6(e), top of page 8. I don't have access to the original Dalal & Triggs visualisation code, so I will reread the paper to make sure I am comparing like with like.
This is what I was referring to when I talked about the number of bins. If you look carefully at the HoG image, you'll notice that there are vertical lines in some places (like at the black billboard), but there are no perfectly horizontal lines. The closest approximation to horizontal is maybe a 20deg line. That's because there are 9 bins. If you tried this with 8 bins, you should see some horizontal lines.
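[To make the binning concrete, here is a minimal numpy-only sketch (not skimage's actual implementation) of what Brian describes: for a vertical black/white edge, all the gradient magnitude lands in the bin nearest 0 degrees, i.e. the horizontal-gradient bin.]

```python
import numpy as np

# Synthetic image: left half black, right half white -> one vertical edge.
img = np.zeros((16, 16))
img[:, 8:] = 1.0

# Gradients: the step edge produces a purely horizontal gradient.
gy, gx = np.gradient(img)
magnitude = np.hypot(gx, gy)
orientation = np.rad2deg(np.arctan2(gy, gx)) % 180  # unsigned, 0..180 deg

# 9 orientation bins over 0..180 degrees, weighted by gradient magnitude.
hist, edges = np.histogram(orientation, bins=9, range=(0, 180),
                           weights=magnitude)

# All the weight lands in the first bin (0-20 degrees); with 9 bins
# there is no bin centred exactly on 0, which is why no line in the
# visualisation is perfectly horizontal.
print(np.argmax(hist))  # -> 0
```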
If I play with the number of bins I do end up with horizontal lines. I suspect that what is being visualised is different, or is being interpreted differently, from what the Dalal & Triggs paper is visualising. I just have to understand the different visualisations. Thanks again for your help. Michael.
Thought I would follow up. I have sorted out the visualisation expectations. I have been able to modify the visualisation output/plot to match the D&T paper. Some simple maths can make the orientations "parallel" if I so desire. It also appears that the D&T paper plots the orientations after block normalisation, which also accounts for the differences I was seeing in the gradients. Anyway, I have a better understanding now, so thanks everyone for your help. Now to work out how to calculate a HoG descriptor for a region rather than a whole image. Michael.
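[The "simple maths" Michael mentions is presumably just rotating each bin's angle by 90 degrees (an assumption on my part, not stated in the thread), which turns gradient directions into edge orientations:]

```python
import numpy as np

# Hypothetical bin-centre angles for 9 bins over 0..180 degrees.
gradient_angles = np.arange(10, 180, 20)   # 10, 30, ..., 170

# Rotating each angle by 90 degrees (mod 180) turns gradient
# directions into edge orientations, so the plotted strokes run
# parallel to the edges in the original image, as in D&T Figure 6(e).
edge_angles = (gradient_angles + 90) % 180
```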
Hi Michael, I'm glad you've got it all sorted out. An issue report has been raised regarding computing the HoG around a point (actually a list of points). It seems to me that there are two options: (1) compute the HoG over the whole image, then copy the descriptor values around a given point and return them. This requires very little change to the existing code, but will be inefficient for those only requiring sparse keypoints. (2) Move the heavy lifting into another function and call it on small image patches. Good for sparse keypoints, bad for dense. So it depends on what you want really. The quick win is number 1. Cheers, Brian

Original Message From: bricklemacho <bricklemacho@gmail.com>, Date: Fri, 17 Feb 2012 03:19:20, To: scikit-image <scikit-image@googlegroups.com>, Subject: Re: Understanding HoG output
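[Brian's option 1 could be sketched roughly as follows. This is a numpy-only illustration with hypothetical function names, not the scikit-image implementation: compute per-cell histograms once over the whole image, then slice out the block of cells around each keypoint.]

```python
import numpy as np

def cell_histograms(img, orientations=9, cell=8):
    # Per-cell orientation histograms over the whole image
    # (no block normalisation at this stage).
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180
    n_y, n_x = img.shape[0] // cell, img.shape[1] // cell
    hists = np.zeros((n_y, n_x, orientations))
    for i in range(n_y):
        for j in range(n_x):
            a = ang[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            m = mag[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            hists[i, j], _ = np.histogram(a, bins=orientations,
                                          range=(0, 180), weights=m)
    return hists

def hog_at_points(img, points, orientations=9, cell=8, block=2):
    # Option 1: compute once over the whole image, then copy out the
    # block of cells containing each (row, col) keypoint.
    hists = cell_histograms(img, orientations, cell)
    descriptors = []
    for r, c in points:
        i, j = r // cell, c // cell
        patch = hists[i:i + block, j:j + block].ravel()
        norm = np.linalg.norm(patch) + 1e-6   # L2 block normalisation
        descriptors.append(patch / norm)
    return descriptors
```

For dense keypoints this reuses all the per-cell work; for a handful of sparse points, option 2 (running the gradient and histogram steps on small patches only) would avoid computing histograms for cells no keypoint ever touches.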
participants (2)

Brian Holt

bricklemacho