Re: seeking advice on HoG applicability
Thanks! I'll look into alternative ways of producing features.

-Lisa

On Monday, July 8, 2013 6:28:52 AM UTC-4, Stefan van der Walt wrote:
Hi Lisa
Interestingly, Adam Wisniewski was working on this one-class classification problem at the recent SciPy2013 sprint. Olivier Grisel and Nelle Varoquaux from the sklearn team were able to give us some helpful advice, and it might be worth getting in touch with them as well.
On Wed, Jul 3, 2013 at 9:10 PM, Lisa Torrey <lisa....@gmail.com> wrote:
- I have much less data. (Just 77 positives and 78 negatives, compared to Dalal's 1239 and 12180.)
You'll probably have to do some kind of cross-validation.
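Something along these lines would be a start (untested sketch; X and y stand in for whatever feature matrix and labels you end up with):

# Rough sketch: stratified cross-validation with a linear SVM on a small,
# roughly balanced dataset (77 positives, 78 negatives).
import numpy as np
from sklearn.svm import LinearSVC
try:
    from sklearn.model_selection import cross_val_score   # newer sklearn
except ImportError:
    from sklearn.cross_validation import cross_val_score  # sklearn of that era

X = np.random.rand(155, 128)        # placeholder feature matrix
y = np.array([1] * 77 + [0] * 78)   # positives / negatives

clf = LinearSVC(C=1.0)
# An integer cv gives stratified folds for classifiers, which matters
# when there are only ~150 samples.
scores = cross_val_score(clf, X, y, cv=5)
print("accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))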
- My images aren't all the same size, like the pedestrian images are. (I'm not sure if this would matter?)
Perhaps investigate multi-scale texture features, such as the wavelet coefficients (see http://www.pybytes.com/pywavelets/ ; even simple statistics might suffice).
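For example, per-subband statistics give a fixed-length vector regardless of image size (untested sketch using PyWavelets; the 'haar' wavelet and level=3 are arbitrary choices):

# Multi-scale wavelet statistics as a fixed-length feature vector.
# `image` is a 2-D grayscale array.
import numpy as np
import pywt

def wavelet_features(image, wavelet='haar', level=3):
    # wavedec2 returns [approximation, (H, V, D) per level, coarse to fine]
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    features = [coeffs[0].mean(), coeffs[0].std()]   # approximation band
    for cH, cV, cD in coeffs[1:]:                    # detail bands per scale
        for band in (cH, cV, cD):
            features.extend([np.abs(band).mean(), band.std()])
    return np.array(features)   # 2 + 3 * 2 * level values per image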
- My images are much higher resolution. (I've been downscaling them by a factor of 8, but the feature vectors are still enormous.)
You'd want to extract some features that help the classifier, e.g. daisy (http://scikit-image.org/docs/dev/auto_examples/plot_daisy.html), texture features via grey-level co-occurrence matrices, or Haralick features (we don't yet have those in skimage, although they are available in Luis Coelho's Mahotas).
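A rough sketch of what the GLCM/daisy route might look like (untested; assumes 2-D uint8 grayscale images, and the distances, angles, and daisy parameters below are arbitrary choices):

# Grey-level co-occurrence texture statistics plus averaged daisy descriptors.
# Note: scikit-image >= 0.19 spells these graycomatrix / graycoprops.
import numpy as np
from skimage.feature import daisy, greycomatrix, greycoprops

def texture_features(image):
    glcm = greycomatrix(image,
                        distances=[1, 2, 4],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ['contrast', 'dissimilarity', 'homogeneity', 'energy', 'correlation']
    glcm_feats = np.hstack([greycoprops(glcm, p).ravel() for p in props])

    # daisy returns a grid of local descriptors; averaging them is one crude
    # way to get a fixed-length vector from images of different sizes.
    descs = daisy(image, step=32, radius=24, rings=2, histograms=6,
                  orientations=8)
    daisy_feats = descs.reshape(-1, descs.shape[-1]).mean(axis=0)

    return np.hstack([glcm_feats, daisy_feats])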
Regards Stéfan
A minor question that comes up as I make the images a uniform size: is it better to use pyramid_reduce() rather than resize(), or does it not matter? I see that pyramid_reduce() does smoothing, then calls resize().
The smoothing is applied before sub-sampling to suppress the high frequencies that would otherwise cause aliasing artifacts. I recommend using the Gaussian pyramid. The Laplacian pyramid gives you, for each layer, the difference between the smoothed and the original image, i.e. the suppressed high frequencies.

Johannes Schönberger
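To make the difference concrete, a minimal sketch with skimage.transform (data.camera() just stands in for the real images, and the factor of 8 is the value mentioned earlier in the thread):

# Two ways to downscale: plain resize vs. pyramid_reduce (smooth + resize).
from skimage import data, transform

image = data.camera()   # placeholder for an actual input image

# Plain resize: no explicit pre-smoothing (older skimage versions do not
# anti-alias by default), so fine detail can alias.
small_a = transform.resize(image, (image.shape[0] // 8, image.shape[1] // 8))

# Gaussian smoothing followed by resize, in one step.
small_b = transform.pyramid_reduce(image, downscale=8)

# For a full Gaussian pyramid (each layer smoothed, then halved):
layers = list(transform.pyramid_gaussian(image, downscale=2, max_layer=3))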
Ok, will do smoothing. Thanks!
participants (2)
- Johannes Schönberger
- Lisa Torrey