This is really helpful. Thanks Guillaume!
On Monday, November 25, 2013 3:27:17 AM UTC-5, Guillaume wrote:
For the uneven background issue, you can always filter out the low-frequency parts of the image. You can do this in Fourier space, or just subtract a Gaussian-filtered version of the image:
from skimage import img_as_float
from scipy import ndimage

def preprocess_highpass(image, filter_width=100):
    '''Emulate a highpass filter by subtracting a smoothed version
    of the image from the image.

    Parameters
    ----------
    image : ndarray
    filter_width : int
        Should be much bigger than the relevant features in the image,
        and about the scale of the background variations.

    Returns
    -------
    f_image : ndarray
        Same shape as the input image, with float dtype: the filtered
        image, with minimum at 0.
    '''
    image = img_as_float(image)
    lowpass = ndimage.gaussian_filter(image, filter_width)
    f_image = image - lowpass
    f_image -= f_image.min()
    return f_image
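For example, here's a quick sketch of the trick on a synthetic image (the top-to-bottom gradient, the spot positions, and the filter width are all invented for illustration):

```python
import numpy as np
from scipy import ndimage

# synthetic 200x200 image: two bright "particles" on a background
# that darkens from top to bottom, like the uneven-contrast case
rows = np.linspace(1.0, 0.2, 200)
image = np.tile(rows[:, None], (1, 200))
image[50, 50] = image[150, 150] = 2.0

# the highpass trick: subtract a heavily smoothed copy of the image
lowpass = ndimage.gaussian_filter(image, 50)  # 50 ~ scale of the gradient
flat = image - lowpass
flat -= flat.min()

# the background gradient is mostly gone, so both particles now stand
# out by a similar margin, top and bottom alike
print(flat[50, 50], flat[150, 150])
```

After this, a single global threshold (or threshold_adaptive) should behave similarly over the whole image.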
On 25/11/2013 08:49, Evelyn Liu wrote:
Thanks for your helpful response Juan. But I'm having trouble choosing the parameter for threshold_adaptive. Some of my images have uneven contrast: the lower part of the image has a darker background than the upper part. So if the threshold is right for the upper part, some neighboring particles get merged into one cluster even though they are separate. I tried different values of diam, but none of them gives a good threshold across the full image. Is there another thresholding method for these uneven-contrast images?
I also tried the edge operator *filter.sobel* and it looks good at drawing the particles' edges (see the attached image). I wonder if I can fill in these circles to get the thresholded image? I tried *ndimage.binary_fill_holes* but it gives me either a blank or a totally black picture.
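For what it's worth, binary_fill_holes does work on closed, *binary* contours; a blank or all-black result usually means the input wasn't binary, or the edge contours have gaps. A toy sanity check (a hand-made closed ring standing in for a thresholded sobel edge map, not real data):

```python
import numpy as np
from scipy import ndimage

# a toy binary edge map: the outline of a filled disc (a closed ring)
yy, xx = np.mgrid[:64, :64]
disc = (yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2
ring = disc & ~ndimage.binary_erosion(disc)  # just the boundary pixels

filled = ndimage.binary_fill_holes(ring)     # recovers the full disc
print(filled.sum() == disc.sum())
```

So for real sobel output, thresholding the edge image to a binary mask first (and possibly closing small gaps with ndimage.binary_closing) may be the missing step.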
On Wednesday, November 20, 2013 12:33:19 AM UTC-5, Juan Nunez-Iglesias wrote:
I'm guessing you are applying label() directly to your image, which is not the right way to use it. label() connects all neighboring *nonzero* points together. Since image pixels are rarely exactly zero (rather than some very small intensity value), you are simply connecting all the pixels of your image together into one label.
The correct way to do this is to threshold your image first, e.g. using:
import numpy as np
from scipy import ndimage as nd
from skimage.filter import threshold_adaptive

diam = 51  # "51" is a guess, you might have to fiddle with this parameter
image_t = image > threshold_adaptive(image, diam)
image_labeled, n_labels = nd.label(image_t)  # label() returns (labels, count)
particle_sizes = np.bincount(image_labeled.ravel())[1:]  # [1:] selects only the foreground labels
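To make the size-histogram idea concrete, here is a toy run of the label/bincount half of the pipeline on a hand-made binary image (the particle positions and sizes are invented; the thresholding step is skipped since the input is already binary):

```python
import numpy as np
from scipy import ndimage as nd

# toy binary image: three separate 3x3 "particles" and one 3x6 clump
image_t = np.zeros((20, 20), dtype=bool)
image_t[1:4, 1:4] = image_t[1:4, 6:9] = image_t[10:13, 1:4] = True
image_t[10:13, 8:14] = True  # a clump of two merged particles

image_labeled, n_labels = nd.label(image_t)
particle_sizes = np.bincount(image_labeled.ravel())[1:]
print(sorted(particle_sizes.tolist()))  # [9, 9, 9, 18]
```

In a histogram of particle_sizes, the single particles pile up at one size (9 here) and clumps show up at roughly integer multiples of it, which is how you can spot and discount them.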
Hope this helps!
On Wed, Nov 20, 2013 at 3:45 PM, Evelyn Liu eve...@gmail.com wrote:
On Tuesday, November 19, 2013 9:39:47 PM UTC-5, Juan Nunez-Iglesias wrote:
Is the goal only to count particles? In that case, I think a local thresholding (threshold_adaptive) would work on all these images. Then, just do a labelling (scipy.ndimage.label) and draw a histogram of particle sizes. You'll get a sharp peak around the true particle size, with bigger peaks for clumps
I'd like to use scikit-image to plot the size distribution histogram of particles in an image, similar to Adam's. I tried scipy.ndimage.measurements.label(image), which I thought would give an array of particle sizes. However, the output array is all 1s, obviously nothing about size. I must be getting something wrong... So which function should I call for the size distribution? Thanks Juan!
-- You received this message because you are subscribed to the Google Groups "scikit-image" group. To unsubscribe from this group and stop receiving emails from it, send an email to scikit-image...@googlegroups.com. For more options, visit https://groups.google.com/groups/opt_out.