That's an excellent question, and not a troll :-). Opencv is a
very powerful library, but it focuses primarily on computer vision
(feature detection and extraction, classification, ...), as opposed to
image processing in general (with other tasks such as denoising, ...).
The other big difference is that skimage builds on numpy
ndarrays, and uses the full power of the numpy API (including of course
the basic facilities for processing arrays as images that come with
numpy), as well as some of scipy's functions (you could have added
scipy.ndimage to your list -- a few functions in skimage are wrappers
around scipy.ndimage that exist for the sake of completeness). One
important consequence is that algorithms working on 3-D or even n-D
images can easily be implemented in skimage, whereas opencv is
restricted to 2-D images (as far as I know). Thanks to the use of numpy
arrays, the API of skimage is also quite pleasant for a numpy user, more
so than the API of opencv.
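A minimal sketch of that n-dimensional point (the random volume is just a stand-in for real data):

```python
import numpy as np
from skimage import filters

# The same call works unchanged on a 2-D image or a 3-D volume,
# because skimage operates on plain numpy ndarrays:
volume = np.random.rand(30, 40, 50)          # e.g. a confocal stack
smoothed = filters.gaussian(volume, sigma=2)
print(smoothed.shape)  # (30, 40, 50)
```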
A related difference is that skimage is written in Python and
Cython, whereas opencv is a C++ library. The two libraries attract
different crowds of developers, and a Python/Cython toolkit based on numpy
arrays is easier to develop and maintain inside the Scientific Python
ecosystem.
I'm sure that other devs/users will have things to add to this discussion.
On Thu, Dec 27, 2012 at 02:06:08PM -0800, François wrote:
> Hi users and devs,
> It came to my knowledge that another python library (based on C++ and C
> codes) for image processing exists too : opencv
> I understand that numpy integrates some basic features and we need some
> advanced features, but I have the feeling that skimage is redundant with
> opencv in some ways.
> What's the position of skimage about that? (Don't read this question as a
> troll but like a real question).
> I mean that similar features exist in both. Would it not be possible to
> reuse/integrate opencv or merge them? What's the reason for keeping them apart?
> My observation is that there are 4 libraries to manipulate images:
> * PIL
> * numpy
> * skimage
> * opencv
> That's a lot.
I've been implementing my own HoG transform looking at different sources.
While the implementation in scikit-image seems to lack certain features
(multiple normalization schemes, general block overlap, Gaussian block
window, trilinear interpolation/weighting of bin assignments, ...), these
don't seem to be that important, at least when applied to my current
problem (eye blink analysis).
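For reference, a baseline call to scikit-image's HoG looks roughly like this (the parameter values are illustrative only, and the random image is a stand-in):

```python
import numpy as np
from skimage.feature import hog

# Compute HoG features on a small test image: 9 orientation bins,
# 8x8-pixel cells, 2x2-cell blocks.
img = np.random.rand(64, 64)
features = hog(img, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))
```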
Most of these would increase complexity, giving the implementation a
complicated look, with little gain. I've also been looking into some
practical improvements - integral histogram, separating the cell-block
histogram feature to use it with other dense feature transforms such as
LBP, and a HoG visualization function that would render the visualization at
a higher resolution than the original image.
Would any of these be welcome additions to scikit-image?
A while ago I wrote a post on my blog, with code, on how to import a
colour palette from an ASCII file into Python and convert it to a Matplotlib colormap:
Following that post I wrote a tutorial in a geoscience magazine on how to
evaluate and compare colormaps using Python:
In the accompanying notebook I show how to convert a 256x3 RGB colormap to
a 256x256x3 RGB image, then convert it to CIELAB using scikit-image's
rgb2lab, then plot the 256x1 lightness (L) array to evaluate the colormaps.
You can read the relevant extract of the notebook using this nbviewer link:
In that case scikit-image worked really well for me.
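That lightness-evaluation workflow can be sketched roughly like this (the grey ramp below is a stand-in for a real colormap):

```python
import numpy as np
from skimage.color import rgb2lab

# Stand-in 256x3 RGB colormap: a simple grey ramp.
cmap = np.linspace(0, 1, 256)[:, None] * np.ones((1, 3))

# Tile it into a 256x256x3 image, convert to CIELAB, extract L*:
img = np.tile(cmap[None, :, :], (256, 1, 1))
lab = rgb2lab(img)
lightness = lab[0, :, 0]   # 256 lightness values on the 0-100 scale
```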
Now I am trying to follow up with a new tutorial, and I ran into problems
with the colour space conversions.
You can follow what I am trying to do in this other notebook extract:
The goal of this new tutorial is to show how to build colormaps from
scratch using perceptual principles. I design a color palette in LCH (polar
version of CIELAB) by keeping Chroma and Lightness fixed and interpolating
Hue around the circle, then convert to LAB, then to RGB.
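For what it's worth, here is a sketch of that pipeline under one possible set of assumptions about the ranges (L on the 0-100 CIELAB scale, chroma on a similar absolute scale, hue in radians); these assumed ranges are mine, not taken from the skimage docs:

```python
import numpy as np
from skimage.color import lch2lab, lab2rgb

# Hypothetical palette: fixed lightness and chroma, hue swept around
# the circle.  Assumed ranges: L in 0-100, C roughly 0-100, H in radians.
n = 256
lch = np.zeros((1, n, 3))
lch[..., 0] = 70.0                           # lightness
lch[..., 1] = 40.0                           # chroma
lch[..., 2] = np.linspace(0, 2 * np.pi, n)   # hue

rgb = lab2rgb(lch2lab(lch))   # (1, 256, 3), values in [0, 1]
```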
As far as I know the code I wrote should work, but the result is a black
colormap. I suspect I got one or more of the ranges for the LCH
coordinates wrong. I assumed L between (0, 1), Ch between (0, 1), and H between (0, ...).
Is that wrong, and if so, what are the ranges? Many of them
are not stated clearly in the documentation here:
Is it possible to update the documentation to clearly state all ranges for
all colour spaces?
Thanks for your help.
*Issues with morphological filters when trying to remove white holes in
black objects in binary images, using opening or filling holes on the
inverted (complement) of the original binary.*
I have a series of derivatives calculated on geophysical data.
Many of these derivatives have nice continuous maxima, so I treat them as
images on which I do some cleanup with morphological filters.
Here's one example of operations that I do routinely, and successfully:
# threshold theta map using Otsu method
thresh_th = threshold_otsu(theta)
binary_th = theta > thresh_th
# clean up small objects
label_objects_th, nb_labels_th = sp.ndimage.label(binary_th)
sizes_th = np.bincount(label_objects_th.ravel())
mask_sizes_th = sizes_th > 175
mask_sizes_th[0] = 0  # drop the background label
binary_cleaned_th = mask_sizes_th[label_objects_th]
# further enhance with morphological closing (dilation followed by an
# erosion) to remove small dark spots and connect small bright cracks,
# followed by an extra erosion
selem = disk(1)
closed_th = closing(binary_cleaned_th, selem)/255
eroded_th = erosion(closed_th, selem)/255
# Finally, extract lineaments using skeletonization
# plot to compare
fig = plt.figure(figsize=(20, 7))
ax = fig.add_subplot(1, 2, 1)
imshow(skeleton_th, cmap='bone_r', interpolation='none')
ax2 = fig.add_subplot(1, 2, 2)
imshow(skeleton_cleaned_th, cmap='bone_r', interpolation='none')
Unfortunately I cannot share the data as it is proprietary, but I will for
the next example, which is the one that does not work.
There's one derivative that shows lots of detail but no continuous maxima.
As a workaround I created filled contours in Matplotlib and
exported them as an image. The image is attached.
Now I want to import back the image and plot it to test:
# import back image
# threshold using Otsu method
thresh_thdr = threshold_otsu(cfthdr)
binary_thdr = cfthdr > thresh_thdr
# plot it
fig = plt.figure(figsize=(5, 5))
ax = fig.add_subplot(1, 1, 1)
The above works without issues.
Next I want to fill the white holes inside the black blobs. I thought of 2
approaches: the first would be to use opening; the second, to invert the
image and then fill the holes as in here:
By the way, I found a similar example for opencv here
Let's start with opening. When I try:
selem = disk(1)
opened_thdr = opening(binary_thdr, selem)
or:
selem = disk(1)
opened_thdr = opening(cfthdr, selem)
I get an error message like this:
ValueError Traceback (most recent call last)
<ipython-input-49-edc0d01ba327> in <module>()
----> 2 opened_thdr = opening(binary_thdr, selem)/255
4 # plot it
5 fig = plt.figure(figsize=(5, 5))
C:\...\skimage\morphology\grey.pyc in opening(image, selem, out)
160 shift_y = True if (h % 2) == 0 else False
--> 162 eroded = erosion(image, selem)
163 out = dilation(eroded, selem, out=out, shift_x=shift_x,
164 return out
C:\...\skimage\morphology\grey.pyc in erosion(image, selem, out, shift_x,
58 selem = img_as_ubyte(selem)
59 return cmorph._erode(image, selem, out=out,
---> 60 shift_x=shift_x, shift_y=shift_y)
C:\...\skimage\morphology\cmorph.pyd in skimage.morphology.cmorph._erode
ValueError: Buffer has wrong number of dimensions (expected 2, got 3)
Any idea of what is going on and how I can fix it?
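One plausible cause, given "expected 2, got 3" in the traceback: the image may have loaded with three colour channels, while the morphology functions expect a 2-D array. A sketch of collapsing to one channel first (random data stands in for the imported image):

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.filters import threshold_otsu
from skimage.morphology import opening, disk

# Stand-in for an image loaded with shape (rows, cols, 3):
rgb = np.random.rand(64, 64, 3)

gray = rgb2gray(rgb)                  # now 2-D
binary = gray > threshold_otsu(gray)
opened = opening(binary, disk(1))     # no dimension error
```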
As for inverting (or finding the complement) and then hole filling, that
would be my preferred option.
However, I have not been able to invert the image. I tried numpy.invert,
adapting the last example from here:
I tried something like this:
But none of these methods worked. Is there a way in scikit-image to do
that, and if not, do you have any suggestions?
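The invert-and-fill approach described above can be sketched with scipy.ndimage (a toy boolean image stands in for the real one; `~` does the inversion):

```python
import numpy as np
from scipy.ndimage import binary_fill_holes

# Toy binary image: a black (False) blob on a white (True) background,
# with a one-pixel white "hole" inside the blob.
binary = np.ones((9, 9), dtype=bool)
binary[2:7, 2:7] = False     # the black blob
binary[4, 4] = True          # the white hole

inverted = ~binary                      # blob becomes True
filled = binary_fill_holes(inverted)    # the hole is filled
result = ~filled                        # back to the original convention
```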
I am trying to make pairs of images from the following set of images
(chromosomes sorted by size after rotation). The idea is to make a feature
vector for unsupervised classification (kmeans with 19 clusters).
From each chromosome an integral image was calculated:
plt.figure(figsize = (15,15))
gs1 = gridspec.GridSpec(6,8)
gs1.update(wspace=0.0, hspace=0.0) # set the spacing between axes.
for i in range(38):
# i = i + 1 # grid spec indexes from 0
ax1 = plt.subplot(gs1[i])
image = sk.transform.integral_image(reallysorted[i][:,:,2])
imshow(image, interpolation='nearest')
Then each integral image was flattened and combined with the others:
Features = []
for i in range(38):
    image = sk.transform.integral_image(reallysorted[i][:,:,2])
    Features.append(image.ravel())
X = np.asarray(Features)
The X array contains *38* rows and 9718 features, which is not good.
However, I tried to submit these raw features to kmeans classification
with sklearn, using a direct example:
from sklearn.neighbors import NearestNeighbors
nbrs = NearestNeighbors(n_neighbors=19, algorithm='ball_tree').fit(X)
distances, indices = nbrs.kneighbors(X)
connection = nbrs.kneighbors_graph(X).toarray()
Plotting the connection graph shows that a chromosome is similar to more
than one ...
- Do you think that integral images can be used to discriminate the
chromosomes?
- If so, how to reduce the number of features to 10~20? (to get a better
classification)
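On the feature-reduction question, one common option (a sketch only, not necessarily the right choice here) is PCA from scikit-learn; the random matrix below is a hypothetical stand-in for X:

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in for the 38 x 9718 feature matrix:
X = np.random.rand(38, 9718)

# Project onto the first 15 principal components (any value in the
# 10-20 range could be tried):
X_reduced = PCA(n_components=15).fit_transform(X)
print(X_reduced.shape)  # (38, 15)
```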
Thanks for your advice.
I have an image that I would like to do some smoothing on.
Let's say there is a faint red spot on a white background. I would like to
apply some algorithm that will smooth over the red with the white pixels
surrounding the red pixels.
I thought this would be an application for the noise reduction tools ...
... but the picture output looked the same as before.
What is the best algorithm for smoothing over pixels by reassigning each
pixel value with the average value of the pixels surrounding it?
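Replacing each pixel by the average of its neighbourhood is a simple mean filter; a sketch with scipy.ndimage (the toy image is a stand-in, and `size` sets the neighbourhood width):

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Toy image: one dark pixel on a bright background.
img = np.ones((21, 21))
img[10, 10] = 0.0

# Each output pixel becomes the mean of the 5x5 window around it:
smoothed = uniform_filter(img, size=5)
print(smoothed[10, 10])  # 0.96 = 24/25: the spot blends into the background
```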
Could you please help me to use skimage and matplotlib correctly to
I try it in IPython notebook using tifffile or PIL, all fail to show the
image. A part of the problem is the size which causes MemoryError - I'd be
happy to see the reduced resolution as well.
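One workaround for the MemoryError is to downsample before display; a sketch with skimage.transform.rescale (the random array stands in for the big TIFF, and the factor 0.1 is arbitrary):

```python
import numpy as np
from skimage.transform import rescale

# Stand-in for a huge image loaded from disk:
big = np.random.rand(500, 400)

# Shrink to 10% along each axis; the result can be passed to
# matplotlib's plt.imshow as usual.
small = rescale(big, 0.1, anti_aliasing=True)
print(small.shape)  # (50, 40)
```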
I am new to python and image processing in general.
I am having trouble working with 12 bit images to do some very simple
calculations. Here is some example code:
#import Imaging Library
from PIL import Image
from skimage import io, filters
from skimage import img_as_float
from skimage import exposure
#import pylab for plotting
from pylab import *
Aim = io.imread("A.tiff")
Bim = io.imread("B.tiff")
Cim = io.imread("C.tiff")
Dim = Aim - Cim
Eim = Bim - Cim
#print min and max values of the background subtracted images
print("min %d max %d" % (Aim.min(),Aim.max()))
print("min %d max %d" % (Bim.min(),Bim.max()))
print("min %d max %d" % (Dim.min(),Dim.max()))
print("min %d max %d" % (Eim.min(),Eim.max()))
Input images A, B and C are 12-bit greyscale TIFFs. The output is:
min 0 max 4095
min 0 max 4095
min 0 max 65533
min 0 max 65533
The input image data's response to min and max makes good 12-bit sense, but
it is totally beyond me how I am getting 16-bit responses for D and E.
I want to understand what's happening here so that I don't get bitten
when I try to do transformations that require me to use floating point
arithmetic.
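What is likely happening (assuming the TIFFs load as uint16 arrays): unsigned integer subtraction wraps around modulo 65536 instead of going negative, so 0 - 3 becomes 65533, which matches the max values above. A small demonstration:

```python
import numpy as np

# Unsigned 16-bit subtraction wraps around instead of going negative:
a = np.array([0, 100], dtype=np.uint16)
c = np.array([3, 50], dtype=np.uint16)
print(a - c)   # [65533    50]

# Converting to float first preserves the signed difference:
d = a.astype(np.float64) - c.astype(np.float64)
print(d)       # [-3.  50.]
```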
On 2015-06-21 13:58:32, 'Kevin Keraudren' via scikit-image wrote:
> I was surprised by the success of scikit-learn outside of
> academia , and I was wondering if people on the mailing list
> were aware of companies that would similarly rely on skimage.
I just returned from SciPy 2015, and I spoke to several people who
are using skimage in their workflow. There are some difficulties
in gathering this type of information, among others that a) not
everyone is on the list and b) people may be somewhat secretive
about their toolchains.
I wonder if a survey on the skimage website would be helpful.
I.e., "Using skimage? Please let us know!" with a link to a short
survey. What do you think?