That's an excellent question, and not a troll :-). OpenCV is a
very powerful library, but it focuses primarily on computer vision
(feature detection and extraction, classification, ...), as opposed to
image processing in general (with other tasks such as denoising, etc.).
The other big difference is that skimage builds on numpy
ndarrays and uses the full power of the numpy API (including, of course,
the basic facilities for processing arrays as images that come with
numpy), as well as some scipy functions (you could have added
scipy.ndimage to your list -- a few functions in skimage are wrappers
around scipy.ndimage that exist for the sake of completeness). One
important consequence is that algorithms working on 3-d or even n-d
images can easily be implemented in skimage, whereas OpenCV is
restricted to 2-d images (as far as I know). Thanks to the use of numpy
arrays, the API of skimage is also quite pleasant for a numpy user, more
so than the API of OpenCV.
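To give a small illustration of what I mean (a sketch, not taken from the skimage gallery), n-d support often comes for free when everything is a plain ndarray:

import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu  # skimage.filter in older releases

# a synthetic 3-d "image" is just a plain numpy ndarray
volume = np.random.rand(64, 64, 64)

# scipy.ndimage routines are n-dimensional out of the box
smoothed = ndimage.gaussian_filter(volume, sigma=2)

# many skimage functions accept the same n-d arrays directly
binary = smoothed > threshold_otsu(smoothed)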
A related difference is that skimage is written in Python and
Cython, whereas OpenCV is a C++ library. The two libraries attract a
different crowd of developers, and a Python/Cython toolkit based on numpy
arrays is easier to develop and maintain inside the Scientific Python
ecosystem.
I'm sure that other devs/users will have things to add to this discussion.
On Thu, Dec 27, 2012 at 02:06:08PM -0800, François wrote:
> Hi users and devs,
> It came to my knowledge that another Python library (based on C++ and C
> code) for image processing exists too: OpenCV.
> I understand that numpy integrates some basic features and we need some
> advanced features, but I have the feeling that skimage is redundant with
> OpenCV in some ways.
> What's the position of skimage about that? (Don't read this question as a
> troll, but as a real question.)
> I mean that similar features exist in both. Would it not be possible to
> reuse/integrate OpenCV, or to merge? What's the reason for keeping them apart?
> My observation is that there are four libraries to manipulate images:
> * PIL
> * numpy
> * skimage
> * opencv
> That's a lot.
I've been implementing my own HoG transform, looking at different sources.
While the implementation in scikit-image seems to lack certain features
(multiple normalization schemes, general block overlap, Gaussian block
window, trilinear interpolation/weighting of bin assignments, ...), these
don't seem to be that important, at least when applied to my current
problem (eye blink analysis).
Most of these would increase complexity, giving the implementation a
complicated look with little gain. I've also been looking into some
practical improvements: an integral histogram, separating the cell-block
histogram feature to use it with other dense feature transforms such as
LBP, and a HoG visualization function that would render the visualization
at a higher resolution than the original image.
Would any of these be welcomed additions to scikit-image?
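For context, the scikit-image HoG I am comparing against is used roughly like this (a sketch; parameter names as in the current release, and the camera test image is just a stand-in):

from skimage import data
from skimage.feature import hog

image = data.camera()  # any 2-d grayscale image

# 9 orientation bins, 8x8-pixel cells, 3x3-cell blocks
features, hog_image = hog(image, orientations=9, pixels_per_cell=(8, 8),
                          cells_per_block=(3, 3), visualize=True)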
A while ago I wrote a post on my blog, with code, on how to import a colour
palette from an ASCII file in Python and convert it to a Matplotlib colormap:
Following that post I wrote a tutorial in a geoscience magazine on how to
evaluate and compare colormaps using Python:
In the accompanying notebook I show how to convert a 256x3 RGB colormap to
a 256x256x3 RGB image, then convert it to CIELAB using scikit-image's
rgb2lab, then plot the 256x1 lightness (L) array to evaluate the colormaps.
You can read the relevant extract of the notebook using this nbviewer link:
In that case scikit-image worked really well for me.
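For reference, the core of that working step looks roughly like this (a sketch; 'cubehelix' stands in for whichever colormap is being evaluated):

import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
from skimage import color

# sample a colormap into a 256x3 RGB array, tile it into a 256x256x3 image
rgb = cm.get_cmap('cubehelix')(np.linspace(0, 1, 256))[:, :3]
rgb_img = np.tile(rgb[np.newaxis, :, :], (256, 1, 1))

# convert to CIELAB and pull out the 256x1 lightness (L) profile
lab = color.rgb2lab(rgb_img)
plt.plot(lab[0, :, 0])
plt.show()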
Now I am trying to follow up with a new tutorial, and I ran into problems
with the color space conversions.
You can follow what I am trying to do in this other notebook extract:
The goal of this new tutorial is to show how to build colormaps from
scratch using perceptual principles. I design a color palette in LCH (the
polar version of CIELAB) by keeping chroma and lightness fixed and
interpolating hue around the circle, then convert to LAB, then to RGB.
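Here is roughly what I am doing (a sketch; the range assumptions in the comments are exactly what I am unsure about):

import numpy as np
from skimage import color

# palette in LCH: fixed lightness and chroma, hue interpolated around the circle
L = np.full(256, 0.65)        # assumed range (0, 1) -- may be wrong
C = np.full(256, 0.40)        # assumed range (0, 1) -- may be wrong
H = np.linspace(0, 360, 256)  # assumed range (0, 360) -- may be wrong

lch = np.dstack([L, C, H])               # shape (1, 256, 3)
rgb = color.lab2rgb(color.lch2lab(lch))  # LCH -> LAB -> RGB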
As far as I know the code I wrote should work, but the result is a black
colormap. I am thinking I got one or more of the ranges for the LCH
coordinates wrong. I assumed L between (0, 1), Ch between (0, 1), and H
between (0, 360).
Is that wrong, and if that's the case, what are the ranges? Many of them
are not stated clearly in the documentation here:
Would it be possible to update the documentation to clearly state all the
ranges for all colour spaces?
Thanks for your help.
*Issues with morphological filters when trying to remove white holes in
black objects in binary images, using opening, or filling holes on the
inverted (complemented) original binary.*
I have a series of derivatives calculated on geophysical data.
Many of these derivatives have nice continuous maxima, so I treat them as
images on which I do some cleanup with morphological filters.
Here's one example of operations that I do routinely, and successfully:
# threshold theta map using Otsu method
thresh_th = threshold_otsu(theta)
binary_th = theta > thresh_th
# clean up small objects
label_objects_th, nb_labels_th = sp.ndimage.label(binary_th)
sizes_th = np.bincount(label_objects_th.ravel())
mask_sizes_th = sizes_th > 175
mask_sizes_th[0] = 0  # drop the background label from the size mask
binary_cleaned_th = mask_sizes_th[label_objects_th]
# further enhance with morphological closing (dilation followed by an
# erosion) to remove small dark spots and connect small bright cracks,
# followed by an extra erosion
selem = disk(1)
closed_th = closing(binary_cleaned_th, selem)/255
eroded_th = erosion(closed_th, selem)/255
# finally, extract lineaments using skeletonization
skeleton_th = skeletonize(eroded_th)
# plot to compare
fig = plt.figure(figsize=(20, 7))
ax = fig.add_subplot(1, 2, 1)
imshow(skeleton_th, cmap='bone_r', interpolation='none')
ax2 = fig.add_subplot(1, 2, 2)
imshow(skeleton_cleaned_th, cmap='bone_r', interpolation='none')
Unfortunately I cannot share the data as it is proprietary, but I will for
the next example, which is the one that does not work.
There's one derivative that shows lots of detail but no continuous maxima.
As a workaround I created filled contours in Matplotlib and exported them
as an image. The image is attached.
Now I want to import the image back and plot it to test:
# import back image
# threshold using Otsu method
thresh_thdr = threshold_otsu(cfthdr)
binary_thdr = cfthdr > thresh_thdr
# plot it
fig = plt.figure(figsize=(5, 5))
ax = fig.add_subplot(1, 1, 1)
The above works without issues.
Next I want to fill the white holes inside the black blobs. I thought of 2
ways. The first would be to use opening; the second to invert the image,
and then fill the holes as in here:
By the way, I found a similar example for OpenCV here
Let's start with opening. When I try:
selem = disk(1)
opened_thdr = opening(binary_thdr, selem)

or:

selem = disk(1)
opened_thdr = opening(cfthdr, selem)
I get an error message like this:
ValueError Traceback (most recent call last)
<ipython-input-49-edc0d01ba327> in <module>()
----> 2 opened_thdr = opening(binary_thdr, selem)/255
4 # plot it
5 fig = plt.figure(figsize=(5, 5))
C:\...\skimage\morphology\grey.pyc in opening(image, selem, out)
160 shift_y = True if (h % 2) == 0 else False
--> 162 eroded = erosion(image, selem)
163 out = dilation(eroded, selem, out=out, shift_x=shift_x,
164 return out
C:\...\skimage\morphology\grey.pyc in erosion(image, selem, out, shift_x,
58 selem = img_as_ubyte(selem)
59 return cmorph._erode(image, selem, out=out,
---> 60 shift_x=shift_x, shift_y=shift_y)
C:\...\skimage\morphology\cmorph.pyd in skimage.morphology.cmorph._erode
ValueError: Buffer has wrong number of dimensions (expected 2, got 3)
Any idea of what is going on and how I can fix it?
As for inverting (finding the complement) and then filling the holes, that
would be my preferred option.
However, I have not been able to invert the image. I tried numpy.invert,
adapting the last example from here:
I tried something like this:
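Roughly the following (a sketch of my attempts, using binary_thdr from above):

import numpy as np
from scipy import ndimage

# invert so the white holes become foreground, then fill them
inverted_thdr = np.invert(binary_thdr)  # also tried the ~ operator
filled_thdr = ndimage.binary_fill_holes(inverted_thdr)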
But none of these methods worked. Is there a way in scikit-image to do
that, and if not, do you have any suggestions?
I am trying to make pairs of images from the following set of images
(chromosomes sorted by size after rotation). The idea is to make a feature
vector for unsupervised classification (kmeans with 19 clusters).
From each chromosome an integral image was calculated:
plt.figure(figsize=(15, 15))
gs1 = gridspec.GridSpec(6, 8)
gs1.update(wspace=0.0, hspace=0.0)  # set the spacing between axes
for i in range(38):
    # i = i + 1  # grid spec indexes from 0
    ax1 = plt.subplot(gs1[i])
    image = sk.transform.integral_image(reallysorted[i][:, :, 2])
    imshow(image, interpolation='nearest')
Then each integral image was flattened and combined with the others:
Features = []
for i in range(38):
    # (reconstructed) flatten each integral image into one row of features
    Features.append(sk.transform.integral_image(reallysorted[i][:, :, 2]).ravel())
X = np.asarray(Features)
The X array contains *38* rows and 9718 features, which is not good.
However, I tried to submit these raw features to kmeans classification
with sklearn, using a direct example:
from sklearn.neighbors import NearestNeighbors
nbrs = NearestNeighbors(n_neighbors=19, algorithm='ball_tree').fit(X)
distances, indices = nbrs.kneighbors(X)
connection = nbrs.kneighbors_graph(X).toarray()
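The k-means step itself would presumably look like this (a sketch with sklearn.cluster.KMeans; random_state is an arbitrary choice of mine):

from sklearn.cluster import KMeans

# 19 clusters for the 38 chromosomes, i.e. one cluster per hoped-for pair
km = KMeans(n_clusters=19, random_state=0).fit(X)
print(km.labels_)  # cluster index assigned to each chromosome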
Plotting the connection graph shows that a chromosome is similar to more
than one ...
- Do you think that integral images can be used to discriminate the
chromosomes?
- If so, how to reduce the number of features to 10~20 (to get a better
classification)?
Thanks for your advice.
Just wanted to mention an algorithm which I was very impressed by and
which might be an awesome addition to scikit-image. The algorithm is
called COSFIRE (Combination of Shifted Filter Responses).
I saw a talk by George Azzopardi, who presented the use of the algorithm
for object and pattern recognition. I was very impressed with the quality
of segmentation of the shown examples, but also with the wide variety of
image data (e.g. retina images, traffic signs, hand-written characters,
etc.).
The filter is inspired by how the brain processes visual information,
and according to the author the algorithm is actually quite simple. The
algorithm was the topic of his PhD and he's going to continue working on
it in his new job.
The code (in Matlab) is available, so it might be worth a shot to try
and port it to Python.
- PhD thesis: http://www.cs.rug.nl/~george/phd-thesis/
- Matlab code: nl.mathworks.com/matlabcentral/fileexchange/37395
- Paper in MIA:
I'm pleased to announce version 1.0 of imageio - a library for reading
and writing images. This library started as a spin-off of the freeimage
plugin in skimage, and is now a fully-fledged library with unit tests.
Imageio provides an easy interface to read and write a wide range of
image data, including animated images, volumetric data, and scientific
formats. It is cross-platform, runs on Python 2.x and 3.x, and is easy
to install.
Imageio is plugin-based, making it easy to extend. It could probably use
more scientific formats. I welcome anyone who's interested to contribute!
install: pip install imageio
release notes: http://imageio.readthedocs.org/en/latest/releasenotes.html
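A minimal usage sketch (the filenames are placeholders):

import imageio

# read an image from disk into a numpy array, and write it back out
im = imageio.imread('example.png')
print(im.shape)
imageio.imsave('out.jpg', im)

# animated images are returned as a list of frames
frames = imageio.mimread('animation.gif')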
I am pleased to see interest in the COSFIRE approach that I started during
my PhD studies.
The COSFIRE approach is a trainable pattern recognition approach which can
be applied to several applications, including feature detection, object
recognition and localization, image classification, contour detection and
vessel segmentation. The selectivity for a pattern of interest is
automatically configured in a training process. The method involves several
computations that are independent of each other, and thus it can be easily
implemented using parallel programming (e.g. on a GPU). The original paper
(http://www.cs.rug.nl/~george/articles/PAMI2013.pdf) combines information
about the contours of the concerned pattern. We now have another paper,
currently under review for CVPR 2015, in which we show that by adding
colour information COSFIRE filters become even more robust.
Please feel free to send me other ideas on how this work can be developed
further.
I would be very happy and available to work with an undergraduate or a
postgraduate student (or any other person) to have this parallel
implementation in Python. I see that you already added it to the
Requested-features page. You can also add my contact details (geazzo@gmail)
there for the interested readers.
All my papers can be freely downloaded from my website.
On Tuesday, 16 December 2014 15:22:45 UTC+1, Stefan van der Walt wrote:
> On Tue, Dec 16, 2014 at 1:57 PM, Pratap Vardhan <prat...(a)gmail.com
> > I found a few copies of the paper hosted by universities. I haven't
> > checked if these are the actual pre-prints - however, by the citation it
> > looks like
> Thanks! I've added it to the list:
OK, sorry, maybe I need to explain my project better.
I have an Autodesk Maya model of a simplified eyeball with a pupil (see
image). I render images of the eyeball from different angles and try to
detect the pupil center as accurately as possible. Since I know the
geometry and rotation of the eyeball, I can calculate the mapping of the
real center of the pupil onto my virtual Maya camera sensor. I verified
the validity of my calculation with several other methods of ellipse
center detection (center of mass, distance transform, OpenCV ellipse fit,
starburst), where the error between my calculation and the measurement is
sometimes less than 0.02 pixels.
Nevertheless, I want to try the Hough ellipse approach, because it may be
more robust against noise or other errors I want to simulate later. So
far, though, the Hough ellipse approach is the only one that is quite
inaccurate, and I was wondering why. I think the reason could be that the
ellipse center detection has only half-pixel accuracy, while my other
approaches have sub-pixel accuracy.
I think posting the calculation code would be too much, but I am quite sure
the calculation is right.
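My Hough attempt looks roughly like this (a sketch; 'eye' stands in for one rendered frame, and the size bounds are guesses):

from skimage import color, feature, transform

# edge map of the rendered eye image
edges = feature.canny(color.rgb2gray(eye), sigma=2)

# accuracy=1 is the finest accumulator bin size I can request
result = transform.hough_ellipse(edges, accuracy=1, threshold=50,
                                 min_size=20, max_size=120)
result.sort(order='accumulator')
best = result[-1]
yc, xc = best['yc'], best['xc']  # center estimate, limited by the bin size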