That's an excellent question, and not a troll :-). OpenCV is a
very powerful library, but it focuses primarily on computer vision
(feature detection and extraction, classification, ...), as opposed to
image processing in general (with other tasks such as denoising, etc.).
The other big difference is that skimage builds on numpy
ndarrays and uses the full power of the numpy API (including, of course,
the basic facilities for processing arrays as images that come with
numpy), as well as some scipy functions (you could have added
scipy.ndimage to your list -- a few functions in skimage are wrappers
around scipy.ndimage that exist for the sake of completeness). One
important consequence is that algorithms that work for 3-D or even n-D
images can easily be implemented in 3-D/n-D in skimage, whereas OpenCV is
restricted to 2-D images (as far as I know). Thanks to the use of numpy
arrays, the API of skimage is also quite pleasant for a numpy user, more
so than the API of OpenCV.
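To make that concrete, here is a small sketch (with scipy.ndimage, which several skimage functions wrap): the very same call smooths a 2-D slice or a full 3-D volume, because everything is just an ndarray:

```python
import numpy as np
from scipy import ndimage as ndi

# a synthetic 3-D "image", e.g. a volume of 32 slices
volume = np.random.rand(32, 64, 64)

# the same call works regardless of dimensionality
smoothed_2d = ndi.gaussian_filter(volume[0], sigma=2)
smoothed_3d = ndi.gaussian_filter(volume, sigma=2)

print(smoothed_2d.shape)  # (64, 64)
print(smoothed_3d.shape)  # (32, 64, 64)
```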
A related difference is that skimage is written in Python and
Cython, whereas OpenCV is a C++ library. The two libraries attract a
different crowd of developers, and a Python/Cython toolkit based on numpy
arrays is easier to develop and maintain inside the Scientific Python
ecosystem. I'm sure that other devs/users will have things to add to this
answer.
On Thu, Dec 27, 2012 at 02:06:08PM -0800, François wrote:
> Hi users and devs,
> It came to my knowledge that another python library (based on C++ and C
> codes) for image processing exists too : opencv
> I understand that numpy integrates some basic features and we need some
> advanced features, but I have the feeling that skimage is redundant with
> opencv in some ways.
> What's the position of skimage about that? (Don't read this question as a
> troll but like a real question).
> I mean that similar features exist in both. Would it not be possible to
> reuse/integrate opencv, or merge them? What's the reason for keeping them apart?
> My observation is that there are 4 libraries to manipulate images:
> * PIL
> * numpy
> * skimage
> * opencv
> That's a lot.
A while ago I wrote a post on my blog, with code, on how to import a colour
palette from an ASCII file into Python and convert it to a Matplotlib colormap:
Following that post I wrote a tutorial in a geoscience magazine on how to
evaluate and compare colormaps using Python:
In the accompanying notebook I show how to convert a 256x3 RGB colormap to
a 256x256x3 RGB image, then convert it to CIELAB using scikit-image's
rgb2lab, then plot the 256x1 lightness (L) array to evaluate the colormaps.
You can read the relevant extract of the notebook using this nbviewer link:
In that case scikit-image worked really well for me.
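In code, the lightness extraction amounts to something like this sketch (a gray ramp stands in for a real colormap):

```python
import numpy as np
from skimage import color

# a 256x3 RGB colormap with values in [0, 1]; a gray ramp as a stand-in
cmap = np.tile(np.linspace(0, 1, 256)[:, None], (1, 3))

# rgb2lab expects an image, so view the colormap as a 1x256x3 "image"
lab = color.rgb2lab(cmap[None, :, :])
lightness = lab[0, :, 0]  # the L channel, on a 0-100 scale

print(lightness.shape)  # (256,)
```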
Now I am trying to follow up with a new tutorial, and I have run into problems
with the color space conversions.
You can follow what I am trying to do in this other notebook extract:
The goal of this new tutorial is to show how to build colormaps from
scratch using perceptual principles. I design a color palette in LCH (polar
version of CIELAB) by keeping Chroma and Lightness fixed and interpolating
Hue around the circle, then convert to LAB, then to RGB.
As far as I know the code I wrote should work, but the result is a black
colormap. I am thinking I got one or more of the ranges for the LCH
coordinates wrong. I assumed L between (0,1), Ch between (0,1), and H between (0,
Is that wrong, and if that's the case, what are the ranges? Many of them
are not stated clearly in the documentation here:
Would it be possible to update the documentation to clearly state the ranges
for all colour spaces?
Thanks for your help.
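For comparison, here is what I get when I instead assume skimage's CIELAB-style scaling -- L on 0-100, C roughly 0-100, and H in radians (this is my guess, not a confirmed answer) -- which does produce non-black colours:

```python
import numpy as np
from skimage import color

# design in LCH: fixed lightness and chroma, hue interpolated around the circle
n = 8
L = np.full(n, 65.0)                               # lightness on the 0-100 scale
C = np.full(n, 40.0)                               # chroma, roughly 0-100, not 0-1
H = np.linspace(0, 2 * np.pi, n, endpoint=False)   # hue in radians

lch = np.dstack([L, C, H])                 # shape (1, n, 3), a 1-row "image"
rgb = color.lab2rgb(color.lch2lab(lch))    # LCH -> LAB -> RGB

print(rgb.shape)  # (1, 8, 3)
```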
Just wanted to mention an algorithm which I was very impressed by and
which might be an awesome addition to scikit-image. The algorithm is
called COSFIRE (Combination of Shifted Filter Responses).
I saw a talk by George Azzopardi, who presented the use of the algorithm
for object and pattern recognition. I was very impressed with the quality
of segmentation in the examples shown, but also with the wide variety of
image data (e.g. retina images, traffic signs, hand-written characters,
etc.).
The filter is inspired by how the brain processes visual information,
and according to the author the algorithm is actually quite simple. The
algorithm was the topic of his PhD and he's going to continue working on
it in his new job.
The code (in Matlab) is available, so it might be worth a shot to try
and port it to Python.
- PhD thesis: http://www.cs.rug.nl/~george/phd-thesis/
- Matlab code: nl.mathworks.com/matlabcentral/fileexchange/37395
- Paper in MIA:
I'm pleased to announce version 1.0 of imageio - a library for reading
and writing images. This library started as a spin-off of the freeimage
plugin in skimage, and is now a fully-fledged library with unit tests.
Imageio provides an easy interface to read and write a wide range of
image data, including animated images, volumetric data, and scientific
formats. It is cross-platform, runs on Python 2.x and 3.x, and is easy
to install.
Imageio is plugin-based, making it easy to extend. It could probably use
more scientific formats. I welcome anyone who's interested to contribute!
install: pip install imageio
release notes: http://imageio.readthedocs.org/en/latest/releasenotes.html
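Basic usage is a one-liner each way; a minimal sketch (entry-point names as in the current imageio API, which may differ slightly across versions):

```python
import os
import tempfile

import imageio
import numpy as np

# write a small random RGB image to disk, then read it back
img = np.random.randint(0, 255, (16, 16, 3), dtype=np.uint8)
path = os.path.join(tempfile.mkdtemp(), 'test.png')
imageio.imwrite(path, img)
back = imageio.imread(path)

print(back.shape)  # (16, 16, 3)
```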
I am pleased to see interest in the COSFIRE approach that I started during
my PhD studies.
The COSFIRE approach is a trainable pattern recognition approach which can
be applied to several applications, including feature detection, object
recognition and localization, image classification, contour detection and
vessel segmentation. The selectivity for a pattern of interest is
automatically configured in a training process. The method involves several
computations that are independent of each other, and thus it can be easily
implemented using parallel programming (e.g. on a GPU). The original paper
(http://www.cs.rug.nl/~george/articles/PAMI2013.pdf) combines information
about the contours of the concerned pattern. We now have another paper
which is currently being reviewed for CVPR2015 where we show that by adding
colour information COSFIRE filters become even more robust.
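To give readers a rough feel for the idea (this is my own drastic toy simplification, not the published algorithm): take an edge response, shift copies of it according to a set of offsets describing the pattern, and combine them multiplicatively, so that all parts of the pattern must respond:

```python
import numpy as np
from scipy import ndimage as ndi

def cosfire_like_response(image, offsets, sigma0=1.0):
    """Toy sketch of the COSFIRE idea.  `offsets` is a list of (dy, dx)
    shifts describing the pattern of interest; the real method also
    learns scale and orientation per tuple from a training prototype."""
    edges = ndi.gaussian_gradient_magnitude(image, sigma=sigma0)
    responses = []
    for dy, dx in offsets:
        shifted = ndi.shift(edges, (dy, dx), order=1)
        responses.append(np.maximum(shifted, 1e-12))
    # multiplicative (AND-like) combination via a geometric mean
    return np.prod(responses, axis=0) ** (1.0 / len(responses))

img = np.zeros((32, 32))
img[16, 8:24] = 1.0  # a horizontal line
out = cosfire_like_response(img, [(0, -3), (0, 0), (0, 3)])
print(out.shape)  # (32, 32)
```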
Please feel free to send me other ideas on how this work can be developed
further.
I would be very happy and available to work with an undergraduate or a
postgraduate student (or any other person) to have this parallel
implementation in Python. I see that you already added it to the
Requested-features page. You can also add my contact details (geazzo@gmail)
there for the interested readers.
All my papers can be freely downloaded from my home page.
On Tuesday, 16 December 2014 15:22:45 UTC+1, Stefan van der Walt wrote:
> On Tue, Dec 16, 2014 at 1:57 PM, Pratap Vardhan <prat...(a)gmail.com
> > I found a few copies of the paper hosted by universities. I haven't
> checked if
> > these are the actual pre-prints - However, by the citation it looks like
> Thanks! I've added it to the list:
I am Ma Chienli, an undergraduate from China majoring in electronic
engineering. I have been using Python for scientific computation for a year.
I would like to offer my help for your next (possible) GSoC project of
rewriting scipy.ndimage in Cython :)
So, where should I start?
P.S. I just created another thread which did not show up. If this thread
conflicts with it, please delete one of them .
---------- Forwarded message ----------
From: "Ralf Gommers" <ralf.gommers(a)gmail.com>
Date: Dec 28, 2014 10:43 AM
Subject: [Numpy-discussion] numpy dev-version-string change
To: "Discussion of Numerical Python" <numpy-discussion(a)scipy.org>
This is a heads up that the numpy version string for development versions
is changing from x.y.z.dev-githash to x.y.z.dev+githash (note the +). This
is due to PEP 440 , which specifies local (i.e. non-released) versions
have to use a "+". Pip 6.0, released a few days ago, enforces this so we
noticed immediately that without this version string change pip sorted the
latest dev wheel build from master below any released version.
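The practical effect of the "+" can be checked with any PEP 440-compliant parser, for example the packaging library:

```python
from packaging.version import Version

dev = Version("1.10.0.dev0+githash")  # PEP 440 local version, note the "+"
rel = Version("1.9.2")

# a dev build of 1.10 now sorts above any released 1.9.x, as intended,
# while still sorting below the final 1.10.0 release
print(dev > rel)   # True
print(dev.local)   # 'githash'
```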
Change in numpy at ; identical change in scipy at . Note that this is
unlikely but not impossible that this breaks custom version string parsers (like
Sorry for the late reply. Can you expand on this a little bit? Benjamin Root
said to ask you about how the docs work. Essentially, how do you host them
on the master branch under the docs folder and still have them served at
matplotlib.org, instead of hosting them on gh-pages and having them show
up under github.com/matplotlib? I'm having the hardest time finding info on
this.
On Thursday, December 11, 2014 12:58:00 PM UTC-5, Thomas Caswell wrote:
> matplotlib uses the organization-level gh-pages to host the documentation
> which seems to work pretty well.
> On Thu Dec 11 2014 at 12:45:30 PM Adam Hughes <hughes...(a)gmail.com
>> Thanks, that's great advice. Also, do you guys pay to host your site?
>> Can you recommend a low-cost, or free, static-website hosting vendor? We
>> are currently using gh-pages branch on github to host our site, and I
>> really hate having this secondary branch. Especially when I need the
>> documentation to have access to the source code modules or setup.py file
>> (as it's on a second branch).
>> On Thu, Dec 11, 2014 at 6:56 AM, Juan Nunez-Iglesias <jni....(a)gmail.com
>>> Adam, just fyi, we are considering moving away from GoogleGroups... (See
>>> a separate discussion on this list). Since you are on the ground floor, you
>>> might want to reconsider your choice. I've been using Nabble for my (very
>>> low volume!) gala list and it's been really good.
>>> On Thu, Dec 11, 2014 at 12:01 PM, Adam Hughes <hughes...(a)gmail.com
>>>> Thanks Michael.
>>>> Be warned, only about 3 people are using the library at the moment, so
>>>> it's quite likely that you'll encounter some bugs just by virtue of using a
>>>> new dataset that probably will bring to light some considerations that we
>>>> overlooked. In addition, I only support pandas 0.14 right now (0.15 has a
>>>> lot of private API changes that directly affect us). But we did feel like
>>>> the library was close enough to ready to share it, and I hope you can try
>>>> it out. If you do want to give it a whirl sometime soon, please feel free
>>>> to reach out (or email me personally) and we will at least make sure to
>>>> help get you up and running.
>>>> Let us know what kind of data you're working on and what type of analysis
>>>> you're doing. We've spent a lot of time putting our data structures
>>>> together, but we haven't put a great deal of thought into the actual
>>>> spectral utilities, workflows and analysis that we should host, other than
>>>> correlation spectroscopy and base things like dynamic baseline fitting.
>>>> On Wed, Dec 10, 2014 at 6:14 PM, Michael Aye <kmicha...(a)gmail.com
>>>>> On 2014-12-10 02:43:46, Adam Hughes <hughes...(a)gmail.com> wrote:
>>>>>> > http://hugadams.github.io/scikit-spectra/
>>>>>>>> That is a very impressive notebook widget shown in the video! I
>>>>>>>> wonder how it interacts with mpl3d.
>>>>>>>> Ditto! Man, can't wait to play with this! Very happy that there's
>>>>> finally a package focusing on spectral analysis! It also just comes at the
>>>>> right time, just started in my new job working on data from an imaging
>>>>> spectrometer for the first time. ;)
>>>>> You received this message because you are subscribed to a topic in the
>>>>> Google Groups "scikit-image" group.
>>>>> To unsubscribe from this topic, visit
>>>>> To unsubscribe from this group and all its topics, send an email to
>>>>> For more options, visit https://groups.google.com/d/optout.
Hi to all,
I'm new to scikit-image, and I develop algorithms for medical applications.
Sometimes it is very useful to have user interaction, in particular
Region of Interest (ROI) selection over an image. I found the
RectangleTool in skimage and I'm trying to write a simple ROI selector in
order to use the rectangle coords in successive computations.
Unfortunately I am not able to handle mouse events. For example, I have:
import matplotlib.pyplot as plt
from skimage import data
from skimage.viewer.canvastools import RectangleTool

im = data.lena()
f, ax = plt.subplots()
ax.imshow(im)

# is "extents" usable here? (I assume on_enter receives the selection)
def on_enter(extents):
    print(extents)

rect_tool = RectangleTool(ax, on_enter=on_enter)
plt.show()
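Once I have the extents, the successive computations would just be array indexing; something like this helper (the name is mine) is what I have in mind:

```python
import numpy as np

def extents_to_roi(image, extents):
    """Crop `image` to the rectangle given by (xmin, xmax, ymin, ymax)."""
    xmin, xmax, ymin, ymax = (int(round(v)) for v in extents)
    return image[ymin:ymax, xmin:xmax]

# quick check on a synthetic image
im2 = np.arange(100).reshape(10, 10)
roi = extents_to_roi(im2, (2.0, 5.0, 1.0, 4.0))
print(roi.shape)  # (3, 3)
```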
I simply want to see "extents", which is the attribute that stores the coords
of the user-selected rectangle, according to the documentation.
I hope this is clear; if possible, I think it would be useful to embed this
example in the docs.
Thanks in advance