On Jul 31, 2012 9:30 AM, "Nicklas Nordenmark" <nordenmark(a)gmail.com> wrote:
> Heya, I'm trying to bundle an application that's using scikits-image and
other libraries but I'm running into problems (see this Stackoverflow
thread for a more detailed description of the problem:
Is there any way of locating files in such a distribution? If so, we can
incorporate it in the plugin loader.
On Jul 31, 2012 9:07 AM, "jeff witz" <witzjean(a)gmail.com> wrote:
> I must use regionprops in order to identify a lot of ellipses in a picture.
> I have noticed a mistake in the implementation of 'Orientation':
> in skimage/measure/_regionprops.py at line 298, the function atan is used
> to recover the angle. In my case, I find that a lot of the ellipses are
> correctly identified using atan, but in some cases the 'MinorAxisLength'
> and the 'MajorAxisLength' are swapped. You can solve this issue using the
> atan2 function, which takes the trigonometric quadrant into account.
> Best Regards,
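The fix Jeff describes amounts to computing the orientation from the second central moments with atan2 instead of atan. A minimal sketch (not the actual _regionprops.py code; the moment values below are illustrative):

```python
import numpy as np

def ellipse_orientation(mu11, mu20, mu02):
    # Orientation of the best-fit ellipse from second central moments.
    # atan2 resolves the quadrant, so the angle stays consistent with
    # the major axis even when mu20 - mu02 is negative; plain atan
    # would wrap, effectively swapping major and minor axes.
    return 0.5 * np.arctan2(2 * mu11, mu20 - mu02)

# An ellipse elongated along y (mu02 > mu20) should come out at +/- pi/2:
theta = ellipse_orientation(0.0, 1.0, 4.0)
```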
So I've been hacking on a new implementation of an
image-viewer/interactive-image-processor. See PR 229:
The previous implementation was written purely in Matplotlib, partly for
portability. Unfortunately, I found this implementation a bit limiting
because Matplotlib doesn't provide quite enough widget support. In
addition, there was no way to add custom toolbars or menus without getting
So, I've added a new implementation that requires PyQt but also uses
Matplotlib because of the wealth of plotting functionality that it provides.
There's a lot that I added in my original implementation that needs to be
updated for this implementation (an image collection viewer, and Guillaume
Gay's contrast setter and line-profile plugins). I'm of the opinion,
however, that these should be implemented later in order to reduce the work
to review this PR (that's actually why I moved the line profile to
a separate branch).
Overall, I'm pretty happy with the current implementation. The basic idea
is that there's a viewer class (to view images, of course). You can then
connect plugins to the viewer; a plugin typically calls some sort of filter
function, but that's in no way a requirement (for example, the line-profile
plugin measures values in the image). Also, there are widgets that get
attached to these plugins (currently, just a slider and a combo box) to
adjust filtering or plugin parameters.
There are two ways to implement a plugin: The first is by subclassing the
base `Plugin` class<https://github.com/tonysyu/scikits-image/blob/qtmpl-viewer/skimage/viewer/p…>.
The second (which I shamelessly stole from Bokeh) instantiates a plugin and
just adds widgets<https://github.com/tonysyu/scikits-image/blob/qtmpl-viewer/viewer_examples/…>(using
the addition operator) to control filter parameters. The first is
probably more flexible, while the second is convenient for people who may
not be as comfortable with object-oriented programming (plus I think it
reads really well).
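A toy sketch of the second style, using hypothetical Plugin/Slider names rather than the actual PR API:

```python
# Hypothetical sketch of the widget-addition style; the class and
# widget names are illustrative, not the real viewer classes.
class Plugin:
    def __init__(self, image_filter=None):
        self.image_filter = image_filter
        self.widgets = []

    def __add__(self, widget):
        # `plugin += widget` attaches a parameter control to the plugin.
        self.widgets.append(widget)
        return self

class Slider:
    def __init__(self, name, low, high):
        self.name, self.low, self.high = name, low, high

def denoise(image, strength=1.0):  # placeholder filter function
    return image

plugin = Plugin(image_filter=denoise)
plugin += Slider('strength', 0.0, 10.0)
plugin += Slider('radius', 1, 5)
```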
It's a big PR, but I'd really appreciate comments and suggestions.
P.S. Stefan: In a separate branch, I've added the Hough Transform plugin
you so desired. Right now it's a little hacky, partly because I haven't yet
built the infrastructure to support this type of plot (not only does it
plot an overlay but also Matplotlib lines; in addition, parameters have to
be delegated to two different functions). Nevertheless, the implementation
works despite my not adding anything else to support this use case.
> Thanks for your reply! I see your great work in scikits-learn and your
> comments are quite useful.
> Many will overlap; I think we can maintain a list. As a quick example,
> consider the following:
There is no question that machine learning methods are helpful for
vision problems and that vision problems
are an important application domain for machine learning tools.
The question is whether there is something you would like in skimage
that would require the use of something from sklearn.
I.e., is an algorithm you want in skimage only useful together with
something from sklearn?
I don't think any of your examples are in this category. Which is just
as well, since it means
it should be easy to keep the two things separate ;)
Btw, MRFs in vision are often not learned, so this is not ML, just
optimization. And I would rather place
that in skimage, as it is quite image specific. When I talked about
graph-cuts, that's what I meant.
Normalized cuts are of limited use in low level vision since they are
very slow for superpixels.
I was rather thinking of Boykov-Kolmogorov push-relabel - which is my
next project when I get
my superpixels done. (Stefan actually mentioned he'd like to have it :)
About RBMs: I am
but I would rather not include them in sklearn and definitely not in
While I'm not so active in skimage at the moment, I am very interested
in how to connect sklearn and skimage.
I think at the moment the best approach would be to leave the two
packages independent but try
not to reimplement too much.
Many of the ML methods applied to images can be made much more efficient,
so reimplementing these is
definitely worth it. For example, image segmentation algorithms often
build on clustering algorithms (like the one you
mentioned) but can be made much faster by only considering local
neighborhoods (which is why I work on #206).
In most cases that I am interested in, the "higher level" ML sort of
"wraps around" the CV, as for example with descriptors
and classification. I would place all descriptors in skimage, but skimage
doesn't need sklearn as a dependency for me to
be able to use an SVM on them.
Do you have more examples where sklearn and skimage might overlap?
I was actually wondering about doing a grab-cut implementation which
needs Gaussian mixture models.
But before I get there, we need graph cuts in skimage ;)
On 07/24/2012 08:09 AM, LI, Wei wrote:
> Dear ALL:
> I am new to scikits-image and I am not sure of the intended coverage of
> this package. Digital image processing from a signal processing
> viewpoint seems quite mature, but there are also trends where methods from
> statistical learning theory and computer vision are adapted to
> solve standard digital image processing tasks, such as using
> learning methods for super resolution or image denoising. There are
> also some discussions in this forum related to computer vision,
> like object detection. As far as I know, there is one scikits
> package, scikits-learn, implementing many machine learning
> algorithms. When we need some functions to perform such tasks,
> should we reimplement them or just use another package
> as a dependency?
> As a computer vision researcher, I found this package while searching
> for an available implementation of HOG features in Python. I'm just
> wondering whether there are some modules I can help implement, as I
> have already gotten help from the package :-). But I have some questions
> when picturing whether I can take the tasks on the wiki page
> http://scikits-image.org/docs/dev/contribute.html . For example, the
> graph-cut based segmentation. As we know, the graph-cut algorithm is
> based mainly on graph-based clustering methods. Such methods are
> implemented in various packages, including scikits-learn.
> Then we have two choices,
> 1. Write one JUST for image clustering in this package (Pros: self
> contained package, Cons: cannot get updated when clustering methods
> are updated)
> 2. Include sklearn as a dependency, adapt the methods in that
> package, and write a routine that just builds the graph from the image,
> throws it to sklearn to solve the learning problem, and converts the
> result back?
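Choice 2 in miniature might look like the following sketch, which only builds the pixel-affinity graph; the clustering itself would then be delegated to sklearn (all names here are illustrative, not an actual skimage or sklearn API):

```python
import numpy as np

def pixel_affinity_graph(image, beta=10.0):
    """Edges and affinities for a 4-connected pixel graph (illustrative)."""
    h, w = image.shape
    idx = np.arange(h * w).reshape(h, w)
    flat = image.ravel()
    edges, weights = [], []
    for si, sj in [(0, 1), (1, 0)]:  # right and down neighbours
        a = idx[:h - si, :w - sj].ravel()
        b = idx[si:, sj:].ravel()
        edges.append(np.stack([a, b], axis=1))
        # Similar intensities -> affinity near 1; strong edges -> near 0.
        weights.append(np.exp(-beta * np.abs(flat[a] - flat[b])))
    return np.concatenate(edges), np.concatenate(weights)

image = np.zeros((4, 4))
image[:, 2:] = 1.0  # two flat regions separated by one vertical boundary
edges, weights = pixel_affinity_graph(image)
```

The resulting edge list and weights are exactly the kind of input a graph-clustering routine expects, which is what makes the "thin adapter around sklearn" option feasible.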
> As more and more papers in standard image processing conferences like
> ICIP use statistical learning methods, further implementations
> may have more functions that need machine learning as subroutines. So
> what is the intended choice from the founders of this package?
I wanted to look into doing a RGB2Lab conversion in skimage
but I got a bit confused with the different color spaces.
The standard method to go from RGB to Lab seems to be via XYZ.
I compared the way that RGB2XYZ is done in scikits-image, vl_feat
and the SLIC code, and they seem to do it in three different ways.
As I want to use Lab for the segmentation methods by the vl_feat
authors and SLIC, I feel this is a bit unfortunate.
The difference between Wikipedia, skimage and vl_feat seems to be
that vl_feat does a gamma-correction of 2.2.
SLIC does something else that I don't really understand.
Could someone help me understand the differences so that I can
try and reproduce other peoples' results?
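If I've understood the difference correctly, it comes down to how the nonlinear RGB values are linearized before the XYZ matrix is applied. A sketch of the two decodings (values assumed in [0, 1]; I'm inferring vl_feat's behaviour from the gamma-2.2 description above):

```python
import numpy as np

def srgb_to_linear(c):
    # Piecewise sRGB decoding (the standard/Wikipedia definition,
    # with a linear toe below 0.04045).
    c = np.asarray(c, dtype=float)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def gamma22_to_linear(c):
    # Plain power-law decoding with gamma 2.2 (what vl_feat
    # reportedly does).
    return np.asarray(c, dtype=float) ** 2.2

# The two curves agree only roughly; mid-gray already differs by ~2%:
mid_srgb = srgb_to_linear(0.5)
mid_g22 = gamma22_to_linear(0.5)
```

Differences of this size propagate through XYZ into Lab, which would explain why the three implementations give slightly different Lab values for the same pixel.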
> No, but I recently saw some activity at:
> Cool, thanks.
>>> By the way, you mentioned you are interested in matting.
>>> I'm working on the alphamatting.com dataset this week :)
>> Ah, fantastic! Are you implementing their paper?
> I'm working with Carsten Rother and we are doing super fancy super
> secret things ;)
>> By the way, you mentioned you are interested in matting.
>> I'm working on the alphamatting.com dataset this week :)
> Ah, fantastic! Are you implementing their paper?
The paper compares a lot of methods, right? Which one do you mean?
I'm working with Carsten Rother and we are doing super fancy super
secret things ;)
Basically it's an extension of an upcoming ECCV paper on Decision Tree
>> OT: Sorry for not finishing up my segmentation PR.
>> It is quite mature but I wanted to compare it to the standard
>> implementations. But for that I need Lab color space conversions,
>> which I didn't get around doing yet.
> I'll add Lab color space conversion to the todo list for Friday's SciPy sprint.
Thanks, that would be great. I'll try to do it myself but I'm pretty
swamped with work atm.
Actually I don't know how these kinds of filter banks compare
with wavelets. They just seem kind of popular with computer
vision folks. There is no wavelet filter bank in skimage, is there?
I guess it would be easy enough to do with scipy, though.
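A minimal sketch of what "doing it with scipy" might look like, using a small bank of Gaussian-derivative filters at a few scales (this is an assumption about the kind of bank meant, not the linked filterbank code):

```python
import numpy as np
from scipy import ndimage

def derivative_filter_bank(image, sigmas=(1.0, 2.0)):
    # One x-derivative and one y-derivative-of-Gaussian response
    # per scale; `order` selects the derivative along each axis.
    responses = [
        ndimage.gaussian_filter(image, sigma, order=order)
        for sigma in sigmas
        for order in [(0, 1), (1, 0)]
    ]
    return np.stack(responses)

image = np.zeros((32, 32))
image[:, 16:] = 1.0  # vertical step edge
bank = derivative_filter_bank(image)
```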
By the way, you mentioned you are interested in matting.
I'm working on the alphamatting.com dataset this week :)
OT: Sorry for not finishing up my segmentation PR.
It is quite mature but I wanted to compare it to the standard
implementations. But for that I need Lab color space conversions,
which I didn't get around doing yet.
On 07/17/2012 09:44 PM, Stéfan van der Walt wrote:
> Hey Andy
> On Tue, Jul 17, 2012 at 10:38 AM, Andreas Müller
> <amueller(a)ais.uni-bonn.de> wrote:
>> I put together some simple filterbank code here:
> Interesting! How do those compare with wavelet filter banks?
> (I happened to have dabbled in texture recognition before:
> http://mentat.za.net/msc_thesis.html . I didn't know that Zisserman's
> lab worked on that--interesting because they also did a lot of
> super-resolution, the topic of my PhD).
> P.S. Small typo: Andres -> Andrew Zisserman