Currently, imread doesn't properly handle palette images, since PIL
palette images can't be converted directly to numpy arrays. Well, you
can convert them, but the output is garbage: the array values are
indices into the image palette, which you don't have access to.
Flattening the image seems to be an OK workaround, but if the image
had a color palette, that information is lost. Also, this workaround
requires you to know that an image is in palette mode before calling
imread.
Below is a short patch that checks if an image is in palette mode; if
it is, grayscale images are converted to luminance mode and color
images are converted to RGB. I'm not sure if these conversions are
appropriate, but at least it's an improvement.
I hope this is helpful to somebody. Cheers,
P.S. `palette_is_grayscale` is a convoluted function to check whether
the palette is grayscale. PIL may provide a simpler check; if so, I
couldn't find it.
P.P.S. The following link has examples of palette images if you want
to test (NOTE: the thumbnails are actually not palette images, but the
linked images are).
diff --git a/scikits/image/io/pil_imread.py b/scikits/image/io/pil_imread.py
index 421f45c..946d00c 100644
--- a/scikits/image/io/pil_imread.py
+++ b/scikits/image/io/pil_imread.py
@@ -34,6 +34,34 @@ def imread(fname, flatten=False, dtype=None):
     im = Image.open(fname)
+    if im.mode == 'P':
+        if palette_is_grayscale(im):
+            im = im.convert('L')
+        else:
+            im = im.convert('RGB')
     if flatten and not im.mode in ('1', 'L', 'I', 'F', 'I;16', 'I;16L', 'I;16B'):
         im = im.convert('F')
     return np.array(im, dtype=dtype)
+
+def palette_is_grayscale(pil_image):
+    """Return True if PIL image is grayscale.
+
+    Parameters
+    ----------
+    pil_image : PIL image
+        PIL Image that is in Palette mode.
+
+    Returns
+    -------
+    is_grayscale : bool
+        True if all colors in image palette are gray.
+    """
+    assert pil_image.mode == 'P'
+    # Get palette as an array with R, G, B columns.
+    palette = np.asarray(pil_image.getpalette()).reshape((256, 3))
+    # Not all palette colors are used; unused colors have junk values,
+    # so only inspect the range of indices actually present (inclusive).
+    start, stop = pil_image.getextrema()
+    valid_palette = palette[start:stop + 1]
+    # Image is grayscale if R == G == B for all used colors, i.e. the
+    # channel differences (G - R and B - G) are all zero.
+    return np.allclose(np.diff(valid_palette), 0)
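For anyone who wants to try it, here is a quick sanity check I'd expect to
pass with the patch applied (a synthetic palette image with a pure-gray ramp
palette; this snippet is illustration only, not part of the patch):

import Image  # PIL
import numpy as np

im = Image.new('P', (4, 4))
im.putpalette([v for v in range(256) for _ in (0, 1, 2)])  # gray ramp: R = G = B
im.putdata([0, 64, 128, 255] * 4)

print(im.mode)                      # 'P'
print(palette_is_grayscale(im))     # True: every palette entry is gray
print(np.asarray(im.convert('L')))  # pixel indices mapped through the palette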
I'm working on the color spaces, and would like some feedback. Doing color
space conversions is very simple, but finding the right references and
making sure things are correct is a real pain. So I'm wondering what spaces
are useful and unambiguously defined.
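To make concrete the kind of code involved, here is roughly what RGB -> XYZ
looks like. This is a sketch, not the exact implementation: it assumes float
images scaled to [0, 1] and gamma-corrected input; the matrix is the standard
sRGB/D65 one from IEC 61966-2-1.

import numpy as np

# Linear sRGB -> XYZ, D65 whitepoint (IEC 61966-2-1).
rgb2xyz = np.array([[0.4124, 0.3576, 0.1805],
                    [0.2126, 0.7152, 0.0722],
                    [0.0193, 0.1192, 0.9505]])

def srgb_to_linear(rgb):
    # Undo the sRGB gamma (maps "RGBP" to "RGB" in the naming below).
    rgb = np.asarray(rgb, dtype=float)
    return np.where(rgb <= 0.04045, rgb / 12.92,
                    ((rgb + 0.055) / 1.055) ** 2.4)

def rgb_to_xyz(rgb):
    # Works on any (..., 3)-shaped array, e.g. an (M, N, 3) image.
    return np.dot(srgb_to_linear(rgb), rgb2xyz.T)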
So far I have
- RGB : sRGB with D65 whitepoint, the "central" color space in the API.
- HSV : unambiguous
- XYZ : unambiguous
- CIE RGB : unambiguous
- RGB NTSC : from Travis' code; not sure what it is. Not equal to YIQ, which
is the color space used for NTSC. I'll delete it again unless I figure it out.
Some code, seems clearly defined
- CIE LAB
- CIE LUV
Some code, but not sure about
- RGBP : gamma corrected sRGB. I am usually interested in intensities /
photon counts, so I have a natural aversion to this one. Anyone care?
- YIQ : see RGB NTSC above
- RGB SB : Stiles and Burch (1955) 2-degree RGB color space. seems obsolete.
- UVW : seems obsolete
- YUV / YCbCr / YCC / YPbPr / 8-bit YCbCr : these are all similar and often
confused. A mess.
No code yet, but commonly used
The code I have so far lives here:
I'm afraid adding color spaces that are not clearly defined does more harm
than good, so please let me know which color spaces are useful for you.
I was just looking at the iso-contouring code of Zach Pincus (
http://mail.scipy.org/pipermail/scipy-user/2009-July/021719.html), and saw
it is implemented with the marching squares algorithm. On the task list that
algorithm is listed as well, with a note to investigate patent issues. So I
had a look at that.
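For anyone who hasn't seen it, the algorithm itself is tiny: classify each
2x2 cell of the grid by which corners lie above the iso-level (a 4-bit case
index), then emit line segments whose endpoints are interpolated along the
crossed cell edges. A rough sketch of the idea (my illustration, not Zach's
code; the two saddle cases are resolved arbitrarily):

import numpy as np

# Segment table: for each 4-bit corner pattern (bit 3 = upper-left,
# bit 2 = upper-right, bit 1 = lower-right, bit 0 = lower-left; a bit
# is set when that corner is above the level), list the pairs of cell
# edges ('t'op, 'r'ight, 'b'ottom, 'l'eft) connected by a segment.
CASES = {0: [], 15: [],
         1: [('l', 'b')], 14: [('l', 'b')],
         2: [('b', 'r')], 13: [('b', 'r')],
         3: [('l', 'r')], 12: [('l', 'r')],
         4: [('t', 'r')], 11: [('t', 'r')],
         6: [('t', 'b')], 9: [('t', 'b')],
         7: [('t', 'l')], 8: [('t', 'l')],
         5: [('t', 'r'), ('b', 'l')],   # saddle; other choice also valid
         10: [('t', 'l'), ('b', 'r')]}  # saddle; other choice also valid

def marching_squares(grid, level):
    """Return iso-contour segments as ((row, col), (row, col)) pairs."""
    grid = np.asarray(grid, dtype=float)
    segments = []
    for r in range(grid.shape[0] - 1):
        for c in range(grid.shape[1] - 1):
            ul, ur = grid[r, c], grid[r, c + 1]
            ll, lr = grid[r + 1, c], grid[r + 1, c + 1]

            def point(edge):
                # Endpoint linearly interpolated along the crossed edge.
                t = lambda a, b: (level - a) / (b - a)
                if edge == 't': return (r, c + t(ul, ur))
                if edge == 'b': return (r + 1, c + t(ll, lr))
                if edge == 'l': return (r + t(ul, ll), c)
                return (r + t(ur, lr), c + 1)  # 'r'

            index = ((ul > level) * 8 + (ur > level) * 4 +
                     (lr > level) * 2 + (ll > level))
            for e0, e1 in CASES[index]:
                segments.append((point(e0), point(e1)))
    return segments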
The patent issue was earlier raised in this thread in 2004:
http://osdir.com/ml/python.matplotlib.devel/2004-10/msg00066.html. The open
question was whether marching squares was covered under the patent for
marching cubes or not. This is a 1985 patent and has expired in the
meantime, as noted here: http://en.wikipedia.org/wiki/Marching_cubes.
IANAL, but I think there are no patent issues anymore.
Separate question to Zach: the code you sent to the SciPy list in July was
GPL licensed. You noted you were willing to send it again under a different
license. Could you please send it to the list with a BSD/MIT/similar
license?
On Wed, Oct 21, 2009 at 1:32 PM, SirVer <sirver(a)gmx.de> wrote:
> On a side note, can't we get a real bug tracker somewhere?
> This one should do the job, right? At least for now.
> I have a selection of grayscale (8bit) tifs in a directory.
> In : k = io.ImageCollection("calib/*.tif", True)
> In : k.dtype
> Out: dtype('uint8')
> In : k = io.ImageCollection("calib/*.tif", as_grey=True)
> In : k.dtype
> Out: dtype('float32')
> But the docs say:
> as_grey : bool, optional
> If True, convert the input images to grey-scale. This does not
> affect images that are already in a grey-scale format.
> Obviously the image data type gets converted though.
Thanks for testing, Holger.
I wrote the docstring based on what I thought pil_imread was doing;
obviously I was wrong. The documented behavior is the desired one, imho,
so I'll come up with a patch for pil_imread.
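Roughly what I have in mind — a sketch of the docstring's promise, not the
actual patch (the helper name is made up):

def _to_grey(im):
    # Images that are already grey-scale pass through untouched, so a
    # uint8 TIFF stays uint8; only colour images get converted.
    if im.mode in ('1', 'L', 'I', 'F', 'I;16', 'I;16L', 'I;16B'):
        return im
    return im.convert('F')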
That's a good point.
On Mon, Oct 19, 2009 at 6:55 PM, SirVer <sirver(a)gmx.de> wrote:
> I have something important to mention. We should start to
> separate our tests into unit and integration tests. For what reason?
> The 40 tests need 3.256 seconds on my box to run; that's approximately
> the time it takes to compile the opencv module here. So compiling +
> tests = 2 * compiling. That's still acceptable, but 40 tests is
> nothing. 400 tests will need 30 seconds which is too much to run after
> each edit.
The main thing, I think, is to be careful with images. The slowest ones by
far right now are the tests for lpi_filter, because they do a lot of things
with a large image.
This is why I originally put my test images for io under io/tests. I thought
data_dir should only contain images for examples, useful for users. Images
for testing algorithms can often be small. For the color conversion I now
use images of size (4, 2, 3), which means they take no time at all.
> But that is what unit tests are for. The reason why I come up
> with this now is that it is important to make this separation while it
> is still easy and possible. The best way is to group tests into many
> groups ('opencv', 'fast', 'slow', 'need_camera').
What is lacking when you apply decorators like @opencv_skip and @slow? Note
also that if you run nosetests from some folder, you run only the tests in
its subfolders. This is a natural separation already.
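For example, with numpy's testing decorators and nose's attrib plugin
(a sketch; @opencv_skip would be something we define ourselves, and the
test body here is a stand-in):

import numpy as np
from numpy.testing import dec

@dec.slow
def test_lpi_filter_big_image():
    # Stand-in for an expensive test that touches a large image.
    image = np.zeros((512, 512))
    assert image.sum() == 0

# Skip everything tagged slow during interactive work:
#   nosetests -A "not slow" scikits.image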
> A short blog post
> concerning this is:
That's mostly semantics.
> I have no idea how to achieve this with pynose in a simple way
> Once again: I think scikit_image will grow fast and therefore the tests
> will grow fast. Users will stop running the tests if they take too
> long. Too long for an interactive coding session is > 10 s.
True. The @slow decorator should take care of this. What is the approximate
limit again when it should be used?
And so it begins...
Time to reevaluate the posting permissions?
On 20-Oct-09, at 1:13 PM, FunGuy wrote:
> Muhahahhaaa )))awesome!!))) still can't believe this))....
2009/10/19 SirVer <sirver(a)gmx.de>:
> Stéfan, which is the solution (1 or 2) you'd like to have for scikit-image?
Personally, I'd prefer 1(b) [Cython wrapper] with a second choice 2
[close collaboration with an existing project], simply because it is
easier to manage code under our own roof.
In order to get this implemented as soon as possible, we should build
on the work done by yourself and Andrew. His code is also BSD
licensed, so no problems there.
Also have a look at his higher-level Python wrapper: