Converting a numpy array to grayscale

I am porting an 8-line Matlab script. Basically I read in an image, downsample it, perform some FFT operations, and output a final smoothed image. I am porting this to a standalone function. The problem is that the output of my Python script differs from what Matlab produces.
After comparing the contents of the array/matrix in Python/Matlab I noticed the values differed after the image was converted to grayscale. That is, when the image is read in the values are the same, but once converted they begin to differ around the first/second decimal place.
If I read in the file using ndimage.imread('1.jpg', flatten=True) then I get the same values as Matlab, correct to 5 decimal places, whereas using color.rgb2gray() is only correct to 2 decimal places. In the final version of the code I will only have access to the numpy array after it has been loaded into memory, so using imread() was just for tracing/locating the problem. Here is a snippet of code:
gray = ndimage.imread('1.jpg', flatten=True)
gray /= gray.max()
gray
array([[ 0.30133331, 0.2895686 , 0.28172547, ... ...
gray2 = color.rgb2gray(rgb)
gray2
array([[ 0.31065608, 0.29889137, 0.29104824, ..., ...
I believe this difference is causing the problem. Note that if I convert the image to grayscale using an external tool and read that in, the values of the numpy array match the Matlab matrix.
So what is the difference between converting an array to grayscale versus reading it in as grayscale? Have I done something wrong? Is there another way to convert a numpy array to grayscale?
Any help appreciated.
Michael. --

Hi,
Could you provide us with both the grayscale and RGB version of your image?
Johannes Schönberger
On 17.05.2013 at 19:15, Brickle Macho <bricklemacho@gmail.com> wrote:
I porting a 8 line Matlab script.
<snip>
Is there another way to convert a numpy array to grayscale?

On 18/05/13 1:19 AM, Johannes Schönberger wrote:
Hi,
Could you provide us with both the grayscale and RGB version of your image?
Sure. Here are links to the images:
[1] http://i.imgur.com/jxBoL93.jpg
[2] http://i.imgur.com/q5jFcgL.jpg
Regards,
Michael. --

On Fri, May 17, 2013 at 12:15 PM, Brickle Macho <bricklemacho@gmail.com> wrote:
I porting a 8 line Matlab script.
<snip>
So what is the difference between converting an array to gray scale verse reading it in as grayscale? Have I done something wrong? Is there another way to convert a numpy array to grayscale?
Any help appreciated.
Michael.
Hi Michael,
They're just different color conversion factors. Based on http://www.mathworks.com/help/images/ref/rgb2gray.html, Matlab uses: 0.2989 R + 0.5870 G + 0.1140 B
Based on the docstring for `color.rgb2gray`: 0.2125 R + 0.7154 G + 0.0721 B
Wikipedia (http://en.wikipedia.org/wiki/Grayscale) seems to suggest that Matlab's is an older standard while the one in scikit-image is a more recent spec.
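A minimal numpy sketch of the two weightings (the sample RGB values are made up, but any float array of shape (M, N, 3) behaves the same way), using the Matlab weights from its docs and the scikit-image weights from the `color.rgb2gray` docstring:

```python
import numpy as np

# Hypothetical 2x2 RGB image with float values in [0, 1].
rgb = np.array([[[0.2, 0.4, 0.6], [0.1, 0.5, 0.9]],
                [[0.3, 0.3, 0.3], [1.0, 0.0, 0.5]]])

# Matlab's rgb2gray weights (from the Matlab docs linked above).
gray_matlab = np.dot(rgb, [0.2989, 0.5870, 0.1140])

# scikit-image's color.rgb2gray weights (from its docstring).
gray_skimage = np.dot(rgb, [0.2125, 0.7154, 0.0721])

# The two results differ around the second decimal place.
print(gray_matlab[0, 0])
print(gray_skimage[0, 0])
```

So if you need to match the Matlab matrix from a numpy array already in memory, one option is to apply the Matlab weights yourself with `np.dot` rather than calling `color.rgb2gray`.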
participants (3)
-
Brickle Macho
-
Johannes Schönberger
-
Tony Yu