Hello Nelle,
Thank you for your kind proposal, it's really appreciated. To me,
this sounds like the way to go, but let's wait for other reactions.
Best,
--
François Boulogne.
http://www.sciunto.org
GPG: 32D5F22F
Dear devs,
I would like to raise a potential issue with our test suite. Let me list
the facts.
Our tests rely on nose. As you may have seen, we often get deprecation
warnings (see https://github.com/scikit-image/scikit-image/issues/2414).
These warnings come from nose itself:
https://github.com/nose-devs/nose/issues/929 is a ticket opened in June '15,
and Tom Caswell opened a PR in September '15
(https://github.com/nose-devs/nose/pull/952) that got little attention.
In another of our issues, some tests are not run (those in directories
starting with an underscore):
https://github.com/scikit-image/scikit-image/issues/2127 As stated in
that issue, this behaviour is hard-coded in nose and WON'T be fixed. Upstream
suggests upgrading to nose2. Egor investigated this possibility, but it
does not seem feasible (Egor, can you elaborate?).
It seems clear that nose is not maintained any more (last commit 10
months ago, last release June '15). The deprecation warnings we have will
turn into broken code with Python 3.6, which has just been released.
I'll let you check that my interpretation is correct, but to me this
represents a major issue for scikit-image. Feel free to comment and to
correct me if necessary; I'm not very experienced with the different
unit-testing libraries.
Best,
--
François Boulogne.
http://www.sciunto.org
GPG: 32D5F22F
Hi Egor, Hi Juan,
Thank you very much for the help!
Yuanyuan
On Thu, Dec 29, 2016 at 4:16 AM, Egor Panfilov <egor.v.panfilov(a)gmail.com>
wrote:
> [...]
Hi Yuanyuan,
In your example the image data range is not being rescaled because the array
already has a float dtype. `img_as_float` will rescale from [0, 255] to [0, 1]
only if the dtype of the input ndarray is in the integer family (in your case,
uint8).
Take a look:
In [3]: nd_int = np.random.randint(0, 255, (3, 3))
In [4]: nd_int
Out[4]:
array([[ 85,  15,  60],
       [225, 252,  32],
       [162, 173,  34]])
In [5]: nd_int = nd_int.astype(np.uint8)
In [6]: skimage.img_as_float(nd_int)
Out[6]:
array([[ 0.33333333,  0.05882353,  0.23529412],
       [ 0.88235294,  0.98823529,  0.1254902 ],
       [ 0.63529412,  0.67843137,  0.13333333]])
Please note that if your data lies in the range [0, 255] but the ndarray
dtype is not uint8 (e.g. uint16, int8, etc.), you will get different results.
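For comparison, here is a minimal sketch of the case you described (the values
are made up): the input is already float64 and lies in [0, 255], so
`img_as_float` returns it unchanged and the rescaling has to be done by hand.

import numpy as np
import skimage

nd_float = np.array([[85., 15., 60.],
                     [225., 252., 32.]])    # float64, values in [0, 255]

out = skimage.img_as_float(nd_float)        # already float: no rescaling happens
print(out.max())                            # 255.0

out = nd_float / 255.0                      # rescale explicitly
print(out.max())                            # ~0.988, now within [0, 1]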
Regards,
Egor
2016-12-27 19:12 GMT+03:00 wine lover <winecoding(a)gmail.com>:
> Hi Egor,
>
> Thank you for the suggestion. This is how I modified the code:
>
> imgs_equalized = np.random.rand(imgs.shape[0], imgs.shape[1], imgs.shape[2], imgs.shape[3])
> for i in range(imgs.shape[0]):
>     print('imgs[i,0] ', imgs[i,0].shape)
>     print('imgs[i,0] ', imgs[i,0].dtype)
>     print('imgs[i,0] ', imgs[i,0].max())
>     print('imgs[i,0] ', imgs[i,0].min())
>     imgs[i,0] = img_as_float(imgs[i,0])
>     print('after applying astype')
>     print('imgs[i,0] ', imgs[i,0].shape)
>     print('imgs[i,0] ', imgs[i,0].dtype)
>     print('imgs[i,0] ', imgs[i,0].max())
>     print('imgs[i,0] ', imgs[i,0].min())
>
> The output is:
>
> imgs[i,0] (584, 565)
> imgs[i,0] float64
> imgs[i,0] 255.0
> imgs[i,0] 0.0
> after applying astype
> imgs[i,0] (584, 565)
> imgs[i,0] float64
> imgs[i,0] 255.0
> imgs[i,0] 0.0
>
> It looks like it does not convert the image as I expected; in particular,
> the maximum value is unchanged.
>
> Thanks,
> Yuanyuan
Hi Simone,
I have had a little experience with HDF5 and am interested to see where you
go with this. I wonder if you could use "feather":
https://github.com/wesm/feather
There was a recent post from Wes McKinney about feather, which sparked my
interest:
http://wesmckinney.com/blog/high-perf-arrow-to-pandas/
Do you use HDF5 to store intermediates? If so, I would try writing the
intermediates to a file format like feather and then reducing them to an HDF5
file at the end. The reduction should be I/O bound rather than RAM bound,
so it would suit your cluster.
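As a rough sketch of that reduce step (the file names and dataset layout here
are invented, just to illustrate the idea): each worker dumps its intermediate
result to its own file, and a single process concatenates them into one HDF5
file afterwards. The writes are sequential, so no parallel HDF5/MPI is needed
and only one chunk sits in memory at a time.

import h5py
import numpy as np

# one intermediate file per worker; plain .npy used here for simplicity
chunk_files = ["chunk_000.npy", "chunk_001.npy", "chunk_002.npy"]

with h5py.File("results.h5", "w") as f:
    dset = None
    for i, path in enumerate(chunk_files):
        chunk = np.load(path)                  # load one worker's result
        if dset is None:
            dset = f.create_dataset(
                "stack",
                shape=(len(chunk_files),) + chunk.shape,
                dtype=chunk.dtype,
            )
        dset[i] = chunk                        # sequential write, modest RAM use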
If you need to read a large array, then I think HDF5 supports that (a single
writer but multiple readers) without the need for MPI, so this could map well
to a tool like distributed:
http://distributed.readthedocs.io/en/latest/
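For the read side, something along these lines is what I have in mind (the
dataset name and chunk sizes are just placeholders): open the file read-only
and wrap the dataset in a dask array, so each worker only pulls the blocks it
needs.

import dask.array as da
import h5py

f = h5py.File("results.h5", "r")       # read-only: many readers are fine
stack = da.from_array(f["stack"], chunks=(1, 1024, 1024))
# lazily-built computation; runs on whichever scheduler is configured
# (e.g. the distributed scheduler linked above)
mean_image = stack.mean(axis=0).compute()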
Not sure this helps; there is an assumption (on my part) that your
intermediate calculations are not terabytes in size.
Good luck!
Nathan
On 29 December 2016 at 05:07, simone codeluppi <simone.codeluppi(a)gmail.com>
wrote:
> [...]
Hi all!
I would like to pick your brains for some suggestions on how to modify my
image analysis pipeline.
I am analyzing terabytes of image stacks generated with a microscope. The
current code I wrote relies heavily on scikit-image, numpy and scipy. In
order to speed up the analysis, the code runs on an HPC cluster
(https://www.nsc.liu.se/systems/triolith/) with MPI (mpi4py) for
parallelization and HDF5 (h5py) for file storage. The development cycle of
the code has been pretty painful, mainly due to my unfamiliarity with MPI
and problems compiling parallel HDF5 (with many bugs opened and closed).
However, the big drawback is that each core has only 2 GB of RAM (no shared
RAM across nodes), and in order to run some of the processing steps I ended
up reserving one node (16 cores) but running only 3 cores in order to have
enough RAM (image chunking won't work in this case). As you can imagine,
this is extremely inefficient, and I end up getting low priority in the
queue system.
Our lab recently bought a new 4-node server with shared RAM running
Hadoop. My goal is to move the parallelization of the processing to dask. I
have tested it before on another system and it works great. The drawback is
that, if I understood correctly, parallel HDF5 works only with MPI
(driver='mpio'). HDF5 gave me quite a bit of headache, but it works well for
keeping the data well structured, and I can save everything as numpy
arrays... very handy.
If I move to Hadoop/dask, what do you think would be a good solution for
data storage? Do you have any additional suggestions that could improve the
layout of the pipeline? Any help will be greatly appreciated.
Simone
--
*Bad as he is, the Devil may be abus'd,*
*Be falsy charg'd, and causelesly accus'd,*
*When men, unwilling to be blam'd alone,*
*Shift off these Crimes on Him which are their*
*Own*
*Daniel Defoe*
simone.codeluppi(a)gmail.com
simone(a)codeluppi.org
Oh, right, sorry, now I see what you're doing.
Arrays are homogeneous, meaning every value has the same type. If you write:
imgs[i, 0] = imgs[i, 0].astype(np.uint8)
you are not changing the dtype of imgs: the right-hand side is explicitly cast to uint8, and then the assignment (=) implicitly casts it back to float64. Oops! =)
Please follow Egor's advice: look up the img_as_ubyte and img_as_float functions and use those to convert images between different types.
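A tiny sketch of that dtype round-trip (the shapes and values are arbitrary):

import numpy as np

imgs = np.zeros((2, 1, 4, 4), dtype=np.float64)        # the container is float64
patch = np.arange(16, dtype=np.uint8).reshape(4, 4)    # a uint8 image

imgs[0, 0] = patch          # the uint8 values are cast back to float64 on assignment
print(imgs.dtype)           # float64
print(imgs[0, 0].dtype)     # float64 -- slicing a float64 array cannot give uint8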
Juan.
On 28 Dec 2016, 2:48 AM +1100, wine lover , wrote:
> Hi Juan,
>
> Thanks for pointing out the typo. I corrected it, but it looks like
> imgs[i,0] = imgs[i,0].astype(np.uint8) still does not solve the problem.
>
> Here is the output:
>
> imgs[i,0] (584, 565)
> imgs[i,0] float64
> imgs[i,0] 255.0
> imgs[i,0] 0.0
> after applying astype
> imgs[i,0] (584, 565)
> imgs[i,0] float64
> imgs[i,0] 255.0
> imgs[i,0] 0.0
>
> Thanks,
> Yuanyuan
>
>
>
> > On Tue, Dec 27, 2016 at 12:10 AM, Juan Nunez-Iglesias <jni.soma(a)gmail.com> wrote:
> > > Typo: unit8 -> uint8
Typo: unit8 -> uint8

On 27 Dec 2016, 9:27 AM +1100, wine lover <winecoding(a)gmail.com>, wrote:
> Dear All,
> [...]
Dear Yuanyuan,
First of all, it is not a good idea to initialize the array with values
using `np.empty`. I'd recommend using either `np.random.rand` or
`np.random.randint`.
As for the main point of your question, I believe you might need
http://scikit-image.org/docs/dev/api/skimage.html#img-as-float (see also
http://scikit-image.org/docs/dev/user_guide/data_types.html ).
So you can either create an array of floats in [0, 1) via `np.random.rand`,
or create an array of uints via `np.random.randint` and call
`img_as_float`. Then `equalize_adapthist` should work flawlessly.
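A minimal, self-contained sketch of that recipe (the array shape, dtype and
clip_limit value below are placeholders, not taken from your data):

import numpy as np
from skimage import exposure, img_as_float

# dummy stack of 2 single-channel images, uint8 values in [0, 255]
imgs = np.random.randint(0, 256, (2, 1, 128, 128)).astype(np.uint8)

imgs_equalized = np.empty(imgs.shape, dtype=np.float64)
for i in range(imgs.shape[0]):
    img = img_as_float(imgs[i, 0])     # uint8 -> float64 in [0, 1]
    imgs_equalized[i, 0] = exposure.equalize_adapthist(img, clip_limit=0.03)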
Regards,
Egor
2016-12-27 1:27 GMT+03:00 wine lover <winecoding(a)gmail.com>:
> Dear All,
>
> I was trying to use the following code segment to perform Contrast Limited
> Adaptive Histogram Equalization (CLAHE):
>
> def clahe_equalized(imgs):
>     imgs_equalized = np.empty(imgs.shape)
>     for i in range(imgs.shape[0]):
>         print('imgs[i,0] ', imgs[i,0].dtype)
>         print('imgs[i,0] ', imgs[i,0].max())
>         print('imgs[i,0] ', imgs[i,0].min())
>         imgs_equalized[i,0] = exposure.equalize_adapthist(imgs[i,0], clip_limit=0.03)
>     return imgs_equalized
>
> The dtype is float64, the maximum value is 255.0 and the minimum value is 0.0.
>
> Running the program generates the following error message (I only
> keep the relevant part):
>
> imgs_equalized[i,0] = exposure.equalize_adapthist(imgs[i,0], clip_limit=0.03)
> raise ValueError("Images of type float must be between -1 and 1.")
> ValueError: Images of type float must be between -1 and 1.
>
> Given the above error message and the image characteristics, what
> is the best way to handle this scenario?
>
> I have been thinking of two approaches:
>
> 1. add imgs[i,0] = imgs[i,0]/255. which scales it to [0, 1]
> 2. convert imgs[i,0] from float64 to unit8
>
> but imgs[i,0] = imgs[i,0].astype(np.unit8) gives an error message such as
> imgs[i,0]=imgs[i,0].astype(np.unit8)
>
> AttributeError: 'module' object has no attribute 'unit8'
>
> Could you give any advice on this problem? Thank you very much!
>
Dear Yuanyuan,
There is no strict correspondence between these two clip limits.
If you would like something closer to the OpenCV implementation of CLAHE,
consider trying https://github.com/anntzer/clahe.
Also, feel free to join the discussion in
https://github.com/scikit-image/scikit-image/issues/2219. There you might
find a few more details.
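Just to put the two calls side by side (these are the parameter values from
your message, not an equivalence between them):

import cv2
from skimage import exposure, img_as_float

img = cv2.imread('tsukuba_l.png', 0)     # uint8 grayscale (file name from your example)

# OpenCV: clipLimit is not normalized to [0, 1]
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
out_cv = clahe.apply(img)

# scikit-image: clip_limit is a fraction, normalized between 0 and 1
out_ski = exposure.equalize_adapthist(img_as_float(img), clip_limit=0.03)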
Regards,
Egor
2016-12-27 2:22 GMT+03:00 wine lover <winecoding(a)gmail.com>:
> [...]
Dear All,
The following is an example given in OpenCV of applying Contrast
Limited Adaptive Histogram Equalization (CLAHE):

import numpy as np
import cv2
img = cv2.imread('tsukuba_l.png', 0)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8,8))
cl1 = clahe.apply(img)

Here the parameter clipLimit = 2.0.

In skimage, CLAHE is performed using exposure.equalize_adapthist.
For instance, in this example,
http://scikit-image.org/docs/dev/auto_examples/plot_equalize.html

img_adapteq = exposure.equalize_adapthist(img, clip_limit=0.03)

My question is how to set the clip_limit value in skimage to match a
given case in OpenCV.
For instance, in an example implemented with OpenCV, clipLimit is set to
2.0; if I want to port this implementation to skimage,
which value should I assign to clip_limit?
According to the documentation, clip_limit should be between 0 and 1:

clip_limit : float, optional
    Clipping limit, normalized between 0 and 1 (higher values give more contrast).

while OpenCV does not have this limitation for clipLimit.
Thanks,
Yuanyuan