Since I am new to the project, I am not sure whether this has been
discussed before, but I think there is still some work to do concerning
the data types of images.
As far as I can tell, most algorithms work with either uchar in the
range 0-255 or
float in the range 0-1.
As far as I could see, no conversion is done between them, and if the
input to an algorithm is not of the expected type, it simply produces a
garbage result (I saw this with some of the edge detection and color
space conversion functions).
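To illustrate the problem, here is a minimal sketch with NumPy
(`binarize` is a made-up example, not a function from the library): an
algorithm written against floats in [0, 1] silently misbehaves on ubyte
data, because every nonzero pixel is already above the threshold.

```python
import numpy as np

# A toy algorithm written for floats in [0, 1]: a fixed threshold.
def binarize(img, thresh=0.5):
    return img > thresh

float_img = np.array([[0.1, 0.9]])                # floats in [0, 1]
byte_img = np.array([[25, 230]], dtype=np.uint8)  # same image, 0-255 scale

binarize(float_img)  # [[False, True]] -- as intended
binarize(byte_img)   # [[True, True]]  -- garbage: 25 > 0.5 is already True
```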
I personally believe that the library should be written in such a way
that one can build a pipeline without doing any explicit type
conversions.
What do you think about that?
That would mean that the algorithms do automatic type conversion when
needed. From the docs it seems like something similar is planned.
To do automatic type conversions, there needs to be some way for the
algorithms to figure out what range the data has. I would assume that
float values are always between 0 and 1 and that ubyte values are
always between 0 and 255.
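As a sketch of what I mean (the helper names `to_float` and `to_ubyte`
are hypothetical, not part of the library): with that convention, the
dtype alone determines the range, so conversion can be done safely in
either direction without any extra metadata.

```python
import numpy as np

def to_float(img):
    """uint8 in [0, 255] -> float64 in [0, 1]; floats pass through unchanged."""
    if img.dtype == np.uint8:
        return img.astype(np.float64) / 255.0
    return img  # assumed to already be a float image in [0, 1]

def to_ubyte(img):
    """float in [0, 1] -> uint8 in [0, 255]; uint8 passes through unchanged."""
    if np.issubdtype(img.dtype, np.floating):
        return np.round(img * 255.0).astype(np.uint8)
    return img

img = np.array([[0, 128, 255]], dtype=np.uint8)
# The uint8 -> float -> uint8 round trip is lossless under this convention.
assert to_ubyte(to_float(img)).tolist() == img.tolist()
```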
At the moment, if you give a ubyte image to (for example) the color
space transforms, you get a _float_ image in the range 0-255.
After that, I think it is impossible to automatically find out the range
of the data.
I think that given ubyte data, the algorithms should either return a
ubyte (losing precision) or a float between 0 and 1.
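The second option could be implemented with a small wrapper around each
algorithm; this is just a sketch with made-up names (`auto_float`,
`invert`), not a proposal for the actual API.

```python
import numpy as np

def auto_float(func):
    """Hypothetical decorator: accept ubyte or float input, always hand the
    algorithm floats in [0, 1], so the output range stays predictable."""
    def wrapper(img, *args, **kwargs):
        if img.dtype == np.uint8:
            img = img.astype(np.float64) / 255.0
        return func(img, *args, **kwargs)
    return wrapper

@auto_float
def invert(img):
    # Written once, for floats in [0, 1] only.
    return 1.0 - img

# Both call styles now return a float image in [0, 1]:
out = invert(np.array([[0, 255]], dtype=np.uint8))  # [[1.0, 0.0]]
```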
Do you think this is a good approach?
Or is there already some other concept planned?
I think this is an important issue for an image library, since having
to do explicit type conversions really hinders usability.