[Image-SIG] some technical support
Fri, 25 May 2001 12:01:52 -0400
I am using Python 2.0.1, and I would like to use the Python Imaging Library
to do some image processing. The image I have is from a digital camera
with a 12-bit digital output, and the image file format is .tif.
When I run the image-capture software that supports the camera, I can
set it to 8-bit grayscale, 16-bit grayscale, or 24-bit RGB. For my
application I would like to use 16-bit grayscale; since the
camera output is 12 bits (the last four bits are actually truncated),
what I am really getting is a 12-bit-depth image.
Now, my problem is that when I load it up in Python, it only tells me that
it is an 8-bit TIFF file, and I have no clue what is going on. Please
help!!! Here are several additional questions that I would like to ask:
(1) Could it be that the 12 bits are packed in a pixel?
(2) If so, how do I unpack them to get the 12-bit values?
(3) Or is it really something else? If so, what could it be???
(4) Are there any script examples that I can refer to??
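In case it helps to make question (2) concrete, here is a minimal sketch
assuming one common layout: two 12-bit pixels packed into three bytes,
big-endian bit order. The actual packing depends on the camera and the
TIFF writer, so treat this as an illustration only, not the camera's
documented format:

```python
def unpack_12bit(data):
    """Unpack raw bytes holding 12-bit pixels packed two per
    three bytes (big-endian bit order) into a list of ints."""
    data = bytearray(data)  # ensure integer indexing of each byte
    pixels = []
    # step through complete 3-byte groups; each yields two pixels
    for i in range(0, len(data) - len(data) % 3, 3):
        b0, b1, b2 = data[i], data[i + 1], data[i + 2]
        # pixel 1: all of b0 plus the high nibble of b1
        pixels.append((b0 << 4) | (b1 >> 4))
        # pixel 2: the low nibble of b1 plus all of b2
        pixels.append(((b1 & 0x0F) << 8) | b2)
    return pixels

# two packed pixels, 0xABC and 0x123, stored as bytes AB C1 23
print(unpack_12bit(b"\xAB\xC1\x23"))  # [2748, 291]
```

Separately, if the file really is a 16-bit grayscale TIFF, recent versions
of PIL report it with mode 'I;16' rather than 'L' (8-bit), so checking
`Image.open(filename).mode` is a quick way to see how PIL interpreted it.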
In case my work email doesn't work, please reply at email@example.com.
I REALLY APPRECIATE YOUR TIME, CONCERNS, ANSWER, AND HELP!
Thanks a thousand!
Tel: 603 879-3376