[Image-SIG] 16-bit unsigned short: PIL tile descriptor?
Thu, 06 Mar 2003 08:27:08 +0100
From my understanding, the correct mode for unsigned shorts (in the native
byte order of the machine you work on) is I;16. An F;16 mode would
denote a 16-bit float, which is obviously wrong.
> I'm trying to write a PIL plug-in for an in-house image
> format. It's a very simple format with a 1024-byte header followed by
> unsigned shorts in big-endian byte order. I tried using both the 'raw'
> and 'bit' decoders without much success - while 'L' works with 'raw',
> in that I don't get any errors, my images look weird when
> processed. Using "F;16B" in the parameters tuple for 'raw' mode (what
> should self.mode be in this case?) results in a 'ValueError:
> unrecognized mode' exception being thrown. Could someone shed light
> on what self.mode needs to be set to and the correct tile descriptor
> values to use? Thanks,
Klamer Schutte, E-mail: Schutte@fel.tno.nl
Electro-Optical Systems, TNO Physics and Electronics Laboratory
Tel: +31-70-3740469 -- Fax: +31-70-3740654 -- Mobile: +31-6-51316671