unicode to ascii converting
"Martin v. Löwis"
martin at v.loewis.de
Fri Aug 6 21:46:01 CEST 2004
Peter Wilkinson wrote:
> UnicodeDecodeError: 'ascii' codec can't decode byte 0xff in position 0:
> ordinal not in range(128)
That error actually says what happened: You have the byte with the
numeric value 0xff in the input, and the ASCII (American Standard
Code for Information Interchange) converter cannot convert that
into a Unicode character. This is because ASCII is a 7-bit character
set, i.e. it goes from 0..127. 0xFF is 255, so it is out of range.
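In Python 3 terms (where byte strings and text are separate types), the same failure can be reproduced directly; the error message is essentially the one quoted above:

```python
# Bytes 0..127 decode fine under ASCII; 0xFF (255) is out of range.
print(bytes([65, 66, 67]).decode("ascii"))  # ABC

try:
    bytes([0xFF]).decode("ascii")
except UnicodeDecodeError as e:
    # 'ascii' codec can't decode byte 0xff in position 0: ordinal not in range(128)
    print(e)
```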
Now, the line triggering this invokes *encode*, not *decode*. Why would it
give a *decode* error?
decode: take a byte string, return a Unicode string
encode: take a Unicode string, return a byte string
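The two directions can be checked interactively (shown here with Python 3 syntax, where the distinction is explicit):

```python
data = "Löwis".encode("utf-8")  # encode: Unicode text -> bytes
print(data)                     # b'L\xc3\xb6wis'

text = data.decode("utf-8")     # decode: bytes -> Unicode text
print(text)                     # Löwis
```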
So line should be a Unicode string, for .encode to be a meaningful thing
to do. Unfortunately, Python supports .encode also for byte strings.
If the requested encoding names a character encoding, this effectively does

    def encode(self, encoding):
        unistr = unicode(self)
        return unistr.encode(encoding)

So it first tries to convert the current byte string into Unicode, using
the system default encoding, which is us-ascii. Hence the decode error.