[Python-Dev] Python3 "complexity"

Paul Moore p.f.moore at gmail.com
Thu Jan 9 14:24:53 CET 2014

On 9 January 2014 13:00, Kristján Valur Jónsson <kristjan at ccpgames.com> wrote:
>> You don't say what problems, but I assume encoding/decoding errors. So the
>> files apparently weren't in the system encoding. OK, at that point I'd
>> probably say to heck with it and use latin-1. Assuming I was sure that (a) I'd
>> never hit a non-ascii compatible file (e.g., UTF16) and
>> (b) I didn't have a decent means of knowing the encoding.
> Right.  But even latin-1, or better, cp1252 (on Windows) does not solve it because these have undefined
> code points.  So you need 'surrogateescape' error handling as well.  Something that I didn't know at
> the time, having just come from Python 2 and knowing its Unicode model well.
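(For anyone following along, here is a minimal sketch of what the 'surrogateescape' handler buys you with a codec like cp1252 that really does have undefined code points — the byte values used are just illustrative:)

```python
# cp1252 leaves a few byte values undefined (0x81, 0x8d, 0x8f, 0x90, 0x9d),
# so a strict decode of arbitrary binary data can fail:
data = b'hello \x81 world'
try:
    data.decode('cp1252')
except UnicodeDecodeError as exc:
    print('strict decode failed:', exc.reason)

# With surrogateescape, the undecodable byte is smuggled through as a
# lone surrogate (here U+DC81) and restored exactly on re-encoding:
text = data.decode('cp1252', errors='surrogateescape')
assert text == 'hello \udc81 world'
assert text.encode('cp1252', errors='surrogateescape') == data
```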

>>> bin = bytes(range(256))
>>> bin
b'\x00\x01\x02\x03\x04\x05\x06\x07\x08\t\n\x0b\x0c\r\x0e\x0f\x10\x11\x12\x13\x14\x15\x16\x17\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f !"#$%&\'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~\x7f\x80\x81\x82\x83\x84\x85\x86\x87\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f\x90\x91\x92\x93\x94\x95\x96\x97\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7\xa8\xa9\xaa\xab\xac\xad\xae\xaf\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7\xe8\xe9\xea\xeb\xec\xed\xee\xef\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff'
>>> bin.decode('latin-1')
'\x00\x01\x02\x03\x04\x05\x06\x07\x08\t\n\x0b\x0c\r\x0e\x0f\x10\x11\x12\x13\x14\x15\x16\x17\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f !"#$%&\'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~\x7f\x80\x81\x82\x83\x84\x85\x86\x87\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f\x90\x91\x92\x93\x94\x95\x96\x97\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f\xa0¡¢£¤¥¦§¨©ª«¬\xad®¯°±²³´µ¶·¸¹º»¼½¾¿ÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖ×ØÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõö÷øùúûüýþÿ'

No undefined bytes there. If you mean that latin-1 can't encode all of
the Unicode code points, then how did those code points get in there?
Presumably you put them in, and so you're not just playing with the
ASCII text parts. And you *do* need to understand encodings.
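(To make the asymmetry concrete: latin-1 decodes *any* byte sequence, because byte N maps to code point N, but it can only encode code points below U+0100 — so non-Latin-1 text only gets into your data if you put it there. A small illustration:)

```python
# Decoding can never fail: every byte value has a latin-1 meaning.
assert bytes(range(256)).decode('latin-1') == ''.join(chr(n) for n in range(256))

# Encoding fails as soon as a code point >= U+0100 appears:
try:
    'price: \u20ac10'.encode('latin-1')   # U+20AC, the euro sign
except UnicodeEncodeError:
    print('latin-1 cannot encode U+20AC')
```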

>> One thing that genuinely is difficult is that because disk files don't have any
>> out-of-band data defining their encoding, it *can* be hard to know what
>> encoding to use in an environment where more than one encoding is
>> common. But this isn't really a Python issue - as I say, I've hit it with GNU
>> tools, and I've had to explain the issue to colleagues using Java on many
>> occasions. The key difference is that with grep, people blame the file,
>> whereas with Python people blame the language :-) (Of course, with Java,
>> people expect this sort of problem so they blame the perverseness of the
>> universe as a whole... ;-))
> Which reminds me, can Python3 read text files with BOM automatically yet?

If by "automatically" you mean "reads the BOM and chooses an
appropriate encoding based on it" then I don't know, but I suspect
not. But unless you're worried about 2-byte encodings (see! you need
to understand encodings again!) latin-1 will still work.
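(One partial answer: the stdlib does ship a 'utf-8-sig' codec that consumes a UTF-8 BOM on reading and writes one on output — though it won't sniff UTF-16 for you. A quick sketch:)

```python
import codecs

# Simulate a file written by a BOM-emitting Windows editor:
raw = codecs.BOM_UTF8 + 'hello'.encode('utf-8')

# Plain utf-8 leaves the BOM in your text as a stray U+FEFF...
assert raw.decode('utf-8') == '\ufeffhello'

# ...while utf-8-sig strips it:
assert raw.decode('utf-8-sig') == 'hello'
```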

It sounds to me like what you *really* want is something that
autodetects encodings on Windows in the same sort of way as other
Windows tools like Notepad do. That's a fair thing to want, but no,
Python doesn't provide it (nor did Python 2). I suspect that it would
be possible to write a codec to do this, though. Maybe there's even
one on PyPI.
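(The Notepad-style BOM sniff, at least, is a few lines of stdlib code — a sketch, not a general charset detector, and the helper name is mine:)

```python
import codecs

# Hypothetical helper: pick an encoding from a leading BOM, falling back
# to a caller-supplied default when no BOM is present.
def sniff_encoding(raw: bytes, default: str = 'utf-8') -> str:
    boms = [
        (codecs.BOM_UTF8, 'utf-8-sig'),
        (codecs.BOM_UTF32_LE, 'utf-32'),   # 4-byte BOMs must be tested
        (codecs.BOM_UTF32_BE, 'utf-32'),   # before the 2-byte UTF-16 ones
        (codecs.BOM_UTF16_LE, 'utf-16'),
        (codecs.BOM_UTF16_BE, 'utf-16'),
    ]
    for bom, name in boms:
        if raw.startswith(bom):
            return name
    return default

data = codecs.BOM_UTF16_LE + 'hello'.encode('utf-16-le')
assert sniff_encoding(data) == 'utf-16'
assert data.decode(sniff_encoding(data)) == 'hello'
```

For files with no BOM at all you are back to heuristics, which is where a third-party detector from PyPI would come in.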

