[Python-Dev] Why can't I encode/decode base64 without importing a module?
ncoghlan at gmail.com
Tue Apr 23 16:31:04 CEST 2013
On Wed, Apr 24, 2013 at 12:16 AM, R. David Murray <rdmurray at bitdance.com> wrote:
> On Tue, 23 Apr 2013 22:29:33 +0900, "Stephen J. Turnbull" <stephen at xemacs.org> wrote:
>> R. David Murray writes:
>> > You transform *into* the encoding, and untransform *out* of the
>> > encoding. Do you have an example where that would be ambiguous?
>> In the bytes-to-bytes case, any pair of character encodings (eg, UTF-8
>> and ISO-8859-15) would do. Or how about in text, ReST to HTML?
> If I write:
> that would indeed be ambiguous, but only because I haven't named the
> source encoding of the bytestring. So the above is obviously
> nonsense, and the easiest "fix" is to have the things that are currently
> bytes-to-text or text-to-bytes character set transformations *only*
> work with encode/decode, and not transform/untransform.
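To make the ambiguity concrete: a bytes-to-bytes transcoding between two character sets inherently needs *both* encoding names, so a hypothetical one-argument `bytes.transform('utf-8')` could never know what the source bytes are encoded as. A minimal sketch (the variable names are illustrative, not from the thread):

```python
# Transcoding latin-1 bytes to utf-8 bytes: the source encoding must be
# named explicitly when decoding, and the target when re-encoding.
latin1_data = "café".encode("latin-1")          # b'caf\xe9'
utf8_data = latin1_data.decode("latin-1").encode("utf-8")
# utf8_data is b'caf\xc3\xa9' -- a different byte sequence for the same text.
```

A one-argument transform API would have to guess one of the two encodings, which is exactly the nonsense being objected to above.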
And that's where it all falls down - to make that work, you need to
engineer a complex system into the codecs module to say "this codec
can be used with that API, but not with this one". I designed such a
beast in http://bugs.python.org/issue7475 and I now think it's a *bad idea*.
By contrast, the convenience function approach dispenses with all
that, and simply says:
1. If you just want to deal with text encodings, use str.encode (which
always produces bytes), along with bytes.decode and bytearray.decode
(which always produce str)
2. If you want to use arbitrary codecs without any additional type
constraints, do "from codecs import encode, decode"
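The two cases above can be sketched as follows (using the `base64` codec as the arbitrary-codec example; note that on Python versions where the short codec aliases are available, `codecs.encode`/`codecs.decode` handle bytes-to-bytes codecs with no type constraints):

```python
import codecs

# 1. Text encodings via the methods: str.encode always produces bytes,
#    bytes.decode always produces str.
raw = "hello".encode("utf-8")        # b'hello'
text = raw.decode("utf-8")           # 'hello'

# 2. Arbitrary codecs via the module-level functions: here a
#    bytes-to-bytes codec that takes bytes in and gives bytes out.
encoded = codecs.encode(b"hello", "base64")   # b'aGVsbG8=\n'
decoded = codecs.decode(encoded, "base64")    # b'hello'
```

The method-based API enforces the str/bytes type constraints statically, while the module-level functions simply pass through whatever the named codec produces.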
I think there's value in hiding the arbitrary codec support behind an
import barrier (as they definitely have potential to be an attractive
nuisance that makes it harder to grasp the nature of Unicode and text
encodings, particularly for those coming from Python 2.x), but I'm not
hugely opposed to providing them as builtins either.
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia