encode and decode builtins
ben+python at benfinney.id.au
Sun Nov 16 08:50:45 CET 2014
Garrett Berg <googberg at gmail.com> writes:
> I made the switch to python 3 about two months ago, and I have to say
> I love everything about it, *especially* the change to using only
> bytes and str (no more unicode! or... everything is unicode!) As
> someone who works with embedded devices, it is great to know what data
> I am working with.
Thanks! It is great to hear from people directly benefiting from this change.
> However, there are times that I do not care what data I am working
> with, and I find myself writing something like:
> if isinstance(data, bytes): data = data.decode()
Why are you in a position where ‘data’ is not known to be bytes? If you
want ‘unicode’ objects, isn't the API guaranteeing to provide them?
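The built-in ‘open’ is the clearest case of such a guarantee: the mode you pass determines the type you get back, so there is nothing to test afterwards. A minimal sketch (the file name and contents are invented for illustration):

```python
import os
import tempfile

# Write a small sample file so the example is self-contained.
path = os.path.join(tempfile.mkdtemp(), "sample.txt")
with open(path, "w", encoding="utf-8") as f:
    f.write("héllo")

# Text mode: the API guarantees str.
with open(path, "r", encoding="utf-8") as f:
    text = f.read()

# Binary mode: the API guarantees bytes.
with open(path, "rb") as f:
    blob = f.read()

assert isinstance(text, str)
assert isinstance(blob, bytes)
```

Because the guarantee is in the mode, no ‘isinstance’ check is ever needed on the result.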
> This is tedious and breaks the pythonic method of not caring about
> what your input is.
I wouldn't call that Pythonic. Rather, in the face of ambiguity (“is
this text or bytes?”), Pythonic code refuses the temptation to guess:
you need to clarify what you have as early as possible in the process.
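Concretely, that means decoding once, in a single boundary function, rather than scattering ‘isinstance’ checks through the program. A minimal sketch of what I mean (the function name ‘read_message’ and the UTF-8 assumption are mine, for illustration):

```python
def read_message(raw):
    """Normalise input to text at the program's boundary.

    ‘raw’ may arrive as bytes (say, from a socket or a file opened
    in binary mode) or already as str. Decode it here, once, so
    everything past this point deals only in str.
    """
    if isinstance(raw, bytes):
        # Assumed encoding for this sketch; a real boundary would
        # know (or negotiate) the actual encoding of its input.
        return raw.decode("utf-8")
    return raw
```

Past that boundary, the bytes-or-text ambiguity no longer exists, which is exactly the clarity the OP found valuable in Python 3.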
> If I expect that my input can always be decoded into valid data, then
> why do I have to write this?
I don't know. Why do you have to?
\ “God was invented to explain mystery. God is always invented to |
`\ explain those things that you do not understand.” —Richard P. |
_o__) Feynman, 1988 |