On Sep 16, 2010, at 4:51 PM, R. David Murray wrote:

> Given a message, there are many times you want to serialize it as text
> (for example, for presentation in a UI). You could provide alternate
> serialization methods to get text out on demand... but then what if
> someone wants to push that text representation back into email to
> rebuild a model of the message?

You tell them "too bad, make some bytes out of that text." Leave it up to the application. Period, the end, it's not the library's job. If you pushed the text out to a 'view message source' UI representation, then the vicissitudes of the system clipboard and other encoding and decoding steps may corrupt it in inscrutable ways. You can't fix it. Don't try.

> So now we have both a bytes parser and a string parser.

Why do so many messages on this subject take this for granted? It's wrong for the email module just as it's wrong for every other package.

There are plenty of other (better) ways to deal with this problem. Let the application decide how to fudge the encoding of the characters back into bytes that can be parsed. "In the face of ambiguity, refuse the temptation to guess" and all that. The application has a much better idea of what's going on than the library here, so let it make the encoding decisions.

Put another way, there's nothing wrong with having a text parser, as long as it just encodes the text according to some known encoding and then parses the bytes :).
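To make that concrete, here's a minimal sketch of what I mean. The names are placeholders (I'm assuming some bytes-level entry point along the lines of email.message_from_bytes); the only point is that the text path is a thin veneer over the bytes path:

    import email

    def parse_text(text, encoding, parse_bytes=email.message_from_bytes):
        # The application names the encoding explicitly; the library
        # never guesses.  All the "text parser" does is turn the
        # characters back into bytes and hand them to the one true
        # bytes parser.
        return parse_bytes(text.encode(encoding))

    msg = parse_text("Subject: hi\n\nhello\n", "utf-8")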
If you<br class="">start with bytes input, you end up with a BytesMessage object.<br class="">If you start with string input to the parser, you end up with a<br class="">StringMessage.</blockquote><br></font></div><div class="">That may be a handy way to deal with some grotty internal implementation details, but having a 'decode()' method is broken. The thing I care about, as a consumer of this API, is that there is a clearly defined "Message" interface, which gives me a uniform-looking place where I can ask for either characters (if I'm displaying them to the user) or bytes (if I'm putting them on the wire). I don't particularly care where those bytes came from. I don't care what decoding tricks were necessary to produce the characters.</div><div class=""><br></div><div class="">Now, it may be worthwhile to have specific normalization / debrokenifying methods which deal with specific types of corrupt data from the wire; encoding-guessing, replacement-character insertion or whatever else are fine things to try. It may also be helpful to keep around a list of errors in the message, for inspection. But as we know, there are lots of ways that MIME data can go bad other than encoding, so that's just one variety of error that we might want to keep around.</div><div class=""><br></div><div class="">(Looking at later messages as I'm about to post this, I think this all sounds pretty similar to Antoine's suggestions, with respect to keeping the implementation within a single class, and not having BytesMessage/UnicodeMessage at the same abstraction level.)</div></div></body></html>
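For whatever it's worth, here's the kind of single-class shape I have in mind. Every name in it is made up for illustration, and the defects handling is deliberately simplistic:

    class Message:
        """One message type, whether the input arrived as bytes or text."""

        def __init__(self, raw_bytes, encoding="utf-8", defects=None):
            self._raw = raw_bytes
            self._encoding = encoding
            # Every problem the parser noticed -- bad MIME boundaries,
            # undecodable headers, whatever -- is recorded here instead
            # of raised, so the application can inspect it afterwards.
            self.defects = list(defects or [])

        def as_bytes(self):
            # For the wire: the bytes we know how to serialize.
            return self._raw

        def as_text(self):
            # For the UI: decode on demand, substituting replacement
            # characters rather than blowing up on corrupt input.
            return self._raw.decode(self._encoding, errors="replace")

Encoding-guessing and other de-brokenifying would then be explicit operations layered on top of this, and an application that cares can look at msg.defects to see what the parser had to paper over.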