Re: [Python-Dev] Re: PendingDeprecationWarning
Having both repr AND the peculiar `...` operator is another one of those things which I find _really_ hard to explain without handwaving and without sounding negative about Python, a language about which I'm actually enthusiastic.
Give yourself a bit of credit, Alex. Surely you can talk marketing-speak if you want to! FWIW, I consider `...` a historic wart and a failed experiment (even though I use it frequently myself).

It appeared in a different form in ABC, where you could use `...` inside a string literal to do variable substitution (expression substitution, actually). I think I found it too hard to implement that in the parser, so I decided that `...` would be a separate construct that you could embed in a string using regular string concatenation. But that means that the equivalent of "The sum of $a and $b is $c" would have to be written as "The sum of " + `a` + " and " + `b` + " is " + `c`, which contains way too much punctuation. The other killer is that it uses repr() rather than str(). Python 0.0 only had `...`; it had neither str() nor repr().
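[Editor's sketch: in Python 2, backquotes around an expression meant repr() of that expression. The contrast Guido describes can be illustrated with the repr() call itself, so it also runs on Python 3; the variable names are made up for the example.]

```python
a, b, c = 2, 3, 5

# The backquote style Guido describes: repr() plus string concatenation.
# In Python 2 this could literally be written
#   "The sum of " + `a` + " and " + `b` + " is " + `c`
msg = "The sum of " + repr(a) + " and " + repr(b) + " is " + repr(c)

# The repr()-vs-str() difference only shows up on non-trivial objects:
s = "hi\n"
# str(s) keeps the raw newline; repr(s) escapes it and adds quotes.
```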
I'd rather lose the `...` (via pendingsilentdeprecation or whatever).
+0
As to repr itself, I'd be +0 on taking it out of the builtins, but that's hardly a major issue, nor of course a realistic prospect.
Why get rid of repr()? It's very useful. In error messages I often want to show an object relevant to the message, but if that object is (or may be) a string, I don't want newlines and other control characters in the string to mess up the formatting of the error message. repr() is perfect for that.
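[Editor's sketch of the error-message point, with a hypothetical filename; modern Python, but the Python 2 backquote form would behave the same:]

```python
filename = "data\n.txt"   # a string with an embedded control character

# Interpolating the raw string lets the newline break the message in two:
broken = "cannot open " + filename

# repr() escapes control characters and quotes the value, so the
# message stays on one line and the string's boundaries are visible:
clean = "cannot open " + repr(filename)
```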
str is obviously quite a different kettle of fish -- it's a TYPE and thus it cannot be substituted by whatever operator. It would also be the natural way to put a format-with-whatever-number-base functionality (2-arg str, just like parsing with arbitrary base is 2-arg int -- perfect!).
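[Editor's sketch: parsing with an explicit base already exists as 2-argument int(); the formatting direction Alex proposes does not. The `to_base` helper below is hypothetical, not a real builtin.]

```python
def to_base(n, base):
    """Format a non-negative integer in the given base (2..36) --
    a hypothetical inverse of the 2-argument int()."""
    if n == 0:
        return "0"
    digits = "0123456789abcdefghijklmnopqrstuvwxyz"
    out = []
    while n:
        n, r = divmod(n, base)
        out.append(digits[r])
    return "".join(reversed(out))

print(int("2010", 3))   # parsing with an arbitrary base is easy today -> 57
print(to_base(57, 3))   # the output direction is what's missing -> "2010"
```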
How useful is formatting numbers with arbitrary bases? I think decimal and hex cover all current use cases; even octal is mostly historic. (I still read octal more quickly than hex, but that's showing my age more than anything else -- I grew up around a CDC mainframe that dumped in octal.) --Guido van Rossum (home page: http://www.python.org/~guido/)
On Wednesday 29 May 2002 09:42 pm, Guido van Rossum wrote:
Having both repr AND the peculiar `...` operator is another one of those things which I find _really_ hard to explain without handwaving and without sounding negative about Python, a language about which I'm actually enthusiastic.
Give yourself a bit of credit, Alex. Surely you can talk marketing-speak if you want to!
I'm pretty good at marketing _stricto sensu_, but not at obfuscation (not a deliberate one, I mean!-).
As to repr itself, I'd be +0 on taking it out of the builtins, but that's hardly a major issue, nor of course a realistic prospect.
Why get rid of repr()? It's very useful. In error messages I often want to show an object relevant to the message, but if that object is (or may be) a string, I don't want newlines and other control characters in the string to mess up the formatting of the error message. repr() is perfect for that.
Actually, in error messages I most often want to show objects relevant to the message AND other strings or data too, so I almost invariably end up using the % operator to format the message string. Thus, I use the %r formatting specifier in the format string, not a separate call to repr(), of course. The 'representation' _functionality_ is indeed precious, although often it's better supplied by MODULE repr (if I have a string in error to show, I don't want to show an UNLIMITED amount of characters from it -- module repr helps by limiting the amount of data shown). I don't particularly care to have that very nice functionality made available in three ways -- `...`, repr, and %r -- I'd rather have just one. But hey, I'll settle for two:-).
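[Editor's sketch of the two spellings Alex settles for, plus the "module repr" he praises, which survives as `reprlib` in Python 3:]

```python
import reprlib

# %r calls repr() for you inside a format string:
msg = "bad value: %r" % "a\nb"

# reprlib.repr() limits how much of an object is shown
# (for strings, roughly 30 characters by default):
short = reprlib.repr("x" * 10_000)
```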
How useful is formatting numbers with arbitrary bases? I think
Roughly as useful as _parsing_ numbers in arbitrary bases, as offered by 2-argument int() -- i.e., not very. It just DOES feel a little weird to have it trivially easy to parse numbers in strange bases but no corresponding ease in _output_.
decimal and hex cover all current use cases; even octal is mostly
Almost all, yes. I've found myself using binary and (once) ternary, but mostly in tricky ways rather than as plain I/O.
historic. (I still read octal more quickly than hex, but that's showing my age more than anything else -- I grew up around a CDC mainframe that dumped in octal.)
We're roughly the same age, I think, and I grew up with DEC & HP minis, CDC mainframes, and Intel and Zilog micros, which had _strong_ octal bias (reading 8080 machine-code dumps in octal was easy -- the typical 1-byte commands had fields of 2-3-3 bits...). The difference is that then I moved to IBM, where I was thoroughly indoctrinated in the beauty of hex (try reading a _370_ machine-code dump in octal...!-). Even IBM's main scripting language *rhymed* with 'hex'...!-) Alex
[Guido van Rossum]
FWIW, I consider `...` a historic wart and a failed experiment (even though I use it frequently myself).
Hey, hey! Glad to hear you do not like it so much yourself :-). I keep telling people around me that `print' and backquotes are merely debugging devices that we should not really keep in production code. Nobody speaks about `input()', of course. :-)
I grew up around a CDC mainframe that dumped in octal.)
Hi there! I could probably debug an octal dump even today; the CPU codes are rather easy to remember :-). Many members of the PDP series also favoured octal. Wasn't it the Cray that used a mix of hexadecimal and octal in its dumps (or am I mixing it up with something else)? But this is history. I would prefer decimal everywhere nowadays. Too bad that Unicode pushed so strongly for hexadecimal; that is a bit anachronistic. -- François Pinard http://www.iro.umontreal.ca/~pinard
[François Pinard]
... Hi there! I could probably debug an octal dump even today; the CPU codes are rather easy to remember :-). Many members of the PDP series also favoured octal. Wasn't it the Cray that used a mix of hexadecimal and octal in its dumps (or am I mixing it up with something else)?
The early Cray software used only octal, since everyone there came from CDC and loved octal from 60-bit words, 18-bit address registers, and 6-bit characters (http://www.cwi.nl/~dik/english/codes/intern.html). Octal proved surprisingly pleasant for 64-bit words too! It left the sign bit off by itself in the 22nd octal digit, and it was said that Seymour made the exponent field in Cray floats 15 bits wide so that it would be easy to read off from octal dumps too.

Octal was so deeply ingrained in Cray culture that a coworker once filled out her timesheet in octal, 10 hours per day for her 2-week vacation, summing to 120 hours. Our boss signed off on it because it looked fine to him. This is the same boss who loved to tell the story of taking his family out for a drive and excitedly exclaiming "Hey, kids! Look!! The odometer is about to flip over to 40000!". Of course it read 37777 at the time, and when it flipped to 37778 "they looked at me funny, and my family life was never the same again".

Then Cray hired a bunch of young crybabies (like me), who -- with some justification -- pointed out that octal dumps were really hard to scan for character data, given that Cray had moved to 8-bit characters.
But this is history.
It doesn't have to be. Unicode surely has nothing going for it over CDC Display Code <wink>.
I would prefer decimal everywhere nowadays. Too bad that Unicode pushed so strongly for hexadecimal; that is a bit anachronistic.
I suggested to Guido today that we deprecate decimal literals in Python, in favor of octal everywhere. A killer advantage is that every binary floating-point number can be printed exactly with a few dozen octal digits, and that should squash a lot of newbie complaints about confusing floating-point rounding errors. it's-all-about-doing-what's-best-for-the-children-ly y'rs - tim
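[Editor's note: the joke rests on a true fact -- every IEEE-754 double is a binary fraction, and since 8 is a power of 2, its octal expansion terminates. A quick sketch checking this for 0.1; the helper name is made up for the example.]

```python
from fractions import Fraction

def octal_fraction(x, max_digits=64):
    """Exact octal expansion of a float in [0, 1); terminates for any double."""
    rem = Fraction(x)           # the exact value the double actually stores
    digits = []
    while rem and len(digits) < max_digits:
        rem *= 8                # peel off one octal digit at a time
        d, rem = divmod(rem, 1)
        digits.append(str(int(d)))
    return "0." + "".join(digits)

# 0.1 never terminates when printed exactly in decimal's usual short forms,
# but its exact octal expansion is finite (19 digits):
oct_tenth = octal_fraction(0.1)
```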
[Tim Peters]
Then Cray hired a bunch of young crybabies (like me), who-- with some justification --pointed out that octal dumps were really hard to scan for character data, given that Cray had moved to 8-bit characters.
Seymour Cray once told that story himself. He hired many young engineers (hardware and software) and instructed them in his own designs with passion, sharing his findings and choices with great enthusiasm. But Seymour noticed that they listened with rather dull eyes and no special pleasure: what was discovery for him was simply part of their curriculum, or almost. He said something like: "Through their questions, they were criticising our work or raising suggestions and -- damned! -- often they were _right_!". Maybe he was speaking about you, Tim, who knows? :-) -- François Pinard http://www.iro.umontreal.ca/~pinard
participants (4)

- Alex Martelli
- Guido van Rossum
- pinard@iro.umontreal.ca
- Tim Peters