PEP 292, Simpler String Substitutions
"I'd rather have a notation that's less error-prone than a better way to check for errors. (Not that PyChecker 2 isn't a great idea. :-)" Percent notation "%s" notation already exists for strings. Backquote notation is already in python, though I think, little used. The $ notation reeks of obscure languages such as perl and shell. Why not simply add backquote notation to python strings. I read in a recent email from Timbot, I think, that the backquote notation was originally intended for string interpolation too. name = "guido" country = "the netherlands" height = 1.92 "`name` is from `country`".sub() -> "guido is from the netherlands" "`name.capitalize()` is from `country`" -> "Guido is from the netherlands" "`name` is %`height`4.1f meters tall".sub() -> "guido is 1.9 meters tall" "`name.capitalize()` can jump `height*1.7` meters".sub() -> "guido can jump 3.264 meters" You could probably also compile these interpolation strings as well. Another thought: One of the main problems with the "%(name)4.2f" notation is that the format comes after the variable name. Is easy to forget adding the actual format specifier in after the name. Why not alter the notation to allow the format specifier to come before the name part. "%4.2f(height)" I think would be a whole lot less error prone, and would allow for the format specifier to default to "s" where omitted. "%(height)" is also less error prone, though it is ambiguous in the current scheme. Ive seen a nice class which evaluates the string used in the name part. class itpl: def __getitem__(self, s): return eval(s, globals())
The $ notation reeks of obscure languages such as perl and shell.
Sigh. Please grow up.
Why not simply add backquote notation to python strings. I read in a recent email from Timbot, I think, that the backquote notation was originally intended for string interpolation too.
Unfortunately, backquotes are often hard to see, or mistaken for forward quotes. I think that disqualifies it. --Guido van Rossum (home page: http://www.python.org/~guido/)
From: "Damien Morton"
"`name.capitalize()` can jump `height*1.7` meters".sub() -> "guido can jump 3.264 meters"
I love this suggestion. It's the sort of thing you can't do in C++ ;-) I suspect the arguments against will run to efficiency and complexity, since you need to compile the backquoted expressions (in some context). Hmm, here they are... Nope, I'm wrong -Dave
I love this suggestion. It's the sort of thing you can't do in C++ ;-) I suspect the arguments against will run to efficiency and complexity, since you need to compile the backquoted expressions (in some context).
Actually, I had planned a secret feature that skips matching nested {...} inside ${...}, so that you could write a magic dict whose keys were eval()'ed in the caller's context. The %(...) parser does this (skipping nested (...)) because someone wanted to do that. --Guido van Rossum (home page: http://www.python.org/~guido/)
From: "Guido van Rossum"
I love this suggestion. It's the sort of thing you can't do in C++ ;-) I suspect the arguments against will run to efficiency and complexity, since you need to compile the backquoted expressions (in some context).
Actually, I had planned a secret feature that skips matching nested {...} inside ${...}, so that you could write a magic dict whose keys were eval()'ed in the caller's context. The %(...) parser does this (skipping nested (...)) because someone wanted to do that.
Ooh, magic and secrets! Maybe a little too magical for me to understand easily. Is the stuff between ${...} allowed to be any valid expression? harry-potter's-got-nothing-on-you-ly y'rs, dave
David wrote:
Ooh, magic and secrets! Maybe a little too magical for me to understand easily. Is the stuff between ${...} allowed to be any valid expression?
not according to the PEP, but nothing stops you from using a magic dictionary:

    class magic_dict:
        def __getitem__(self, value):
            return str(eval(value))

    d = magic_dict()
    print "%(__import__('os').system('echo hello'))s" % d
    print replacevars("${__import__('os').system('echo hello')}", d)

    # for extra fun, replace 'echo hello' with 'rm -rf ~'

</F>
From: "Damien Morton"
Why not simply add backquote notation to python strings. I read in a recent email from Timbot, I think, that the backquote notation was originally intended for string interpolation too.
"`name` is from `country`".sub() "`name.capitalize()` is from `country "`name` is %`height`4.1f meters tall".sub() "`name.capitalize()` can jump `height*1.7` meters".sub()
I'll bet this style would be brutal to read with the accented letters in French. Raymond Hettinger
On Wed, Jun 19, 2002 at 11:35:32PM -0400, Damien Morton wrote:
Why not simply add backquote notation to python strings. I read in a recent email from Timbot, I think, that the backquote notation was originally intended for string interpolation too.
See http://tothink.com/python/embedpp Oren
How come you never submitted this PEP to the PEPmeister? I can't comment on what I don't know. It certainly comes closest to the original ABC feature. (The main problem with `...` is that many people can't distinguish between ` and ', as user testing has shown.) --Guido van Rossum (home page: http://www.python.org/~guido/)
guido wrote:
How come you never submitted this PEP to the PEPmeister?
iirc, that's because Oren did the

    why would

        e"X=`x`, Y=`calc_y(x)`."

    be a vast improvement over:

        e("X=", x, ", Y=", calc_y(x), ".")

test, and his answer was not "I18N" (for obvious reasons ;-) (but I think we called the function "I" at that time)

</F>
On Thu, Jun 20, 2002 at 01:30:47PM -0400, Guido van Rossum wrote:
How come you never submitted this PEP to the PEPmeister? I can't comment on what I don't know. It certainly comes closest to the original ABC feature. (The main problem with `...` is that many people can't distinguish between ` and ', as user testing has shown.)
I guess I got a bit discouraged by the response on python-list back then. Now I know better :-) Oren
[Guido]
... (The main problem with `...` is that many people can't distinguish between ` and ', as user testing has shown.)
Including Tim testing, which is dear to my heart. The editor I usually use allows defining styles (font, size, color, etc) for syntactic elements, and for Python files I set it up so that the backtick has its own style, 1.5x bigger than all other characters. This makes it very easy to see the backticks as such, but mostly(!) because it forces extra vertical space above a line containing one. That's more emphasis than even a Tim needs. OTOH, I have no trouble seeing lowercase "x" <wink>. xabcx==repr(abc)-ly y'rs - tim
On Thu, 20 Jun 2002, Oren Tirosh wrote:
Hi Oren,

Your proposal brings up some valid concerns with PEP 215:

1. run-time vs. compile-time parsing
2. how to decide what's an expression
3. balanced quoting instead of $

PEP 215 actually agrees with you on point #1. That is, the intent (though poorly explained) was that the interpolated strings would be turned into bytecode by the compiler. That is why the PEP insists on having the interpolated expressions in the literal itself -- they can be taken apart at compile time.

However, i don't necessarily agree with PEP 215. (I mentioned this once before, but it might not hurt to reiterate that i didn't write the PEP because i desperately wanted string interpolation. I wrote it because i wanted to try to get one local optimum written down in a PEP, so there would be something for discussion.)

Using compile-time parsing, as in PEP 215, has the advantage that it avoids any possible security problems; but it also eliminates the possibility of using this for internationalization. I see this as the key tension in the string interpolation issue (aside from all the syntax stuff -- which is naturally controversial).

-- ?!ng

"Computers are useless. They can only give you answers." -- Pablo Picasso
Using compile-time parsing, as in PEP 215, has the advantage that it avoids any possible security problems;
It is also the only way to properly support nested scopes. It would be confusing and inconsistent if you can use a variable from a nested scope in an expression but not in a "string display" (which I think is a cute name for strings with embedded expressions).
but it also eliminates the possibility of using this for internationalization. I see this as the key tension in the string interpolation issue (aside from all the syntax stuff -- which is naturally controversial).
Yes, I believe that Barry's main purpose is i18n. But I think i18n should not be approached in a cavalier way. If you need i18n of your application, you have to be very disciplined anyway. I think collecting the variables available for interpolation in a dict and passing them explicitly to an interpolation function is the way to go here.

Also, in i18n the interpolation syntax must be usable by translators who are not necessarily programmers. I believe the $ notation with only simple variables is entirely adequate for that purpose -- and Barry can implement it in a few lines. (We just adopted this for Zope3, and while there are all sorts of open issues, $ interpolation is not one of them.) --Guido van Rossum (home page: http://www.python.org/~guido/)
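(For context: simple $-substitution of plain variable names, as described here, is essentially what PEP 292 eventually delivered as string.Template in Python 2.4. A minimal illustration in present-day Python:)

```python
from string import Template

# $name-style substitution with only simple variables -- the notation
# a non-programmer translator can safely edit.
t = Template("$name is from $country")
print(t.substitute(name="guido", country="the netherlands"))
# -> guido is from the netherlands

# safe_substitute() tolerates missing keys, handy when a translated
# template and its variable dict drift out of sync.
print(Template("$name is from $country").safe_substitute(name="guido"))
# -> guido is from $country
```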
Guido van Rossum wrote:
Using compile-time parsing, as in PEP 215, has the advantage that it avoids any possible security problems;
It is also the only way to properly support nested scopes. It would be confusing and inconsistent if you can use a variable from a nested scope in an expression but not in a "string display" (which I think is a cute name for strings with embedded expressions). ... Yes, I believe that Barry's main purpose is i18n. But I think i18n should not be approached in a cavalier way. If you need i18n of your application, you have to be very disciplined anyway. I think collecting the variables available for interpolation in a dict and passing them explicitly to an interpolation function is the way to go here.
I think that what I hear you saying is that interpolation should ideally be done at compile time for simple uses and at runtime for i18n. The compile-time version should have the ability to do full expressions (array indexes and self.members at the very least) and will have access to nested scopes. The runtime version should only work with dictionaries.

I think you also said that they should both use named parameters instead of positional parameters. And presumably, just for simplicity, they would use similar syntax, although one would be triggered at compile time and one at runtime. If "%" survives, it would be used for positional parameters instead of named parameters. Is that your current thinking on the matter?

I think we are making progress if we're coming to understand that the two different problem domains (simple scripts versus i18n) have different needs and that there is probably no one solution that fits both.

Paul Prescod
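(With hindsight, the compile-time half of this split is roughly what Python later provided as f-strings, PEP 498, Python 3.6: full expressions, parsed by the compiler, with access to nested scopes. A modern rendering of Damien's opening example:)

```python
name = "guido"
height = 1.92

# The expression and the format specifier live together in the literal,
# so the compiler can take them apart at compile time.
print(f"{name.capitalize()} can jump {height * 1.7:.3f} meters")
# -> Guido can jump 3.264 meters
```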
[Paul Prescod]
I think that what I hear you saying is that interpolation should ideally be done at a compile time for simple uses and at runtime for i18n. [...] If "%" survives, it would be used for positional parameters, instead of named parameters. [...] I think we are making progress if we're coming to understand that the two different problem domains (simple scripts versus i18n) have different needs and that there is probably no one solution that fits both.
[Moore, Paul]
The internationalisation issue is clearly important. However, it has very different characteristics insofar as the template string is (of necessity) handled at runtime, so issues of compilation and security become relevant. I'm no I18N expert, so I can't comment on details, but I *do* think it's worth separating out the I18N issues from the "simple interpolation" issues...
You know, the ultimate goal of internationalisation, for a non-English-speaking user and even programmer, is to see his/her own language all over the screen. This means from the shell, from the system libraries, from all applications, big or small: everything. For what is provided by other programmers or maintainers, this may occur sooner or later, depending on the language, the interest of the maintainer, and the development dynamic. The far-reaching hope is that it will eventually occur.

For the little things a user/programmer writes himself/herself, and this is where Python pops up, there are two ways. The simplest is to write all strings in the native language. The other way, meant to help exchange with various friends or get feedback from a wider community, is to do things properly, and internationalise even small scripts from the start. It is easy to develop such an attitude, yet currently, examples do not abound. I surely had it for a few languages, though it was rather demanding on me, at a time when `gettext' was not yet available -- and in fact, my works were used to benchmark various ideas before `gettext' was first written.

The mantra I repeated all along had two key points:

1) internationalisation will only be successful if designed to be unobtrusive, otherwise average maintainers and implementors will resist it.

2) programmer duties and translation duties are to be kept separate, so these activities can be done asynchronously from one another.[1]

I really, really think that with enough and proper care, Python could be set up so that internationalisation of Python scripts is just unobtrusive routine. There should not be one way to write Python when one does not internationalise, and another, different way when one internationalises. The full power and facilities of Python should be available at all times, unrelated to internationalisation intents. Non-English people should not have to pay a penalty, or if they do, the penalty should be minimised As Much As Possible.

Our BDFL, Guido, should favour internationalisation as a principle in the evolution of the language, that is, as more than a random negligible feature. I sincerely hope he will. For many people, internationalisation issues cannot be separated out that simply, or otherwise dismissed. We should rather learn to collaborate at properly addressing and solving them at each evolutionary step, so that Python really remains a language for everybody.

--------------------
[1] In practice, we've met those two goals only partly. For C programs, the character overhead per localised string is low -- the three characters "_()", while exceptionally _not_ obeying the GNU standard about a space before the opening parenthesis. The glue code is still small -- yet not as small as I would have wanted. I wrote the Emacs PO mode so marking strings in a C project can be done rather quickly by maintainers, and so translators can do their job alone. These are on the positive side.

But I think we failed at the level of release engineering, as the combined complexity of the Automake, Autoconf, Libtool and Gettext installation scripts is merely frightening, and very discouraging for the casual user. There were reasons behind the "releng" choices, but they would make a long story. :-) Also, people in the development allowed more fundamental unneeded complexities, which had the sad effect of anchoring the original plans to the point of being stuck. On the other hand, people not understanding where we were aiming are happily unaware of what we are missing. (Maintainers may become incredibly stubborn. :-) Eh, that's life... Sigh!

Python can do better on _all_ fronts. By the way, I hope that `distutils' can be adapted to address internationalisation-related release engineering difficulties, so that these merely vanish in practice for Python lovers.
We could also have other standard helper tools for non-installed scripts. -- François Pinard http://www.iro.umontreal.ca/~pinard
pinard@iro.umontreal.ca:
I really, really think that with enough and proper care, Python could be set so internationalisation of Python scripts is just unobtrusive routine. There should not be one way to write Python when one does not internationalise, and another different way to use it when one internationalises.
As long as you have a Turing-complete programming language available for constructing strings, there will always be ways to write code that defies any straightforward means of internationalisation. Or in other words, if internationalisation is a goal, you'll always have to keep it in mind when coding, one way or another. Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | A citizen of NewZealandCorp, a | Christchurch, New Zealand | wholly-owned subsidiary of USA Inc. | greg@cosc.canterbury.ac.nz +--------------------------------------+
[Alex Martelli]
If %(name)s is to be deprecated moving towards Python-3000 (surely it can't be _removed_ before then), $-formatting needs a very rich feature set; otherwise it can't _replace_ %-formatting. [...] The "transition" period will thus inevitably offer different ways to perform the same tasks [...] the old way and the new way MUST both work together for a good while to allow migration.
[Moore, Paul]
I feel that the existing % formatting operator cannot realistically be removed.
I too, like Alex and Paul, have a hard time believing that `%' will effectively fade out in favour of `$'. As a few people tried to stress (Alex did very well with his anecdote), changes in Python are welcome when they add real new capabilities, but they are less welcome when they merely add diversity over old substance: the language is then hurt each time, losing bits of simplicity (and even legibility, through the development of Python subsets in user habits). Each individual loss may seem insignificant when discussed separately[1], but when the pace of change is high, the losses accumulate, especially if the cleanup does not occur.

This is why any change in current string interpolation should be crafted so it fits _very_ naturally with what already exists, and does not look like another feature patched over other features. A forever "transition" period between two interpolation paradigms, foreign to one another, might give exactly that bad impression.

--------------------
[1] This is one of the drawbacks of the PEP system. By concentrating on individual features, we lose the vision of all features taken together. Only Guido has a global vision. :-)

-- François Pinard http://www.iro.umontreal.ca/~pinard
[Paul]
I think that what I hear you saying is that interpolation should ideally be done at a compile time for simple uses and at runtime for i18n. The compile-time version should have the ability to do full expressions (array indexes and self.members at the very least) and will have access to nested scopes. The runtime version should only work with dictionaries.
Yes.
I think you also said that they should both use named parameters instead of positional parameters. And presumably just for simplicity they would use similar syntax although one would be triggered at compile time and one at runtime.
Yes.
If "%" survives, it would be used for positional parameters, instead of named parameters.
Yes (in Python 3). I can also see the viewpoint that the printf syntax should be abandoned entirely (in Python 3), in favor of a different (and probably more verbose) way to spell things like "%6.3f" or "%04x". Although there may be application areas (like producing output from numeric programs) where the formatting options are very convenient. In that case Python 3 could retain the positional % syntax but drop the by-name syntax. I'm undecided on this.
Is that your current thinking on the matter?
Yes. But based on a lot of feedback (e.g. Alex's anecdote) I'm inclined to let the matter rest rather than rush to add a new language feature.
I think we are making progress if we're coming to understand that the two different problem domains (simple scripts versus i18n) have different needs and that there is probably no one solution that fits both.
OTOH, there's François's position: [François]
The mantra I repeated all along had two key points:
1) internationalisation will only be successful if designed to be unobtrusive, otherwise average maintainers and implementors will resist it.
2) programmer duties and translation duties are to be kept separate, so these activities could be done asynchronously from one another.[1]
I really, really think that with enough and proper care, Python could be set so internationalisation of Python scripts is just unobtrusive routine. There should not be one way to write Python when one does not internationalise, and another different way to use it when one internationalises. The full power and facilities of Python should be available at all times, unrelated to internationalisation intents. Non-English people should not have to pay a penalty, or if they do, the penalty should be minimised As Much As Possible.
However, he fails to suggest even a glimpse of a solution that satisfies his requirements, so I'm inclined to write him off as the crank he usually is. ;-)
Our BDFL, Guido, should favour internationalisation as a principle in the evolution for the language, that is, more than a random negligible feature. I sincerely hope he will do. For many people, internationalisation issues cannot be separated out that simply, or otherwise dismissed. We should rather learn to collaborate at properly addressing and solving them at each evolutionary step, so Python really remains a language for everybody.
To the contrary, I think most users don't care about writing code that can be switched easily from one language to the next. They only care about being able to write code that prints text in their own language (and perhaps about being able to use words in their own language as identifiers). --Guido van Rossum (home page: http://www.python.org/~guido/)
On Friday 12 July 2002 12:23 pm, Guido van Rossum wrote:
(and perhaps about being able to use words in their own language as identifiers).
Beware of possible lookalike characters. I recently learned that it is possible to register a domain name with Unicode characters, and since there are indistinguishable characters in different scripts (for instance, the Cyrillic 'o' is indistinguishable from the English 'o'), this has created an interesting opportunity for domain-name exploits. It probably isn't dangerous in Python source code, but limiting the character set of identifiers to a small number of characters seems prudent.
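(The lookalike hazard is easy to demonstrate: the Latin and Cyrillic lowercase o's usually render identically, yet they are distinct characters and compare unequal.)

```python
latin = "o"          # U+006F LATIN SMALL LETTER O
cyrillic = "\u043e"  # U+043E CYRILLIC SMALL LETTER O

# Visually identical in most fonts, but different code points:
print(latin == cyrillic)
# -> False
```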
On Thursday 20 June 2002 06:48 pm, Ka-Ping Yee wrote:
On Thu, 20 Jun 2002, Oren Tirosh wrote:
Hi Oren,
Your proposal brings up some valid concerns with PEP 215:
1. run-time vs. compile-time parsing 2. how to decide what's an expression 3. balanced quoting instead of $
I like Oren's PEP as a replacement for PEP 292. But there is one major problem with his notation. I would change the "`" character to something more readable. I tried examples with "@", "$", "%", "!", and "?". My preference was "?", "@", or "$". (The choice should consider the ease of typing on international keyboards.)

The "?" seems like a good choice because the replacement expression will answer the question of what will appear in the string at that location. Here is Oren's example using the "?" to quote the expression:

    print e"X=?x?, Y=?calc_y(x)?."

The following example is provided for contrast. It has a larger text-to-variable-substitution ratio:

    p = e"""A new character prefix "e" is defined for strings. This prefix
    precedes the 'u' and 'r' prefixes, if present. Capital 'E' is also
    acceptable. Within an e-string any ?expressions? enclosed in backquotes
    are evaluated, converted to strings using the equivalent of the ?str()?
    function and embedded in-place into the e-string."""

In the larger body of text the "?" is clearly visible. I'm not so sure I like the "?" in the smaller example. It may be because the "?" looks too much like letters that can appear in a variable name. The "@" stands out a bit better than "?". This is probably because there are more pixels turned on and the character is fatter.

    print e"X=@x@, Y=@calc_y(x)@."

    p = e"""A new character prefix "e" is defined for strings. This prefix
    precedes the 'u' and 'r' prefixes, if present. Capital 'E' is also
    acceptable. Within an e-string any @expressions@ enclosed in backquotes
    are evaluated, converted to strings using the equivalent of the @str()@
    function and embedded in-place into the e-string."""

The function of the "$" would be recognizable to people migrating from other languages, but it would be used as a balanced quote, rather than as a starting character in a variable that will be substituted. (Is this character easy to type on non-US keyboards? I thought the "$" was one of the characters that is replaced on European keyboards.) If the "@" is available on international keyboards then I think it would be a better choice.

    print e"X=$x$, Y=$calc_y(x)$."

    p = e"""A new character prefix "e" is defined for strings. This prefix
    precedes the 'u' and 'r' prefixes, if present. Capital 'E' is also
    acceptable. Within an e-string any $expressions$ enclosed in backquotes
    are evaluated, converted to strings using the equivalent of the $str()$
    function and embedded in-place into the e-string."""
On Fri, Jun 21, 2002 at 11:57:50AM -0400, Michael McLay wrote:
On Thursday 20 June 2002 06:48 pm, Ka-Ping Yee wrote:
On Thu, 20 Jun 2002, Oren Tirosh wrote:
Hi Oren,
Your proposal brings up some valid concerns with PEP 215:
1. run-time vs. compile-time parsing 2. how to decide what's an expression 3. balanced quoting instead of $
I like Oren's PEP as a replacement for PEP 292. But there is one major problem with his notation. I would change the "`" character to something more readable.
Expression embedding, unlike interpolation, is done at compile time. This would make it natural to use the same prefix used for inserting other kinds of special stuff into strings at compile time - the backslash.

    print "X=\(x), Y=\(calc_y(x))."

No need for double backslash. No need for a special string prefix either, because \( currently has no meaning.

Oren
Oren Tirosh wrote:
... No need for double backslash. No need for a special string prefix either because \( currently has no meaning.
I like this idea but note that \( does have a current meaning:
"\(" '\\(' "\(" =="\\(" 1
I think this is weird but it is inherited from C... So it would take time to phase this in. First we have to warn about \( and then give people time to find instances of it and change them to \\(. Then we could introduce a new meaning for it. Paul Prescod
On Fri, Jun 21, 2002 at 12:54:53PM -0700, Paul Prescod wrote:
No need for double backslash. No need for a special string prefix either because \( currently has no meaning.
I like this idea but note that \( does have a current meaning:
"\(" '\\(' "\(" =="\\(" 1
"""Unlike Standard C, all unrecognized escape sequences are left in the string unchanged, i.e., the backslash is left in the string. (This behavior is useful when debugging: if an escape sequence is mistyped, the resulting output is more easily recognized as broken.) """ In other words, programs that rely on this beaviour are broken. Oren
[Paul Prescod]
I like this idea but note that \( does have a current meaning:
"\(" '\\(' "\(" =="\\(" 1
I think this is weird but it is inherited from C...
C89 doesn't define the effect. C99 specifically forbids this treatment, and requires a diagnostic if \( appears. Guido did this originally to make it easier to write Emacsish regexps; the later raw strings were a better solution to that problem, although 99.7% of Python newbies seem to believe that raw strings are an idiot's attempt to make it easier to embed Windows file path literals (newbies -- gotta love 'em <wink>).
On Thu, Jun 20, 2002 at 03:48:52PM -0700, Ka-Ping Yee wrote:
Using compile-time parsing, as in PEP 215, has the advantage that it avoids any possible security problems; but it also eliminates the possibility of using this for internationalization.
Compile-time parsing may eliminate the possibility of using the same mechanism for internationalization, but not the possibility of using the same syntax. A module may provide a function that interprets the same notation at runtime. The runtime version probably shouldn't support full expression embedding - just simple name substitution.
I see this as the key tension in the string interpolation issue (aside from all the syntax stuff -- which is naturally controversial).
And the security vs. ease-of-use issue. Oren
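(A sketch of such a restricted runtime substituter: names only, no expressions. The `subst` helper and its ${name} syntax are illustrative assumptions, not anything specified in the thread.)

```python
import re

def subst(template, mapping):
    # Accept only plain identifiers inside ${...}; anything more
    # complex is simply not matched, so an untrusted (e.g. translated)
    # template can never trigger code execution, unlike the
    # eval()-based magic dicts earlier in the thread.
    def repl(m):
        return str(mapping[m.group(1)])
    return re.sub(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}", repl, template)

print(subst("X=${x}, Y=${y}.", {"x": 1, "y": 2}))
# -> X=1, Y=2.
```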
[Oren Tirosh]
On Thu, Jun 20, 2002 at 03:48:52PM -0700, Ka-Ping Yee wrote:
Using compile-time parsing, as in PEP 215, has the advantage that it avoids any possible security problems; but it also eliminates the possibility of using this for internationalization.
Compile-time parsing may eliminate the possibility of using the same mechanism for internationalization, but not the possibility of using the same syntax.
Parsing must be done at some time. Maybe the solution lies in finding some way for Python to lazily delay the "compilation" of the string until after its translation (at run time), when it is known beforehand that a given string is internationalised. The `.pyc' would contain byte-code and a data slot for driving the laziness. The translation and compilation should occur only once for a particular string, of course, as the internationalised string may appear within a loop, or within a function which gets called often. In threaded contexts, if we allow for spurious re-compilations once in a long while, and with a simple bit of care, locks could be fully avoided.[1]

The good in the above approach is that people would write Python about the same way whether internationalisation is in the picture or not, and would not have to suffer the complexities of "hand" optimisation of string interpolation in an internationalised context. It would be simple for _everybody_, on the road meant to make internationalisation a breeze.

For Python to know at initial compile time whether a string is going to be internationalised or not, it has to be modified, but a positive side of this effort is that internationalisation becomes part of the language design. A possible way towards this (suggested a long while ago) could be to use, beside `eru', some `t' prefix letter asking for translation.

Two problems are still to be solved, however. First, going from `_("TEXT")' to `t"TEXT"', the translation function (`_' here) and textual domain should have proper defaults, while offering a way to override them for bigger applications needing finer control or tuning. A simple solution might lie, here, in inventing some special module attribute for that purpose. Second, some applications accept switching national language at run time, so a mechanism is needed to invalidate lazily-compiled strings when such a switch occurs. An avenue would be to use the national language code as the "done" flag in the lazy compilation process, allowing recompilation to occur on the fly, as needed.

--------------------
[1] Temporarily switching locale-related environment variables in threaded contexts may yield pretty surprising results; this is well-known already. It only stresses, in my opinion, that the design was frozen without all the vision it would have taken. Many internationalisation devices implement half-hearted solutions for half-thought problems. I'm not at all asserting that it is possible to foresee everything in advance. Yet, we could be more productive by _not_ slavishly sticking to actual "standards".

-- François Pinard http://www.iro.umontreal.ca/~pinard
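(One way to sketch the lazy translate-then-compile idea in present-day Python: cache the parsed template per (message, language) pair, so the language code itself acts as the "done" flag and a language switch simply selects a different cache entry. The catalog and function names here are hypothetical; a real application would use gettext or similar.)

```python
from functools import lru_cache
from string import Template

# Hypothetical message catalog keyed by (msgid, language).
CATALOG = {("Hello, ${name}!", "fr"): "Bonjour, ${name} !"}

@lru_cache(maxsize=None)
def translated_template(msgid, lang):
    # Translation and template parsing happen once per (msgid, lang);
    # the cache key plays the role of the lazy-compilation "done" flag.
    return Template(CATALOG.get((msgid, lang), msgid))

print(translated_template("Hello, ${name}!", "fr").substitute(name="Guido"))
# -> Bonjour, Guido !
```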
Oren Tirosh wrote:
...
Compile-time parsing may eliminate the possibility of using the same mechanism for internationalization, but not the possibility of using the same syntax. A module may provide a function that interprets the same notation at runtime. The runtime version probably shouldn't support full expression embedding - just simple name substitution.
I think that there are enough benefits for each form (compile time with expressions, runtime without) that we should expect any final solution to support both. Maybe you guys should merge your PEPs! Paul Prescod
"PP" == Paul Prescod
writes:
PP> I think that there are enough benefits for each form (compile
PP> time with expressions, runtime without) that we should expect
PP> any final solution to support both. Maybe you guys should
PP> merge your PEPs!

Only two of them are official PEPs currently <294 winks to Oren>.

-Barry
participants (14)

- barry@zope.com
- Damien Morton
- David Abrahams
- Fredrik Lundh
- Greg Ewing
- Guido van Rossum
- Gustavo Niemeyer
- Ka-Ping Yee
- Michael McLay
- Oren Tirosh
- Paul Prescod
- pinard@iro.umontreal.ca
- Raymond Hettinger
- Tim Peters