[Python-Dev] PEP-498: Literal String Formatting

Guido van Rossum guido at python.org
Mon Aug 17 22:36:07 CEST 2015


On Mon, Aug 17, 2015 at 1:26 PM, Eric V. Smith <eric at trueblade.com> wrote:

> [...]
> I think it would be possible to create a version of this that works for
> both i18n and regular interpolation. I think the open issues are:
>
> 1. Barry wants the substitutions to look like $identifier and possibly
> ${identifier}, and the PEP 498 proposal just uses {}.
>
> 2. There needs to be a way to identify interpolated strings and i18n
> strings, and possibly combinations of those. This leads to PEP 501's i-
> and iu- strings.
>
> 3. A way to enforce identifiers-only, instead of generalized expressions.
>

In an off-list message to Barry and Nick I came up with the same three
points. :-)

I think #2 is the hard one (unless we adopt a solution like the one Yury
just proposed, where you can have an arbitrary identifier in front of a
string literal).
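
To make that idea concrete: a prefix like i18n'...' would presumably
desugar to a call on the literal, i18n('...'). Here is a rough emulation
of an interpolating prefix as a plain function -- illustration only, since
a real prefix would be resolved by the compiler, and peeking at the
caller's namespace via sys._getframe() is a CPython-specific trick:

    import sys
    from collections import ChainMap

    def i(template):
        # Emulate an interpolating string prefix as an ordinary call.
        # sys._getframe() reaches the caller's namespace; a real prefix
        # would get the names from the compiler instead.
        frame = sys._getframe(1)
        return template.format_map(ChainMap(frame.f_locals, frame.f_globals))

    name = "world"
    print(i("Hello, {name}!"))  # -> Hello, world!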


> 4. We need a "safe substitution" mode for str.format_map_simple (from
> above).
>
> #1 is just a matter of preference: there's no technical reason to prefer
> {} over $ or ${}. We can make any decision here. I prefer {} because
> it's the same as str.format.
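
For comparison, the stdlib already offers both conventions today:
str.format uses {} fields, while string.Template uses $name and ${name}.
A minimal illustration:

    from string import Template

    name = "world"
    print("hello, {name}".format(name=name))                  # {} fields
    print(Template("hello, $name").substitute(name=name))     # $name
    print(Template("hello, ${name}!").substitute(name=name))  # ${name}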
>
> #2 needs to be decided in concert with the tooling needed to extract the
> strings from the source code. The particular prefixes are up for debate.
> I'm not a big fan of using "u" to have a meaning different from its
> current "do nothing" interpretation in 3.5. But really any prefixes will
> do, if we decide to use string prefixes. I think that's the question: do
> we want to distinguish among these cases using string prefixes or
> combinations thereof?
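
As for the extraction tooling: since "i" and "iu" are not legal string
prefixes today, a marked string like i'Hello, {name}!' tokenizes as a
NAME immediately followed by a STRING, which a scanner can detect with
the stdlib tokenize module. A toy sketch (real tooling would also have
to handle implicit concatenation, raw strings, and so on):

    import io
    import tokenize

    def extract_marked_strings(source, markers=("i", "iu")):
        # Find NAME tokens from `markers` that directly abut a STRING
        # token -- the way a hypothetical i'...' tokenizes today.
        found = []
        prev = None
        for tok in tokenize.generate_tokens(io.StringIO(source).readline):
            if (tok.type == tokenize.STRING and prev is not None
                    and prev.type == tokenize.NAME
                    and prev.string in markers
                    and prev.end == tok.start):
                found.append(tok.string)
            prev = tok
        return found

    print(extract_marked_strings("msg = i'Hello, {name}!'\n"))
    # -> ["'Hello, {name}!'"]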
>
> #3 is doable, either at runtime or in the tooling that does the string
> extraction.
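
At runtime, string.Formatter().parse() already exposes each replacement
field, so enforcing identifiers-only is a few lines. A sketch:

    from string import Formatter

    def check_identifiers_only(template):
        # Allow {name}; refuse {obj.attr}, {seq[0]}, and anything else
        # that is not a plain identifier.
        for _literal, field, _spec, _conv in Formatter().parse(template):
            if field is not None and not field.isidentifier():
                raise ValueError("not a plain identifier: %r" % field)

    check_identifiers_only("Hello, {name}!")         # passes
    # check_identifiers_only("Hello, {user.name}!")  # raises ValueError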
>
> #4 is simple, as long as we always turn it on for the localized strings.
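
Concretely, "safe substitution" would behave like
string.Template.safe_substitute(): unknown fields pass through untouched
instead of raising. With str.format_map that is a small mapping trick
(str.format_map_simple above is hypothetical, so this is only a sketch):

    class SafeDict(dict):
        # Missing fields are left in place instead of raising KeyError,
        # mirroring string.Template.safe_substitute().
        def __missing__(self, key):
            return "{" + key + "}"

    print("Hello, {name}! Your id is {id}.".format_map(SafeDict(name="Guido")))
    # -> Hello, Guido! Your id is {id}.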
>
> Personally I can go either way on including i18n. But I agree it's
> beginning to sound like i18n is just too complicated for PEP 498, and I
> think PEP 501 is already too complicated. I'd like to make a decision on
> this one way or the other, so we can move forward.
>

What's the rush? There's plenty of time before Python 3.6.


> >     [...]
> >     > The understanding here is that there are these new types of tokens:
> >     > F_STRING_OPEN for f'...{, F_STRING_MIDDLE for }...{, F_STRING_END
> >     > for }...', and I suppose we also need F_STRING_OPEN_CLOSE for
> >     > f'...' (i.e. not containing any substitutions). These token types
> >     > can then be used in the grammar. (A complication would be different
> >     > kinds of string quotes; I propose to handle that in the lexer,
> >     > otherwise the number of open/close token types would balloon out
> >     > of proportion.)
> >
> >     This would save a few hundred lines of C code. But from a quick
> >     glance at the lexer, I can't see how to make the opening quotes
> >     agree with the closing quotes.
> >
> >
> > The lexer would have to develop another stack for this purpose.
>
> I'll give it some thought.
>
> Eric.
>
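To make the proposed token stream concrete, here is a toy splitter for
the simplest case -- single quotes, no nested braces, no {{ }} escapes,
all of which the real lexer (with the quote stack mentioned above) would
have to handle:

    import re

    def fstring_tokens(literal):
        # Toy sketch of the proposed tokenization; not the real lexer.
        assert literal.startswith("f'") and literal.endswith("'")
        body = literal[2:-1]
        # Alternating [text, expr, text, expr, ..., text]:
        parts = re.split(r"\{([^{}]*)\}", body)
        if len(parts) == 1:
            return [("F_STRING_OPEN_CLOSE", literal)]
        tokens = [("F_STRING_OPEN", "f'" + parts[0] + "{")]
        for i in range(1, len(parts) - 2, 2):
            tokens.append(("EXPR", parts[i]))
            tokens.append(("F_STRING_MIDDLE", "}" + parts[i + 1] + "{"))
        tokens.append(("EXPR", parts[-2]))
        tokens.append(("F_STRING_END", "}" + parts[-1] + "'"))
        return tokens

    for kind, text in fstring_tokens("f'x={x}, y={y}!'"):
        print(kind, repr(text))
    # F_STRING_OPEN "f'x={"
    # EXPR 'x'
    # F_STRING_MIDDLE '}, y={'
    # EXPR 'y'
    # F_STRING_END "}!'"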

-- 
--Guido van Rossum (python.org/~guido)