This talk about optimization is confusing me:

These are literals -- they should only get processed once, generally on module import.

If you are putting a long list of literal strings inside a tight loop, you are already not concerned with performance.

Performance is absolutely the LAST reason to consider any proposal like this.

I'm not saying that things like this shouldn't be optimized -- faster import is a good thing, but I am saying it's not a reason to add a language feature.
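To make the point concrete, here's a minimal sketch (names are made up for illustration) of why a literal split at module level is a one-time cost:

```python
# Hypothetical example: a module-level literal is evaluated once, at
# import time, so the split() cost is paid exactly once per process.
MONTHS = "jan feb mar apr may jun".split()

def month_name(i):
    # Any hot loop calling this only does an index lookup; the split
    # above never reruns, no matter how many times this is called.
    return MONTHS[i]

assert month_name(0) == "jan"
```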


On Wed, Oct 23, 2019 at 9:42 AM Andrew Barnert via Python-ideas <> wrote:
On Oct 23, 2019, at 03:08, Steven D'Aprano <> wrote:
> It could also be done by a source code preprocessor, or an AST
> transformation, without changing syntax.
> But the advantage of changing the syntax is that it becomes the One
> Obvious Way, and you know it will be efficient whatever version or
> implementation of Python you are using.

The advantage of just optimizing split on a literal is that split becomes the One Obvious Way, and you know it will work and be correct in whatever version or implementation of Python you are using, back to 0.9; it’ll just be faster in CPython 3.9+.

In fact, given that we already use split all over the place, and even offer shorthand for it in places like namedtuple, and people recommend it on python-list and StackOverflow without any pushback, I think it already is TOOWTDI for many cases. So why not optimize it?
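For reference, the idiom in question, including the namedtuple shorthand mentioned above:

```python
from collections import namedtuple

# The common idiom: write the names as one string and split at runtime...
COLORS = "red green blue".split()
# ...instead of spelling out the equivalent list literal.
COLORS_LITERAL = ["red", "green", "blue"]
assert COLORS == COLORS_LITERAL

# namedtuple already accepts the same shorthand: field names may be
# given as a single space-separated string.
Point = namedtuple("Point", "x y")
p = Point(1, 2)
assert (p.x, p.y) == (1, 2)
```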

And your argument is really an argument against adding any optimizations to CPython. The fact that nested tuple literals are now as fast as constants means someone could be constructing one right in the middle of a bottleneck, making their code appear to work on all Python versions and pass benchmarks in current CPython but then be unacceptably slow when they deploy on CPython 3.4 or uPython or whatever. But would you say that optimization was a mistake, and we should have instead left nested tuple displays slow and invented a new syntax for nested tuple constants that would make it an obvious SyntaxError in 3.4 or uPython, just because it’s possible that one person might run into that unacceptably slow case one day, even though nobody has ever complained about it?
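The nested-tuple optimization referred to above can be observed directly in CPython (this is CPython-specific behavior; other implementations are free to compile it differently):

```python
# In CPython, a tuple display whose elements are all constants --
# including nested tuples -- is folded into a single constant in the
# code object, so building it at runtime is one LOAD_CONST.
def f():
    return (1, (2, 3), "a")

# The entire nested tuple appears pre-built among the constants.
assert (1, (2, 3), "a") in f.__code__.co_consts
```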

And this is almost certainly the same thing. If someone has a case where they wrote out a long list of strings as a list literal with quotes instead of using split because benchmarking required it, where they would have been misled into using split if it were faster in 3.8 even though some of their deployment targets are 3.7, then we should listen. But I doubt anyone does. The optimization will just be a small QoI thing that adds to Python 3.9 being on average a bit faster than 3.8.
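For anyone who wants to check the tradeoff on their own interpreter, a quick micro-benchmark sketch (the strings here are arbitrary; absolute numbers will vary by version and implementation):

```python
import timeit

# Compare split-on-a-literal against the equivalent explicit list
# literal. On current CPython the literal is faster, since split()
# is an actual method call at runtime.
t_split = timeit.timeit('"red green blue".split()', number=100_000)
t_list = timeit.timeit('["red", "green", "blue"]', number=100_000)
print(f"split: {t_split:.4f}s  list literal: {t_list:.4f}s")
```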


Christopher Barker, PhD

Python Language Consulting
  - Teaching
  - Scientific Software Development
  - Desktop GUI and Web Development
  - wxPython, numpy, scipy, Cython