[Python-Dev] Unicode identifiers in test files?
Eric V. Smith
eric at trueblade.com
Sat May 4 04:10:32 EDT 2019
On 5/4/19 3:54 AM, Eric V. Smith wrote:
> On 5/4/19 2:48 AM, Serhiy Storchaka wrote:
>> 04.05.19 05:46, Eric V. Smith wrote:
>>> Is there a policy against using Unicode identifiers in test files?
>>> As part of adding !d to f-strings, there's a code path that's only
>>> executed if the text of the expression is non-ascii. The easiest way
>>> to exercise it, and the way I found a bug, is by using an identifier
>>> with Unicode chars. I know we have a policy against this in Lib/, but
>>> what about Lib/test/?
>>> I could work around this with exec-ing some strings, but that seems
>>> like added confusion that I'd avoid with a real Unicode identifier.
>> Could you use string literals with non-ascii characters? They are more
>> used in tests.
> Hi, Serhiy.
> I could and will, yes: thanks for the suggestion. But for this specific
> feature, I also want to test with simple variable-only expressions. I'm
> either going to use a unicode identifier in the code, or eval one in a
> string. Doing the eval dance just seems like extra work for some sort of
> purity that I don't think is important in a test, unless someone can
> think of a good reason for it.
And I just noticed that PEP 3131's policy section, which requires the
stdlib to use only ASCII identifiers, makes an explicit exception for
test cases that exercise non-ASCII features.
So, since it's the most direct and probably safest thing to do for a
test, I'm going to use a unicode identifier.
Thanks, all, for your ideas. Especially Greg for reminding me to add the