[Python-checkins] Minor modernization and readability improvement to the tokenizer example (GH-19558)
Raymond Hettinger
webhook-mailer at python.org
Thu Apr 16 22:54:24 EDT 2020
https://github.com/python/cpython/commit/bf1a81258c0ecc8b52b9dcc53321c066b3ed4a67
commit: bf1a81258c0ecc8b52b9dcc53321c066b3ed4a67
branch: master
author: Raymond Hettinger <rhettinger at users.noreply.github.com>
committer: GitHub <noreply at github.com>
date: 2020-04-16T19:54:13-07:00
summary:
Minor modernization and readability improvement to the tokenizer example (GH-19558)
files:
M Doc/library/re.rst
diff --git a/Doc/library/re.rst b/Doc/library/re.rst
index 7c950bfd5b1fd..9abbd8ba73616 100644
--- a/Doc/library/re.rst
+++ b/Doc/library/re.rst
@@ -1617,10 +1617,14 @@ The text categories are specified with regular expressions.  The technique is
 to combine those into a single master regular expression and to loop over
 successive matches::
 
-    import collections
+    from typing import NamedTuple
     import re
 
-    Token = collections.namedtuple('Token', ['type', 'value', 'line', 'column'])
+    class Token(NamedTuple):
+        type: str
+        value: str
+        line: int
+        column: int
 
     def tokenize(code):
         keywords = {'IF', 'THEN', 'ENDIF', 'FOR', 'NEXT', 'GOSUB', 'RETURN'}
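The hunk only covers the top of the example; in Doc/library/re.rst, ``tokenize`` goes on to build a master pattern from named groups and loop over ``re.finditer`` matches. A self-contained sketch of the modernized example, with the token specification abridged from the docs, looks like this:

```python
from typing import NamedTuple
import re

class Token(NamedTuple):
    type: str
    value: str
    line: int
    column: int

def tokenize(code):
    keywords = {'IF', 'THEN', 'ENDIF', 'FOR', 'NEXT', 'GOSUB', 'RETURN'}
    token_spec = [
        ('NUMBER',   r'\d+(\.\d*)?'),   # integer or decimal number
        ('ASSIGN',   r':='),            # assignment operator
        ('END',      r';'),             # statement terminator
        ('ID',       r'[A-Za-z]+'),     # identifiers and keywords
        ('OP',       r'[+\-*/]'),       # arithmetic operators
        ('NEWLINE',  r'\n'),            # line endings
        ('SKIP',     r'[ \t]+'),        # skip spaces and tabs
        ('MISMATCH', r'.'),             # any other character
    ]
    # Combine into one master regex; each alternative is a named group
    tok_regex = '|'.join(f'(?P<{name}>{pattern})' for name, pattern in token_spec)
    line_num = 1
    line_start = 0
    for mo in re.finditer(tok_regex, code):
        kind = mo.lastgroup
        value = mo.group()
        column = mo.start() - line_start
        if kind == 'NUMBER':
            value = float(value) if '.' in value else int(value)
        elif kind == 'ID' and value in keywords:
            kind = value
        elif kind == 'NEWLINE':
            line_start = mo.end()
            line_num += 1
            continue
        elif kind == 'SKIP':
            continue
        elif kind == 'MISMATCH':
            raise RuntimeError(f'{value!r} unexpected on line {line_num}')
        yield Token(kind, value, line_num, column)
```

Because ``Token`` is now a class with annotated fields, tools like mypy can check the field types, and instances still behave exactly like the old ``collections.namedtuple`` (tuple unpacking, ``_replace``, and so on).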