[Python-checkins] bpo-5028: fix doc bug for tokenize (GH-11683)

Miss Islington (bot) webhook-mailer at python.org
Thu May 30 15:31:57 EDT 2019


https://github.com/python/cpython/commit/1e36f75d634383eb243aa1798c0f2405c9ceb5d4
commit: 1e36f75d634383eb243aa1798c0f2405c9ceb5d4
branch: master
author: Andrew Carr <andrewnc at users.noreply.github.com>
committer: Miss Islington (bot) <31488909+miss-islington at users.noreply.github.com>
date: 2019-05-30T12:31:51-07:00
summary:

bpo-5028: fix doc bug for tokenize (GH-11683)



https://bugs.python.org/issue5028

files:
M Doc/library/tokenize.rst
M Lib/lib2to3/pgen2/tokenize.py
M Lib/tokenize.py

diff --git a/Doc/library/tokenize.rst b/Doc/library/tokenize.rst
index 111289c767f3..c89d3d4b082f 100644
--- a/Doc/library/tokenize.rst
+++ b/Doc/library/tokenize.rst
@@ -39,7 +39,7 @@ The primary entry point is a :term:`generator`:
    column where the token begins in the source; a 2-tuple ``(erow, ecol)`` of
    ints specifying the row and column where the token ends in the source; and
    the line on which the token was found. The line passed (the last tuple item)
-   is the *logical* line; continuation lines are included.  The 5 tuple is
+   is the *physical* line; continuation lines are included.  The 5 tuple is
    returned as a :term:`named tuple` with the field names:
    ``type string start end line``.
 
diff --git a/Lib/lib2to3/pgen2/tokenize.py b/Lib/lib2to3/pgen2/tokenize.py
index 279d322971da..0f9fde3fb0d5 100644
--- a/Lib/lib2to3/pgen2/tokenize.py
+++ b/Lib/lib2to3/pgen2/tokenize.py
@@ -346,7 +346,7 @@ def generate_tokens(readline):
     column where the token begins in the source; a 2-tuple (erow, ecol) of
     ints specifying the row and column where the token ends in the source;
     and the line on which the token was found. The line passed is the
-    logical line; continuation lines are included.
+    physical line; continuation lines are included.
     """
     lnum = parenlev = continued = 0
     contstr, needcont = '', 0
diff --git a/Lib/tokenize.py b/Lib/tokenize.py
index 0f9d5dd554d5..738fb71d188b 100644
--- a/Lib/tokenize.py
+++ b/Lib/tokenize.py
@@ -415,7 +415,7 @@ def tokenize(readline):
     column where the token begins in the source; a 2-tuple (erow, ecol) of
     ints specifying the row and column where the token ends in the source;
     and the line on which the token was found.  The line passed is the
-    logical line; continuation lines are included.
+    physical line; continuation lines are included.
 
     The first token sequence will always be an ENCODING token
     which tells you which encoding was used to decode the bytes stream.
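
Not part of the commit, but the behavior the corrected docstrings describe can be checked with a short snippet: for a logical line that spans two physical lines, each token's ``line`` attribute carries only the physical line the token appears on, and the stream opens with an ENCODING token.

```python
import io
import tokenize

# One logical line spread over two physical lines via parentheses.
source = b"x = (1 +\n     2)\n"

toks = list(tokenize.tokenize(io.BytesIO(source).readline))
for tok in toks:
    print(tokenize.tok_name[tok.type], repr(tok.string), tok.start, tok.end, repr(tok.line))

# The NUMBER token for "2" starts on row 2, and its .line attribute is
# the second *physical* line only ("     2)\n"), not the whole logical line.
```

The first item printed is the ENCODING token (here ``'utf-8'``), matching the note in ``Lib/tokenize.py`` above.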
