[3.12] gh-129020: Remove ambiguous sentence from `tokenize.untokenize` docs (GH-129021) (#129036)
https://github.com/python/cpython/commit/22c1d895fc73c3142795d2b5e62579e8a5225d1e
commit: 22c1d895fc73c3142795d2b5e62579e8a5225d1e
branch: 3.12
author: Miss Islington (bot) <31488909+miss-islington@users.noreply.github.com>
committer: AA-Turner <9087854+AA-Turner@users.noreply.github.com>
date: 2025-01-20T00:12:39Z
summary:

[3.12] gh-129020: Remove ambiguous sentence from `tokenize.untokenize` docs (GH-129021) (#129036)

gh-129020: Remove ambiguous sentence from `tokenize.untokenize` docs (GH-129021)
(cherry picked from commit bca35f0e782848ae2acdcfbfb000cd4a2af49fbd)

Co-authored-by: Tomas R <tomas.roun8@gmail.com>

files:
M Doc/library/tokenize.rst

diff --git a/Doc/library/tokenize.rst b/Doc/library/tokenize.rst
index f719319a302a23..b80917eae66f8b 100644
--- a/Doc/library/tokenize.rst
+++ b/Doc/library/tokenize.rst
@@ -91,11 +91,10 @@ write back the modified script.
    sequences with at least two elements, the token type and the token string. Any
    additional sequence elements are ignored.

-   The reconstructed script is returned as a single string. The result is
-   guaranteed to tokenize back to match the input so that the conversion is
-   lossless and round-trips are assured. The guarantee applies only to the
-   token type and token string as the spacing between tokens (column
-   positions) may change.
+   The result is guaranteed to tokenize back to match the input so that the
+   conversion is lossless and round-trips are assured. The guarantee applies
+   only to the token type and token string as the spacing between tokens
+   (column positions) may change.

    It returns bytes, encoded using the :data:`~token.ENCODING` token, which
    is the first token sequence output by :func:`.tokenize`. If there is no
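As a quick illustration of the round-trip guarantee described in the changed paragraph, here is a minimal sketch (not part of the commit; the sample source string and variable names are illustrative only): untokenize output re-tokenizes to the same token types and strings, even though spacing between tokens may change.

import io
import tokenize

# Illustrative source with irregular spacing; any small snippet would do.
source = b"x = 1 +  2\n"

tokens = list(tokenize.tokenize(io.BytesIO(source).readline))

# Passing only (type, string) pairs discards the column positions.
rebuilt = tokenize.untokenize((tok.type, tok.string) for tok in tokens)

# The result is bytes, encoded per the ENCODING token (utf-8 here), and its
# spacing may differ from the original source...
print(rebuilt)

# ...but re-tokenizing it yields the same token types and strings.
retokenized = list(tokenize.tokenize(io.BytesIO(rebuilt).readline))
assert [(t.type, t.string) for t in tokens] == \
       [(t.type, t.string) for t in retokenized]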