\section{\module{tokenize} ---
         Tokenizer for Python source}

\declaremodule{standard}{tokenize}
\modulesynopsis{Lexical scanner for Python source code.}
\moduleauthor{Ka Ping Yee}{}
\sectionauthor{Fred L. Drake, Jr.}{fdrake@acm.org}


The \module{tokenize} module provides a lexical scanner for Python
source code, implemented in Python.  The scanner in this module
returns comments as tokens as well, making it useful for implementing
``pretty-printers,'' including colorizers for on-screen displays.

The primary entry point is a generator:

\begin{funcdesc}{generate_tokens}{readline}
  The \function{generate_tokens()} generator requires one argument,
  \var{readline}, which must be a callable object that provides the
  same interface as the \method{readline()} method of built-in file
  objects (see section~\ref{bltin-file-objects}).  Each call to the
  function should return one line of input as a string.

  The generator produces 5-tuples with these members: the token type;
  the token string; a 2-tuple \code{(\var{srow}, \var{scol})} of ints
  specifying the row and column where the token begins in the source;
  a 2-tuple \code{(\var{erow}, \var{ecol})} of ints specifying the row
  and column where the token ends in the source; and the line on which
  the token was found.  The line passed is the \emph{logical} line;
  continuation lines are included.
  \versionadded{2.2}
\end{funcdesc}
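
For instance, the following sketch (the sample source string is
arbitrary, not part of the module) prints each token's type, text,
and position; it uses \class{StringIO} to present a string through a
\method{readline()} interface:

\begin{verbatim}
from StringIO import StringIO
from tokenize import generate_tokens, tok_name

source = "total = price * 1.05  # add tax\n"  # arbitrary sample input
for toknum, tokval, (srow, scol), (erow, ecol), line in \
        generate_tokens(StringIO(source).readline):
    # tok_name maps numeric token types to readable names.
    print "%-10s %-20r %d,%d-%d,%d" % (tok_name[toknum], tokval,
                                       srow, scol, erow, ecol)
\end{verbatim}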

An older entry point is retained for backward compatibility:

\begin{funcdesc}{tokenize}{readline\optional{, tokeneater}}
  The \function{tokenize()} function accepts two parameters: one
  representing the input stream, and one providing an output mechanism
  for \function{tokenize()}.

  The first parameter, \var{readline}, must be a callable object which
  provides the same interface as the \method{readline()} method of
  built-in file objects (see section~\ref{bltin-file-objects}).  Each
  call to the function should return one line of input as a string.
  Alternatively, \var{readline} may be a callable object that signals
  completion by raising \exception{StopIteration}.
  \versionchanged[Added StopIteration support]{2.5}

  The second parameter, \var{tokeneater}, must also be a callable
  object.  It is called once for each token, with five arguments,
  corresponding to the tuples generated by \function{generate_tokens()}.
\end{funcdesc}
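
A minimal sketch of this interface (the reporting function
\function{report_token()} is illustrative, not part of the module)
passes a small callable as \var{tokeneater}:

\begin{verbatim}
from StringIO import StringIO
from tokenize import tokenize, tok_name

def report_token(toknum, tokval, start, end, line):
    # Called once per token; name the token type and show its text.
    print tok_name[toknum], repr(tokval)

tokenize(StringIO("x = 1\n").readline, report_token)
\end{verbatim}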

All constants from the \refmodule{token} module are also exported from
\module{tokenize}, as are two additional token type values that might be
passed to the \var{tokeneater} function by \function{tokenize()}:

\begin{datadesc}{COMMENT}
  Token value used to indicate a comment.
\end{datadesc}

\begin{datadesc}{NL}
  Token value used to indicate a non-terminating newline.  The NEWLINE
  token indicates the end of a logical line of Python code; NL tokens
  are generated when a logical line of code is continued over multiple
  physical lines.
\end{datadesc}
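
The distinction can be seen by tokenizing a logical line that spans
two physical lines, as in this sketch (the sample source is
arbitrary): the newline inside the parentheses yields an NL token,
the comment a COMMENT token, and only the final newline a NEWLINE
token:

\begin{verbatim}
from StringIO import StringIO
from tokenize import generate_tokens, tok_name

# One logical line continued over two physical lines, with a comment.
source = "values = (1,  # first\n          2)\n"
for toknum, tokval, _, _, _ in generate_tokens(StringIO(source).readline):
    print tok_name[toknum], repr(tokval)
\end{verbatim}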

Another function is provided to reverse the tokenization process.
This is useful for creating tools that tokenize a script, modify
the token stream, and write back the modified script.

\begin{funcdesc}{untokenize}{iterable}
  Converts tokens back into Python source code.  The \var{iterable}
  must return sequences with at least two elements, the token type and
  the token string.  Any additional sequence elements are ignored.

  The reconstructed script is returned as a single string.  The result
  is guaranteed to tokenize back to match the input, so the conversion
  is lossless and round-trips are assured.  The guarantee applies only
  to the token type and token string, as the spacing between tokens
  (column positions) may change.
  \versionadded{2.5}
\end{funcdesc}

Example of a script re-writer that transforms float literals into
Decimal objects:

\begin{verbatim}
from StringIO import StringIO
from tokenize import generate_tokens, untokenize, NUMBER, NAME, OP, STRING

def decistmt(s):
    """Substitute Decimals for floats in a string of statements.

    >>> from decimal import Decimal
    >>> s = 'print +21.3e-5*-.1234/81.7'
    >>> decistmt(s)
    "print +Decimal ('21.3e-5')*-Decimal ('.1234')/Decimal ('81.7')"

    >>> exec(s)
    -3.21716034272e-007
    >>> exec(decistmt(s))
    -3.217160342717258261933904529E-7

    """
    result = []
    g = generate_tokens(StringIO(s).readline)   # tokenize the string
    for toknum, tokval, _, _, _ in g:
        if toknum == NUMBER and '.' in tokval:  # replace NUMBER tokens
            result.extend([
                (NAME, 'Decimal'),
                (OP, '('),
                (STRING, repr(tokval)),
                (OP, ')')
            ])
        else:
            result.append((toknum, tokval))
    return untokenize(result)
\end{verbatim}