Mirror of https://github.com/python/cpython.git, synced 2025-12-08 06:10:17 +00:00

Python 3.14.2

parent 4cb6cbb6fa
commit df793163d5
11 changed files with 399 additions and 246 deletions
@@ -1,4 +1,4 @@
-# Autogenerated by Sphinx on Tue Dec 2 14:51:32 2025
+# Autogenerated by Sphinx on Fri Dec 5 18:49:09 2025
 # as part of the release process.
 topics = {
@@ -6260,78 +6260,31 @@ def whats_on_the_telly(penguin=None):
 
 "NAME" tokens represent *identifiers*, *keywords*, and *soft
 keywords*.
 
-Within the ASCII range (U+0001..U+007F), the valid characters for
-names include the uppercase and lowercase letters ("A-Z" and "a-z"),
-the underscore "_" and, except for the first character, the digits "0"
-through "9".
+Names are composed of the following characters:
+
+* uppercase and lowercase letters ("A-Z" and "a-z"),
+
+* the underscore ("_"),
+
+* digits ("0" through "9"), which cannot appear as the first
+character, and
+
+* non-ASCII characters. Valid names may only contain “letter-like” and
+“digit-like” characters; see Non-ASCII characters in names for
+details.
 
 Names must contain at least one character, but have no upper length
 limit. Case is significant.
 
-Besides "A-Z", "a-z", "_" and "0-9", names can also use “letter-like”
-and “number-like” characters from outside the ASCII range, as detailed
-below.
+Formally, names are described by the following lexical definitions:
 
-All identifiers are converted into the normalization form NFKC while
-parsing; comparison of identifiers is based on NFKC.
+NAME: name_start name_continue*
+name_start: "a"..."z" | "A"..."Z" | "_" | <non-ASCII character>
+name_continue: name_start | "0"..."9"
+identifier: <NAME, except keywords>
 
-Formally, the first character of a normalized identifier must belong
-to the set "id_start", which is the union of:
-
-* Unicode category "<Lu>" - uppercase letters (includes "A" to "Z")
-
-* Unicode category "<Ll>" - lowercase letters (includes "a" to "z")
-
-* Unicode category "<Lt>" - titlecase letters
-
-* Unicode category "<Lm>" - modifier letters
-
-* Unicode category "<Lo>" - other letters
-
-* Unicode category "<Nl>" - letter numbers
-
-* {"_"} - the underscore
-
-* "<Other_ID_Start>" - an explicit set of characters in PropList.txt
-to support backwards compatibility
-
-The remaining characters must belong to the set "id_continue", which
-is the union of:
-
-* all characters in "id_start"
-
-* Unicode category "<Nd>" - decimal numbers (includes "0" to "9")
-
-* Unicode category "<Pc>" - connector punctuations
-
-* Unicode category "<Mn>" - nonspacing marks
-
-* Unicode category "<Mc>" - spacing combining marks
-
-* "<Other_ID_Continue>" - another explicit set of characters in
-PropList.txt to support backwards compatibility
-
-Unicode categories use the version of the Unicode Character Database
-as included in the "unicodedata" module.
-
-These sets are based on the Unicode standard annex UAX-31. See also
-**PEP 3131** for further details.
-
-Even more formally, names are described by the following lexical
-definitions:
-
-NAME: xid_start xid_continue*
-id_start: <Lu> | <Ll> | <Lt> | <Lm> | <Lo> | <Nl> | "_" | <Other_ID_Start>
-id_continue: id_start | <Nd> | <Pc> | <Mn> | <Mc> | <Other_ID_Continue>
-xid_start: <all characters in id_start whose NFKC normalization is
-in (id_start xid_continue*)">
-xid_continue: <all characters in id_continue whose NFKC normalization is
-in (id_continue*)">
-identifier: <NAME, except keywords>
-
-A non-normative listing of all valid identifier characters as defined
-by Unicode is available in the DerivedCoreProperties.txt file in the
-Unicode Character Database.
+Note that not all names matched by this grammar are valid; see Non-
+ASCII characters in names for details.
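
As an aside, these rules can be checked from Python itself:
"str.isidentifier()" tests whether a string matches the NAME grammar,
while "keyword.iskeyword()" covers the "except keywords" restriction
(illustrative snippet):

>>> '_count1'.isidentifier()
True
>>> '1count'.isidentifier()   # a digit cannot be the first character
False
>>> import keyword
>>> 'if'.isidentifier()       # matches the NAME grammar...
True
>>> keyword.iskeyword('if')   # ...but is reserved as a keyword
True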
 
 Keywords
@@ -6414,6 +6367,101 @@ def whats_on_the_telly(penguin=None):
 
 context of a class definition, are re-written to use a mangled form
 to help avoid name clashes between “private” attributes of base and
 derived classes. See section Identifiers (Names).
 
+
+Non-ASCII characters in names
+=============================
+
+Names that contain non-ASCII characters need additional normalization
+and validation beyond the rules and grammar explained above. For
+example, "ř_1", "蛇", or "साँप" are valid names, but "r〰2", "€", or
+"🐍" are not.
+
+This section explains the exact rules.
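
These particular examples can be verified with "str.isidentifier()",
which applies the rules described in this section:

>>> all(name.isidentifier() for name in ('ř_1', '蛇', 'साँप'))
True
>>> any(name.isidentifier() for name in ('r〰2', '€', '🐍'))
False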
+
+All names are converted into the normalization form NFKC while
+parsing. This means that, for example, some typographic variants of
+characters are converted to their “basic” form. For example,
+"fiⁿₐˡᵢᶻₐᵗᵢᵒₙ" normalizes to "finalization", so Python treats them as
+the same name:
+
+>>> fiⁿₐˡᵢᶻₐᵗᵢᵒₙ = 3
+>>> finalization
+3
+
+Note:
+
+Normalization is done at the lexical level only. Run-time functions
+that take names as *strings* generally do not normalize their
+arguments. For example, the variable defined above is accessible at
+run time in the "globals()" dictionary as
+"globals()["finalization"]" but not "globals()["fiⁿₐˡᵢᶻₐᵗᵢᵒₙ"]".
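
The same normalization can be reproduced at run time with
"unicodedata.normalize()", which is one way to compute the key under
which such a name actually ends up being stored:

>>> import unicodedata
>>> unicodedata.normalize('NFKC', 'fiⁿₐˡᵢᶻₐᵗᵢᵒₙ')
'finalization'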
+
+Similarly to how ASCII-only names must contain only letters, digits
+and the underscore, and cannot start with a digit, a valid name must
+start with a character in the “letter-like” set "xid_start", and the
+remaining characters must be in the “letter- and digit-like” set
+"xid_continue".
+
+These sets are based on the *XID_Start* and *XID_Continue* sets as
+defined by the Unicode standard annex UAX-31. Python’s "xid_start"
+additionally includes the underscore ("_"). Note that Python does not
+necessarily conform to UAX-31.
+
+A non-normative listing of characters in the *XID_Start* and
+*XID_Continue* sets as defined by Unicode is available in the
+DerivedCoreProperties.txt file in the Unicode Character Database. For
+reference, the construction rules for the "xid_*" sets are given
+below.
+
+The set "id_start" is defined as the union of:
+
+* Unicode category "<Lu>" - uppercase letters (includes "A" to "Z")
+
+* Unicode category "<Ll>" - lowercase letters (includes "a" to "z")
+
+* Unicode category "<Lt>" - titlecase letters
+
+* Unicode category "<Lm>" - modifier letters
+
+* Unicode category "<Lo>" - other letters
+
+* Unicode category "<Nl>" - letter numbers
+
+* {"_"} - the underscore
+
+* "<Other_ID_Start>" - an explicit set of characters in PropList.txt
+to support backwards compatibility
+
+The set "xid_start" then closes this set under NFKC normalization, by
+removing all characters whose normalization is not of the form
+"id_start id_continue*".
+
+The set "id_continue" is defined as the union of:
+
+* "id_start" (see above)
+
+* Unicode category "<Nd>" - decimal numbers (includes "0" to "9")
+
+* Unicode category "<Pc>" - connector punctuations
+
+* Unicode category "<Mn>" - nonspacing marks
+
+* Unicode category "<Mc>" - spacing combining marks
+
+* "<Other_ID_Continue>" - another explicit set of characters in
+PropList.txt to support backwards compatibility
+
+Again, "xid_continue" closes this set under NFKC normalization.
+
+Unicode categories use the version of the Unicode Character Database
+as included in the "unicodedata" module.
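
For illustration only, a rough sketch of these definitions in terms of
"unicodedata.category()"; it deliberately ignores the
"<Other_ID_Start>"/"<Other_ID_Continue>" exceptions and the NFKC
closure that produces the "xid_*" sets, so it is an approximation
rather than the real rule:

import unicodedata

ID_START_CATEGORIES = {'Lu', 'Ll', 'Lt', 'Lm', 'Lo', 'Nl'}
ID_CONTINUE_CATEGORIES = ID_START_CATEGORIES | {'Nd', 'Pc', 'Mn', 'Mc'}

def rough_is_id_start(char):
    # Approximation of id_start: the listed general categories plus "_".
    return char == '_' or unicodedata.category(char) in ID_START_CATEGORIES

def rough_is_id_continue(char):
    # Approximation of id_continue: id_start categories plus Nd, Pc, Mn, Mc.
    return char == '_' or unicodedata.category(char) in ID_CONTINUE_CATEGORIES

For example, "rough_is_id_start('ř')" is true (category "Ll"), while
"rough_is_id_start('€')" is false (category "Sc", a currency symbol).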
+
+See also:
+
+* **PEP 3131** – Supporting Non-ASCII Identifiers
+
+* **PEP 672** – Unicode-related Security Considerations for Python
 ''',
 'if': r'''The "if" statement
 ******************
@@ -10859,119 +10907,56 @@ class is used in a class pattern with positional arguments, each
 
 Added in version 3.6.
 
+Changed in version 3.7: The "await" and "async for" can be used in
+expressions within f-strings.
+
+Changed in version 3.8: Added the debug specifier ("=")
+
+Changed in version 3.12: Many restrictions on expressions within
+f-strings have been removed. Notably, nested strings, comments, and
+backslashes are now permitted.
+
 A *formatted string literal* or *f-string* is a string literal that is
-prefixed with ‘"f"’ or ‘"F"’. These strings may contain replacement
-fields, which are expressions delimited by curly braces "{}". While
-other string literals always have a constant value, formatted strings
-are really expressions evaluated at run time.
+prefixed with ‘"f"’ or ‘"F"’. Unlike other string literals, f-strings
+do not have a constant value. They may contain *replacement fields*
+delimited by curly braces "{}". Replacement fields contain expressions
+which are evaluated at run time. For example:
 
-Escape sequences are decoded like in ordinary string literals (except
-when a literal is also marked as a raw string). After decoding, the
-grammar for the contents of the string is:
+>>> who = 'nobody'
+>>> nationality = 'Spanish'
+>>> f'{who.title()} expects the {nationality} Inquisition!'
+'Nobody expects the Spanish Inquisition!'
 
-f_string: (literal_char | "{{" | "}}" | replacement_field)*
-replacement_field: "{" f_expression ["="] ["!" conversion] [":" format_spec] "}"
-f_expression: (conditional_expression | "*" or_expr)
-("," conditional_expression | "," "*" or_expr)* [","]
-| yield_expression
-conversion: "s" | "r" | "a"
-format_spec: (literal_char | replacement_field)*
-literal_char: <any code point except "{", "}" or NULL>
+Any doubled curly braces ("{{" or "}}") outside replacement fields are
+replaced with the corresponding single curly brace:
 
-The parts of the string outside curly braces are treated literally,
-except that any doubled curly braces "'{{'" or "'}}'" are replaced
-with the corresponding single curly brace. A single opening curly
-bracket "'{'" marks a replacement field, which starts with a Python
-expression. To display both the expression text and its value after
-evaluation, (useful in debugging), an equal sign "'='" may be added
-after the expression. A conversion field, introduced by an exclamation
-point "'!'" may follow. A format specifier may also be appended,
-introduced by a colon "':'". A replacement field ends with a closing
-curly bracket "'}'".
+>>> print(f'{{...}}')
+{...}
 
+Other characters outside replacement fields are treated like in
+ordinary string literals. This means that escape sequences are decoded
+(except when a literal is also marked as a raw string), and newlines
+are possible in triple-quoted f-strings:
+
+>>> name = 'Galahad'
+>>> favorite_color = 'blue'
+>>> print(f'{name}:\\t{favorite_color}')
+Galahad: blue
+>>> print(rf"C:\\Users\\{name}")
+C:\\Users\\Galahad
+>>> print(f\'\'\'Three shall be the number of the counting
+... and the number of the counting shall be three.\'\'\')
+Three shall be the number of the counting
+and the number of the counting shall be three.
+
 Expressions in formatted string literals are treated like regular
-Python expressions surrounded by parentheses, with a few exceptions.
-An empty expression is not allowed, and both "lambda" and assignment
-expressions ":=" must be surrounded by explicit parentheses. Each
-expression is evaluated in the context where the formatted string
-literal appears, in order from left to right. Replacement expressions
-can contain newlines in both single-quoted and triple-quoted f-strings
-and they can contain comments. Everything that comes after a "#"
-inside a replacement field is a comment (even closing braces and
-quotes). In that case, replacement fields must be closed in a
-different line.
+Python expressions. Each expression is evaluated in the context where
+the formatted string literal appears, in order from left to right. An
+empty expression is not allowed, and both "lambda" and assignment
+expressions ":=" must be surrounded by explicit parentheses:
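
For example, a "lambda" is accepted once it is parenthesized
(illustrative snippet):

>>> f"{(lambda x: x * 2)(21)}"   # without the parentheses: a syntax error
'42'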
>>> f"abc{a # This is a comment }"
|
||||
... + 3}"
|
||||
'abc5'
|
||||
|
||||
Changed in version 3.7: Prior to Python 3.7, an "await" expression and
|
||||
comprehensions containing an "async for" clause were illegal in the
|
||||
expressions in formatted string literals due to a problem with the
|
||||
implementation.
|
||||
|
||||
Changed in version 3.12: Prior to Python 3.12, comments were not
|
||||
allowed inside f-string replacement fields.
|
||||
|
||||
When the equal sign "'='" is provided, the output will have the
|
||||
expression text, the "'='" and the evaluated value. Spaces after the
|
||||
opening brace "'{'", within the expression and after the "'='" are all
|
||||
retained in the output. By default, the "'='" causes the "repr()" of
|
||||
the expression to be provided, unless there is a format specified.
|
||||
When a format is specified it defaults to the "str()" of the
|
||||
expression unless a conversion "'!r'" is declared.
|
||||
|
||||
Added in version 3.8: The equal sign "'='".
|
||||
|
||||
If a conversion is specified, the result of evaluating the expression
|
||||
is converted before formatting. Conversion "'!s'" calls "str()" on
|
||||
the result, "'!r'" calls "repr()", and "'!a'" calls "ascii()".
|
||||
|
||||
The result is then formatted using the "format()" protocol. The
|
||||
format specifier is passed to the "__format__()" method of the
|
||||
expression or conversion result. An empty string is passed when the
|
||||
format specifier is omitted. The formatted result is then included in
|
||||
the final value of the whole string.
|
||||
|
||||
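
A minimal sketch of that protocol, using a hypothetical "Angle" class
whose "__format__()" method receives the format specifier written in
the replacement field:

>>> class Angle:
...     def __init__(self, degrees):
...         self.degrees = degrees
...     def __format__(self, spec):
...         # "spec" is the text after ":" in the replacement field.
...         return f"{self.degrees:{spec or '.1f'}}°"
...
>>> f"{Angle(90)}"       # empty specifier is passed to __format__()
'90.0°'
>>> f"{Angle(90):.3f}"
'90.000°'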
-
-Top-level format specifiers may include nested replacement fields.
-These nested fields may include their own conversion fields and format
-specifiers, but may not include more deeply nested replacement fields.
-The format specifier mini-language is the same as that used by the
-"str.format()" method.
-
-Formatted string literals may be concatenated, but replacement fields
-cannot be split across literals.
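
A brief illustration of such concatenation; each replacement field has
to stay within a single literal:

>>> x = 1
>>> "value: " f"{x}" " (concatenated)"
'value: 1 (concatenated)'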
-
-Some examples of formatted string literals:
-
->>> name = "Fred"
->>> f"He said his name is {name!r}."
-"He said his name is 'Fred'."
->>> f"He said his name is {repr(name)}." # repr() is equivalent to !r
-"He said his name is 'Fred'."
->>> width = 10
->>> precision = 4
->>> value = decimal.Decimal("12.34567")
->>> f"result: {value:{width}.{precision}}" # nested fields
-'result:      12.35'
->>> today = datetime(year=2017, month=1, day=27)
->>> f"{today:%B %d, %Y}" # using date format specifier
-'January 27, 2017'
->>> f"{today=:%B %d, %Y}" # using date format specifier and debugging
-'today=January 27, 2017'
->>> number = 1024
->>> f"{number:#0x}" # using integer format specifier
-'0x400'
->>> foo = "bar"
->>> f"{ foo = }" # preserves whitespace
-" foo = 'bar'"
->>> line = "The mill's closed"
->>> f"{line = }"
-'line = "The mill\\'s closed"'
->>> f"{line = :20}"
-"line = The mill's closed   "
->>> f"{line = !r:20}"
-'line = "The mill\\'s closed" '
 >>> f'{(half := 1/2)}, {half * 42}'
 '0.5, 21.0'
 
 Reusing the outer f-string quoting type inside a replacement field is
 permitted:
 
@@ -10980,10 +10965,6 @@ class is used in a class pattern with positional arguments, each
 
 >>> f"abc {a["x"]} def"
 'abc 2 def'
 
-Changed in version 3.12: Prior to Python 3.12, reuse of the same
-quoting type of the outer f-string inside a replacement field was not
-possible.
-
 Backslashes are also allowed in replacement fields and are evaluated
 the same way as in any other context:
 
@@ -10994,21 +10975,84 @@ class is used in a class pattern with positional arguments, each
 
 b
 c
 
-Changed in version 3.12: Prior to Python 3.12, backslashes were not
-permitted inside an f-string replacement field.
+It is possible to nest f-strings:
 
-Formatted string literals cannot be used as docstrings, even if they
-do not include expressions.
+>>> name = 'world'
+>>> f'Repeated:{f' hello {name}' * 3}'
+'Repeated: hello world hello world hello world'
+
+Portable Python programs should not use more than 5 levels of nesting.
+
+**CPython implementation detail:** CPython does not limit nesting of
+f-strings.
+
+Replacement expressions can contain newlines in both single-quoted and
+triple-quoted f-strings and they can contain comments. Everything that
+comes after a "#" inside a replacement field is a comment (even
+closing braces and quotes). This means that replacement fields with
+comments must be closed in a different line:
+
+>>> a = 2
+>>> f"abc{a # This comment }" continues until the end of the line
+... + 3}"
+'abc5'
+
+After the expression, replacement fields may optionally contain:
+
+* a *debug specifier* – an equal sign ("="), optionally surrounded by
+whitespace on one or both sides;
+
+* a *conversion specifier* – "!s", "!r" or "!a"; and/or
+
+* a *format specifier* prefixed with a colon (":").
+
+See the Standard Library section on f-strings for details on how these
+fields are evaluated.
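
A short illustration of the three optional parts (their exact
evaluation is specified in the Standard Library documentation):

>>> answer = 42
>>> f"{answer=}"      # debug specifier
'answer=42'
>>> f"{'π'!a}"        # conversion specifier
"'\\u03c0'"
>>> f"{answer:06}"    # format specifier
'000042'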
+
+As that section explains, *format specifiers* are passed as the second
+argument to the "format()" function to format a replacement field
+value. For example, they can be used to specify a field width and
+padding characters using the Format Specification Mini-Language:
+
+>>> number = 14.3
+>>> f'{number:20.7f}'
+'          14.3000000'
+
+Top-level format specifiers may include nested replacement fields:
+
+>>> field_size = 20
+>>> precision = 7
+>>> f'{number:{field_size}.{precision}f}'
+'          14.3000000'
+
+These nested fields may include their own conversion fields and format
+specifiers:
+
+>>> number = 3
+>>> f'{number:{field_size}}'
+'                   3'
+>>> f'{number:{field_size:05}}'
+'00000000000000000003'
+
+However, these nested fields may not include more deeply nested
+replacement fields.
+
+Formatted string literals cannot be used as *docstrings*, even if they
+do not include expressions:
+
 >>> def foo():
 ...     f"Not a docstring"
 ...
->>> foo.__doc__ is None
-True
+>>> print(foo.__doc__)
+None
 
-See also **PEP 498** for the proposal that added formatted string
-literals, and "str.format()", which uses a related format string
-mechanism.
+See also:
+
+* **PEP 498** – Literal String Interpolation
+
+* **PEP 701** – Syntactic formalization of f-strings
+
+* "str.format()", which uses a related format string mechanism.
 
 
 t-strings
@@ -11017,34 +11061,90 @@ class is used in a class pattern with positional arguments, each
 
 Added in version 3.14.
 
 A *template string literal* or *t-string* is a string literal that is
-prefixed with ‘"t"’ or ‘"T"’. These strings follow the same syntax and
-evaluation rules as formatted string literals, with the following
-differences:
+prefixed with ‘"t"’ or ‘"T"’. These strings follow the same syntax
+rules as formatted string literals. For differences in evaluation
+rules, see the Standard Library section on t-strings.
 
-* Rather than evaluating to a "str" object, template string literals
-evaluate to a "string.templatelib.Template" object.
-
-* The "format()" protocol is not used. Instead, the format specifier
-and conversions (if any) are passed to a new "Interpolation" object
-that is created for each evaluated expression. It is up to code that
-processes the resulting "Template" object to decide how to handle
-format specifiers and conversions.
+
+Formal grammar for f-strings
+============================
 
-* Format specifiers containing nested replacement fields are evaluated
-eagerly, prior to being passed to the "Interpolation" object. For
-instance, an interpolation of the form "{amount:.{precision}f}" will
-evaluate the inner expression "{precision}" to determine the value
-of the "format_spec" attribute. If "precision" were to be "2", the
-resulting format specifier would be "'.2f'".
+F-strings are handled partly by the *lexical analyzer*, which produces
+the tokens "FSTRING_START", "FSTRING_MIDDLE" and "FSTRING_END", and
+partly by the parser, which handles expressions in the replacement
+field. The exact way the work is split is a CPython implementation
+detail.
 
-* When the equals sign "'='" is provided in an interpolation
-expression, the text of the expression is appended to the literal
-string that precedes the relevant interpolation. This includes the
-equals sign and any surrounding whitespace. The "Interpolation"
-instance for the expression will be created as normal, except that
-"conversion" will be set to ‘"r"’ ("repr()") by default. If an
-explicit conversion or format specifier are provided, this will
-override the default behaviour.
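
As a minimal illustration of the "Template"/"Interpolation" behaviour
described in the bullets above, assuming the "string.templatelib" API
described in **PEP 750** and the Standard Library documentation:

>>> from string.templatelib import Template
>>> amount, precision = 0.5, 2
>>> template = t"Total: {amount:.{precision}f}!"
>>> isinstance(template, Template)
True
>>> template.strings
('Total: ', '!')
>>> template.interpolations[0].value
0.5
>>> template.interpolations[0].format_spec   # nested field evaluated eagerly
'.2f'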
+Correspondingly, the f-string grammar is a mix of lexical and
+syntactic definitions.
+
+Whitespace is significant in these situations:
+
+* There may be no whitespace in "FSTRING_START" (between the prefix
+and quote).
+
+* Whitespace in "FSTRING_MIDDLE" is part of the literal string
+contents.
+
+* In "fstring_replacement_field", if "f_debug_specifier" is present,
+all whitespace after the opening brace until the
+"f_debug_specifier", as well as whitespace immediately following
+"f_debug_specifier", is retained as part of the expression.
+
+**CPython implementation detail:** The expression is not handled in
+the tokenization phase; it is retrieved from the source code using
+locations of the "{" token and the token after "=".
+
+The "FSTRING_MIDDLE" definition uses negative lookaheads ("!") to
+indicate special characters (backslash, newline, "{", "}") and
+sequences ("f_quote").
+
+fstring: FSTRING_START fstring_middle* FSTRING_END
+
+FSTRING_START: fstringprefix ("'" | '"' | "\'\'\'" | '"""')
+FSTRING_END: f_quote
+fstringprefix: <("f" | "fr" | "rf"), case-insensitive>
+f_debug_specifier: '='
+f_quote: <the quote character(s) used in FSTRING_START>
+
+fstring_middle:
+| fstring_replacement_field
+| FSTRING_MIDDLE
+FSTRING_MIDDLE:
+| (!"\\" !newline !'{' !'}' !f_quote) source_character
+| stringescapeseq
+| "{{"
+| "}}"
+| <newline, in triple-quoted f-strings only>
+fstring_replacement_field:
+| '{' f_expression [f_debug_specifier] [fstring_conversion]
+[fstring_full_format_spec] '}'
+fstring_conversion:
+| "!" ("s" | "r" | "a")
+fstring_full_format_spec:
+| ':' fstring_format_spec*
+fstring_format_spec:
+| FSTRING_MIDDLE
+| fstring_replacement_field
+f_expression:
+| ','.(conditional_expression | "*" or_expr)+ [","]
+| yield_expression
+
+Note:
+
+In the above grammar snippet, the "f_quote" and "FSTRING_MIDDLE"
+rules are context-sensitive – they depend on the contents of
+"FSTRING_START" of the nearest enclosing "fstring". Constructing a
+more traditional formal grammar from this template is left as an
+exercise for the reader.
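
One way to observe this split is to run the "tokenize" module over an
f-string: the literal parts arrive as "FSTRING_*" tokens, while the
replacement-field expression arrives as ordinary tokens (illustrative
snippet; "value" and "width" are just placeholder names inside the
tokenized source):

import io
import tokenize

source = 'f"abc{value:>{width}}"\n'
for token in tokenize.generate_tokens(io.StringIO(source).readline):
    # FSTRING_START / FSTRING_MIDDLE / FSTRING_END come from the lexer;
    # the expression inside the braces shows up as NAME/OP tokens.
    print(tokenize.tok_name[token.type], repr(token.string))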
+
+The grammar for t-strings is identical to the one for f-strings, with
+*t* instead of *f* at the beginning of rule and token names and in the
+prefix.
+
+tstring: TSTRING_START tstring_middle* TSTRING_END
+
+<rest of the t-string grammar is omitted; see above>
 ''',
 'subscriptions': r'''Subscriptions
 *************
 
@@ -11603,7 +11703,7 @@ def foo():
 
 Sets
 These represent a mutable set. They are created by the built-in
 "set()" constructor and can be modified afterwards by several
-methods, such as "add".
+methods, such as "add()".
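
A small illustration of that behaviour:

>>> s = set()
>>> s.add('spam')
>>> s.add('spam')   # adding an element that is already present has no effect
>>> s
{'spam'}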
 
 Frozen sets
 These represent an immutable set. They are created by the built-in