Merge branch 'main' into feat/gc-gen-3.15+heap_size

This commit is contained in:
Sergey Miryanov 2026-05-03 17:05:09 +05:00
commit da497d1cbd
178 changed files with 7136 additions and 2419 deletions

View file

@ -26,9 +26,16 @@ apt-get -yq --no-install-recommends install \
xvfb \
zlib1g-dev
# Workaround missing libmpdec-dev on ubuntu 24.04:
# https://launchpad.net/~ondrej/+archive/ubuntu/php
# https://deb.sury.org/
sudo add-apt-repository ppa:ondrej/php
apt-get update
apt-get -yq --no-install-recommends install libmpdec-dev
# Workaround missing libmpdec-dev on ubuntu 24.04 by building mpdecimal
# from source. The ppa:ondrej/php archive (launchpad.net) is unreliable
# (https://status.canonical.com) so fetch the tarball directly
# from the upstream host.
# https://www.bytereef.org/mpdecimal/
MPDECIMAL_VERSION=4.0.1
curl -fsSL "https://www.bytereef.org/software/mpdecimal/releases/mpdecimal-${MPDECIMAL_VERSION}.tar.gz" \
| tar -xz -C /tmp
(cd "/tmp/mpdecimal-${MPDECIMAL_VERSION}" \
&& ./configure --prefix=/usr/local \
&& make -j"$(nproc)" \
&& make install)
ldconfig

1
.gitignore vendored
View file

@ -140,6 +140,7 @@ Tools/unicode/data/
/.ccache
/cross-build*/
/jit_stencils*.h
/jit_unwind_info*.h
/platform
/profile-clean-stamp
/profile-run-stamp

View file

@ -31,7 +31,7 @@ Note that holding an :term:`attached thread state` is not required for these API
or ``-2`` on failure to create a lock. Check ``errno`` for more information
about the cause of a failure.
.. c:function:: int PyUnstable_WritePerfMapEntry(const void *code_addr, unsigned int code_size, const char *entry_name)
.. c:function:: int PyUnstable_WritePerfMapEntry(const void *code_addr, size_t code_size, const char *entry_name)
Write one single entry to the ``/tmp/perf-$pid.map`` file. This function is
thread safe. Here is what an example entry looks like::

View file

@ -38,3 +38,8 @@ Pending removal in Python 3.20
- :mod:`zlib`
(Contributed by Hugo van Kemenade and Stan Ulbrych in :gh:`76007`.)
* :mod:`ast`:
* Creating instances of abstract AST nodes (such as :class:`ast.AST`
or :class:`!ast.expr`) is deprecated and will raise an error in Python 3.20.

View file

@ -1924,7 +1924,7 @@ correctly using identity tests:
.. code-block:: python
_sentinel = object()
_sentinel = sentinel('_sentinel')
def pop(self, key, default=_sentinel):
if key in self:

View file

@ -39,10 +39,11 @@ Glossary
ABCs with the :mod:`abc` module.
annotate function
A function that can be called to retrieve the :term:`annotations <annotation>`
of an object. This function is accessible as the :attr:`~object.__annotate__`
attribute of functions, classes, and modules. Annotate functions are a
subset of :term:`evaluate functions <evaluate function>`.
A callable that can be called to retrieve the :term:`annotations <annotation>` of
an object. Annotate functions are usually :term:`functions <function>`,
automatically generated as the :attr:`~object.__annotate__` attribute of functions,
classes, and modules. Annotate functions are a subset of
:term:`evaluate functions <evaluate function>`.
annotation
A label associated with a variable, a class

View file

@ -594,7 +594,7 @@ a pure Python equivalent:
def object_getattribute(obj, name):
"Emulate PyObject_GenericGetAttr() in Objects/object.c"
null = object()
null = sentinel('null')
objtype = type(obj)
cls_var = find_name_in_mro(objtype, name, null)
descr_get = getattr(type(cls_var), '__get__', null)
@ -1635,12 +1635,12 @@ by member descriptors:
.. testcode::
null = object()
null = sentinel('null')
class Member:
def __init__(self, name, clsname, offset):
'Emulate PyMemberDef in Include/structmember.h'
'Emulate PyMemberDef in Include/descrobject.h'
# Also see descr_new() in Objects/descrobject.c
self.name = name
self.clsname = clsname

View file

@ -217,8 +217,9 @@ Example, using the :mod:`sys` APIs in file :file:`example.py`:
How to obtain the best results
------------------------------
For best results, Python should be compiled with
``CFLAGS="-fno-omit-frame-pointer -mno-omit-leaf-frame-pointer"`` as this allows
For best results, keep frame pointers enabled. On supported GCC-compatible
toolchains, CPython builds itself with ``-fno-omit-frame-pointer`` and, when
available, ``-mno-omit-leaf-frame-pointer`` by default. These flags allow
profilers to unwind using only the frame pointer rather than DWARF debug
information. This is because the code that is interposed to allow ``perf``
support is dynamically generated, so it doesn't have any DWARF debugging information

View file

@ -510,6 +510,81 @@ annotations from the class and puts them in a separate attribute:
return typ
Creating a custom callable annotate function
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Custom :term:`annotate functions <annotate function>` may be ordinary functions like those
automatically generated for functions, classes, and modules. Alternatively, they may take
advantage of the encapsulation provided by classes: any :term:`callable` can be used as
an :term:`annotate function`.
To provide the :attr:`~Format.VALUE`, :attr:`~Format.STRING`, or
:attr:`~Format.FORWARDREF` formats directly, an :term:`annotate function` must provide
the following attribute:
* A callable ``__call__`` with signature ``__call__(format, /) -> dict``, that does not
raise a :exc:`NotImplementedError` when called with a supported format.
To provide the :attr:`~Format.VALUE_WITH_FAKE_GLOBALS` format, which is used to
automatically generate :attr:`~Format.STRING` or :attr:`~Format.FORWARDREF` if they are
not supported directly, :term:`annotate functions <annotate function>` must provide the
following attributes:
* A callable ``__call__`` with signature ``__call__(format, /) -> dict``, that does not
raise a :exc:`NotImplementedError` when called with
:attr:`~Format.VALUE_WITH_FAKE_GLOBALS`.
* A :ref:`code object <code-objects>` ``__code__`` containing the compiled code for the
annotate function.
* Optional: A tuple of the function's positional defaults ``__defaults__``, if the
function represented by ``__code__`` uses any positional defaults.
* Optional: A dict of the function's keyword defaults ``__kwdefaults__``, if the function
represented by ``__code__`` uses any keyword defaults.
* Optional: All other :ref:`function attributes <inspect-types>`.
.. code-block:: python
class Annotate:
called_formats = []
def __call__(self, format=None, /, *, _self=None):
# When called with fake globals, `_self` will be the
# actual self value, and `self` will be the format.
if _self is not None:
self, format = _self, self
self.called_formats.append(format)
if format <= 2: # VALUE or VALUE_WITH_FAKE_GLOBALS
return {"x": MyType}
raise NotImplementedError
__code__ = __call__.__code__
__defaults__ = (None,)
__kwdefaults__ = property(lambda self: dict(_self=self))
__globals__ = {}
__builtins__ = {}
__closure__ = None
This can then be called with:
.. code-block:: pycon
>>> from annotationlib import call_annotate_function, Format
>>> call_annotate_function(Annotate(), format=Format.STRING)
{'x': 'MyType'}
Or used as the annotate function for an object:
.. code-block:: pycon
>>> from annotationlib import get_annotations, Format
>>> class C:
... pass
>>> C.__annotate__ = Annotate()
>>> get_annotations(C, format=Format.STRING)
{'x': 'MyType'}
Limitations of the ``STRING`` format
------------------------------------

View file

@ -42,7 +42,7 @@ Node classes
.. class:: AST
This is the base of all AST node classes. The actual node classes are
This is the abstract base of all AST node classes. The actual node classes are
derived from the :file:`Parser/Python.asdl` file, which is reproduced
:ref:`above <abstract-grammar>`. They are defined in the :mod:`!_ast` C
module and re-exported in :mod:`!ast`.
@ -168,6 +168,15 @@ Node classes
arguments that were set as attributes of the AST node, even if they did not
match any of the fields of the AST node. These cases now raise a :exc:`TypeError`.
.. deprecated-removed:: next 3.20
In the :ref:`grammar above <abstract-grammar>`, the AST node classes that
correspond to production rules with variants (aka "sums") are abstract
classes. Previous versions of Python allowed for the creation of direct
instances of these abstract node classes. This behavior is deprecated and
will be removed in Python 3.20.
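The distinction can be sketched quickly (runnable today; the deprecation warning itself only appears on versions that implement this change):

```python
import ast

# Concrete node classes remain instantiable as before.
node = ast.Constant(value=42)
assert isinstance(node, ast.AST)
assert isinstance(node, ast.expr)

# The deprecation targets direct instantiation of abstract nodes such as
# ast.expr or ast.AST itself; on affected versions that emits a
# DeprecationWarning and is slated to become an error in Python 3.20.
```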
.. note::
The descriptions of the specific node classes displayed here
were initially adapted from the fantastic `Green Tree
@ -271,18 +280,25 @@ Root nodes
Literals
^^^^^^^^
.. class:: Constant(value)
.. class:: Constant(value, kind)
A constant value. The ``value`` attribute of the ``Constant`` literal contains the
Python object it represents. The values represented can be instances of :class:`str`,
:class:`bytes`, :class:`int`, :class:`float`, :class:`complex`, and :class:`bool`,
and the constants :data:`None` and :data:`Ellipsis`.
The ``kind`` attribute is an optional string. For string literals with a
``u`` prefix, ``kind`` is set to ``'u'``. For all other
constants, ``kind`` is ``None``.
.. doctest::
>>> print(ast.dump(ast.parse('123', mode='eval'), indent=4))
Expression(
body=Constant(value=123))
>>> print(ast.dump(ast.parse("u'hello'", mode='eval'), indent=4))
Expression(
body=Constant(value='hello', kind='u'))
.. class:: FormattedValue(value, conversion, format_spec)
@ -2536,6 +2552,20 @@ and classes for traversing abstract syntax trees:
Added the *color* parameter.
.. function:: compare(a, b, /, *, compare_attributes=False)
Recursively compares two ASTs.
*compare_attributes* affects whether AST attributes are considered
in the comparison. If *compare_attributes* is ``False`` (default), then
attributes are ignored. Otherwise they must all be equal. This
option is useful to check whether the ASTs are structurally equal but
differ in whitespace or similar details. Attributes include line numbers
and column offsets.
.. versionadded:: 3.14
.. _ast-compiler-flags:
Compiler flags
@ -2571,20 +2601,6 @@ effects on the compilation of a program:
.. versionadded:: 3.8
.. function:: compare(a, b, /, *, compare_attributes=False)
Recursively compares two ASTs.
*compare_attributes* affects whether AST attributes are considered
in the comparison. If *compare_attributes* is ``False`` (default), then
attributes are ignored. Otherwise they must all be equal. This
option is useful to check whether the ASTs are structurally equal but
differ in whitespace or similar details. Attributes include line numbers
and column offsets.
.. versionadded:: 3.14
.. _ast-cli:
Command-line usage

View file

@ -331,10 +331,14 @@ Compressing and decompressing data in memory
If *max_length* is non-negative, the method returns at most *max_length*
bytes of decompressed data. If this limit is reached and further
output can be produced, the :attr:`~.needs_input` attribute will
be set to ``False``. In this case, the next call to
output can be produced (or EOF is reached), the :attr:`~.needs_input`
attribute will be set to ``False``. In this case, the next call to
:meth:`~.decompress` may provide *data* as ``b''`` to obtain
more of the output.
more of the output. The full content can thus be read like::
process_output(d.decompress(data, max_length))
while not d.eof and not d.needs_input:
process_output(d.decompress(b"", max_length))
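The loop above can be exercised end to end; a minimal sketch using :class:`!BZ2Decompressor` with an arbitrary chunk size of 64 bytes:

```python
import bz2

original = b"hello " * 1000
compressed = bz2.compress(original)

d = bz2.BZ2Decompressor()
chunks = [d.decompress(compressed, 64)]  # first call supplies all the input
while not d.eof and not d.needs_input:
    # more output is pending: pass b"" to drain it 64 bytes at a time
    chunks.append(d.decompress(b"", 64))

assert b"".join(chunks) == original
```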
If all of the input data was decompressed and returned (either
because this was less than *max_length* bytes, or because

View file

@ -403,11 +403,26 @@ added matters. To illustrate::
.. attribute:: utf8
If ``False``, follow :rfc:`5322`, supporting non-ASCII characters in
headers by encoding them as "encoded words". If ``True``, follow
:rfc:`6532` and use ``utf-8`` encoding for headers. Messages
headers by encoding them as :rfc:`2047` "encoded words". If ``True``,
follow :rfc:`6532` and use ``utf-8`` encoding for headers. Messages
formatted in this way may be passed to SMTP servers that support
the ``SMTPUTF8`` extension (:rfc:`6531`).
When ``False``, the generator will raise
:exc:`~email.errors.HeaderWriteError` if any header includes non-ASCII
characters in a context where :rfc:`2047` does not permit encoded words.
This particularly applies to mailboxes ("addr-spec") with non-ASCII
characters, which can be created via
:class:`~email.headerregistry.Address`. To use a mailbox with a non-ASCII
domain name with ``utf8=False``, first encode the domain using the
third-party :pypi:`idna` or :pypi:`uts46` module or with
:mod:`encodings.idna`. It is not possible to use a non-ASCII username
("local-part") in a mailbox when ``utf8=False``.
.. versionchanged:: 3.15
Can trigger the raising of :exc:`~email.errors.HeaderWriteError`.
(Earlier versions incorrectly applied :rfc:`2047` in certain contexts,
most notably in addr-specs.)
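The contrast between the two settings can be sketched with the stock ``SMTP`` (``utf8=False``) and ``SMTPUTF8`` (``utf8=True``) policies; the non-ASCII subject line here is only an illustration:

```python
from email.message import EmailMessage
from email.policy import SMTP, SMTPUTF8

def flatten(policy):
    msg = EmailMessage(policy=policy)
    msg["From"] = "a@example.com"
    msg["Subject"] = "Grüße"
    msg.set_content("hi")
    return msg.as_bytes()

# utf8=True: the header is emitted as raw UTF-8 (RFC 6532)
assert "Grüße".encode("utf-8") in flatten(SMTPUTF8)
# utf8=False: the header becomes an RFC 2047 encoded word
assert b"=?utf-8?" in flatten(SMTP)
```

Non-ASCII in an ordinary header like ``Subject`` is always representable; the error described above arises only where encoded words are not permitted, such as in addr-specs.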
.. attribute:: refold_source

View file

@ -366,7 +366,8 @@ instantiation, of which this module provides three different variants:
delays, it now always returns the IP address.
.. class:: SimpleHTTPRequestHandler(request, client_address, server, directory=None)
.. class:: SimpleHTTPRequestHandler(request, client_address, server, \
*, directory=None, extra_response_headers=None)
This class serves files from the directory *directory* and below,
or the current directory if *directory* is not provided, directly
@ -378,6 +379,9 @@ instantiation, of which this module provides three different variants:
.. versionchanged:: 3.9
The *directory* parameter accepts a :term:`path-like object`.
.. versionchanged:: next
Added *extra_response_headers* parameter.
A lot of the work, such as parsing the request, is done by the base class
:class:`BaseHTTPRequestHandler`. This class implements the :func:`do_GET`
and :func:`do_HEAD` functions.
@ -408,6 +412,15 @@ instantiation, of which this module provides three different variants:
This dictionary is no longer filled with the default system mappings,
but only contains overrides.
.. attribute:: extra_response_headers
A sequence of ``(name, value)`` pairs containing user-defined extra HTTP
response headers to add to each successful HTTP status 200 response. These
headers are not included in other status code responses.
Headers that the server sends automatically such as ``Content-Type``
will not be overwritten by :attr:`!extra_response_headers`.
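On interpreters that predate this parameter, comparable behavior can be sketched by overriding :meth:`!end_headers` (the ``X-Custom`` header name below is just an illustration); note that unlike the new parameter, this override adds the header to every response, not only 200s:

```python
import http.server
import threading
import urllib.request

class Handler(http.server.SimpleHTTPRequestHandler):
    def end_headers(self):
        # emulate extra_response_headers by injecting before the blank line
        self.send_header("X-Custom", "demo")
        super().end_headers()

    def log_message(self, *args):
        pass  # keep the example quiet

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as resp:
    custom = resp.headers["X-Custom"]
server.shutdown()
assert custom == "demo"
```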
The :class:`SimpleHTTPRequestHandler` class defines the following methods:
.. method:: do_HEAD()
@ -440,6 +453,9 @@ instantiation, of which this module provides three different variants:
followed by a ``'Content-Length:'`` header with the file's size and a
``'Last-Modified:'`` header with the file's modification time.
The instance attribute :attr:`extra_response_headers` is a sequence of
``(name, value)`` pairs containing user-defined extra response headers.
Then follows a blank line signifying the end of the headers, and then the
contents of the file are output.
@ -581,6 +597,15 @@ The following options are accepted:
.. versionadded:: 3.14
.. option:: -H, --header <header> <value>
Specify an additional HTTP response header to send on successful HTTP
200 responses. May be used multiple times to send several custom response
headers. Headers that the server sends automatically (for instance
``Content-Type``) will not be overwritten.
.. versionadded:: next
.. _http.server-security:

View file

@ -713,7 +713,7 @@ However, for reading convenience, most of the examples show sorted sequences.
.. function:: covariance(x, y, /)
Return the sample covariance of two inputs *x* and *y*. Covariance
Return the sample covariance of two sequence inputs *x* and *y*. Covariance
is a measure of the joint variability of two inputs.
Both inputs must be of the same length (no less than two), otherwise
@ -739,7 +739,7 @@ However, for reading convenience, most of the examples show sorted sequences.
Return the `Pearson's correlation coefficient
<https://en.wikipedia.org/wiki/Pearson_correlation_coefficient>`_
for two inputs. Pearson's correlation coefficient *r* takes values
for two sequence inputs. Pearson's correlation coefficient *r* takes values
between -1 and +1. It measures the strength and direction of a linear
relationship.
@ -802,7 +802,7 @@ However, for reading convenience, most of the examples show sorted sequences.
(it is equal to the difference between predicted and actual values
of the dependent variable).
Both inputs must be of the same length (no less than two), and
Both inputs must be sequences of the same length (no less than two), and
the independent variable *x* cannot be constant;
otherwise a :exc:`StatisticsError` is raised.

View file

@ -142,6 +142,10 @@ Some facts and figures:
a Zstandard dictionary used to improve compression of smaller amounts of
data.
For modes ``'w:gz'`` and ``'w|gz'``, :func:`tarfile.open` accepts the
keyword argument *mtime* to create a gzip archive header with that mtime. By
default, the mtime is set to the time of creation of the archive.
For special purposes, there is a second format for *mode*:
``'filemode|[compression]'``. :func:`tarfile.open` will return a :class:`TarFile`
object that processes its data as a stream of blocks. No random seeking will

View file

@ -1436,3 +1436,159 @@ is equivalent to::
Currently, :class:`Lock`, :class:`RLock`, :class:`Condition`,
:class:`Semaphore`, and :class:`BoundedSemaphore` objects may be used as
:keyword:`with` statement context managers.
Iterator synchronization
------------------------
By default, Python iterators do not support concurrent access. Most iterators make
no guarantees when accessed simultaneously from multiple threads. Generator
iterators, for example, raise :exc:`ValueError` if one of their iterator methods
is called while the generator is already executing. The tools in this section
allow reliable concurrency support to be added to ordinary iterators and
iterator-producing callables.
The :class:`serialize_iterator` wrapper lets multiple threads share a single iterator and
take turns consuming from it. While one thread is running ``__next__()``, the
others block until the iterator becomes available. Each value produced by the
underlying iterator is delivered to exactly one caller.
The :func:`concurrent_tee` function lets multiple threads each receive the full
stream of values from one underlying iterator. It creates independent iterators
that all draw from the same source. Values are buffered until consumed by all
of the derived iterators.
.. class:: serialize_iterator(iterable)
Return an iterator wrapper that serializes concurrent calls to
:meth:`~iterator.__next__` using a lock.
If the wrapped iterator also defines :meth:`~generator.send`,
:meth:`~generator.throw`, or :meth:`~generator.close`, those calls
are serialized as well.
This makes it possible to share a single iterator, including a generator
iterator, between multiple threads. A lock ensures that calls are handled
one at a time. No values are duplicated or skipped by the wrapper itself.
Each item from the underlying iterator is given to exactly one caller.
This wrapper does not copy or buffer values. Threads that call
:func:`next` while another thread is already advancing the iterator will
block until the active call completes.
Example:
.. code-block:: python
import threading
def squares(n):
for x in range(n):
yield x * x
def consume(name, iterable):
for item in iterable:
print(name, item)
source = threading.serialize_iterator(squares(5))
t1 = threading.Thread(target=consume, args=("left", source))
t2 = threading.Thread(target=consume, args=("right", source))
t1.start()
t2.start()
t1.join()
t2.join()
In this example, each number is printed exactly once, but the work is shared
between the two threads.
.. versionadded:: next
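On interpreters without :class:`!serialize_iterator`, the core idea (a single lock serializing ``__next__`` calls) can be sketched portably; ``SerializedIterator`` below is a hypothetical stand-in, not the stdlib implementation:

```python
import threading

class SerializedIterator:
    """Sketch: serialize concurrent __next__ calls with a lock."""

    def __init__(self, iterable):
        self._it = iter(iterable)
        self._lock = threading.Lock()

    def __iter__(self):
        return self

    def __next__(self):
        with self._lock:  # one thread advances the iterator at a time
            return next(self._it)

def squares(n):
    for x in range(n):
        yield x * x

source = SerializedIterator(squares(100))
seen, seen_lock = [], threading.Lock()

def consume():
    for item in source:
        with seen_lock:
            seen.append(item)

threads = [threading.Thread(target=consume) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# every value is delivered to exactly one consumer
assert sorted(seen) == [x * x for x in range(100)]
```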
.. function:: synchronized_iterator(func)
Wrap an iterator-producing callable so that each iterator it returns is
automatically passed through :class:`serialize_iterator`.
This is especially useful as a :term:`decorator` for generator functions,
allowing their generator-iterators to be consumed from multiple threads.
Example:
.. code-block:: python
import threading
@threading.synchronized_iterator
def squares(n):
for x in range(n):
yield x * x
def consume(name, iterable):
for item in iterable:
print(name, item)
source = squares(5)
t1 = threading.Thread(target=consume, args=("left", source))
t2 = threading.Thread(target=consume, args=("right", source))
t1.start()
t2.start()
t1.join()
t2.join()
The returned wrapper preserves the metadata of *func*, such as its name and
wrapped function reference.
.. versionadded:: next
.. function:: concurrent_tee(iterable, n=2)
Return *n* independent iterators from a single input *iterable*, with
guaranteed behavior when the derived iterators are consumed concurrently.
This function is similar to :func:`itertools.tee`, but is intended for cases
where the source iterator may feed consumers running in different threads.
Each returned iterator yields every value from the underlying iterable, in
the same order.
Internally, values are buffered until every derived iterator has consumed
them.
The returned iterators share the same underlying synchronization lock. Each
individual derived iterator is intended to be consumed by one thread at a
time. If a single derived iterator must itself be shared by multiple
threads, wrap it with :class:`serialize_iterator`.
If *n* is ``0``, return an empty tuple. If *n* is negative, raise
:exc:`ValueError`.
Example:
.. code-block:: python
import threading
def squares(n):
for x in range(n):
yield x * x
def consume(name, iterable):
for item in iterable:
print(name, item)
source = squares(5)
left, right = threading.concurrent_tee(source)
t1 = threading.Thread(target=consume, args=("left", left))
t2 = threading.Thread(target=consume, args=("right", right))
t1.start()
t2.start()
t1.join()
t2.join()
In this example, both consumer threads see the full sequence of squares
from a single generator iterator.
.. versionadded:: next
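The buffering behavior can be sketched with a shared lock and one deque per derived iterator (a simplified illustration, not the stdlib implementation):

```python
import threading
from collections import deque

def concurrent_tee_sketch(iterable, n=2):
    """Sketch of concurrent_tee: buffer values until all iterators consume them."""
    if n < 0:
        raise ValueError("n must be non-negative")
    it = iter(iterable)
    lock = threading.Lock()
    queues = [deque() for _ in range(n)]

    def gen(q):
        while True:
            with lock:
                if not q:
                    try:
                        value = next(it)
                    except StopIteration:
                        return
                    # fan the value out to every derived iterator's buffer
                    for other in queues:
                        other.append(value)
                value = q.popleft()
            yield value

    return tuple(gen(q) for q in queues)

left, right = concurrent_tee_sketch(range(5))
assert list(left) == [0, 1, 2, 3, 4]
assert list(right) == [0, 1, 2, 3, 4]
```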

View file

@ -2361,6 +2361,12 @@ without the dedicated syntax, as documented below.
>>> Alias.__module__
'__main__'
This attribute is writable.
.. versionchanged:: 3.15
The attribute is now writable.
.. attribute:: __type_params__
The type parameters of the type alias, or an empty tuple if the alias is

View file

@ -1095,6 +1095,13 @@ Test cases
self.assertIn('myfile.py', cm.filename)
self.assertEqual(320, cm.lineno)
The context managers can be nested to test that multiple different
warnings are emitted::
with (self.assertWarns(SomeWarning),
self.assertWarns(OtherWarning)):
do_something()
This method works regardless of the warning filters in place when it
is called.
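The single-warning form works on current interpreters and can be sketched as a complete test (the nested form requires the newer behavior described below):

```python
import io
import unittest
import warnings

class WarnTests(unittest.TestCase):
    def test_warns(self):
        # passes if the block emits at least one DeprecationWarning
        with self.assertWarns(DeprecationWarning):
            warnings.warn("old api", DeprecationWarning)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(WarnTests)
result = unittest.TextTestRunner(stream=io.StringIO(), verbosity=0).run(suite)
assert result.wasSuccessful()
```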
@ -1103,6 +1110,10 @@ Test cases
.. versionchanged:: 3.3
Added the *msg* keyword argument when used as a context manager.
.. versionchanged:: next
Warnings that do not match the specified category are no longer
swallowed.
Nested context managers are now supported.
.. method:: assertWarnsRegex(warning, regex, callable, *args, **kwds)
assertWarnsRegex(warning, regex, *, msg=None)
@ -1121,11 +1132,23 @@ Test cases
with self.assertWarnsRegex(RuntimeWarning, 'unsafe frobnicating'):
frobnicate('/etc/passwd')
The context managers can be nested to test that multiple different
warnings are emitted::
with (self.assertWarnsRegex(SomeWarning, regex1),
self.assertWarnsRegex(OtherWarning, regex2)):
do_something()
.. versionadded:: 3.2
.. versionchanged:: 3.3
Added the *msg* keyword argument when used as a context manager.
.. versionchanged:: next
Warnings that do not match the specified category or regex are
no longer swallowed.
Nested context managers are now supported.
.. method:: assertLogs(logger=None, level=None, formatter=None)
A context manager to test that at least one message is logged on

View file

@ -308,6 +308,11 @@ Decompression objects support the following methods and attributes:
:attr:`unconsumed_tail`. This bytestring must be passed to a subsequent call to
:meth:`decompress` if decompression is to continue. If *max_length* is zero
then the whole input is decompressed, and :attr:`unconsumed_tail` is empty.
For example, the full content could be read like::
process_output(d.decompress(data, max_length))
while chunk := d.decompress(d.unconsumed_tail, max_length):
process_output(chunk)
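The loop above can be exercised with a concrete round trip (the 64-byte chunk size is arbitrary):

```python
import zlib

original = b"hello " * 1000
compressed = zlib.compress(original)

d = zlib.decompressobj()
chunks = [d.decompress(compressed, 64)]
# input beyond max_length waits in unconsumed_tail; feed it back until drained
while chunk := d.decompress(d.unconsumed_tail, 64):
    chunks.append(chunk)

assert b"".join(chunks) == original
```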
.. versionchanged:: 3.6
*max_length* can be used as a keyword argument.

View file

@ -780,6 +780,24 @@ also be used to improve performance.
.. versionadded:: 3.14
.. option:: --without-frame-pointers
Disable frame pointers, which are enabled by default (see :pep:`831`).
By default, the build appends ``-fno-omit-frame-pointer`` (and
``-mno-omit-leaf-frame-pointer`` when the compiler supports it) to
``BASECFLAGS`` so profilers, debuggers, and system tracing tools
(``perf``, ``eBPF``, ``dtrace``, ``gdb``) can walk the C call stack
without DWARF metadata. The flags propagate to third-party C
extensions through :mod:`sysconfig`. On compilers that do not
understand them, the build silently skips them.
Downstream packagers and authors of native libraries built with
custom build systems should set the same flags so the unwind chain
stays unbroken across all native frames.
.. versionadded:: 3.15
.. option:: --without-mimalloc
Disable the fast :ref:`mimalloc <mimalloc>` allocator

View file

@ -953,10 +953,24 @@ when a module is imported) will still emit the syntax warning.
(Contributed by Irit Katriel in :gh:`130080`.)
.. _incremental-garbage-collection:
.. _whatsnew314-incremental-gc:
Incremental garbage collection
------------------------------
Garbage collection
------------------
**From Python 3.14.5 onwards:**
The garbage collector (GC) has changed in Python 3.14.5.
Python 3.14.0-3.14.4 shipped with a new incremental GC.
However, due to a number of `reports
<https://github.com/python/cpython/issues/142516>`__
of significant memory pressure in production environments,
it has been reverted to the generational GC from 3.13.
This is the GC now used in Python 3.14.5 and later.
**Previously in Python 3.14.0-3.14.4:**
The cycle garbage collector is now incremental.
This means that maximum pause times are reduced
@ -2203,7 +2217,18 @@ difflib
gc
--
* The new :ref:`incremental garbage collector <whatsnew314-incremental-gc>`
* **From Python 3.14.5 onwards:**
Python 3.14.0-3.14.4 shipped with a new incremental garbage collector.
However, due to a number of `reports
<https://github.com/python/cpython/issues/142516>`__
of significant memory pressure in production environments,
it has been reverted to the generational GC from 3.13.
This is the GC now used in Python 3.14.5 and later.
* **Previously in Python 3.14.0-3.14.4:**
The new :ref:`incremental garbage collector <whatsnew314-incremental-gc>`
means that maximum pause times are reduced
by an order of magnitude or more for larger heaps.
@ -3447,3 +3472,17 @@ Changes in the C API
functions on Python 3.13 and older.
.. _pythoncapi-compat project: https://github.com/python/pythoncapi-compat/
Notable changes in 3.14.5
=========================
gc
--
* The incremental garbage collector shipped in Python 3.14.0-3.14.4 has been
reverted to the generational garbage collector from 3.13,
due to a number of `reports
<https://github.com/python/cpython/issues/142516>`__
of significant memory pressure in production environments.
See :ref:`whatsnew314-incremental-gc` for details.

View file

@ -86,6 +86,7 @@ Summary -- Release highlights
* :pep:`782`: :ref:`A new PyBytesWriter C API to create a Python bytes object
<whatsnew315-pybyteswriter>`
* :pep:`803`: :ref:`Stable ABI for Free-Threaded Builds <whatsnew315-abi3t>`
* :pep:`831`: :ref:`Frame pointers everywhere <whatsnew315-frame-pointers>`
* :ref:`The JIT compiler has been significantly upgraded <whatsnew315-jit>`
* :ref:`Improved error messages <whatsnew315-improved-error-messages>`
* :ref:`The official Windows 64-bit binaries now use the tail-calling interpreter
@ -914,6 +915,16 @@ faulthandler
(Contributed by Eric Froemling in :gh:`149085`.)
email
-----
* Email generators now raise an error when an :class:`.EmailMessage` cannot be
accurately flattened due to a non-ASCII email address (mailbox) in an address
header. Options for supporting Email Address Internationalization (EAI) are
discussed in :attr:`.EmailPolicy.utf8`.
(Contributed by R David Murray and Mike Edmunds in :gh:`122540`.)
functools
---------
@ -963,6 +974,15 @@ http.server
for files with unknown extensions.
(Contributed by John Comeau and Hugo van Kemenade in :gh:`113471`.)
* Add a new ``extra_response_headers`` keyword argument to
:class:`~http.server.SimpleHTTPRequestHandler` to support custom headers in
HTTP responses.
(Contributed by Anton I. Sipos in :gh:`135057`.)
* Add a ``-H/--header`` option to the :program:`python -m http.server`
command-line interface to support custom headers in HTTP responses.
(Contributed by Anton I. Sipos in :gh:`135057`.)
inspect
-------
@ -1268,6 +1288,16 @@ tarfile
(Contributed by Christoph Walcher in :gh:`57911`.)
threading
---------
* Added :class:`~threading.serialize_iterator`,
:func:`~threading.synchronized_iterator`,
and :func:`~threading.concurrent_tee` to support concurrent access to
generators and iterators.
(Contributed by Raymond Hettinger in :gh:`124397`.)
timeit
------
@ -1450,10 +1480,16 @@ unicodedata
unittest
--------
* :func:`unittest.TestCase.assertLogs` will now accept a formatter
* :meth:`unittest.TestCase.assertLogs` will now accept a formatter
to control how messages are formatted.
(Contributed by Garry Cairns in :gh:`134567`.)
* :meth:`unittest.TestCase.assertWarns` and
:meth:`unittest.TestCase.assertWarnsRegex` no longer swallow warnings that
do not match the specified category or regex.
Nested context managers are now supported.
(Contributed by Serhiy Storchaka in :gh:`143231`.)
urllib.parse
------------
@ -1579,11 +1615,11 @@ Upgraded JIT compiler
Results from the `pyperformance <https://github.com/python/pyperformance>`__
benchmark suite report
`6-7% <https://www.doesjitgobrrr.com/run/2026-04-01>`__
`8-9% <https://www.doesjitgobrrr.com/run/2026-04-29>`__
geometric mean performance improvement for the JIT over the standard CPython
interpreter built with all optimizations enabled on x86-64 Linux. On AArch64
macOS, the JIT has a
`12-13% <https://www.doesjitgobrrr.com/run/2026-04-01>`__
`12-13% <https://www.doesjitgobrrr.com/run/2026-04-29>`__
speedup over the :ref:`tail calling interpreter <whatsnew314-tail-call-interpreter>`
with all optimizations enabled. The speedups for JIT
builds versus no JIT builds range from roughly 15% slowdown to over
@ -1825,6 +1861,13 @@ Deprecated
New deprecations
----------------
* :mod:`ast`
* Creating instances of abstract AST nodes (such as :class:`ast.AST`
or :class:`!ast.expr`) is deprecated and will raise an error in Python 3.20.
(Contributed by Brian Schubert in :gh:`116021`.)
* :mod:`base64`:
* Accepting the ``+`` and ``/`` characters with an alternative alphabet in
@ -2252,6 +2295,16 @@ Build changes
and :option:`-X dev <-X>` is passed to the Python or Python is built in :ref:`debug mode <debug-build>`.
(Contributed by Donghee Na in :gh:`141770`.)
.. _whatsnew315-frame-pointers:
* CPython is now built with frame pointers enabled by default
(:pep:`831`). Pass :option:`--without-frame-pointers` to opt out.
Authors of C extensions and native libraries built with custom build
systems should add ``-fno-omit-frame-pointer`` and
``-mno-omit-leaf-frame-pointer`` to their own ``CFLAGS`` to keep the
unwind chain intact.
(Contributed by Pablo Galindo Salgado and Savannah Ostrowski in :gh:`149201`.)
.. _whatsnew315-windows-tail-calling-interpreter:
* 64-bit builds using Visual Studio 2026 (MSVC 18) may now use the new
@ -2314,3 +2367,11 @@ that may require changes to your code.
with argument ``altchars=b'-_'`` (this works with older Python versions)
to make padding required.
(Contributed by Serhiy Storchaka in :gh:`73613`.)
* Since :meth:`unittest.TestCase.assertWarns` and
:meth:`unittest.TestCase.assertWarnsRegex` no longer swallow warnings that
do not match the specified category or regex, your tests may start leaking
some warnings that were previously masked.
Use warning filters to silence them, or add :meth:`!assertWarns*` calls
to catch and check them.
(Contributed by Serhiy Storchaka in :gh:`143231`.)
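The porting note above can be illustrated with a small test. This is only a sketch using the stdlib ``unittest`` and ``warnings`` modules: the explicit filter keeps unrelated warnings from leaking once ``assertWarns`` stops swallowing non-matching ones.

```python
import unittest
import warnings

class DeprecationTest(unittest.TestCase):
    def test_old_api_warns(self):
        # Silence unrelated warning categories explicitly; assertWarns
        # no longer swallows warnings that don't match its category.
        with warnings.catch_warnings():
            warnings.simplefilter("ignore", ResourceWarning)
            with self.assertWarns(DeprecationWarning):
                warnings.warn("old API", DeprecationWarning)

suite = unittest.TestLoader().loadTestsFromTestCase(DeprecationTest)
```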

View file

@ -38,7 +38,7 @@ typedef struct {
PyAPI_FUNC(int) PyUnstable_PerfMapState_Init(void);
PyAPI_FUNC(int) PyUnstable_WritePerfMapEntry(
const void *code_addr,
unsigned int code_size,
size_t code_size,
const char *entry_name);
PyAPI_FUNC(void) PyUnstable_PerfMapState_Fini(void);
PyAPI_FUNC(int) PyUnstable_CopyPerfMapFile(const char* parent_filename);

View file

@ -161,6 +161,7 @@ struct ast_state {
PyObject *__module__;
PyObject *_attributes;
PyObject *_fields;
PyObject *abstract_types;
PyObject *alias_type;
PyObject *annotation;
PyObject *arg;

View file

@ -94,7 +94,7 @@ typedef struct {
void* (*init_state)(void);
// Callback to register every trampoline being created
void (*write_state)(void* state, const void *code_addr,
unsigned int code_size, PyCodeObject* code);
size_t code_size, PyCodeObject* code);
// Callback to free the trampoline state
int (*free_state)(void* state);
} _PyPerf_Callbacks;
@ -108,6 +108,10 @@ extern PyStatus _PyPerfTrampoline_AfterFork_Child(void);
#ifdef PY_HAVE_PERF_TRAMPOLINE
extern _PyPerf_Callbacks _Py_perfmap_callbacks;
extern _PyPerf_Callbacks _Py_perfmap_jit_callbacks;
extern void _PyPerfJit_WriteNamedCode(const void *code_addr,
size_t code_size,
const char *entry,
const char *filename);
#endif
static inline PyObject*

View file

@ -215,6 +215,7 @@ typedef struct _Py_DebugOffsets {
uint64_t state;
uint64_t length;
uint64_t asciiobject_size;
uint64_t compactunicodeobject_size;
} unicode_object;
// GC runtime state offset;
@ -370,6 +371,7 @@ typedef struct _Py_DebugOffsets {
.state = offsetof(PyUnicodeObject, _base._base.state), \
.length = offsetof(PyUnicodeObject, _base._base.length), \
.asciiobject_size = sizeof(PyASCIIObject), \
.compactunicodeobject_size = sizeof(PyCompactUnicodeObject), \
}, \
.gc = { \
.size = sizeof(struct _gc_runtime_state), \

View file

@ -1582,6 +1582,7 @@ _PyStaticObjects_CheckRefcnt(PyInterpreterState *interp) {
_PyStaticObject_CheckRefcnt((PyObject *)&_Py_ID(alias));
_PyStaticObject_CheckRefcnt((PyObject *)&_Py_ID(align));
_PyStaticObject_CheckRefcnt((PyObject *)&_Py_ID(all));
_PyStaticObject_CheckRefcnt((PyObject *)&_Py_ID(all_interpreters));
_PyStaticObject_CheckRefcnt((PyObject *)&_Py_ID(all_threads));
_PyStaticObject_CheckRefcnt((PyObject *)&_Py_ID(allow_code));
_PyStaticObject_CheckRefcnt((PyObject *)&_Py_ID(alphabet));

View file

@ -305,6 +305,7 @@ struct _Py_global_strings {
STRUCT_FOR_ID(alias)
STRUCT_FOR_ID(align)
STRUCT_FOR_ID(all)
STRUCT_FOR_ID(all_interpreters)
STRUCT_FOR_ID(all_threads)
STRUCT_FOR_ID(allow_code)
STRUCT_FOR_ID(alphabet)

View file

@ -69,7 +69,7 @@ struct code_arena_st;
struct trampoline_api_st {
void* (*init_state)(void);
void (*write_state)(void* state, const void *code_addr,
unsigned int code_size, PyCodeObject* code);
size_t code_size, PyCodeObject* code);
int (*free_state)(void* state);
void *state;
Py_ssize_t code_padding;
@ -538,8 +538,13 @@ struct _py_func_state {
/****** type state *********/
/* For now we hard-code this to a value for which we are confident
all the static builtin types will fit (for all builds). */
#define _Py_MAX_MANAGED_STATIC_BUILTIN_TYPES 203
all the static builtin types will fit (for all builds).
If you add a new static type to the standard library, you may have to
update one of these numbers.
*/
#define _Py_NUM_MANAGED_PREINITIALIZED_TYPES 120
#define _Py_MAX_MANAGED_STATIC_BUILTIN_TYPES \
(_Py_NUM_MANAGED_PREINITIALIZED_TYPES + 83)
#define _Py_MAX_MANAGED_STATIC_EXT_TYPES 10
#define _Py_MAX_MANAGED_STATIC_TYPES \
(_Py_MAX_MANAGED_STATIC_BUILTIN_TYPES + _Py_MAX_MANAGED_STATIC_EXT_TYPES)

View file

@ -23,7 +23,7 @@ typedef _Py_CODEUNIT *(*jit_func)(
_PyStackRef _tos_cache0, _PyStackRef _tos_cache1, _PyStackRef _tos_cache2
);
_Py_CODEUNIT *_PyJIT(
_Py_CODEUNIT *_PyJIT_Entry(
_PyExecutorObject *executor, _PyInterpreterFrame *frame,
_PyStackRef *stack_pointer, PyThreadState *tstate
);

View file

@ -0,0 +1,68 @@
#ifndef Py_INTERNAL_JIT_UNWIND_H
#define Py_INTERNAL_JIT_UNWIND_H
#ifndef Py_BUILD_CORE
# error "this header requires Py_BUILD_CORE define"
#endif
#include <stddef.h>
#include <stdint.h>
#if defined(_Py_JIT) && defined(__linux__) && defined(__ELF__)
# define PY_HAVE_JIT_GDB_UNWIND
#endif
#if defined(PY_HAVE_PERF_TRAMPOLINE) || defined(PY_HAVE_JIT_GDB_UNWIND)
#if defined(PY_HAVE_JIT_GDB_UNWIND)
extern PyMutex _Py_jit_debug_mutex;
#endif
/* DWARF exception-handling pointer encodings shared by JIT unwind users. */
enum {
DWRF_EH_PE_absptr = 0x00,
DWRF_EH_PE_omit = 0xff,
/* Data type encodings */
DWRF_EH_PE_uleb128 = 0x01,
DWRF_EH_PE_udata2 = 0x02,
DWRF_EH_PE_udata4 = 0x03,
DWRF_EH_PE_udata8 = 0x04,
DWRF_EH_PE_sleb128 = 0x09,
DWRF_EH_PE_sdata2 = 0x0a,
DWRF_EH_PE_sdata4 = 0x0b,
DWRF_EH_PE_sdata8 = 0x0c,
DWRF_EH_PE_signed = 0x08,
/* Reference type encodings */
DWRF_EH_PE_pcrel = 0x10,
DWRF_EH_PE_textrel = 0x20,
DWRF_EH_PE_datarel = 0x30,
DWRF_EH_PE_funcrel = 0x40,
DWRF_EH_PE_aligned = 0x50,
DWRF_EH_PE_indirect = 0x80
};
/* Return the size of the generated .eh_frame data for the given encoding. */
size_t _PyJitUnwind_EhFrameSize(int absolute_addr);
/*
* Build DWARF .eh_frame data for JIT code; returns size written or 0 on error.
* absolute_addr selects the FDE address encoding:
* - 0: PC-relative offsets (perf jitdump synthesized DSO).
* - nonzero: absolute addresses (GDB JIT in-memory ELF).
*/
size_t _PyJitUnwind_BuildEhFrame(uint8_t *buffer, size_t buffer_size,
const void *code_addr, size_t code_size,
int absolute_addr);
void *_PyJitUnwind_GdbRegisterCode(const void *code_addr,
size_t code_size,
const char *entry,
const char *filename);
void _PyJitUnwind_GdbUnregisterCode(void *handle);
#endif // defined(PY_HAVE_PERF_TRAMPOLINE) || defined(PY_HAVE_JIT_GDB_UNWIND)
#endif // Py_INTERNAL_JIT_UNWIND_H
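As a quick illustration of how the encoding values in the new header combine (a sketch, not part of the header itself): a complete encoding byte ORs one data-type value with one reference-type modifier, and ``pcrel | sdata4`` yields ``0x1b``, the encoding commonly used for FDE initial locations in ``.eh_frame``.

```python
# Mirror of two DWARF EH pointer-encoding constants from the enum above.
DWRF_EH_PE_sdata4 = 0x0b  # signed 4-byte data
DWRF_EH_PE_pcrel = 0x10   # PC-relative reference

# A full encoding byte combines one data type with one reference type.
fde_encoding = DWRF_EH_PE_pcrel | DWRF_EH_PE_sdata4
```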

View file

@ -1236,7 +1236,7 @@ const struct opcode_metadata _PyOpcode_opcode_metadata[267] = {
[LIST_APPEND] = { true, INSTR_FMT_IB, HAS_ARG_FLAG | HAS_ERROR_FLAG },
[LIST_EXTEND] = { true, INSTR_FMT_IB, HAS_ARG_FLAG | HAS_ERROR_FLAG | HAS_ERROR_NO_POP_FLAG | HAS_ESCAPES_FLAG },
[LOAD_ATTR] = { true, INSTR_FMT_IBC00000000, HAS_ARG_FLAG | HAS_NAME_FLAG | HAS_ERROR_FLAG | HAS_ESCAPES_FLAG },
[LOAD_ATTR_CLASS] = { true, INSTR_FMT_IBC00000000, HAS_ARG_FLAG | HAS_EXIT_FLAG | HAS_ESCAPES_FLAG },
[LOAD_ATTR_CLASS] = { true, INSTR_FMT_IBC00000000, HAS_ARG_FLAG | HAS_EXIT_FLAG | HAS_ESCAPES_FLAG | HAS_RECORDS_VALUE_FLAG },
[LOAD_ATTR_CLASS_WITH_METACLASS_CHECK] = { true, INSTR_FMT_IBC00000000, HAS_ARG_FLAG | HAS_EXIT_FLAG | HAS_ESCAPES_FLAG | HAS_RECORDS_VALUE_FLAG },
[LOAD_ATTR_GETATTRIBUTE_OVERRIDDEN] = { true, INSTR_FMT_IBC00000000, HAS_ARG_FLAG | HAS_NAME_FLAG | HAS_DEOPT_FLAG | HAS_EXIT_FLAG | HAS_SYNC_SP_FLAG | HAS_NEEDS_GUARD_IP_FLAG | HAS_RECORDS_VALUE_FLAG },
[LOAD_ATTR_INSTANCE_VALUE] = { true, INSTR_FMT_IBC00000000, HAS_ARG_FLAG | HAS_DEOPT_FLAG | HAS_EXIT_FLAG | HAS_ESCAPES_FLAG | HAS_RECORDS_VALUE_FLAG },
@ -1386,7 +1386,7 @@ _PyOpcode_macro_expansion[256] = {
[BUILD_STRING] = { .nuops = 1, .uops = { { _BUILD_STRING, OPARG_SIMPLE, 0 } } },
[BUILD_TEMPLATE] = { .nuops = 1, .uops = { { _BUILD_TEMPLATE, OPARG_SIMPLE, 0 } } },
[BUILD_TUPLE] = { .nuops = 1, .uops = { { _BUILD_TUPLE, OPARG_SIMPLE, 0 } } },
[CALL_ALLOC_AND_ENTER_INIT] = { .nuops = 6, .uops = { { _RECORD_CALLABLE, OPARG_SIMPLE, 0 }, { _CHECK_PEP_523, OPARG_SIMPLE, 1 }, { _CHECK_OBJECT, 2, 1 }, { _ALLOCATE_OBJECT, OPARG_SIMPLE, 3 }, { _CREATE_INIT_FRAME, OPARG_SIMPLE, 3 }, { _PUSH_FRAME, OPARG_SIMPLE, 3 } } },
[CALL_ALLOC_AND_ENTER_INIT] = { .nuops = 7, .uops = { { _RECORD_CALLABLE, OPARG_SIMPLE, 0 }, { _CHECK_PEP_523, OPARG_SIMPLE, 1 }, { _CHECK_OBJECT, 2, 1 }, { _CHECK_RECURSION_REMAINING, OPARG_SIMPLE, 3 }, { _ALLOCATE_OBJECT, OPARG_SIMPLE, 3 }, { _CREATE_INIT_FRAME, OPARG_SIMPLE, 3 }, { _PUSH_FRAME, OPARG_SIMPLE, 3 } } },
[CALL_BOUND_METHOD_EXACT_ARGS] = { .nuops = 11, .uops = { { _RECORD_BOUND_METHOD, OPARG_SIMPLE, 0 }, { _CHECK_PEP_523, OPARG_SIMPLE, 1 }, { _CHECK_CALL_BOUND_METHOD_EXACT_ARGS, OPARG_SIMPLE, 1 }, { _INIT_CALL_BOUND_METHOD_EXACT_ARGS, OPARG_SIMPLE, 1 }, { _CHECK_FUNCTION_VERSION, 2, 1 }, { _CHECK_FUNCTION_EXACT_ARGS, OPARG_SIMPLE, 3 }, { _CHECK_STACK_SPACE, OPARG_SIMPLE, 3 }, { _CHECK_RECURSION_REMAINING, OPARG_SIMPLE, 3 }, { _INIT_CALL_PY_EXACT_ARGS, OPARG_SIMPLE, 3 }, { _SAVE_RETURN_OFFSET, OPARG_SAVE_RETURN_OFFSET, 3 }, { _PUSH_FRAME, OPARG_SIMPLE, 3 } } },
[CALL_BOUND_METHOD_GENERAL] = { .nuops = 8, .uops = { { _RECORD_BOUND_METHOD, OPARG_SIMPLE, 0 }, { _CHECK_PEP_523, OPARG_SIMPLE, 1 }, { _CHECK_METHOD_VERSION, 2, 1 }, { _EXPAND_METHOD, OPARG_SIMPLE, 3 }, { _CHECK_RECURSION_REMAINING, OPARG_SIMPLE, 3 }, { _PY_FRAME_GENERAL, OPARG_SIMPLE, 3 }, { _SAVE_RETURN_OFFSET, OPARG_SAVE_RETURN_OFFSET, 3 }, { _PUSH_FRAME, OPARG_SIMPLE, 3 } } },
[CALL_BUILTIN_CLASS] = { .nuops = 6, .uops = { { _RECORD_CALLABLE, OPARG_SIMPLE, 0 }, { _GUARD_CALLABLE_BUILTIN_CLASS, OPARG_SIMPLE, 3 }, { _CALL_BUILTIN_CLASS, OPARG_SIMPLE, 3 }, { _POP_TOP_OPARG, OPARG_SIMPLE, 3 }, { _POP_TOP, OPARG_SIMPLE, 3 }, { _CHECK_PERIODIC_AT_END, OPARG_REPLACED, 3 } } },
@ -1460,8 +1460,8 @@ _PyOpcode_macro_expansion[256] = {
[LIST_APPEND] = { .nuops = 1, .uops = { { _LIST_APPEND, OPARG_SIMPLE, 0 } } },
[LIST_EXTEND] = { .nuops = 2, .uops = { { _LIST_EXTEND, OPARG_SIMPLE, 0 }, { _POP_TOP, OPARG_SIMPLE, 0 } } },
[LOAD_ATTR] = { .nuops = 1, .uops = { { _LOAD_ATTR, OPARG_SIMPLE, 8 } } },
[LOAD_ATTR_CLASS] = { .nuops = 3, .uops = { { _CHECK_ATTR_CLASS, 2, 1 }, { _LOAD_ATTR_CLASS, 4, 5 }, { _PUSH_NULL_CONDITIONAL, OPARG_SIMPLE, 9 } } },
[LOAD_ATTR_CLASS_WITH_METACLASS_CHECK] = { .nuops = 5, .uops = { { _RECORD_TOS_TYPE, OPARG_SIMPLE, 1 }, { _GUARD_TYPE_VERSION, 2, 1 }, { _CHECK_ATTR_CLASS, 2, 3 }, { _LOAD_ATTR_CLASS, 4, 5 }, { _PUSH_NULL_CONDITIONAL, OPARG_SIMPLE, 9 } } },
[LOAD_ATTR_CLASS] = { .nuops = 4, .uops = { { _RECORD_TOS, OPARG_SIMPLE, 1 }, { _CHECK_ATTR_CLASS, 2, 1 }, { _LOAD_ATTR_CLASS, 4, 5 }, { _PUSH_NULL_CONDITIONAL, OPARG_SIMPLE, 9 } } },
[LOAD_ATTR_CLASS_WITH_METACLASS_CHECK] = { .nuops = 5, .uops = { { _RECORD_TOS, OPARG_SIMPLE, 1 }, { _GUARD_TYPE_VERSION, 2, 1 }, { _CHECK_ATTR_CLASS, 2, 3 }, { _LOAD_ATTR_CLASS, 4, 5 }, { _PUSH_NULL_CONDITIONAL, OPARG_SIMPLE, 9 } } },
[LOAD_ATTR_GETATTRIBUTE_OVERRIDDEN] = { .nuops = 7, .uops = { { _RECORD_TOS_TYPE, OPARG_SIMPLE, 1 }, { _GUARD_TYPE_VERSION, 2, 1 }, { _CHECK_PEP_523, OPARG_SIMPLE, 3 }, { _LOAD_ATTR_GETATTRIBUTE_OVERRIDDEN_FRAME, 2, 3 }, { _LOAD_ATTR_GETATTRIBUTE_OVERRIDDEN_FRAME, OPERAND1_4, 5 }, { _SAVE_RETURN_OFFSET, OPARG_SAVE_RETURN_OFFSET, 9 }, { _PUSH_FRAME, OPARG_SIMPLE, 9 } } },
[LOAD_ATTR_INSTANCE_VALUE] = { .nuops = 6, .uops = { { _RECORD_TOS_TYPE, OPARG_SIMPLE, 1 }, { _GUARD_TYPE_VERSION, 2, 1 }, { _CHECK_MANAGED_OBJECT_HAS_VALUES, OPARG_SIMPLE, 3 }, { _LOAD_ATTR_INSTANCE_VALUE, 1, 3 }, { _POP_TOP, OPARG_SIMPLE, 4 }, { _PUSH_NULL_CONDITIONAL, OPARG_SIMPLE, 9 } } },
[LOAD_ATTR_METHOD_LAZY_DICT] = { .nuops = 4, .uops = { { _RECORD_TOS_TYPE, OPARG_SIMPLE, 1 }, { _GUARD_TYPE_VERSION, 2, 1 }, { _CHECK_ATTR_METHOD_LAZY_DICT, 1, 3 }, { _LOAD_ATTR_METHOD_LAZY_DICT, 4, 5 } } },

View file

@ -75,7 +75,12 @@ extern "C" {
#define CONSTANT_BUILTIN_ANY 4
#define CONSTANT_BUILTIN_LIST 5
#define CONSTANT_BUILTIN_SET 6
#define NUM_COMMON_CONSTANTS 7
#define CONSTANT_NONE 7
#define CONSTANT_EMPTY_STR 8
#define CONSTANT_TRUE 9
#define CONSTANT_FALSE 10
#define CONSTANT_MINUS_ONE 11
#define NUM_COMMON_CONSTANTS 12
/* Values used in the oparg for RESUME */
#define RESUME_AT_FUNC_START 0

View file

@ -198,6 +198,7 @@ typedef struct _PyExecutorObject {
uint32_t code_size;
size_t jit_size;
void *jit_code;
void *jit_gdb_handle;
_PyExitData exits[1];
} _PyExecutorObject;

View file

@ -1580,6 +1580,7 @@ extern "C" {
INIT_ID(alias), \
INIT_ID(align), \
INIT_ID(all), \
INIT_ID(all_interpreters), \
INIT_ID(all_threads), \
INIT_ID(allow_code), \
INIT_ID(alphabet), \

View file

@ -1000,6 +1000,10 @@ _PyUnicode_InitStaticStrings(PyInterpreterState *interp) {
_PyUnicode_InternStatic(interp, &string);
assert(_PyUnicode_CheckConsistency(string, 1));
assert(PyUnicode_GET_LENGTH(string) != 1);
string = &_Py_ID(all_interpreters);
_PyUnicode_InternStatic(interp, &string);
assert(_PyUnicode_CheckConsistency(string, 1));
assert(PyUnicode_GET_LENGTH(string) != 1);
string = &_Py_ID(all_threads);
_PyUnicode_InternStatic(interp, &string);
assert(_PyUnicode_CheckConsistency(string, 1));

File diff suppressed because it is too large

View file

@ -397,6 +397,7 @@ const uint32_t _PyUop_Flags[MAX_UOP_ID+1] = {
[_CHECK_VALIDITY] = HAS_DEOPT_FLAG,
[_LOAD_CONST_INLINE] = HAS_PURE_FLAG,
[_LOAD_CONST_INLINE_BORROW] = HAS_PURE_FLAG,
[_RROT_3] = HAS_PURE_FLAG,
[_START_EXECUTOR] = HAS_DEOPT_FLAG,
[_MAKE_WARM] = 0,
[_FATAL_ERROR] = 0,
@ -3699,6 +3700,15 @@ const _PyUopCachingInfo _PyUop_Caching[MAX_UOP_ID+1] = {
{ -1, -1, -1 },
},
},
[_RROT_3] = {
.best = { 0, 1, 2, 3 },
.entries = {
{ 3, 0, _RROT_3_r03 },
{ 3, 1, _RROT_3_r13 },
{ 3, 2, _RROT_3_r23 },
{ 3, 3, _RROT_3_r33 },
},
},
[_START_EXECUTOR] = {
.best = { 0, 0, 0, 0 },
.entries = {
@ -4695,6 +4705,10 @@ const uint16_t _PyUop_Uncached[MAX_UOP_REGS_ID+1] = {
[_LOAD_CONST_INLINE_BORROW_r01] = _LOAD_CONST_INLINE_BORROW,
[_LOAD_CONST_INLINE_BORROW_r12] = _LOAD_CONST_INLINE_BORROW,
[_LOAD_CONST_INLINE_BORROW_r23] = _LOAD_CONST_INLINE_BORROW,
[_RROT_3_r03] = _RROT_3,
[_RROT_3_r13] = _RROT_3,
[_RROT_3_r23] = _RROT_3,
[_RROT_3_r33] = _RROT_3,
[_START_EXECUTOR_r00] = _START_EXECUTOR,
[_MAKE_WARM_r00] = _MAKE_WARM,
[_MAKE_WARM_r11] = _MAKE_WARM,
@ -5896,6 +5910,11 @@ const char *const _PyOpcode_uop_name[MAX_UOP_REGS_ID+1] = {
[_RETURN_GENERATOR_r01] = "_RETURN_GENERATOR_r01",
[_RETURN_VALUE] = "_RETURN_VALUE",
[_RETURN_VALUE_r11] = "_RETURN_VALUE_r11",
[_RROT_3] = "_RROT_3",
[_RROT_3_r03] = "_RROT_3_r03",
[_RROT_3_r13] = "_RROT_3_r13",
[_RROT_3_r23] = "_RROT_3_r23",
[_RROT_3_r33] = "_RROT_3_r33",
[_SAVE_RETURN_OFFSET] = "_SAVE_RETURN_OFFSET",
[_SAVE_RETURN_OFFSET_r00] = "_SAVE_RETURN_OFFSET_r00",
[_SAVE_RETURN_OFFSET_r11] = "_SAVE_RETURN_OFFSET_r11",
@ -6808,6 +6827,8 @@ int _PyUop_num_popped(int opcode, int oparg)
return 0;
case _LOAD_CONST_INLINE_BORROW:
return 0;
case _RROT_3:
return 0;
case _START_EXECUTOR:
return 0;
case _MAKE_WARM:

View file

@ -553,6 +553,7 @@ extern "C" {
# if !defined(_Py_MEMORY_SANITIZER)
# define _Py_MEMORY_SANITIZER
# define _Py_NO_SANITIZE_MEMORY __attribute__((no_sanitize_memory))
# define _Py_MSAN_UNPOISON(PTR, SIZE) (__msan_unpoison(PTR, SIZE))
# endif
# endif
# if __has_feature(address_sanitizer)
@ -591,6 +592,9 @@ extern "C" {
#ifndef _Py_NO_SANITIZE_MEMORY
# define _Py_NO_SANITIZE_MEMORY
#endif
#ifndef _Py_MSAN_UNPOISON
# define _Py_MSAN_UNPOISON(PTR, SIZE)
#endif
/* AIX has __bool__ redefined in it's system header file. */
#if defined(_AIX) && defined(__bool__)

View file

@ -819,6 +819,13 @@
$ python -m pegen python <PATH TO YOUR GRAMMAR FILE>
```
> [!CAUTION]
> Python's grammar (the `Grammar/python.gram` file) is written for the
> C backend. To experiment, you will need to write a grammar
> without C-specific parts like actions and the trailer.
> See [#133560](https://github.com/python/cpython/issues/133560)
> and [#96424](https://github.com/python/cpython/issues/96424) for more information.
This will generate a file called `parse.py` in the same directory that you
can use to parse some input:

View file

@ -620,17 +620,18 @@ def warn_explicit(message, category, filename, lineno,
linecache.getlines(filename, module_globals)
# Print message and context
msg = _wm.WarningMessage(message, category, filename, lineno, source=source)
msg = _wm.WarningMessage(message, category, filename, lineno,
module=module, source=source)
_wm._showwarnmsg(msg)
class WarningMessage(object):
_WARNING_DETAILS = ("message", "category", "filename", "lineno", "file",
"line", "source")
"line", "source", "module")
def __init__(self, message, category, filename, lineno, file=None,
line=None, source=None):
line=None, source=None, module=None):
self.message = message
self.category = category
self.filename = filename
@ -638,12 +639,14 @@ def __init__(self, message, category, filename, lineno, file=None,
self.file = file
self.line = line
self.source = source
self.module = module
self._category_name = category.__name__ if category else None
def __str__(self):
return ("{message : %r, category : %r, filename : %r, lineno : %s, "
"line : %r}" % (self.message, self._category_name,
self.filename, self.lineno, self.line))
return ("{message : %r, category : %r, module : %r, "
"filename : %r, lineno : %s, line : %r}" % (
self.message, self._category_name, self.module,
self.filename, self.lineno, self.line))
def __repr__(self):
return f'<{type(self).__qualname__} {self}>'
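A minimal analog of the extended record (a hypothetical miniature class, not the stdlib one) shows how the new ``module`` field threads into ``__str__``:

```python
class Msg:
    # Hypothetical miniature of warnings.WarningMessage after the change:
    # the record now also carries the module that issued the warning.
    def __init__(self, message, category_name, module=None):
        self.message = message
        self._category_name = category_name
        self.module = module

    def __str__(self):
        return "{message : %r, category : %r, module : %r}" % (
            self.message, self._category_name, self.module)
```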

View file

@ -38,12 +38,13 @@
)
from .layout import LayoutMap, LayoutResult, LayoutRow, WrappedRow, layout_content_lines
from .render import RenderCell, RenderLine, RenderedScreen, ScreenOverlay
from .utils import ANSI_ESCAPE_SEQUENCE, THEME, StyleRef, wlen, gen_colors
from .utils import ANSI_ESCAPE_SEQUENCE, ColorSpan, THEME, StyleRef, wlen, gen_colors
from .trace import trace
# types
Command = commands.Command
from collections.abc import Callable, Iterator
from .types import (
Callback,
CommandName,
@ -304,6 +305,7 @@ class Reader:
lxy: CursorXY = field(init=False)
scheduled_commands: list[CommandName] = field(default_factory=list)
can_colorize: bool = False
gen_colors: Callable[[str], Iterator[ColorSpan]] = gen_colors
threading_hook: Callback | None = None
invalidation: RefreshInvalidation = field(init=False)
@ -534,7 +536,7 @@ def _build_content_lines(
prompt_from_cache: bool,
) -> tuple[ContentLine, ...]:
if self.can_colorize:
colors = list(gen_colors(self.get_unicode()))
colors = list(self.gen_colors(self.get_unicode()))
else:
colors = None
trace("colors = {colors}", colors=colors)

View file

@ -643,6 +643,7 @@ def get_argval_argrepr(self, op, arg, offset):
argrepr = _intrinsic_2_descs[arg]
elif deop == LOAD_COMMON_CONSTANT:
obj = _common_constants[arg]
argval = obj
if isinstance(obj, type):
argrepr = obj.__name__
else:
@ -692,10 +693,15 @@ def _get_const_value(op, arg, co_consts):
Otherwise (if it is a LOAD_CONST and co_consts is not
provided) returns the dis.UNKNOWN sentinel.
"""
assert op in hasconst or op == LOAD_SMALL_INT
assert op in hasconst or op == LOAD_SMALL_INT or op == LOAD_COMMON_CONSTANT
if op == LOAD_SMALL_INT:
return arg
if op == LOAD_COMMON_CONSTANT:
# Opargs 0-6 are callables; 7-11 are literal values.
if 7 <= arg <= 11:
return _common_constants[arg]
return UNKNOWN
argval = UNKNOWN
if co_consts is not None:
argval = co_consts[arg]
@ -1015,8 +1021,9 @@ def _find_imports(co):
if op == IMPORT_NAME and i >= 2:
from_op = opargs[i-1]
level_op = opargs[i-2]
if (from_op[0] in hasconst and
(level_op[0] in hasconst or level_op[0] == LOAD_SMALL_INT)):
if ((from_op[0] in hasconst or from_op[0] == LOAD_COMMON_CONSTANT) and
(level_op[0] in hasconst or level_op[0] == LOAD_SMALL_INT or
level_op[0] == LOAD_COMMON_CONSTANT)):
level = _get_const_value(level_op[0], level_op[1], consts)
fromlist = _get_const_value(from_op[0], from_op[1], consts)
# IMPORT_NAME encodes lazy/eager flags in bits 0-1,

View file

@ -157,10 +157,7 @@ def all_defects(self):
def startswith_fws(self):
return self[0].startswith_fws()
@property
def as_ew_allowed(self):
"""True if all top level tokens of this part may be RFC2047 encoded."""
return all(part.as_ew_allowed for part in self)
as_ew_allowed = True
@property
def comments(self):
@ -429,6 +426,7 @@ def addr_spec(self):
class AngleAddr(TokenList):
token_type = 'angle-addr'
as_ew_allowed = False
@property
def local_part(self):
@ -639,11 +637,11 @@ def local_part(self):
for tok in self[0] + [DOT]:
if tok.token_type == 'cfws':
continue
if (last_is_tl and tok.token_type == 'dot' and
if (last_is_tl and tok.token_type == 'dot' and last and
last[-1].token_type == 'cfws'):
res[-1] = TokenList(last[:-1])
is_tl = isinstance(tok, TokenList)
if (is_tl and last.token_type == 'dot' and
if (is_tl and last.token_type == 'dot' and tok and
tok[0].token_type == 'cfws'):
res.append(TokenList(tok[1:]))
else:
@ -847,26 +845,22 @@ def params(self):
class ContentType(ParameterizedHeaderValue):
token_type = 'content-type'
as_ew_allowed = False
maintype = 'text'
subtype = 'plain'
class ContentDisposition(ParameterizedHeaderValue):
token_type = 'content-disposition'
as_ew_allowed = False
content_disposition = None
class ContentTransferEncoding(TokenList):
token_type = 'content-transfer-encoding'
as_ew_allowed = False
cte = '7bit'
class HeaderLabel(TokenList):
token_type = 'header-label'
as_ew_allowed = False
class MsgID(TokenList):
@ -1249,8 +1243,7 @@ def get_bare_quoted_string(value):
bare_quoted_string = BareQuotedString()
value = value[1:]
if value and value[0] == '"':
token, value = get_qcontent(value)
bare_quoted_string.append(token)
return bare_quoted_string, value[1:]
while value and value[0] != '"':
if value[0] in WSP:
token, value = get_fws(value)
@ -1510,11 +1503,6 @@ def get_local_part(value):
local_part.defects.append(errors.ObsoleteHeaderDefect(
"local-part is not a dot-atom (contains CFWS)"))
local_part[0] = obs_local_part
try:
local_part.value.encode('ascii')
except UnicodeEncodeError:
local_part.defects.append(errors.NonASCIILocalPartDefect(
"local-part contains non-ASCII characters)"))
return local_part, value
def get_obs_local_part(value):
@ -2836,13 +2824,68 @@ def _steal_trailing_WSP_if_exists(lines):
def _refold_parse_tree(parse_tree, *, policy):
"""Return string of contents of parse_tree folded according to RFC rules.
"""
# max_line_length 0/None means no limit, ie: infinitely long.
maxlen = policy.max_line_length or sys.maxsize
encoding = 'utf-8' if policy.utf8 else 'us-ascii'
lines = [''] # Folded lines to be output
if parse_tree.as_ew_allowed:
_refold_with_ew(parse_tree, lines, maxlen, encoding, policy=policy)
else:
_refold_without_ew(parse_tree, lines, maxlen, encoding, policy=policy)
return policy.linesep.join(lines) + policy.linesep
def _refold_without_ew(parse_tree, lines, maxlen, encoding, *, policy):
parts = list(parse_tree)
while parts:
part = parts.pop(0)
tstr = str(part)
try:
tstr.encode(encoding)
except UnicodeEncodeError:
if any(isinstance(x, errors.UndecodableBytesDefect)
for x in part.all_defects):
# There is garbage data from parsing a message in binary mode,
# just pass it through. Not good, but the best we can do.
pass
elif policy.utf8:
# If this happens, it's a programmer error.
raise
else:
raise errors.HeaderWriteError(
f"Non-ASCII {part.token_type} '{part}' is invalid"
" under current policy setting (utf8=False)"
)
if len(tstr) <= maxlen - len(lines[-1]):
lines[-1] += tstr
continue
# This part is too long to fit. The RFC wants us to break at
# "major syntactic breaks", so if we consider this to be one,
# check whether it will fit on the next line by itself.
if (part.syntactic_break and
len(tstr) + 1 <= maxlen):
newline = _steal_trailing_WSP_if_exists(lines)
if newline or part.startswith_fws():
lines.append(newline + tstr)
continue
if not hasattr(part, 'encode'):
# It's not a terminal, try folding the subparts.
newparts = list(part)
parts = newparts + parts
continue
# We can't figure out how to wrap it, so give up.
newline = _steal_trailing_WSP_if_exists(lines)
if newline or part.startswith_fws():
lines.append(newline + tstr)
else:
# We can't fold it onto the next line either...
lines[-1] += tstr
return
def _refold_with_ew(parse_tree, lines, maxlen, encoding, *, policy):
"""Return string of contents of parse_tree folded according to RFC rules.
"""
last_word_is_ew = False
last_ew = None # if there is an encoded word in the last line of lines,
# points to the encoded word's first character
@ -2856,6 +2899,11 @@ def _refold_parse_tree(parse_tree, *, policy):
if part is end_ew_not_allowed:
wrap_as_ew_blocked -= 1
continue
if part.token_type == 'mime-parameters':
# Mime parameter folding (using RFC2231) is extra special.
_fold_mime_parameters(part, lines, maxlen, encoding)
last_word_is_ew = False
continue
tstr = str(part)
if not want_encoding:
if part.token_type in ('ptext', 'vtext'):
@ -2877,14 +2925,11 @@ def _refold_parse_tree(parse_tree, *, policy):
charset = 'utf-8'
want_encoding = True
if part.token_type == 'mime-parameters':
# Mime parameter folding (using RFC2231) is extra special.
_fold_mime_parameters(part, lines, maxlen, encoding)
last_word_is_ew = False
continue
if want_encoding and not wrap_as_ew_blocked:
if not part.as_ew_allowed:
if any(
not x.as_ew_allowed for x in part
if hasattr(x, 'as_ew_allowed')
):
want_encoding = False
last_ew = None
if part.syntactic_break:
@ -2965,6 +3010,8 @@ def _refold_parse_tree(parse_tree, *, policy):
[ValueTerminal(make_quoted_pairs(p), 'ptext')
for p in newparts] +
[ValueTerminal('"', 'ptext')])
_refold_without_ew(newparts, lines, maxlen, encoding, policy=policy)
continue
if part.token_type == 'comment':
newparts = (
[ValueTerminal('(', 'ptext')] +
@ -2992,7 +3039,7 @@ def _refold_parse_tree(parse_tree, *, policy):
lines[-1] += tstr
last_word_is_ew = last_word_is_ew and not bool(tstr.strip(_WSP))
return policy.linesep.join(lines) + policy.linesep
return
def _fold_as_ew(to_encode, lines, maxlen, last_ew, ew_combine_allowed, charset, last_word_is_ew):
"""Fold string to_encode into lines as encoded word, combining if allowed.

View file

@ -109,9 +109,9 @@ class ObsoleteHeaderDefect(HeaderDefect):
"""Header uses syntax declared obsolete by RFC 5322"""
class NonASCIILocalPartDefect(HeaderDefect):
"""local_part contains non-ASCII characters"""
# This defect only occurs during unicode parsing, not when
# parsing messages decoded from binary.
"""Unused. Note: this error is deprecated and may be removed in the future."""
# RFC 6532 permits a non-ASCII local-part. _header_value_parser previously
# treated this as a parse-time defect (when parsing Unicode, but not bytes).
class InvalidDateDefect(HeaderDefect):
"""Header has unparsable or invalid date"""

View file

@ -551,13 +551,17 @@ def send_response_only(self, code, message=None):
(self.protocol_version, code, message)).encode(
'latin-1', 'strict'))
def send_header(self, keyword, value):
def send_header(self, keyword, value, *, _is_extra=False):
"""Send a MIME header to the headers buffer."""
if self.request_version != 'HTTP/0.9':
if not hasattr(self, '_headers_buffer'):
self._headers_buffer = []
self._headers_buffer.append(
("%s: %s\r\n" % (keyword, value)).encode('latin-1', 'strict'))
if not hasattr(self, '_default_response_headers'):
self._default_response_headers = []
if not _is_extra:
self._default_response_headers.append((keyword, value))
if keyword.lower() == 'connection':
if value.lower() == 'close':
@ -575,6 +579,8 @@ def flush_headers(self):
if hasattr(self, '_headers_buffer'):
self.wfile.write(b"".join(self._headers_buffer))
self._headers_buffer = []
if hasattr(self, '_default_response_headers'):
self._default_response_headers = []
def _colorize_request(self, code, size, t):
try:
@ -736,10 +742,11 @@ class SimpleHTTPRequestHandler(BaseHTTPRequestHandler):
'.xz': 'application/x-xz',
}
def __init__(self, *args, directory=None, **kwargs):
def __init__(self, *args, directory=None, extra_response_headers=None, **kwargs):
if directory is None:
directory = os.getcwd()
self.directory = os.fspath(directory)
self.extra_response_headers = extra_response_headers
super().__init__(*args, **kwargs)
def do_GET(self):
@ -757,6 +764,16 @@ def do_HEAD(self):
if f:
f.close()
def _send_extra_response_headers(self):
"""Send the headers stored in self.extra_response_headers."""
if self.extra_response_headers is not None:
default_headers = {h.lower() for h, _ in self._default_response_headers}
for header, value in self.extra_response_headers:
# Don't send the header if it's already sent
# as part of the default response headers
if header.lower() not in default_headers:
self.send_header(header, value, _is_extra=True)
def send_head(self):
"""Common code for GET and HEAD commands.
@ -839,6 +856,7 @@ def send_head(self):
self.send_header("Content-Length", str(fs[6]))
self.send_header("Last-Modified",
self.date_time_string(fs.st_mtime))
self._send_extra_response_headers()
self.end_headers()
return f
except:
@ -903,6 +921,7 @@ def list_directory(self, path):
self.send_response(HTTPStatus.OK)
self.send_header("Content-type", "text/html; charset=%s" % enc)
self.send_header("Content-Length", str(len(encoded)))
self._send_extra_response_headers()
self.end_headers()
return f
@ -1011,6 +1030,22 @@ def _get_best_family(*address):
return family, sockaddr
def _make_server(HandlerClass=BaseHTTPRequestHandler,
ServerClass=ThreadingHTTPServer,
protocol="HTTP/1.0", port=8000, bind=None,
tls_cert=None, tls_key=None, tls_password=None,
default_content_type=SimpleHTTPRequestHandler.default_content_type):
ServerClass.address_family, addr = _get_best_family(bind, port)
HandlerClass.protocol_version = protocol
HandlerClass.default_content_type = default_content_type
if tls_cert:
return ServerClass(addr, HandlerClass, certfile=tls_cert,
keyfile=tls_key, password=tls_password)
else:
return ServerClass(addr, HandlerClass)
def test(HandlerClass=SimpleHTTPRequestHandler,
ServerClass=ThreadingHTTPServer,
protocol="HTTP/1.0", port=8000, bind=None,
@ -1019,19 +1054,13 @@ def test(HandlerClass=SimpleHTTPRequestHandler,
"""Test the HTTP request handler class.
This runs an HTTP server on port 8000 (or the port argument).
"""
ServerClass.address_family, addr = _get_best_family(bind, port)
HandlerClass.protocol_version = protocol
HandlerClass.default_content_type = content_type
if tls_cert:
server = ServerClass(addr, HandlerClass, certfile=tls_cert,
keyfile=tls_key, password=tls_password)
else:
server = ServerClass(addr, HandlerClass)
with server as httpd:
with _make_server(
HandlerClass=HandlerClass, ServerClass=ServerClass,
protocol=protocol, port=port, bind=bind,
tls_cert=tls_cert, tls_key=tls_key, tls_password=tls_password,
default_content_type=content_type,
) as httpd:
host, port = httpd.socket.getsockname()[:2]
url_host = f'[{host}]' if ':' in host else host
protocol = 'HTTPS' if tls_cert else 'HTTP'
@ -1076,6 +1105,10 @@ def _main(args=None):
parser.add_argument('port', default=8000, type=int, nargs='?',
help='bind to this port '
'(default: %(default)s)')
parser.add_argument('-H', '--header', nargs=2, action='append',
metavar=('HEADER', 'VALUE'),
help='Add a custom response header '
'(can be specified multiple times)')
args = parser.parse_args(args)
if not args.tls_cert and args.tls_key:
@ -1104,7 +1137,8 @@ def server_bind(self):
def finish_request(self, request, client_address):
self.RequestHandlerClass(request, client_address, self,
directory=args.directory)
directory=args.directory,
extra_response_headers=args.header)
class HTTPDualStackServer(DualStackServerMixin, ThreadingHTTPServer):
pass

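The deduplication logic in ``_send_extra_response_headers`` above can be sketched as a standalone helper (hypothetical name; HTTP header names compare case-insensitively):

```python
def merge_extra_headers(default_headers, extra_headers):
    # Skip any extra header whose name was already sent among the
    # defaults, comparing names case-insensitively per HTTP.
    sent = {name.lower() for name, _ in default_headers}
    return [(name, value) for name, value in extra_headers
            if name.lower() not in sent]
```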
View file

@ -41,7 +41,10 @@
_special_method_names = _opcode.get_special_method_names()
_common_constants = [builtins.AssertionError, builtins.NotImplementedError,
builtins.tuple, builtins.all, builtins.any, builtins.list,
builtins.set]
builtins.set,
# Append-only — must match CONSTANT_* in
# Include/internal/pycore_opcode_utils.h.
None, "", True, False, -1]
_nb_ops = _opcode.get_nb_ops()
hascompare = [opmap["COMPARE_OP"]]

View file

@ -376,7 +376,7 @@ def get_default_backend():
def _pyrepl_available():
"""return whether pdb should use _pyrepl for input"""
if not os.getenv("PYTHON_BASIC_REPL"):
if os.getenv("PYTHON_BASIC_REPL"):
CAN_USE_PYREPL = False
else:
try:

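The pdb fix above inverts an accidentally negated environment check. The intended gate, as a standalone sketch (hypothetical function name, with the environment passed in for testing):

```python
def pyrepl_allowed(environ):
    # PYTHON_BASIC_REPL set to any non-empty value disables pyrepl;
    # unset (or empty) means pdb may try to use it.
    return not environ.get("PYTHON_BASIC_REPL")
```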
View file

@ -920,17 +920,11 @@ def save_picklebuffer(self, obj):
# Write data in-band
# XXX The C implementation avoids a copy here
buf = m.tobytes()
in_memo = id(buf) in self.memo
if m.readonly:
if in_memo:
self._save_bytes_no_memo(buf)
else:
self.save_bytes(buf)
self._save_bytes_no_memo(buf)
else:
if in_memo:
self._save_bytearray_no_memo(buf)
else:
self.save_bytearray(buf)
self._save_bytearray_no_memo(buf)
self.memoize(obj)
else:
# Write data out-of-band
self.write(NEXT_BUFFER)

View file

@ -58,6 +58,10 @@ def __init__(self, pid, sample_interval_usec, all_threads, *, mode=PROFILING_MOD
try:
self.unwinder = self._new_unwinder(native, gc, opcodes, skip_non_matching_threads)
except RuntimeError as err:
if os.name == "nt" and sys.executable.endswith("python.exe"):
raise SystemExit(
"Running profiling.sampling from virtualenv on Windows platform is not supported"
) from err
raise SystemExit(err) from err
# Track sample intervals and total sample count
self.sample_intervals = deque(maxlen=100)

View file

@ -836,7 +836,11 @@ def binomialvariate(self, n=1, p=0.5):
if not c:
return x
while True:
y += _floor(_log2(random()) / c) + 1
try:
y += _floor(_log2(random()) / c) + 1
except ValueError:
# Reject case where random() returned 0.0
continue
if y > n:
return x
x += 1
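The new `try`/`except ValueError` exists because `log2(0.0)` is a math domain error, so a draw where `random()` returns exactly 0.0 must be retried rather than crash. A quick demonstration of the failure mode being rejected:

```python
from math import floor, log2

# floor(log2(random()) / c) blows up when random() returns 0.0,
# because log2(0.0) raises ValueError ("math domain error").
rejected = False
try:
    floor(log2(0.0))
except ValueError:
    rejected = True
assert rejected
```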

View file

@ -102,8 +102,10 @@ def _run_module_code(code, init_globals=None,
# Helper to get the full name, spec and code for a module
def _get_module_details(mod_name, error=ImportError):
# name= is only accepted by ImportError and its subclasses.
kwargs = {"name": mod_name} if issubclass(error, ImportError) else {}
if mod_name.startswith("."):
raise error("Relative module names not supported")
raise error("Relative module names not supported", **kwargs)
pkg_name, _, _ = mod_name.rpartition(".")
if pkg_name:
# Try importing the parent to avoid catching initialization errors
@ -136,12 +138,13 @@ def _get_module_details(mod_name, error=ImportError):
if mod_name.endswith(".py"):
msg += (f". Try using '{mod_name[:-3]}' instead of "
f"'{mod_name}' as the module name.")
raise error(msg.format(mod_name, type(ex).__name__, ex)) from ex
raise error(msg.format(mod_name, type(ex).__name__, ex),
**kwargs) from ex
if spec is None:
raise error("No module named %s" % mod_name)
raise error("No module named %s" % mod_name, **kwargs)
if spec.submodule_search_locations is not None:
if mod_name == "__main__" or mod_name.endswith(".__main__"):
raise error("Cannot use package as __main__ module")
raise error("Cannot use package as __main__ module", **kwargs)
try:
pkg_main_name = mod_name + ".__main__"
return _get_module_details(pkg_main_name, error)
@ -149,17 +152,19 @@ def _get_module_details(mod_name, error=ImportError):
if mod_name not in sys.modules:
raise # No module loaded; being a package is irrelevant
raise error(("%s; %r is a package and cannot " +
"be directly executed") %(e, mod_name))
"be directly executed") %(e, mod_name),
**kwargs)
loader = spec.loader
if loader is None:
raise error("%r is a namespace package and cannot be executed"
% mod_name)
% mod_name,
**kwargs)
try:
code = loader.get_code(mod_name)
except ImportError as e:
raise error(format(e)) from e
raise error(format(e), **kwargs) from e
if code is None:
raise error("No code object available for %s" % mod_name)
raise error("No code object available for %s" % mod_name, **kwargs)
return mod_name, spec, code
class _Error(Exception):
@ -232,6 +237,7 @@ def _get_main_module_details(error=ImportError):
# Also moves the standard __main__ out of the way so that the
# preexisting __loader__ entry doesn't cause issues
main_name = "__main__"
kwargs = {"name": main_name} if issubclass(error, ImportError) else {}
saved_main = sys.modules[main_name]
del sys.modules[main_name]
try:
@ -239,7 +245,8 @@ def _get_main_module_details(error=ImportError):
except ImportError as exc:
if main_name in str(exc):
raise error("can't find %r module in %r" %
(main_name, sys.path[0])) from exc
(main_name, sys.path[0]),
**kwargs) from exc
raise
finally:
sys.modules[main_name] = saved_main
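The `kwargs` guard works because `ImportError` and its subclasses accept a `name=` keyword, while most other exception types reject any keyword argument. A short sketch of both sides of that check:

```python
# ImportError stores name= as an attribute, outside args.
err = ImportError("No module named spam", name="spam")
assert err.name == "spam"
assert err.args == ("No module named spam",)

# Other exceptions raise TypeError for the same keyword, which is
# why the diff only builds kwargs for ImportError subclasses.
rejected = False
try:
    RuntimeError("boom", name="spam")
except TypeError:
    rejected = True
assert rejected
```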

View file

@ -248,7 +248,7 @@ def count_positive(iterable):
elif x == 0.0:
found_zero = True
else:
raise StatisticsError('No negative inputs allowed', x)
raise StatisticsError(f'No negative inputs allowed: {x!r}')
total = fsum(map(log, count_positive(data)))
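For context (a sketch, not part of the patch): extra positional arguments after the message only land in `exc.args` and make `str(exc)` render as a tuple, while the f-string keeps the offending value inside the message itself:

```python
from statistics import StatisticsError

old = StatisticsError('No negative inputs allowed', -3.0)
new = StatisticsError(f'No negative inputs allowed: {-3.0!r}')

# Two positional args render as a tuple; the f-string reads cleanly.
assert str(old) == "('No negative inputs allowed', -3.0)"
assert str(new) == 'No negative inputs allowed: -3.0'
```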

View file

@ -337,7 +337,7 @@ class _Stream:
"""
def __init__(self, name, mode, comptype, fileobj, bufsize,
compresslevel, preset):
compresslevel, preset, mtime):
"""Construct a _Stream object.
"""
self._extfileobj = True
@ -372,7 +372,7 @@ def __init__(self, name, mode, comptype, fileobj, bufsize,
self.exception = zlib.error
self._init_read_gz()
else:
self._init_write_gz(compresslevel)
self._init_write_gz(compresslevel, mtime)
elif comptype == "bz2":
try:
@ -421,7 +421,7 @@ def __del__(self):
if hasattr(self, "closed") and not self.closed:
self.close()
def _init_write_gz(self, compresslevel):
def _init_write_gz(self, compresslevel, mtime):
"""Initialize for writing with gzip compression.
"""
self.cmp = self.zlib.compressobj(compresslevel,
@ -429,7 +429,9 @@ def _init_write_gz(self, compresslevel):
-self.zlib.MAX_WBITS,
self.zlib.DEF_MEM_LEVEL,
0)
timestamp = struct.pack("<L", int(time.time()))
if mtime is None:
mtime = int(time.time())
timestamp = struct.pack("<L", mtime)
self.__write(b"\037\213\010\010" + timestamp + b"\002\377")
if self.name.endswith(".gz"):
self.name = self.name[:-3]
@ -1745,7 +1747,7 @@ class TarFile(object):
def __init__(self, name=None, mode="r", fileobj=None, format=None,
tarinfo=None, dereference=None, ignore_zeros=None, encoding=None,
errors="surrogateescape", pax_headers=None, debug=None,
errorlevel=None, copybufsize=None, stream=False):
errorlevel=None, copybufsize=None, stream=False, mtime=None):
"""Open an (uncompressed) tar archive 'name'. 'mode' is either 'r' to
read from an existing archive, 'a' to append data to an existing
file or 'w' to create a new file overwriting an existing one. 'mode'
@ -1951,8 +1953,9 @@ def not_compressed(comptype):
compresslevel = kwargs.pop("compresslevel", 6)
preset = kwargs.pop("preset", None)
mtime = kwargs.pop("mtime", None)
stream = _Stream(name, filemode, comptype, fileobj, bufsize,
compresslevel, preset)
compresslevel, preset, mtime)
try:
t = cls(name, filemode, stream, **kwargs)
except:
@ -1988,7 +1991,8 @@ def gzopen(cls, name, mode="r", fileobj=None, compresslevel=6, **kwargs):
raise CompressionError("gzip module is not available") from None
try:
fileobj = GzipFile(name, mode + "b", compresslevel, fileobj)
mtime = kwargs.pop("mtime", None)
fileobj = GzipFile(name, mode + "b", compresslevel, fileobj, mtime=mtime)
except OSError as e:
if fileobj is not None and mode == 'r':
raise ReadError("not a gzip file") from e
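Threading `mtime` through to the stream header mirrors what `gzip.GzipFile` already supports, and pinning it makes archives reproducible: bytes 4-8 of a gzip header hold the little-endian MTIME field that `_init_write_gz` packs with `struct.pack("<L", mtime)`. A sketch using the gzip module directly:

```python
import gzip
import io
import struct

# Writing with a fixed mtime produces byte-identical output runs.
buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode="wb", mtime=0) as f:
    f.write(b"hello")
data = buf.getvalue()

# Gzip header layout: magic (2), method (1), flags (1), MTIME (4, <L).
assert struct.unpack("<L", data[4:8])[0] == 0
```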

View file

@ -3100,6 +3100,51 @@ def test_bytearray_memoization(self):
self.assertIsNot(b2a, b2b)
self.assert_is_copy(b2a, b2b)
def test_picklebuffer_memoization(self):
if self.py_version < (3, 8):
self.skipTest('not supported in Python < 3.8')
array_types = [bytes, bytearray]
for proto in range(5, pickle.HIGHEST_PROTOCOL + 1):
for array_type in array_types:
for s in b'', b'xyz', b'xyz'*100:
with self.subTest(proto=proto, array_type=array_type, s=s, independent=False):
b = pickle.PickleBuffer(array_type(s))
p = self.dumps((b, b), proto)
b1, b2 = self.loads(p)
self.assertIs(b1, b2)
with self.subTest(proto=proto, array_type=array_type, s=s, independent=True):
b = array_type(s)
b1a = pickle.PickleBuffer(b)
b2a = pickle.PickleBuffer(b)
p = self.dumps((b1a, b2a), proto)
b1b, b2b = self.loads(p)
if array_type is not bytes:
self.assertIsNot(b1b, b2b)
self.assert_is_copy(b1b, b)
self.assert_is_copy(b2b, b)
def test_empty_picklebuffer_memoization(self):
# gh-148914: Empty writable PickleBuffer memoized an empty bytearray
# with the id of b'' (a singleton in CPython).
if self.py_version < (3, 8):
self.skipTest('not supported in Python < 3.8')
for proto in range(5, pickle.HIGHEST_PROTOCOL + 1):
for readonly in False, True:
with self.subTest(proto=proto, readonly=readonly):
b = b''
ba = bytearray()
buf = pickle.PickleBuffer(b if readonly else ba)
p = self.dumps((buf, b, ba), proto)
buf, b, ba = self.loads(p)
array_type = bytes if readonly else bytearray
self.assertIsInstance(buf, array_type)
self.assertIsInstance(b, bytes)
self.assertIsInstance(ba, bytearray)
self.assertEqual(buf, b'')
self.assertEqual(b, b'')
self.assertEqual(ba, b'')
def test_ints(self):
for proto in protocols:
n = sys.maxsize

View file

@ -1619,6 +1619,84 @@ def annotate(format, /):
# Some non-Format value
annotationlib.call_annotate_function(annotate, 7)
def test_basic_non_function_annotate(self):
class Annotate:
def __call__(self, format, /, __Format=Format,
__NotImplementedError=NotImplementedError):
if format == __Format.VALUE:
return {'x': str}
elif format == __Format.VALUE_WITH_FAKE_GLOBALS:
return {'x': int}
elif format == __Format.STRING:
return {'x': "float"}
else:
raise __NotImplementedError(format)
annotations = annotationlib.call_annotate_function(Annotate(), Format.VALUE)
self.assertEqual(annotations, {"x": str})
annotations = annotationlib.call_annotate_function(Annotate(), Format.STRING)
self.assertEqual(annotations, {"x": "float"})
with self.assertRaises(AttributeError) as cm:
annotations = annotationlib.call_annotate_function(
Annotate(), Format.FORWARDREF
)
self.assertEqual(cm.exception.name, "__builtins__")
self.assertIsInstance(cm.exception.obj, Annotate)
def test_full_non_function_annotate(self):
def outer():
local = str
class Annotate:
called_formats = []
def __call__(self, format=None, *, _self=None):
nonlocal local
if _self is not None:
self, format = _self, self
self.called_formats.append(format)
if format == 1: # VALUE
return {"x": MyClass, "y": int, "z": local}
if format == 2: # VALUE_WITH_FAKE_GLOBALS
return {"w": unknown, "x": MyClass, "y": int, "z": local}
raise NotImplementedError
__globals__ = {"MyClass": MyClass}
__builtins__ = {"int": int}
__closure__ = (types.CellType(str),)
__defaults__ = (None,)
__kwdefaults__ = property(lambda self: dict(_self=self))
__code__ = property(lambda self: self.__call__.__code__)
return Annotate()
annotate = outer()
self.assertEqual(
annotationlib.call_annotate_function(annotate, Format.VALUE),
{"x": MyClass, "y": int, "z": str}
)
self.assertEqual(annotate.called_formats[-1], Format.VALUE)
self.assertEqual(
annotationlib.call_annotate_function(annotate, Format.STRING),
{"w": "unknown", "x": "MyClass", "y": "int", "z": "local"}
)
self.assertIn(Format.STRING, annotate.called_formats)
self.assertEqual(annotate.called_formats[-1], Format.VALUE_WITH_FAKE_GLOBALS)
self.assertEqual(
annotationlib.call_annotate_function(annotate, Format.FORWARDREF),
{"w": support.EqualToForwardRef("unknown"), "x": MyClass, "y": int, "z": str}
)
self.assertIn(Format.FORWARDREF, annotate.called_formats)
self.assertEqual(annotate.called_formats[-1], Format.VALUE_WITH_FAKE_GLOBALS)
def test_error_from_value_raised(self):
# Test that the error from format.VALUE is raised
# if all formats fail

View file

@ -1,4 +1,5 @@
import _ast_unparse
import _ast
import ast
import builtins
import contextlib
@ -85,7 +86,9 @@ def _assertTrueorder(self, ast_node, parent_pos):
self.assertEqual(ast_node._fields, ast_node.__match_args__)
def test_AST_objects(self):
x = ast.AST()
# Directly instantiating abstract node class AST is allowed (but deprecated)
with self.assertWarns(DeprecationWarning):
x = ast.AST()
self.assertEqual(x._fields, ())
x.foobar = 42
self.assertEqual(x.foobar, 42)
@ -94,7 +97,7 @@ def test_AST_objects(self):
with self.assertRaises(AttributeError):
x.vararg
with self.assertRaises(TypeError):
with self.assertRaises(TypeError), self.assertWarns(DeprecationWarning):
# "ast.AST constructor takes 0 positional arguments"
ast.AST(2)
@ -110,15 +113,21 @@ def cleanup():
msg = "type object 'ast.AST' has no attribute '_fields'"
# Both examples used to crash:
with self.assertRaisesRegex(AttributeError, msg):
with (
self.assertRaisesRegex(AttributeError, msg),
self.assertWarns(DeprecationWarning),
):
ast.AST(arg1=123)
with self.assertRaisesRegex(AttributeError, msg):
with (
self.assertRaisesRegex(AttributeError, msg),
self.assertWarns(DeprecationWarning),
):
ast.AST()
def test_AST_garbage_collection(self):
def test_node_garbage_collection(self):
class X:
pass
a = ast.AST()
a = ast.Module()
a.x = X()
a.x.a = a
ref = weakref.ref(a.x)
@ -439,7 +448,15 @@ def _construct_ast_class(self, cls):
elif typ is object:
kwargs[name] = b'capybara'
elif isinstance(typ, type) and issubclass(typ, ast.AST):
kwargs[name] = self._construct_ast_class(typ)
if _ast._is_abstract(typ):
# Use an arbitrary concrete subclass
concrete = next((sub for sub in typ.__subclasses__()
if not _ast._is_abstract(sub)), None)
msg = f"abstract node class {typ} has no concrete subclasses"
self.assertIsNotNone(concrete, msg)
else:
concrete = typ
kwargs[name] = self._construct_ast_class(concrete)
return cls(**kwargs)
def test_arguments(self):
@ -578,6 +595,10 @@ def test_nodeclasses(self):
with self.assertRaisesRegex(TypeError, re.escape(msg)):
ast.BinOp(1, 2, 3, foobarbaz=42)
# Directly instantiating abstract node types is allowed (but deprecated)
self.assertWarns(DeprecationWarning, ast.stmt)
self.assertWarns(DeprecationWarning, ast.expr_context)
def test_no_fields(self):
# this used to fail because Sub._fields was None
x = ast.Sub()
@ -585,7 +606,10 @@ def test_no_fields(self):
def test_invalid_sum(self):
pos = dict(lineno=2, col_offset=3)
m = ast.Module([ast.Expr(ast.expr(**pos), **pos)], [])
with self.assertWarns(DeprecationWarning):
# Creating instances of ast.expr is deprecated
e = ast.expr(**pos)
m = ast.Module([ast.Expr(e, **pos)], [])
with self.assertRaises(TypeError) as cm:
compile(m, "<test>", "exec")
self.assertIn("but got expr()", str(cm.exception))
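For comparison (a sketch runnable on any recent CPython): the same tree built with a concrete node such as `ast.Constant` compiles cleanly; only the abstract `ast.expr` placeholder trips the compile-time `TypeError` the test checks for:

```python
import ast

pos = dict(lineno=2, col_offset=3)
# Substitute a concrete expression node for the abstract ast.expr.
m = ast.Module([ast.Expr(ast.Constant(1, **pos), **pos)], [])
ast.fix_missing_locations(m)
code = compile(m, "<test>", "exec")
exec(code)
```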
@ -1107,14 +1131,19 @@ class CopyTests(unittest.TestCase):
def iter_ast_classes():
"""Iterate over the (native) subclasses of ast.AST recursively.
This excludes the special class ast.Index since its constructor
returns an integer.
This excludes:
* abstract AST nodes
* the special class ast.Index, since its constructor returns
an integer.
"""
def do(cls):
if cls.__module__ != 'ast':
return
if cls is ast.Index:
return
# Don't attempt to create instances of abstract AST nodes
if _ast._is_abstract(cls):
return
yield cls
for sub in cls.__subclasses__():
@ -2656,11 +2685,12 @@ def test_get_docstring(self):
def get_load_const(self, tree):
# Compile to bytecode, disassemble and get parameter of LOAD_CONST
# instructions
# and LOAD_COMMON_CONSTANT instructions
co = compile(tree, '<string>', 'exec')
consts = []
for instr in dis.get_instructions(co):
if instr.opcode in dis.hasconst:
if instr.opcode in dis.hasconst or \
instr.opname == 'LOAD_COMMON_CONSTANT':
consts.append(instr.argval)
return consts

View file

@ -3446,6 +3446,27 @@ def testfunc(n):
self.assertIn("_BUILD_LIST", uops)
self.assertNotIn("_LOAD_COMMON_CONSTANT", uops)
def test_load_common_constant_new_literals(self):
def testfunc(n):
x = None
s = ""
t = True
f = False
m = -1
for _ in range(n):
x = None
s = ""
t = True
f = False
m = -1
return x, s, t, f, m
res, ex = self._run_with_optimizer(testfunc, TIER2_THRESHOLD)
self.assertEqual(res, (None, "", True, False, -1))
self.assertIsNotNone(ex)
uops = get_opnames(ex)
self.assertNotIn("_LOAD_COMMON_CONSTANT", uops)
self.assertIn("_LOAD_CONST_INLINE_BORROW", uops)
def test_load_small_int(self):
def testfunc(n):
x = 0

View file

@ -87,7 +87,7 @@
freevars: ()
nlocals: 0
flags: 67108867
consts: ("'doc string'", 'None')
consts: ("'doc string'",)
>>> def keywordonly_args(a,b,*,k1):
... return a,b,k1
@ -161,7 +161,7 @@
freevars: ()
nlocals: 3
flags: 67108995
consts: ("'This is a docstring from async function'", 'None')
consts: ("'This is a docstring from async function'",)
>>> def no_docstring(x, y, z):
... return x + "hello" + y + z + "world"
@ -532,7 +532,7 @@ def test_co_positions_artificial_instructions(self):
],
[
("PUSH_EXC_INFO", None),
("LOAD_CONST", None), # artificial 'None'
("LOAD_COMMON_CONSTANT", None), # artificial 'None'
("STORE_NAME", "e"), # XX: we know the location for this
("DELETE_NAME", "e"),
("RERAISE", 1),

View file

@ -2485,12 +2485,13 @@ def f():
start_line, end_line, _, _ = instr.positions
self.assertEqual(start_line, end_line)
# Expect four `LOAD_CONST None` instructions:
# Expect four `None`-loading instructions:
# three for the no-exception __exit__ call, and one for the return.
# They should all have the locations of the context manager ('xyz').
load_none = [instr for instr in dis.get_instructions(f) if
instr.opname == 'LOAD_CONST' and instr.argval is None]
instr.opname in ('LOAD_CONST', 'LOAD_COMMON_CONSTANT')
and instr.argval is None]
return_value = [instr for instr in dis.get_instructions(f) if
instr.opname == 'RETURN_VALUE']
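The widened filter can be exercised in a version-tolerant way outside the suite; `RETURN_CONST`, emitted by some releases, is included here as an extra hedge so the sketch runs across CPython versions:

```python
import dis

def f():
    pass

# Find whichever instruction loads the implicit return None; the
# opname varies across CPython versions.
load_none = [i for i in dis.get_instructions(f)
             if i.opname in ('LOAD_CONST', 'LOAD_COMMON_CONSTANT',
                             'RETURN_CONST')
             and i.argval is None]
assert len(load_none) >= 1
```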

View file

@ -504,17 +504,25 @@ def check(z, x, y):
with self.assertWarnsRegex(DeprecationWarning,
"argument 'imag' must be a real number, not complex"):
check(complex(0.0, 4.25j), -4.25, 0.0)
with self.assertWarnsRegex(DeprecationWarning,
"argument 'real' must be a real number, not complex"):
with (self.assertWarnsRegex(DeprecationWarning,
"argument 'real' must be a real number, not complex"),
self.assertWarnsRegex(DeprecationWarning,
"argument 'imag' must be a real number, not complex")):
check(complex(4.25+0j, 0j), 4.25, 0.0)
with self.assertWarnsRegex(DeprecationWarning,
"argument 'real' must be a real number, not complex"):
with (self.assertWarnsRegex(DeprecationWarning,
"argument 'real' must be a real number, not complex"),
self.assertWarnsRegex(DeprecationWarning,
"argument 'imag' must be a real number, not complex")):
check(complex(4.25j, 0j), 0.0, 4.25)
with self.assertWarnsRegex(DeprecationWarning,
"argument 'real' must be a real number, not complex"):
with (self.assertWarnsRegex(DeprecationWarning,
"argument 'real' must be a real number, not complex"),
self.assertWarnsRegex(DeprecationWarning,
"argument 'imag' must be a real number, not complex")):
check(complex(0j, 4.25+0j), 0.0, 4.25)
with self.assertWarnsRegex(DeprecationWarning,
"argument 'real' must be a real number, not complex"):
with (self.assertWarnsRegex(DeprecationWarning,
"argument 'real' must be a real number, not complex"),
self.assertWarnsRegex(DeprecationWarning,
"argument 'imag' must be a real number, not complex")):
check(complex(0j, 4.25j), -4.25, 0.0)
check(complex(real=4.25), 4.25, 0.0)

View file

@ -56,7 +56,7 @@ def cm(cls, x):
COMPARE_OP 72 (==)
LOAD_FAST_BORROW 0 (self)
STORE_ATTR 0 (x)
LOAD_CONST 1 (None)
LOAD_COMMON_CONSTANT 7 (None)
RETURN_VALUE
""" % (_C.__init__.__code__.co_firstlineno, _C.__init__.__code__.co_firstlineno + 1,)
@ -67,7 +67,7 @@ def cm(cls, x):
COMPARE_OP 72 (==)
LOAD_FAST_BORROW 0
STORE_ATTR 0
LOAD_CONST 1
LOAD_COMMON_CONSTANT 7 (None)
RETURN_VALUE
"""
@ -79,7 +79,7 @@ def cm(cls, x):
COMPARE_OP 72 (==)
LOAD_FAST_BORROW 0 (cls)
STORE_ATTR 0 (x)
LOAD_CONST 1 (None)
LOAD_COMMON_CONSTANT 7 (None)
RETURN_VALUE
""" % (_C.cm.__code__.co_firstlineno, _C.cm.__code__.co_firstlineno + 2,)
@ -90,7 +90,7 @@ def cm(cls, x):
LOAD_SMALL_INT 1
COMPARE_OP 72 (==)
STORE_FAST 0 (x)
LOAD_CONST 1 (None)
LOAD_COMMON_CONSTANT 7 (None)
RETURN_VALUE
""" % (_C.sm.__code__.co_firstlineno, _C.sm.__code__.co_firstlineno + 2,)
@ -182,7 +182,7 @@ def bug708901():
%3d L2: END_FOR
POP_ITER
LOAD_CONST 1 (None)
LOAD_COMMON_CONSTANT 7 (None)
RETURN_VALUE
""" % (bug708901.__code__.co_firstlineno,
bug708901.__code__.co_firstlineno + 1,
@ -229,7 +229,7 @@ def bug42562():
dis_bug42562 = """\
RESUME 0
LOAD_CONST 0 (None)
LOAD_COMMON_CONSTANT 7 (None)
RETURN_VALUE
"""
@ -282,7 +282,7 @@ def wrap_func_w_kwargs():
LOAD_CONST 1 (('c',))
CALL_KW 3
POP_TOP
LOAD_CONST 2 (None)
LOAD_COMMON_CONSTANT 7 (None)
RETURN_VALUE
""" % (wrap_func_w_kwargs.__code__.co_firstlineno,
wrap_func_w_kwargs.__code__.co_firstlineno + 1)
@ -295,7 +295,7 @@ def wrap_func_w_kwargs():
IMPORT_NAME 2 (math + eager)
CALL_INTRINSIC_1 2 (INTRINSIC_IMPORT_STAR)
POP_TOP
LOAD_CONST 2 (None)
LOAD_COMMON_CONSTANT 7 (None)
RETURN_VALUE
"""
@ -322,7 +322,7 @@ def wrap_func_w_kwargs():
%3d LOAD_GLOBAL 0 (spam)
POP_TOP
LOAD_CONST 0 (None)
LOAD_COMMON_CONSTANT 7 (None)
RETURN_VALUE
"""
@ -331,19 +331,19 @@ def wrap_func_w_kwargs():
%4d LOAD_GLOBAL 0 (spam)
POP_TOP
LOAD_CONST 0 (None)
LOAD_COMMON_CONSTANT 7 (None)
RETURN_VALUE
"""
dis_module_expected_results = """\
Disassembly of f:
4 RESUME 0
LOAD_CONST 0 (None)
LOAD_COMMON_CONSTANT 7 (None)
RETURN_VALUE
Disassembly of g:
5 RESUME 0
LOAD_CONST 0 (None)
LOAD_COMMON_CONSTANT 7 (None)
RETURN_VALUE
"""
@ -368,7 +368,7 @@ def wrap_func_w_kwargs():
LOAD_SMALL_INT 1
BINARY_OP 0 (+)
STORE_NAME 0 (x)
LOAD_CONST 1 (None)
LOAD_COMMON_CONSTANT 7 (None)
RETURN_VALUE
"""
@ -409,7 +409,7 @@ def wrap_func_w_kwargs():
LOAD_SMALL_INT 0
CALL 1
STORE_SUBSCR
LOAD_CONST 2 (None)
LOAD_COMMON_CONSTANT 7 (None)
RETURN_VALUE
"""
@ -427,7 +427,7 @@ def foo(a: int, b: str) -> str:
MAKE_FUNCTION
SET_FUNCTION_ATTRIBUTE 16 (annotate)
STORE_NAME 0 (foo)
LOAD_CONST 2 (None)
LOAD_COMMON_CONSTANT 7 (None)
RETURN_VALUE
"""
@ -477,14 +477,14 @@ def foo(a: int, b: str) -> str:
LOAD_ATTR 2 (__traceback__)
STORE_FAST 1 (tb)
L7: POP_EXCEPT
LOAD_CONST 1 (None)
LOAD_COMMON_CONSTANT 7 (None)
STORE_FAST 0 (e)
DELETE_FAST 0 (e)
%4d LOAD_FAST 1 (tb)
RETURN_VALUE
-- L8: LOAD_CONST 1 (None)
-- L8: LOAD_COMMON_CONSTANT 7 (None)
STORE_FAST 0 (e)
DELETE_FAST 0 (e)
RERAISE 1
@ -554,15 +554,15 @@ def _with(c):
%4d LOAD_SMALL_INT 1
STORE_FAST 1 (x)
%4d L2: LOAD_CONST 1 (None)
LOAD_CONST 1 (None)
LOAD_CONST 1 (None)
%4d L2: LOAD_COMMON_CONSTANT 7 (None)
LOAD_COMMON_CONSTANT 7 (None)
LOAD_COMMON_CONSTANT 7 (None)
CALL 3
POP_TOP
%4d LOAD_SMALL_INT 2
STORE_FAST 2 (y)
LOAD_CONST 1 (None)
LOAD_COMMON_CONSTANT 7 (None)
RETURN_VALUE
%4d L3: PUSH_EXC_INFO
@ -579,7 +579,7 @@ def _with(c):
%4d LOAD_SMALL_INT 2
STORE_FAST 2 (y)
LOAD_CONST 1 (None)
LOAD_COMMON_CONSTANT 7 (None)
RETURN_VALUE
-- L8: COPY 3
@ -617,7 +617,7 @@ async def _asyncwith(c):
CALL 0
GET_AWAITABLE 1
PUSH_NULL
LOAD_CONST 0 (None)
LOAD_COMMON_CONSTANT 7 (None)
L2: SEND 4 (to L5)
L3: YIELD_VALUE 1
L4: RESUME 3
@ -628,13 +628,13 @@ async def _asyncwith(c):
%4d LOAD_SMALL_INT 1
STORE_FAST 1 (x)
%4d L7: LOAD_CONST 0 (None)
LOAD_CONST 0 (None)
LOAD_CONST 0 (None)
%4d L7: LOAD_COMMON_CONSTANT 7 (None)
LOAD_COMMON_CONSTANT 7 (None)
LOAD_COMMON_CONSTANT 7 (None)
CALL 3
GET_AWAITABLE 2
PUSH_NULL
LOAD_CONST 0 (None)
LOAD_COMMON_CONSTANT 7 (None)
L8: SEND 4 (to L11)
L9: YIELD_VALUE 1
L10: RESUME 3
@ -644,7 +644,7 @@ async def _asyncwith(c):
%4d LOAD_SMALL_INT 2
STORE_FAST 2 (y)
LOAD_CONST 0 (None)
LOAD_COMMON_CONSTANT 7 (None)
RETURN_VALUE
%4d L12: CLEANUP_THROW
@ -655,7 +655,7 @@ async def _asyncwith(c):
WITH_EXCEPT_START
GET_AWAITABLE 2
PUSH_NULL
LOAD_CONST 0 (None)
LOAD_COMMON_CONSTANT 7 (None)
L17: SEND 5 (to L21)
L18: YIELD_VALUE 1
L19: RESUME 3
@ -674,7 +674,7 @@ async def _asyncwith(c):
%4d LOAD_SMALL_INT 2
STORE_FAST 2 (y)
LOAD_CONST 0 (None)
LOAD_COMMON_CONSTANT 7 (None)
RETURN_VALUE
-- L26: COPY 3
@ -895,7 +895,7 @@ def foo(x):
JUMP_BACKWARD 17 (to L2)
L3: END_FOR
POP_ITER
LOAD_CONST 0 (None)
LOAD_COMMON_CONSTANT 7 (None)
RETURN_VALUE
-- L4: CALL_INTRINSIC_1 3 (INTRINSIC_STOPITERATION_ERROR)
@ -933,7 +933,7 @@ def loop_test():
%3d RESUME_CHECK{: <6} 0
%3d BUILD_LIST 0
LOAD_CONST 2 ((1, 2, 3))
LOAD_CONST 1 ((1, 2, 3))
LIST_EXTEND 1
LOAD_SMALL_INT 3
BINARY_OP 5 (*)
@ -949,7 +949,7 @@ def loop_test():
%3d L2: END_FOR
POP_ITER
LOAD_CONST 1 (None)
LOAD_COMMON_CONSTANT 7 (None)
RETURN_VALUE
""" % (loop_test.__code__.co_firstlineno,
loop_test.__code__.co_firstlineno + 1,
@ -967,7 +967,7 @@ def extended_arg_quick():
UNPACK_EX 256
POP_TOP
STORE_FAST 0 (_)
LOAD_CONST 1 (None)
LOAD_COMMON_CONSTANT 7 (None)
RETURN_VALUE
"""% (extended_arg_quick.__code__.co_firstlineno,
extended_arg_quick.__code__.co_firstlineno + 1,)
@ -1084,7 +1084,7 @@ def test_dis_with_some_positions(self):
'',
'2:3-3:15 NOP',
'',
'3:11-3:15 LOAD_CONST 0 (None)',
'3:11-3:15 LOAD_COMMON_CONSTANT 7 (None)',
'3:11-3:15 RETURN_VALUE',
'',
' -- L1: PUSH_EXC_INFO',
@ -1115,7 +1115,7 @@ def test_dis_with_linenos_but_no_columns(self):
'',
'2:5-2:6 LOAD_SMALL_INT 1',
'2:?-2:? STORE_FAST 0 (x)',
'2:?-2:? LOAD_CONST 1 (None)',
'2:?-2:? LOAD_COMMON_CONSTANT 7 (None)',
'2:?-2:? RETURN_VALUE',
'',
])
@ -1128,7 +1128,7 @@ def f():
f.__code__ = f.__code__.replace(co_linetable=b'')
expect = '\n'.join([
' RESUME 0',
' LOAD_CONST 0 (None)',
' LOAD_COMMON_CONSTANT 7 (None)',
' RETURN_VALUE',
'',
])
@ -1533,7 +1533,6 @@ def f(c=c):
Flags: OPTIMIZED, NEWLOCALS, VARARGS, VARKEYWORDS, GENERATOR
Constants:
0: <code object f at (.*), file "(.*)", line (.*)>
1: None
Variable names:
0: a
1: b
@ -1605,7 +1604,6 @@ def f(c=c):
Flags: 0x0
Constants:
0: 1
1: None
Names:
0: x"""
@ -1640,7 +1638,6 @@ async def async_def():
Flags: OPTIMIZED, NEWLOCALS, COROUTINE
Constants:
0: 1
1: None
Names:
0: b
1: c
@ -1790,7 +1787,7 @@ def _prepare_test_cases():
make_inst(opname='MAKE_CELL', arg=0, argval='a', argrepr='a', offset=0, start_offset=0, starts_line=True, line_number=None),
make_inst(opname='MAKE_CELL', arg=1, argval='b', argrepr='b', offset=2, start_offset=2, starts_line=False, line_number=None),
make_inst(opname='RESUME', arg=0, argval=0, argrepr='', offset=4, start_offset=4, starts_line=True, line_number=1, cache_info=[('counter', 1, b'\x00\x00')]),
make_inst(opname='LOAD_CONST', arg=4, argval=(3, 4), argrepr='(3, 4)', offset=8, start_offset=8, starts_line=True, line_number=2),
make_inst(opname='LOAD_CONST', arg=3, argval=(3, 4), argrepr='(3, 4)', offset=8, start_offset=8, starts_line=True, line_number=2),
make_inst(opname='LOAD_FAST_BORROW', arg=0, argval='a', argrepr='a', offset=10, start_offset=10, starts_line=False, line_number=2),
make_inst(opname='LOAD_FAST_BORROW', arg=1, argval='b', argrepr='b', offset=12, start_offset=12, starts_line=False, line_number=2),
make_inst(opname='BUILD_TUPLE', arg=2, argval=2, argrepr='', offset=14, start_offset=14, starts_line=False, line_number=2),
@ -1802,11 +1799,11 @@ def _prepare_test_cases():
make_inst(opname='LOAD_GLOBAL', arg=1, argval='print', argrepr='print + NULL', offset=26, start_offset=26, starts_line=True, line_number=7, cache_info=[('counter', 1, b'\x00\x00'), ('index', 1, b'\x00\x00'), ('module_keys_version', 1, b'\x00\x00'), ('builtin_keys_version', 1, b'\x00\x00')]),
make_inst(opname='LOAD_DEREF', arg=0, argval='a', argrepr='a', offset=36, start_offset=36, starts_line=False, line_number=7),
make_inst(opname='LOAD_DEREF', arg=1, argval='b', argrepr='b', offset=38, start_offset=38, starts_line=False, line_number=7),
make_inst(opname='LOAD_CONST', arg=2, argval='', argrepr="''", offset=40, start_offset=40, starts_line=False, line_number=7),
make_inst(opname='LOAD_COMMON_CONSTANT', arg=8, argval='', argrepr="''", offset=40, start_offset=40, starts_line=False, line_number=7),
make_inst(opname='LOAD_SMALL_INT', arg=1, argval=1, argrepr='', offset=42, start_offset=42, starts_line=False, line_number=7),
make_inst(opname='BUILD_LIST', arg=0, argval=0, argrepr='', offset=44, start_offset=44, starts_line=False, line_number=7),
make_inst(opname='BUILD_MAP', arg=0, argval=0, argrepr='', offset=46, start_offset=46, starts_line=False, line_number=7),
make_inst(opname='LOAD_CONST', arg=3, argval='Hello world!', argrepr="'Hello world!'", offset=48, start_offset=48, starts_line=False, line_number=7),
make_inst(opname='LOAD_CONST', arg=2, argval='Hello world!', argrepr="'Hello world!'", offset=48, start_offset=48, starts_line=False, line_number=7),
make_inst(opname='CALL', arg=7, argval=7, argrepr='', offset=50, start_offset=50, starts_line=False, line_number=7, cache_info=[('counter', 1, b'\x00\x00'), ('func_version', 2, b'\x00\x00\x00\x00')]),
make_inst(opname='POP_TOP', arg=None, argval=None, argrepr='', offset=58, start_offset=58, starts_line=False, line_number=7),
make_inst(opname='LOAD_FAST_BORROW', arg=2, argval='f', argrepr='f', offset=60, start_offset=60, starts_line=True, line_number=8),
@ -1851,7 +1848,7 @@ def _prepare_test_cases():
make_inst(opname='LOAD_FAST_BORROW_LOAD_FAST_BORROW', arg=1, argval=('e', 'f'), argrepr='e, f', offset=24, start_offset=24, starts_line=False, line_number=4),
make_inst(opname='CALL', arg=6, argval=6, argrepr='', offset=26, start_offset=26, starts_line=False, line_number=4, cache_info=[('counter', 1, b'\x00\x00'), ('func_version', 2, b'\x00\x00\x00\x00')]),
make_inst(opname='POP_TOP', arg=None, argval=None, argrepr='', offset=34, start_offset=34, starts_line=False, line_number=4),
make_inst(opname='LOAD_CONST', arg=0, argval=None, argrepr='None', offset=36, start_offset=36, starts_line=False, line_number=4),
make_inst(opname='LOAD_COMMON_CONSTANT', arg=7, argval=None, argrepr='None', offset=36, start_offset=36, starts_line=False, line_number=4),
make_inst(opname='RETURN_VALUE', arg=None, argval=None, argrepr='', offset=38, start_offset=38, starts_line=False, line_number=4),
]
@ -1934,16 +1931,16 @@ def _prepare_test_cases():
make_inst(opname='LOAD_CONST', arg=3, argval='Never reach this', argrepr="'Never reach this'", offset=292, start_offset=292, starts_line=False, line_number=26),
make_inst(opname='CALL', arg=1, argval=1, argrepr='', offset=294, start_offset=294, starts_line=False, line_number=26, cache_info=[('counter', 1, b'\x00\x00'), ('func_version', 2, b'\x00\x00\x00\x00')]),
make_inst(opname='POP_TOP', arg=None, argval=None, argrepr='', offset=302, start_offset=302, starts_line=False, line_number=26),
make_inst(opname='LOAD_CONST', arg=4, argval=None, argrepr='None', offset=304, start_offset=304, starts_line=True, line_number=25),
make_inst(opname='LOAD_CONST', arg=4, argval=None, argrepr='None', offset=306, start_offset=306, starts_line=False, line_number=25),
make_inst(opname='LOAD_CONST', arg=4, argval=None, argrepr='None', offset=308, start_offset=308, starts_line=False, line_number=25),
make_inst(opname='LOAD_COMMON_CONSTANT', arg=7, argval=None, argrepr='None', offset=304, start_offset=304, starts_line=True, line_number=25),
make_inst(opname='LOAD_COMMON_CONSTANT', arg=7, argval=None, argrepr='None', offset=306, start_offset=306, starts_line=False, line_number=25),
make_inst(opname='LOAD_COMMON_CONSTANT', arg=7, argval=None, argrepr='None', offset=308, start_offset=308, starts_line=False, line_number=25),
make_inst(opname='CALL', arg=3, argval=3, argrepr='', offset=310, start_offset=310, starts_line=False, line_number=25, cache_info=[('counter', 1, b'\x00\x00'), ('func_version', 2, b'\x00\x00\x00\x00')]),
make_inst(opname='POP_TOP', arg=None, argval=None, argrepr='', offset=318, start_offset=318, starts_line=False, line_number=25),
make_inst(opname='LOAD_GLOBAL', arg=3, argval='print', argrepr='print + NULL', offset=320, start_offset=320, starts_line=True, line_number=28, label=10, cache_info=[('counter', 1, b'\x00\x00'), ('index', 1, b'\x00\x00'), ('module_keys_version', 1, b'\x00\x00'), ('builtin_keys_version', 1, b'\x00\x00')]),
make_inst(opname='LOAD_CONST', arg=6, argval="OK, now we're done", argrepr='"OK, now we\'re done"', offset=330, start_offset=330, starts_line=False, line_number=28),
make_inst(opname='LOAD_CONST', arg=5, argval="OK, now we're done", argrepr='"OK, now we\'re done"', offset=330, start_offset=330, starts_line=False, line_number=28),
make_inst(opname='CALL', arg=1, argval=1, argrepr='', offset=332, start_offset=332, starts_line=False, line_number=28, cache_info=[('counter', 1, b'\x00\x00'), ('func_version', 2, b'\x00\x00\x00\x00')]),
make_inst(opname='POP_TOP', arg=None, argval=None, argrepr='', offset=340, start_offset=340, starts_line=False, line_number=28),
make_inst(opname='LOAD_CONST', arg=4, argval=None, argrepr='None', offset=342, start_offset=342, starts_line=False, line_number=28),
make_inst(opname='LOAD_COMMON_CONSTANT', arg=7, argval=None, argrepr='None', offset=342, start_offset=342, starts_line=False, line_number=28),
make_inst(opname='RETURN_VALUE', arg=None, argval=None, argrepr='', offset=344, start_offset=344, starts_line=False, line_number=28),
make_inst(opname='PUSH_EXC_INFO', arg=None, argval=None, argrepr='', offset=346, start_offset=346, starts_line=True, line_number=25),
make_inst(opname='WITH_EXCEPT_START', arg=None, argval=None, argrepr='', offset=348, start_offset=348, starts_line=False, line_number=25),
@ -1967,7 +1964,7 @@ def _prepare_test_cases():
make_inst(opname='NOT_TAKEN', arg=None, argval=None, argrepr='', offset=402, start_offset=402, starts_line=False, line_number=22),
make_inst(opname='POP_TOP', arg=None, argval=None, argrepr='', offset=404, start_offset=404, starts_line=False, line_number=22),
make_inst(opname='LOAD_GLOBAL', arg=3, argval='print', argrepr='print + NULL', offset=406, start_offset=406, starts_line=True, line_number=23, cache_info=[('counter', 1, b'\x00\x00'), ('index', 1, b'\x00\x00'), ('module_keys_version', 1, b'\x00\x00'), ('builtin_keys_version', 1, b'\x00\x00')]),
make_inst(opname='LOAD_CONST', arg=5, argval='Here we go, here we go, here we go...', argrepr="'Here we go, here we go, here we go...'", offset=416, start_offset=416, starts_line=False, line_number=23),
make_inst(opname='LOAD_CONST', arg=4, argval='Here we go, here we go, here we go...', argrepr="'Here we go, here we go, here we go...'", offset=416, start_offset=416, starts_line=False, line_number=23),
make_inst(opname='CALL', arg=1, argval=1, argrepr='', offset=418, start_offset=418, starts_line=False, line_number=23, cache_info=[('counter', 1, b'\x00\x00'), ('func_version', 2, b'\x00\x00\x00\x00')]),
make_inst(opname='POP_TOP', arg=None, argval=None, argrepr='', offset=426, start_offset=426, starts_line=False, line_number=23),
make_inst(opname='POP_EXCEPT', arg=None, argval=None, argrepr='', offset=428, start_offset=428, starts_line=False, line_number=23),
@ -1978,7 +1975,7 @@ def _prepare_test_cases():
make_inst(opname='RERAISE', arg=1, argval=1, argrepr='', offset=438, start_offset=438, starts_line=False, line_number=None),
make_inst(opname='PUSH_EXC_INFO', arg=None, argval=None, argrepr='', offset=440, start_offset=440, starts_line=False, line_number=None),
make_inst(opname='LOAD_GLOBAL', arg=3, argval='print', argrepr='print + NULL', offset=442, start_offset=442, starts_line=True, line_number=28, cache_info=[('counter', 1, b'\x00\x00'), ('index', 1, b'\x00\x00'), ('module_keys_version', 1, b'\x00\x00'), ('builtin_keys_version', 1, b'\x00\x00')]),
make_inst(opname='LOAD_CONST', arg=6, argval="OK, now we're done", argrepr='"OK, now we\'re done"', offset=452, start_offset=452, starts_line=False, line_number=28),
make_inst(opname='LOAD_CONST', arg=5, argval="OK, now we're done", argrepr='"OK, now we\'re done"', offset=452, start_offset=452, starts_line=False, line_number=28),
make_inst(opname='CALL', arg=1, argval=1, argrepr='', offset=454, start_offset=454, starts_line=False, line_number=28, cache_info=[('counter', 1, b'\x00\x00'), ('func_version', 2, b'\x00\x00\x00\x00')]),
make_inst(opname='POP_TOP', arg=None, argval=None, argrepr='', offset=462, start_offset=462, starts_line=False, line_number=28),
make_inst(opname='RERAISE', arg=0, argval=0, argrepr='', offset=464, start_offset=464, starts_line=False, line_number=28),
@ -1991,7 +1988,7 @@ def _prepare_test_cases():
def simple(): pass
expected_opinfo_simple = [
make_inst(opname='RESUME', arg=0, argval=0, argrepr='', offset=0, start_offset=0, starts_line=True, line_number=simple.__code__.co_firstlineno),
make_inst(opname='LOAD_CONST', arg=0, argval=None, argrepr='None', offset=4, start_offset=4, starts_line=False, line_number=simple.__code__.co_firstlineno),
make_inst(opname='LOAD_COMMON_CONSTANT', arg=7, argval=None, argrepr='None', offset=4, start_offset=4, starts_line=False, line_number=simple.__code__.co_firstlineno),
make_inst(opname='RETURN_VALUE', arg=None, argval=None, argrepr='', offset=6, start_offset=6, starts_line=False, line_number=simple.__code__.co_firstlineno),
]
@ -2600,7 +2597,7 @@ def test_show_cache(self):
CACHE 0 (func_version: 0)
CACHE 0
POP_TOP
LOAD_CONST 0 (None)
LOAD_COMMON_CONSTANT 7 (None)
RETURN_VALUE
'''
for flag in ['-C', '--show-caches']:
@ -2612,7 +2609,7 @@ def test_show_offsets(self):
expect = '''
0 0 RESUME 0
1 4 LOAD_CONST 0 (None)
1 4 LOAD_COMMON_CONSTANT 7 (None)
6 RETURN_VALUE
'''
for flag in ['-O', '--show-offsets']:
@ -2624,7 +2621,7 @@ def test_show_positions(self):
expect = '''
0:0-1:0 RESUME 0
1:0-1:4 LOAD_CONST 0 (None)
1:0-1:4 LOAD_COMMON_CONSTANT 7 (None)
1:0-1:4 RETURN_VALUE
'''
for flag in ['-P', '--show-positions']:
@ -2636,7 +2633,7 @@ def test_specialized_code(self):
expect = '''
0 RESUME 0
1 LOAD_CONST 0 (None)
1 LOAD_COMMON_CONSTANT 7 (None)
RETURN_VALUE
'''
for flag in ['-S', '--specialized']:

View file

@ -1235,17 +1235,6 @@ def test_get_local_part_valid_and_invalid_qp_in_atom_list(self):
'@example.com')
self.assertEqual(local_part.local_part, r'\example\\ example')
def test_get_local_part_unicode_defect(self):
# Currently this only happens when parsing unicode, not when parsing
# stuff that was originally binary.
local_part = self._test_get_x(parser.get_local_part,
'exámple@example.com',
'exámple',
'exámple',
[errors.NonASCIILocalPartDefect],
'@example.com')
self.assertEqual(local_part.local_part, 'exámple')
# get_dtext
def test_get_dtext_only(self):
@ -3374,10 +3363,12 @@ def test_fold_unfoldable_element_stealing_whitespace(self):
self._test(token, expected, policy=policy)
def test_encoded_word_with_undecodable_bytes(self):
self._test(parser.get_address_list(
' =?utf-8?Q?=E5=AE=A2=E6=88=B6=E6=AD=A3=E8=A6=8F=E4=BA=A4=E7?='
self._test(
parser.get_address_list(
' =?utf-8?Q?=E5=AE=A2=E6=88=B6=E6=AD=A3=E8=A6=8F=E4=BA=A4=E7?='
' <xyz@abc.com>'
)[0],
' =?unknown-8bit?b?5a6i5oi25q2j6KaP5Lqk5w==?=\n',
' =?unknown-8bit?b?5a6i5oi25q2j6KaP5Lqk5w==?= <xyz@abc.com>\n',
)

View file

@ -1,4 +1,5 @@
import io
import re
import textwrap
import unittest
import random
@ -295,6 +296,69 @@ def test_keep_long_encoded_newlines(self):
g.flatten(msg)
self.assertEqual(s.getvalue(), self.typ(expected))
def test_non_ascii_addr_spec_raises(self):
# non-ascii is not permitted in any part of an addr-spec. If the
# programmer generated it, it's an error. (See also
# test_non_ascii_addr_spec_preserved below.)
p = self.policy.clone(utf8=False, max_line_length=20)
g = self.genclass(self.ioclass(), policy=p)
# XXX The particular part detected here isn't part of a behavioral
# spec and may change in the future.
cases = [
('wők@example.com', 'wők', 'local-part'),
('wok@exàmple.com', 'exàmple.com', 'domain'),
('wők@exàmple.com', 'wők', 'local-part'),
(
'"Name, for display" <wők@example.com>',
'wők@example.com',
'addr-spec',
),
(
'Näyttönimi <wők@example.com>',
'wők@example.com',
'addr-spec',
),
(
'"a lőng quoted string as the local part"@example.com',
'a lőng quoted string as the local part',
'local-part',
),
]
for address, badtoken, partname in cases:
with self.subTest(address=address):
msg = EmailMessage()
msg['To'] = address
expected_error = (
fr"(?i)(?=.*non-ascii)"
fr"(?=.*{re.escape(badtoken)})"
fr"(?=.*{partname})"
fr"(?=.*policy.*utf8)"
)
with self.assertRaisesRegex(
email.errors.HeaderWriteError, expected_error
):
g.flatten(msg)
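The `expected_error` pattern above relies on stacked lookaheads so the required substrings may appear in any order; a minimal standalone sketch of the technique (the sample strings here are illustrative, not taken from the test):

```python
import re

# Each (?=.*word) asserts "word" occurs somewhere after this position,
# without consuming input, so the whole pattern matches only if every
# word is present, regardless of order. (?i) makes it case-insensitive.
pattern = r"(?i)(?=.*non-ascii)(?=.*local-part)(?=.*utf8)"

message = "policy utf8=False forbids non-ASCII in the local-part"
assert re.search(pattern, message) is not None

# Missing "local-part" and "utf8": no position satisfies all lookaheads.
assert re.search(pattern, "non-ascii only") is None
```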
def test_local_part_quoted_string_wrapped_correctly(self):
msg = self.msgmaker(self.typ(textwrap.dedent("""\
To: <"a long local part in a quoted string"@example.com>
Subject: test
None
""")), policy=self.policy.clone(max_line_length=20))
expected = textwrap.dedent("""\
To: <"a long local part in a
quoted string"@example.com>
Subject: test
None
""")
s = self.ioclass()
g = self.genclass(s, policy=self.policy.clone(max_line_length=30))
g.flatten(msg)
self.assertEqual(s.getvalue(), self.typ(expected))
def _test_boundary_detection(self, linesep):
# Generate a boundary token in the same way as _make_boundary
token = random.randrange(sys.maxsize)
@ -515,12 +579,12 @@ def test_cte_type_7bit_transforms_8bit_cte(self):
def test_smtputf8_policy(self):
msg = EmailMessage()
msg['From'] = "Páolo <főo@bar.com>"
msg['From'] = "Páolo <főo@bàr.com>"
msg['To'] = 'Dinsdale'
msg['Subject'] = 'Nudge nudge, wink, wink \u1F609'
msg.set_content("oh là là, know what I mean, know what I mean?")
expected = textwrap.dedent("""\
From: Páolo <főo@bar.com>
From: Páolo <főo@bàr.com>
To: Dinsdale
Subject: Nudge nudge, wink, wink \u1F609
Content-Type: text/plain; charset="utf-8"
@ -555,6 +619,37 @@ def test_smtp_policy(self):
g.flatten(msg)
self.assertEqual(s.getvalue(), expected)
def test_non_ascii_addr_spec_preserved(self):
# A defective non-ASCII addr-spec parsed from the original
# message is left unchanged when flattening.
# (See also test_non_ascii_addr_spec_raises above.)
source = (
'To: jörg@example.com, "But a long name still works with refold_source" <jörg@example.com>'
).encode()
expected = (
b'To: j\xc3\xb6rg@example.com,\n'
b' "But a long name still works with refold_source" <j\xc3\xb6rg@example.com>\n'
b'\n'
)
msg = message_from_bytes(source, policy=policy.default)
s = io.BytesIO()
g = BytesGenerator(s, policy=policy.default)
g.flatten(msg)
self.assertEqual(s.getvalue(), expected)
def test_idna_encoding_preserved(self):
# Nothing tries to decode a pre-encoded IDNA domain.
msg = EmailMessage()
msg["To"] = Address(
username='jörg',
domain='☕.example'.encode('idna').decode() # IDNA 2003
)
expected = 'To: jörg@xn--53h.example\n\n'.encode()
s = io.BytesIO()
g = BytesGenerator(s, policy=policy.default.clone(utf8=True))
g.flatten(msg)
self.assertEqual(s.getvalue(), expected)
if __name__ == '__main__':
unittest.main()

View file

@ -1271,12 +1271,12 @@ class TestAddressHeader(TestHeaderBase):
'example.com',
None),
}
# XXX: Need many more examples, and in particular some with names in
# trailing comments, which aren't currently handled. comments in
# general are not handled yet.
}
def example_as_address(self, source, defects, decoded, display_name,
addr_spec, username, domain, comment):
h = self.make_header('sender', source)
@ -1294,6 +1294,43 @@ def example_as_address(self, source, defects, decoded, display_name,
# XXX: we have no comment support yet.
#self.assertEqual(a.comment, comment)
example_broken_header_params = {
'just_dquote':
('"',
[errors.InvalidHeaderDefect]*2,
'<>',
'',
'<>',
'',
'',
),
}
def example_broken_header_as_address(
self,
source,
defects,
decoded,
display_name,
addr_spec,
username,
domain,
):
h = self.make_header('sender', source)
self.assertEqual(h, decoded)
self.assertDefectsEqual(h.defects, defects)
a = h.address
self.assertEqual(str(a), decoded)
self.assertEqual(len(h.groups), 1)
self.assertEqual([a], list(h.groups[0].addresses))
self.assertEqual([a], list(h.addresses))
self.assertEqual(a.display_name, display_name)
self.assertEqual(a.addr_spec, addr_spec)
self.assertEqual(a.username, username)
self.assertEqual(a.domain, domain)
def example_as_group(self, source, defects, decoded, display_name,
addr_spec, username, domain, comment):
source = 'foo: {};'.format(source)
@ -1506,17 +1543,19 @@ def test_quoting(self):
self.assertEqual(str(a), '"Sara J." <"bad name"@example.com>')
def test_il8n(self):
a = Address('Éric', 'wok', 'exàmple.com')
a = Address('Éric', 'wők', 'exàmple.com')
self.assertEqual(a.display_name, 'Éric')
self.assertEqual(a.username, 'wok')
self.assertEqual(a.username, 'wők')
self.assertEqual(a.domain, 'exàmple.com')
self.assertEqual(a.addr_spec, 'wok@exàmple.com')
self.assertEqual(str(a), 'Éric <wok@exàmple.com>')
self.assertEqual(a.addr_spec, 'wők@exàmple.com')
self.assertEqual(str(a), 'Éric <wők@exàmple.com>')
# XXX: there is an API design issue that needs to be solved here.
#def test_non_ascii_username_raises(self):
# with self.assertRaises(ValueError):
# Address('foo', 'wők', 'example.com')
def test_i18n_in_addr_spec(self):
a = Address(addr_spec='wők@exàmple.com')
self.assertEqual(a.username, 'wők')
self.assertEqual(a.domain, 'exàmple.com')
self.assertEqual(a.addr_spec, 'wők@exàmple.com')
self.assertEqual(str(a), 'wők@exàmple.com')
def test_crlf_in_constructor_args_raises(self):
cases = (
@ -1537,10 +1576,6 @@ def test_crlf_in_constructor_args_raises(self):
with self.subTest(kwargs=kwargs), self.assertRaisesRegex(ValueError, "invalid arguments"):
Address(**kwargs)
def test_non_ascii_username_in_addr_spec_raises(self):
with self.assertRaises(ValueError):
Address('foo', addr_spec='wők@example.com')
def test_address_addr_spec_and_username_raises(self):
with self.assertRaises(TypeError):
Address('foo', username='bing', addr_spec='bar@baz')
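The `Address` behaviour exercised in the tests above can be reproduced directly with the stdlib; a quick ASCII-only sketch (values are illustrative):

```python
from email.headerregistry import Address

# Constructing from parts and from a whole addr_spec round-trip the same way.
a = Address(display_name="Eric", username="wok", domain="example.com")
assert a.addr_spec == "wok@example.com"
assert str(a) == "Eric <wok@example.com>"

b = Address(addr_spec="wok@example.com")
assert (b.username, b.domain) == ("wok", "example.com")
```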

View file

@ -559,6 +559,75 @@ def test_self_trace_after_ctypes_import(self):
f"stdout: {result.stdout}\nstderr: {result.stderr}"
)
@skip_if_not_supported
@unittest.skipIf(
sys.platform == "linux" and not PROCESS_VM_READV_SUPPORTED,
"Test only runs on Linux with process_vm_readv support",
)
def test_remote_stack_trace_non_ascii_names(self):
# Exercise each PyUnicode kind (1-byte non-ASCII, 2-byte BMP,
# 4-byte non-BMP) for both the filename and the function name
# reported in the stack trace.
latin1 = "zażółć" # 1-byte non-ASCII (forces non-ASCII path)
bmp = "λάμβδα" # 2-byte BMP
astral = "𐌀𐌁𐌂𐌃" # 4-byte non-BMP (Old Italic; XID, no NFKC fold)
func_name = f"{latin1}_{bmp}_{astral}"
script_basename = f"mod_{latin1}_{bmp}_{astral}"
port = find_unused_port()
script = textwrap.dedent(
f"""\
import socket
import time
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(('localhost', {port}))
def {func_name}():
sock.sendall(b"ready")
time.sleep(10_000)
{func_name}()
"""
)
with os_helper.temp_dir() as work_dir:
script_dir = os.path.join(work_dir, "script_pkg")
os.mkdir(script_dir)
script_name = _make_test_script(script_dir, script_basename, script)
server_socket = _create_server_socket(port)
client_socket = None
try:
p = subprocess.Popen([sys.executable, script_name])
client_socket, _ = server_socket.accept()
server_socket.close()
_wait_for_signal(client_socket, b"ready")
stack_trace = get_stack_trace(p.pid)
except PermissionError:
self.skipTest("Insufficient permissions to read the stack trace")
finally:
if client_socket is not None:
client_socket.close()
p.kill()
p.wait(timeout=SHORT_TIMEOUT)
frames = [
frame
for interp in stack_trace
for thread in interp.threads
for frame in thread.frame_info
]
target = next(
(f for f in frames if f.funcname == func_name), None
)
self.assertIsNotNone(
target,
f"Frame for {func_name!r} missing; got "
f"{[(f.filename, f.funcname) for f in frames]}",
)
self.assertEqual(target.filename, script_name)
@skip_if_not_supported
@unittest.skipIf(
sys.platform == "linux" and not PROCESS_VM_READV_SUPPORTED,

View file

@ -27,9 +27,8 @@ def _frame_pointers_expected(machine):
)
if "no-omit-frame-pointer" in cflags:
# For example, configure adds -fno-omit-frame-pointer if Python
# has perf trampoline (PY_HAVE_PERF_TRAMPOLINE) and Python is built
# in debug mode.
# For example, configure adds -fno-omit-frame-pointer by default on
# supported GCC-compatible builds.
return True
if "omit-frame-pointer" in cflags:
return False

350
Lib/test/test_gc_stats.py Normal file
View file

@ -0,0 +1,350 @@
import gc
import os
import textwrap
import time
import unittest
from test.support import (
Py_GIL_DISABLED,
import_helper,
requires_gil_enabled,
requires_remote_subprocess_debugging,
)
from test.test_profiling.test_sampling_profiler.helpers import test_subprocess
try:
import _remote_debugging # noqa: F401
except ImportError:
raise unittest.SkipTest(
"Test only runs when _remote_debugging is available"
)
GC_STATS_FIELDS = (
"gen", "iid", "ts_start", "ts_stop", "collections", "collected",
"uncollectable", "candidates", "duration")
def get_interpreter_identifiers(gc_stats) -> tuple[int, ...]:
return tuple(sorted({s.iid for s in gc_stats}))
def get_generations(gc_stats) -> tuple[int, int, int]:
generations = set()
for s in gc_stats:
generations.add(s.gen)
return tuple(sorted(generations))
def get_last_item(gc_stats, generation: int, iid: int):
item = None
for s in gc_stats:
if s.gen == generation and s.iid == iid:
if item is None or item.ts_start < s.ts_start:
item = s
return item
def has_local_process_debugging():
try:
return _remote_debugging.is_python_process(os.getpid())
except Exception:
return False
def check_gc_stats_fields(testcase, stats):
testcase.assertIsInstance(stats, list)
testcase.assertGreater(len(stats), 0)
for item in stats:
testcase.assertIsInstance(item, _remote_debugging.GCStatsInfo)
testcase.assertEqual(type(item).__match_args__, GC_STATS_FIELDS)
testcase.assertEqual(len(item), len(GC_STATS_FIELDS))
def gc_stats_counters_advanced(before_stats, after_stats, generations, iid):
for generation in generations:
before = get_last_item(before_stats, generation, iid)
after = get_last_item(after_stats, generation, iid)
if after is None or before is None:
return False
if after.duration <= before.duration:
return False
if after.candidates <= before.candidates:
return False
return True
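The `get_last_item` helper above picks the most recent record for a (generation, interpreter) pair by `ts_start`; the selection rule can be illustrated with stand-in records (the `Stat` namedtuple here is purely for the sketch, mimicking only the fields the helper consults):

```python
from collections import namedtuple

# Stand-in for GCStatsInfo with just the fields get_last_item looks at.
Stat = namedtuple("Stat", "gen iid ts_start")

def last_item(stats, generation, iid):
    # Same rule as get_last_item above: among records matching the
    # (generation, interpreter id) pair, the newest ts_start wins.
    item = None
    for s in stats:
        if s.gen == generation and s.iid == iid:
            if item is None or item.ts_start < s.ts_start:
                item = s
    return item

stats = [Stat(0, 0, 10.0), Stat(0, 0, 12.5), Stat(1, 0, 99.0)]
assert last_item(stats, 0, 0).ts_start == 12.5
assert last_item(stats, 2, 0) is None  # no record for that generation
```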
@unittest.skipUnless(
has_local_process_debugging(), "requires local process debugging")
class TestLocalGCStats(unittest.TestCase):
_main_iid = 0 # main interpreter ID
def test_gc_stats_fields(self):
monitor = _remote_debugging.GCMonitor(os.getpid(), debug=True)
stats = monitor.get_gc_stats(all_interpreters=False)
check_gc_stats_fields(self, stats)
def test_module_get_gc_stats_fields(self):
stats = _remote_debugging.get_gc_stats(
os.getpid(), all_interpreters=False)
check_gc_stats_fields(self, stats)
def test_all_interpreters_filter_for_local_process(self):
interpreters = import_helper.import_module("concurrent.interpreters")
source = """
import gc
objects = []
obj = []
obj.append(obj)
objects.append(obj)
gc.collect(0)
gc.collect(1)
gc.collect(2)
"""
interp = interpreters.create()
try:
interp.exec(textwrap.dedent(source))
for generation in range(3):
gc.collect(generation)
main_stats = _remote_debugging.get_gc_stats(
os.getpid(), all_interpreters=False)
all_stats = _remote_debugging.get_gc_stats(
os.getpid(), all_interpreters=True)
finally:
interp.close()
self.assertEqual(get_interpreter_identifiers(main_stats), (0,))
self.assertIn(0, get_interpreter_identifiers(all_stats))
self.assertGreater(len(get_interpreter_identifiers(all_stats)), 1)
self.assertEqual(get_generations(main_stats), (0, 1, 2))
self.assertEqual(get_generations(all_stats), (0, 1, 2))
for iid in get_interpreter_identifiers(all_stats):
for generation in range(3):
self.assertIsNotNone(get_last_item(all_stats, generation, iid))
@unittest.skipUnless(Py_GIL_DISABLED, "requires free-threaded GC")
def test_gc_stats_counters_for_main_interpreter_free_threaded(self):
generations = (0, 1, 2)
before_stats = _remote_debugging.get_gc_stats(
os.getpid(), all_interpreters=False)
for generation in generations:
self.assertIsNotNone(
get_last_item(before_stats, generation, self._main_iid))
objects = []
for _ in range(1000):
obj = []
obj.append(obj)
objects.append(obj)
for generation in generations:
gc.collect(generation)
after_stats = _remote_debugging.get_gc_stats(
os.getpid(), all_interpreters=False)
self.assertTrue(
gc_stats_counters_advanced(
before_stats, after_stats, generations, self._main_iid),
(before_stats, after_stats)
)
@requires_remote_subprocess_debugging()
class TestGCStats(unittest.TestCase):
@classmethod
def setUpClass(cls):
cls._main_iid = 0 # main interpreter ID
cls._main_interpreter_script = '''
import gc
import time
gc.collect(0)
gc.collect(1)
gc.collect(2)
_test_sock.sendall(b"working")
objects = []
while True:
if len(objects) > 100:
objects = []
obj = []
obj.append(obj)
objects.append(obj)
time.sleep(0.1)
gc.collect(0)
gc.collect(1)
gc.collect(2)
'''
cls._script = '''
import concurrent.interpreters as interpreters
import gc
import time
source = """if True:
import gc
if "objects" not in globals():
objects = []
if len(objects) > 100:
objects = []
obj = []
obj.append(obj)
objects.append(obj)
gc.collect(0)
gc.collect(1)
gc.collect(2)
"""
if {0}:
interp = interpreters.create()
interp.exec(source)
gc.collect(0)
gc.collect(1)
gc.collect(2)
_test_sock.sendall(b"working")
objects = []
while True:
if len(objects) > 100:
objects = []
obj = []
obj.append(obj)
objects.append(obj)
time.sleep(0.1)
if {0}:
interp.exec(source)
gc.collect(0)
gc.collect(1)
gc.collect(2)
'''
def _gc_stats_advanced(self, before_stats, after_stats, generations):
for generation in generations:
before = get_last_item(before_stats, generation, self._main_iid)
after = get_last_item(after_stats, generation, self._main_iid)
if after is None or before is None:
return False
if after.ts_stop <= before.ts_stop:
return False
return True
def _collect_gc_stats(self, script: str, all_interpreters: bool,
generations=(2,)):
with test_subprocess(script, wait_for_working=True) as subproc:
monitor = _remote_debugging.GCMonitor(subproc.process.pid, debug=True)
before_stats = monitor.get_gc_stats(all_interpreters=all_interpreters)
for generation in generations:
before = get_last_item(before_stats, generation, self._main_iid)
self.assertIsNotNone(before)
after_stats = before_stats
for _ in range(10):
time.sleep(0.5)
after_stats = monitor.get_gc_stats(all_interpreters=all_interpreters)
if self._gc_stats_advanced(before_stats, after_stats, generations):
break
else:
self.fail(
f"GC stats for generations {generations!r} did not "
f"advance: {before_stats!r} -> {after_stats!r}"
)
return before_stats, after_stats
def _check_gc_stats(self, before, after):
self.assertIsNotNone(before)
self.assertIsNotNone(after)
self.assertGreater(after.collections, before.collections, (before, after))
self.assertGreater(after.ts_start, before.ts_start, (before, after))
self.assertGreater(after.ts_stop, before.ts_stop, (before, after))
self.assertGreater(after.duration, before.duration, (before, after))
self.assertGreater(after.candidates, before.candidates, (before, after))
# may not grow
self.assertGreaterEqual(after.collected, before.collected, (before, after))
self.assertGreaterEqual(after.uncollectable, before.uncollectable, (before, after))
def _check_interpreter_gc_stats(self, before_stats, after_stats):
before_iids = get_interpreter_identifiers(before_stats)
after_iids = get_interpreter_identifiers(after_stats)
self.assertEqual(before_iids, after_iids)
self.assertEqual(get_generations(before_stats), (0, 1, 2))
self.assertEqual(get_generations(after_stats), (0, 1, 2))
for iid in after_iids:
with self.subTest(f"interpreter id={iid}"):
before_last_items = (get_last_item(before_stats, 0, iid),
get_last_item(before_stats, 1, iid),
get_last_item(before_stats, 2, iid))
after_last_items = (get_last_item(after_stats, 0, iid),
get_last_item(after_stats, 1, iid),
get_last_item(after_stats, 2, iid))
for before, after in zip(before_last_items, after_last_items):
self._check_gc_stats(before, after)
def test_gc_stats_timestamps_for_main_interpreter(self):
script = textwrap.dedent(self._main_interpreter_script)
before_stats, after_stats = self._collect_gc_stats(
script, False, generations=(0, 1, 2))
for generation in range(3):
with self.subTest(generation=generation):
before = get_last_item(before_stats, generation, self._main_iid)
after = get_last_item(after_stats, generation, self._main_iid)
self.assertIsNotNone(before)
self.assertIsNotNone(after)
self.assertGreater(
after.collections, before.collections,
(before, after))
self.assertGreater(
after.ts_start, before.ts_start,
(before, after))
self.assertGreater(
after.ts_stop, before.ts_stop,
(before, after))
@requires_gil_enabled()
def test_gc_stats_for_main_interpreter(self):
script = textwrap.dedent(self._script.format(False))
before_stats, after_stats = self._collect_gc_stats(script, False)
self._check_interpreter_gc_stats(before_stats, after_stats)
@requires_gil_enabled()
def test_gc_stats_for_main_interpreter_if_subinterpreter_exists(self):
script = textwrap.dedent(self._script.format(True))
before_stats, after_stats = self._collect_gc_stats(script, False)
self._check_interpreter_gc_stats(before_stats, after_stats)
@requires_gil_enabled()
def test_gc_stats_for_all_interpreters(self):
script = textwrap.dedent(self._script.format(True))
before_stats, after_stats = self._collect_gc_stats(script, True)
before_iids = get_interpreter_identifiers(before_stats)
after_iids = get_interpreter_identifiers(after_stats)
self.assertGreater(len(before_iids), 1)
self.assertGreater(len(after_iids), 1)
self.assertEqual(before_iids, after_iids)
self._check_interpreter_gc_stats(before_stats, after_stats)

View file

@ -0,0 +1,27 @@
# Sample script for use by test_gdb.test_jit
import _testinternalcapi
import operator
WARMUP_ITERATIONS = _testinternalcapi.TIER2_THRESHOLD + 10
def jit_bt_hot(depth, warming_up_caller=False):
if depth == 0:
if not warming_up_caller:
id(42)
return
for iteration in range(WARMUP_ITERATIONS):
operator.call(
jit_bt_hot,
depth - 1,
warming_up_caller or iteration + 1 != WARMUP_ITERATIONS,
)
# Warm the shared shim once without hitting builtin_id so the real run uses
# the steady-state shim path when GDB breaks inside id(42).
jit_bt_hot(1, warming_up_caller=True)
jit_bt_hot(1)

View file

@ -0,0 +1,201 @@
import os
import platform
import re
import sys
import unittest
from .util import setup_module, DebuggerTests
JIT_SAMPLE_SCRIPT = os.path.join(os.path.dirname(__file__), "gdb_jit_sample.py")
# In batch GDB, break in builtin_id() while it is running under JIT,
# then repeatedly "finish" until the selected frame is the JIT executor.
# That gives a deterministic backtrace starting with py::jit:executor.
#
# builtin_id() sits only a few helper frames above the JIT entry on this path.
# This bound is just a generous upper limit so the test fails clearly if the
# expected stack shape changes.
MAX_FINISH_STEPS = 20
# After landing on the JIT entry frame, single-step a bounded number of
# instructions further into the blob so the backtrace is taken from JIT code
# itself rather than the immediate helper-return site. The exact number of
# steps is not significant: each step is cross-checked against the selected
# frame's symbol so the test fails loudly if stepping escapes the registered
# JIT region, instead of asserting against a misleading backtrace.
MAX_JIT_ENTRY_STEPS = 4
EVAL_FRAME_RE = r"(_PyEval_EvalFrameDefault|_PyEval_Vector)"
JIT_EXECUTOR_FRAME = "py::jit:executor"
JIT_ENTRY_SYMBOL = "_PyJIT_Entry"
BACKTRACE_FRAME_RE = re.compile(r"^#\d+\s+.*$", re.MULTILINE)
FINISH_TO_JIT_EXECUTOR = (
"python exec(\"import gdb\\n"
f"target = {JIT_EXECUTOR_FRAME!r}\\n"
f"for _ in range({MAX_FINISH_STEPS}):\\n"
" frame = gdb.selected_frame()\\n"
" if frame is not None and frame.name() == target:\\n"
" break\\n"
" gdb.execute('finish')\\n"
"else:\\n"
" raise RuntimeError('did not reach %s' % target)\\n\")"
)
STEP_INSIDE_JIT_EXECUTOR = (
"python exec(\"import gdb\\n"
f"target = {JIT_EXECUTOR_FRAME!r}\\n"
f"for _ in range({MAX_JIT_ENTRY_STEPS}):\\n"
" frame = gdb.selected_frame()\\n"
" if frame is None or frame.name() != target:\\n"
" raise RuntimeError('left JIT region during stepping: '\\n"
" + repr(frame and frame.name()))\\n"
" gdb.execute('si')\\n"
"frame = gdb.selected_frame()\\n"
"if frame is None or frame.name() != target:\\n"
" raise RuntimeError('stepped out of JIT region after si')\\n\")"
)
def setUpModule():
setup_module()
# The GDB JIT interface registration is gated on __linux__ && __ELF__ in
# Python/jit_unwind.c, and the synthetic EH-frame is only implemented for
# x86_64 and AArch64 (a #error fires otherwise). Skip cleanly on other
# platforms or architectures instead of producing timeouts / empty backtraces.
# is_enabled() implies is_available() and also implies that the runtime has
# JIT execution active; interpreter-only tier 2 builds don't hit this path.
@unittest.skipUnless(sys.platform == "linux",
"GDB JIT interface is only implemented for Linux + ELF")
@unittest.skipUnless(platform.machine() in ("x86_64", "aarch64"),
"GDB JIT CFI emitter only supports x86_64 and AArch64")
@unittest.skipUnless(hasattr(sys, "_jit") and sys._jit.is_enabled(),
"requires a JIT-enabled build with JIT execution active")
class JitBacktraceTests(DebuggerTests):
def get_stack_trace(self, **kwargs):
# These tests validate the JIT-relevant part of the backtrace via
# _assert_jit_backtrace_shape, so an unrelated "?? ()" frame below
# the JIT/eval segment (e.g. libc without debug info) is tolerable.
kwargs.setdefault("skip_on_truncation", False)
return super().get_stack_trace(**kwargs)
def _extract_backtrace_frames(self, gdb_output):
frames = BACKTRACE_FRAME_RE.findall(gdb_output)
self.assertGreater(
len(frames), 0,
f"expected at least one GDB backtrace frame in output:\n{gdb_output}",
)
return frames
def _assert_jit_backtrace_shape(self, gdb_output, *, anchor_at_top):
# Shape assertions applied to every JIT backtrace we produce:
# 1. The synthetic JIT symbol appears exactly once. A second
# py::jit:executor frame would mean the unwinder is
# materializing two native frames for a single logical JIT
# region, or failing to unwind out of the region entirely.
# 2. The unwinder must climb directly back out of the JIT region
# into the eval loop. _PyJIT_Entry only exists to establish the
# physical frame; the synthetic executor FDE collapses it away.
# 3. For tests that assert a specific entry PC, the JIT frame
# is also at #0.
frames = self._extract_backtrace_frames(gdb_output)
backtrace = "\n".join(frames)
jit_frames = [frame for frame in frames if JIT_EXECUTOR_FRAME in frame]
jit_count = len(jit_frames)
self.assertEqual(
jit_count, 1,
f"expected exactly 1 {JIT_EXECUTOR_FRAME} frame, got {jit_count}\n"
f"backtrace:\n{backtrace}",
)
eval_frames = [frame for frame in frames if re.search(EVAL_FRAME_RE, frame)]
eval_count = len(eval_frames)
self.assertGreaterEqual(
eval_count, 1,
f"expected at least one _PyEval_* frame, got {eval_count}\n"
f"backtrace:\n{backtrace}",
)
jit_frame_index = next(
i for i, frame in enumerate(frames) if JIT_EXECUTOR_FRAME in frame
)
frames_after_jit = frames[jit_frame_index + 1:]
first_eval_offset = next(
(
i for i, frame in enumerate(frames_after_jit)
if re.search(EVAL_FRAME_RE, frame)
),
None,
)
self.assertIsNotNone(
first_eval_offset,
f"expected an eval frame after the JIT frame\n"
f"backtrace:\n{backtrace}",
)
unexpected_between = frames_after_jit[:first_eval_offset]
self.assertFalse(
unexpected_between,
"expected the executor frame to unwind directly into eval\n"
f"backtrace:\n{backtrace}",
)
relevant_end = max(
i
for i, frame in enumerate(frames)
if (
JIT_EXECUTOR_FRAME in frame
or re.search(EVAL_FRAME_RE, frame)
)
)
truncated_frames = [
frame for frame in frames[: relevant_end + 1]
if " ?? ()" in frame
]
self.assertFalse(
truncated_frames,
"unexpected truncated frame before the validated JIT/eval segment\n"
f"backtrace:\n{backtrace}",
)
if anchor_at_top:
self.assertRegex(
frames[0],
re.compile(rf"^#0\s+{re.escape(JIT_EXECUTOR_FRAME)}"),
)
def test_bt_unwinds_through_jit_frames(self):
gdb_output = self.get_stack_trace(
script=JIT_SAMPLE_SCRIPT,
cmds_after_breakpoint=["bt"],
PYTHON_JIT="1",
)
# The executor should appear as a named JIT frame and unwind back into
# the eval loop.
self._assert_jit_backtrace_shape(gdb_output, anchor_at_top=False)
def test_bt_handoff_from_jit_entry_to_executor(self):
gdb_output = self.get_stack_trace(
script=JIT_SAMPLE_SCRIPT,
breakpoint=JIT_ENTRY_SYMBOL,
cmds_after_breakpoint=[
"delete 1",
"tbreak builtin_id",
"continue",
"bt",
],
PYTHON_JIT="1",
)
# If we stop first in the shim and then continue into the real JIT
# workload, the final backtrace should match the architecture's
# executor unwind contract.
self._assert_jit_backtrace_shape(gdb_output, anchor_at_top=False)
def test_bt_unwinds_from_inside_jit_executor(self):
gdb_output = self.get_stack_trace(
script=JIT_SAMPLE_SCRIPT,
cmds_after_breakpoint=[
FINISH_TO_JIT_EXECUTOR,
STEP_INSIDE_JIT_EXECUTOR,
"bt",
],
PYTHON_JIT="1",
)
# Once the selected PC is inside the JIT executor, we require that GDB
# identifies the JIT frame at #0 and keeps unwinding into _PyEval_*.
self._assert_jit_backtrace_shape(gdb_output, anchor_at_top=True)

View file

@ -20,6 +20,27 @@
PYTHONHASHSEED = '123'
# gh-91960, bpo-40019: gdb reports these when the optimizer has dropped
# python-frame debug info; the test can't read what's not there.
_OPTIMIZED_OUT_PATTERNS = (
'(frame information optimized out)',
'Unable to read information on python frame',
'(unable to read python frame information)',
)
# gdb prints this when the unwinder genuinely failed to walk a frame,
# i.e. the CFI (ours or a library's) is wrong. Treat it as a hard failure,
# not a skip, so regressions in our own unwind info are not hidden.
_UNWIND_FAILURE_PATTERNS = (
'Backtrace stopped: frame did not save the PC',
)
# gh-104736: " ?? ()" in the bt usually means the unwinder bailed early,
# but can also be unrelated frames without debug info (e.g. libc). Tests
# that validate the JIT-relevant part of the backtrace themselves can
# opt out via skip_on_truncation=False.
_TRUNCATED_BACKTRACE_PATTERNS = (
' ?? ()',
)
def clean_environment():
# Remove PYTHON* environment variables such as PYTHONHOME
@ -160,7 +181,9 @@ def get_stack_trace(self, source=None, script=None,
breakpoint=BREAKPOINT_FN,
cmds_after_breakpoint=None,
import_site=False,
ignore_stderr=False):
ignore_stderr=False,
skip_on_truncation=True,
**env_vars):
'''
Run 'python -c SOURCE' under gdb with a breakpoint.
@ -239,7 +262,7 @@ def get_stack_trace(self, source=None, script=None,
args += [script]
# Use "args" to invoke gdb, capturing stdout, stderr:
out, err = run_gdb(*args, PYTHONHASHSEED=PYTHONHASHSEED)
out, err = run_gdb(*args, PYTHONHASHSEED=PYTHONHASHSEED, **env_vars)
if not ignore_stderr:
for line in err.splitlines():
@ -255,26 +278,20 @@ def get_stack_trace(self, source=None, script=None,
" because the Program Counter is"
" not present")
for pattern in _UNWIND_FAILURE_PATTERNS:
if pattern in out:
raise AssertionError(
f"gdb unwinder failed ({pattern!r}) — CFI bug in our "
f"generated code or in a linked library.\n"
f"Full gdb output:\n{out}"
)
# bpo-40019: Skip the test if gdb failed to read debug information
# because the Python binary is optimized.
for pattern in (
'(frame information optimized out)',
'Unable to read information on python frame',
# gh-91960: On Python built with "clang -Og", gdb gets
# "frame=<optimized out>" for _PyEval_EvalFrameDefault() parameter
'(unable to read python frame information)',
# gh-104736: On Python built with "clang -Og" on ppc64le,
# "py-bt" displays a truncated or not traceback, but "where"
# logs this error message:
'Backtrace stopped: frame did not save the PC',
# gh-104736: When "bt" command displays something like:
# "#1 0x0000000000000000 in ?? ()", the traceback is likely
# truncated or wrong.
' ?? ()',
):
patterns = _OPTIMIZED_OUT_PATTERNS
if skip_on_truncation:
patterns = patterns + _TRUNCATED_BACKTRACE_PATTERNS
for pattern in patterns:
if pattern in out:
raise unittest.SkipTest(f"{pattern!r} found in gdb output")


@ -540,8 +540,16 @@ def test_err(self):
self.assertIn(f"{t.status_client_error}404", lines[1])
class CustomHeaderSimpleHTTPRequestHandler(SimpleHTTPRequestHandler):
extra_response_headers = None
def __init__(self, *args, **kwargs):
kwargs.setdefault('extra_response_headers', self.extra_response_headers)
super().__init__(*args, **kwargs)
class SimpleHTTPServerTestCase(BaseTestCase):
class request_handler(NoLogRequestHandler, SimpleHTTPRequestHandler):
class request_handler(NoLogRequestHandler, CustomHeaderSimpleHTTPRequestHandler):
pass
def setUp(self):
@ -898,6 +906,65 @@ def test_path_without_leading_slash(self):
self.assertEqual(response.getheader("Location"),
self.tempdir_name + "/?hi=1")
def test_extra_response_headers_list_dir(self):
with mock.patch.object(self.request_handler, 'extra_response_headers', [
('X-Test1', 'test1'),
('X-Test2', 'test2'),
]):
response = self.request(self.base_url + '/')
self.assertEqual(response.status, 200)
self.assertEqual(response.getheader("X-Test1"), 'test1')
self.assertEqual(response.getheader("X-Test2"), 'test2')
def test_extra_response_headers_get_file(self):
with mock.patch.object(self.request_handler, 'extra_response_headers', [
('Set-Cookie', 'test1=value1'),
('Set-Cookie', 'test2=value2'),
('X-Test1', 'value3'),
]):
data = b"Dummy index file\r\n"
with open(os.path.join(self.tempdir_name, 'index.html'), 'wb') as f:
f.write(data)
response = self.request(self.base_url + '/')
self.assertEqual(response.status, 200)
self.assertEqual(response.getheader("Set-Cookie"),
'test1=value1, test2=value2')
self.assertEqual(response.getheader("X-Test1"), 'value3')
def test_extra_response_headers_missing_on_404(self):
with mock.patch.object(self.request_handler, 'extra_response_headers', [
('X-Test1', 'value'),
]):
response = self.request(self.base_url + '/missing.html')
self.assertEqual(response.status, 404)
self.assertEqual(response.getheader("X-Test1"), None)
def test_extra_response_headers_dont_overwrite_default_headers(self):
with mock.patch.object(self.request_handler, 'extra_response_headers', [
('Content-Type', 'test/not_allowed'),
('Server', 'not_allowed'),
('Set-Cookie', 'test=allowed'),
]):
# Default headers such as Content-Type must not be overwritten by
# extra_response_headers, but headers listed in
# extra_allowed_duplicate_headers (e.g. Set-Cookie) may be duplicated.
response = self.request(self.base_url + '/')
self.assertEqual(response.status, 200)
self.assertNotEqual(response.getheader("Content-Type"), 'test/not_allowed')
self.assertNotEqual(response.getheader("Server"), 'not_allowed')
self.assertEqual(response.getheader("Set-Cookie"), 'test=allowed')
def test_multiple_requests_dont_duplicate_extra_response_headers(self):
with mock.patch.object(self.request_handler, 'extra_response_headers', [
('x-test', 'test-value'),
]):
response = self.request(self.base_url + '/')
self.assertEqual(response.status, 200)
self.assertEqual(response.getheader("x-test"), 'test-value')
response = self.request(self.base_url + '/')
self.assertEqual(response.status, 200)
self.assertEqual(response.getheader("x-test"), 'test-value')
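The `extra_response_headers` behavior these tests exercise can be approximated on interpreters without the feature by emitting the extra headers from the handler before `end_headers()`. A minimal sketch (the handler class and header names here are illustrative, not part of the patch):

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class ExtraHeaderHandler(BaseHTTPRequestHandler):
    # class attribute mirroring the shape used by the tests above
    extra_response_headers = [("X-Test1", "test1"), ("X-Test2", "test2")]

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        for name, value in self.extra_response_headers:
            self.send_header(name, value)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example quiet

server = ThreadingHTTPServer(("127.0.0.1", 0), ExtraHeaderHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
conn.request("GET", "/")
response = conn.getresponse()
x_test1 = response.getheader("X-Test1")
conn.close()
server.shutdown()
print(x_test1)  # test1
```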
class SocketlessRequestHandler(SimpleHTTPRequestHandler):
def __init__(self, directory=None):
@ -1458,6 +1525,21 @@ def test_content_type_flag(self, mock_func):
mock_func.assert_called_once_with(**call_args)
mock_func.reset_mock()
@mock.patch('http.server.test')
def test_header_flag(self, mock_func):
call_args = self.args
self.invoke_httpd('--header', 'h1', 'v1', '-H', 'h2', 'v2')
mock_func.assert_called_once_with(**call_args)
mock_func.reset_mock()
def test_extra_header_flag_too_few_args(self):
with self.assertRaises(SystemExit):
self.invoke_httpd('--header', 'h1')
def test_extra_header_flag_too_many_args(self):
with self.assertRaises(SystemExit):
self.invoke_httpd('--header', 'h1', 'v1', 'h2')
@unittest.skipIf(ssl is None, "requires ssl")
@mock.patch('http.server.test')
def test_tls_cert_and_key_flags(self, mock_func):
@ -1541,6 +1623,30 @@ def test_unknown_flag(self, _):
self.assertEqual(stdout.getvalue(), '')
self.assertIn('error', stderr.getvalue())
@mock.patch('http.server.test')
def test_extra_response_headers_arg(self, mock_test):
# Call the main function with extra response headers cli args
server._main(
['-H', 'Set-Cookie', 'k=v', '-H', 'Set-Cookie', 'k2=v2:v3 v4', '8080']
)
# Get the ServerClass (DualStackServerMixin subclass) that _main()
# passed to test(), and verify its finish_request passes
# extra_response_headers to the handler.
_, kwargs = mock_test.call_args
server_class = kwargs['ServerClass']
mock_handler_class = mock.MagicMock()
mock_server = mock.Mock()
mock_server.RequestHandlerClass = mock_handler_class
server_class.finish_request(mock_server, mock.Mock(), '127.0.0.1')
mock_handler_class.assert_called_once_with(
mock.ANY, mock.ANY, mock_server,
directory=mock.ANY,
extra_response_headers=[
['Set-Cookie', 'k=v'], ['Set-Cookie', 'k2=v2:v3 v4']
]
)
class CommandLineRunTimeTestCase(unittest.TestCase):
served_data = os.urandom(32)

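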

@ -4753,6 +4753,16 @@ def foo(self):
stdout, stderr = self.run_pdb_script(script, commands)
self.assertIn("The specified object 'C.foo' is not a function", stdout)
def test_pyrepl_available(self):
with patch.dict(os.environ, {"PYTHON_BASIC_REPL": "1"}):
self.assertFalse(pdb._pyrepl_available())
with patch.dict(os.environ, {}, clear=True):
mod = types.ModuleType("_pyrepl.main")
mod.CAN_USE_PYREPL = True
with patch.dict("sys.modules", {"_pyrepl.main": mod}):
self.assertTrue(pdb._pyrepl_available())
class ChecklineTests(unittest.TestCase):
def setUp(self):


@ -106,7 +106,7 @@ def test_elim_inversion_of_is_or_in(self):
self.check_lnotab(code)
def test_global_as_constant(self):
# LOAD_GLOBAL None/True/False --> LOAD_CONST None/True/False
# LOAD_GLOBAL None/True/False --> LOAD_COMMON_CONSTANT None/True/False
def f():
x = None
x = None
@ -121,7 +121,7 @@ def h():
for func, elem in ((f, None), (g, True), (h, False)):
with self.subTest(func=func):
self.assertNotInBytecode(func, 'LOAD_GLOBAL')
self.assertInBytecode(func, 'LOAD_CONST', elem)
self.assertInBytecode(func, 'LOAD_COMMON_CONSTANT', elem)
self.check_lnotab(func)
def f():
@ -129,7 +129,7 @@ def f():
return None
self.assertNotInBytecode(f, 'LOAD_GLOBAL')
self.assertInBytecode(f, 'LOAD_CONST', None)
self.assertInBytecode(f, 'LOAD_COMMON_CONSTANT', None)
self.check_lnotab(f)
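Independent of which load opcode a given version emits, the invariant these assertions rely on can be checked with `dis`: `None`/`True`/`False` compile to a constant load, never a global lookup. A version-portable sketch (the exact opcode name varies; this branch uses `LOAD_COMMON_CONSTANT`):

```python
import dis

def f():
    return None

opnames = {instr.opname for instr in dis.get_instructions(f)}
# None is compiled as a constant, not looked up as a global; the exact
# opcode (LOAD_CONST, RETURN_CONST, LOAD_COMMON_CONSTANT) is version-specific.
print('LOAD_GLOBAL' in opnames)  # False
```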
def test_while_one(self):
@ -146,13 +146,14 @@ def f():
def test_pack_unpack(self):
for line, elem in (
('a, = a,', 'LOAD_CONST',),
('a, = a,', None),
('a, b = a, b', 'SWAP',),
('a, b, c = a, b, c', 'SWAP',),
):
with self.subTest(line=line):
code = compile(line,'','single')
self.assertInBytecode(code, elem)
if elem is not None:
self.assertInBytecode(code, elem)
self.assertNotInBytecode(code, 'BUILD_TUPLE')
self.assertNotInBytecode(code, 'UNPACK_SEQUENCE')
self.check_lnotab(code)
@ -174,10 +175,10 @@ def test_constant_folding_tuples_of_constants(self):
# Long tuples should be folded too.
code = compile(repr(tuple(range(10000))),'','single')
self.assertNotInBytecode(code, 'BUILD_TUPLE')
# One LOAD_CONST for the tuple, one for the None return value
# One LOAD_CONST for the tuple; None return value uses LOAD_COMMON_CONSTANT
load_consts = [instr for instr in dis.get_instructions(code)
if instr.opname == 'LOAD_CONST']
self.assertEqual(len(load_consts), 2)
self.assertEqual(len(load_consts), 1)
self.check_lnotab(code)
# Bug 1053819: Tuple of constants misidentified when presented with:
@ -283,11 +284,11 @@ def test_constant_folding_unaryop(self):
('-0.0', 'UNARY_NEGATIVE', None, True, 'LOAD_CONST', -0.0),
('-(1.0-1.0)', 'UNARY_NEGATIVE', None, True, 'LOAD_CONST', -0.0),
('-0.5', 'UNARY_NEGATIVE', None, True, 'LOAD_CONST', -0.5),
('---1', 'UNARY_NEGATIVE', None, True, 'LOAD_CONST', -1),
('---1', 'UNARY_NEGATIVE', None, True, 'LOAD_COMMON_CONSTANT', -1),
('---""', 'UNARY_NEGATIVE', None, False, None, None),
('~~~1', 'UNARY_INVERT', None, True, 'LOAD_CONST', -2),
('~~~""', 'UNARY_INVERT', None, False, None, None),
('not not True', 'UNARY_NOT', None, True, 'LOAD_CONST', True),
('not not True', 'UNARY_NOT', None, True, 'LOAD_COMMON_CONSTANT', True),
('not not x', 'UNARY_NOT', None, True, 'LOAD_NAME', 'x'), # this should be optimized regardless of constant or not
('+++1', 'CALL_INTRINSIC_1', intrinsic_positive, True, 'LOAD_SMALL_INT', 1),
('---x', 'UNARY_NEGATIVE', None, False, None, None),
@ -326,7 +327,7 @@ def test_constant_folding_binop(self):
('1 + 2', 'NB_ADD', True, 'LOAD_SMALL_INT', 3),
('1 + 2 + 3', 'NB_ADD', True, 'LOAD_SMALL_INT', 6),
('1 + ""', 'NB_ADD', False, None, None),
('1 - 2', 'NB_SUBTRACT', True, 'LOAD_CONST', -1),
('1 - 2', 'NB_SUBTRACT', True, 'LOAD_COMMON_CONSTANT', -1),
('1 - 2 - 3', 'NB_SUBTRACT', True, 'LOAD_CONST', -4),
('1 - ""', 'NB_SUBTRACT', False, None, None),
('2 * 2', 'NB_MULTIPLY', True, 'LOAD_SMALL_INT', 4),
@ -1539,7 +1540,7 @@ def test_optimize_literal_list_for_iter(self):
end,
('END_FOR', None, 0),
('POP_ITER', None, 0),
('LOAD_CONST', 0, 0),
('LOAD_COMMON_CONSTANT', 7, 0),
('RETURN_VALUE', None, 0),
]
self.cfg_optimization_test(before, after, consts=[None], expected_consts=[None, (1, 2)])
@ -1572,7 +1573,7 @@ def test_optimize_literal_list_for_iter(self):
end,
('END_FOR', None, 0),
('POP_ITER', None, 0),
('LOAD_CONST', 0, 0),
('LOAD_COMMON_CONSTANT', 7, 0),
('RETURN_VALUE', None, 0),
]
self.cfg_optimization_test(before, after, consts=[None], expected_consts=[None])
@ -1604,7 +1605,7 @@ def test_optimize_literal_set_for_iter(self):
end,
('END_FOR', None, 0),
('POP_ITER', None, 0),
('LOAD_CONST', 0, 0),
('LOAD_COMMON_CONSTANT', 7, 0),
('RETURN_VALUE', None, 0),
]
self.cfg_optimization_test(before, after, consts=[None], expected_consts=[None, frozenset({1, 2})])
@ -1626,7 +1627,22 @@ def test_optimize_literal_set_for_iter(self):
('LOAD_CONST', 0, 0),
('RETURN_VALUE', None, 0),
]
self.cfg_optimization_test(same, same, consts=[None], expected_consts=[None])
expected = [
('LOAD_SMALL_INT', 1, 0),
('LOAD_NAME', 0, 0),
('BUILD_SET', 2, 0),
('GET_ITER', 0, 0),
start := self.Label(),
('FOR_ITER', end := self.Label(), 0),
('STORE_FAST', 0, 0),
('JUMP', start, 0),
end,
('END_FOR', None, 0),
('POP_ITER', None, 0),
('LOAD_COMMON_CONSTANT', 7, 0),
('RETURN_VALUE', None, 0),
]
self.cfg_optimization_test(same, expected, consts=[None], expected_consts=[None])
def test_optimize_literal_list_contains(self):
# x in [1, 2] ==> x in (1, 2)
@ -1645,7 +1661,7 @@ def test_optimize_literal_list_contains(self):
('LOAD_CONST', 1, 0),
('CONTAINS_OP', 0, 0),
('POP_TOP', None, 0),
('LOAD_CONST', 0, 0),
('LOAD_COMMON_CONSTANT', 7, 0),
('RETURN_VALUE', None, 0),
]
self.cfg_optimization_test(before, after, consts=[None], expected_consts=[None, (1, 2)])
@ -1668,7 +1684,7 @@ def test_optimize_literal_list_contains(self):
('BUILD_TUPLE', 2, 0),
('CONTAINS_OP', 0, 0),
('POP_TOP', None, 0),
('LOAD_CONST', 0, 0),
('LOAD_COMMON_CONSTANT', 7, 0),
('RETURN_VALUE', None, 0),
]
self.cfg_optimization_test(before, after, consts=[None], expected_consts=[None])
@ -1690,7 +1706,7 @@ def test_optimize_literal_set_contains(self):
('LOAD_CONST', 1, 0),
('CONTAINS_OP', 0, 0),
('POP_TOP', None, 0),
('LOAD_CONST', 0, 0),
('LOAD_COMMON_CONSTANT', 7, 0),
('RETURN_VALUE', None, 0),
]
self.cfg_optimization_test(before, after, consts=[None], expected_consts=[None, frozenset({1, 2})])
@ -1707,7 +1723,17 @@ def test_optimize_literal_set_contains(self):
('LOAD_CONST', 0, 0),
('RETURN_VALUE', None, 0),
]
self.cfg_optimization_test(same, same, consts=[None], expected_consts=[None])
expected = [
('LOAD_NAME', 0, 0),
('LOAD_SMALL_INT', 1, 0),
('LOAD_NAME', 1, 0),
('BUILD_SET', 2, 0),
('CONTAINS_OP', 0, 0),
('POP_TOP', None, 0),
('LOAD_COMMON_CONSTANT', 7, 0),
('RETURN_VALUE', None, 0),
]
self.cfg_optimization_test(same, expected, consts=[None], expected_consts=[None])
def test_optimize_unary_not(self):
# test folding
@ -1718,10 +1744,10 @@ def test_optimize_unary_not(self):
('RETURN_VALUE', None, 0),
]
after = [
('LOAD_CONST', 1, 0),
('LOAD_COMMON_CONSTANT', 10, 0),
('RETURN_VALUE', None, 0),
]
self.cfg_optimization_test(before, after, consts=[], expected_consts=[True, False])
self.cfg_optimization_test(before, after, consts=[], expected_consts=[True])
# test cancel out
before = [
@ -1769,7 +1795,7 @@ def test_optimize_unary_not(self):
('RETURN_VALUE', None, 0),
]
after = [
('LOAD_CONST', 0, 0),
('LOAD_COMMON_CONSTANT', 9, 0),
('RETURN_VALUE', None, 0),
]
self.cfg_optimization_test(before, after, consts=[], expected_consts=[True])
@ -1785,10 +1811,10 @@ def test_optimize_unary_not(self):
('RETURN_VALUE', None, 0),
]
after = [
('LOAD_CONST', 1, 0),
('LOAD_COMMON_CONSTANT', 10, 0),
('RETURN_VALUE', None, 0),
]
self.cfg_optimization_test(before, after, consts=[], expected_consts=[True, False])
self.cfg_optimization_test(before, after, consts=[], expected_consts=[True])
# test cancel out & eliminate to bool (to bool stays as we are not iterating to a fixed point)
before = [
@ -2430,7 +2456,7 @@ def test_list_to_tuple_get_iter(self):
end,
("END_FOR", None, 11),
("POP_TOP", None, 12),
("LOAD_CONST", 0, 13),
("LOAD_COMMON_CONSTANT", 7, 13),
("RETURN_VALUE", None, 14),
]
self.cfg_optimization_test(insts, expected_insts, consts=[None])
@ -2454,7 +2480,7 @@ def make_bb(self, insts):
maxconst = max(maxconst, arg)
consts = [None for _ in range(maxconst + 1)]
return insts + [
("LOAD_CONST", 0, last_loc + 1),
("LOAD_COMMON_CONSTANT", 7, last_loc + 1),
("RETURN_VALUE", None, last_loc + 2),
], consts
@ -2485,7 +2511,7 @@ def test_optimized(self):
]
expected = [
("LOAD_FAST_BORROW", 0, 1),
("LOAD_CONST", 1, 2),
("LOAD_COMMON_CONSTANT", 7, 2),
("SWAP", 2, 3),
("POP_TOP", None, 4),
]
@ -2523,7 +2549,13 @@ def test_unoptimized_if_support_killed(self):
("STORE_FAST", 0, 3),
("POP_TOP", None, 4),
]
self.check(insts, insts)
expected = [
("LOAD_FAST", 0, 1),
("LOAD_COMMON_CONSTANT", 7, 2),
("STORE_FAST", 0, 3),
("POP_TOP", None, 4),
]
self.check(insts, expected)
insts = [
("LOAD_FAST", 0, 1),
@ -2532,7 +2564,14 @@ def test_unoptimized_if_support_killed(self):
("STORE_FAST_STORE_FAST", ((0 << 4) | 1), 4),
("POP_TOP", None, 5),
]
self.check(insts, insts)
expected = [
("LOAD_FAST", 0, 1),
("LOAD_COMMON_CONSTANT", 7, 2),
("LOAD_COMMON_CONSTANT", 7, 3),
("STORE_FAST_STORE_FAST", ((0 << 4) | 1), 4),
("POP_TOP", None, 5),
]
self.check(insts, expected)
insts = [
("LOAD_FAST", 0, 1),
@ -2553,7 +2592,12 @@ def test_unoptimized_if_aliased(self):
("LOAD_CONST", 0, 3),
("STORE_FAST_STORE_FAST", ((0 << 4) | 1), 4),
]
self.check(insts, insts)
expected = [
("LOAD_FAST", 0, 1),
("LOAD_COMMON_CONSTANT", 7, 3),
("STORE_FAST_STORE_FAST", ((0 << 4) | 1), 4),
]
self.check(insts, expected)
def test_consume_no_inputs(self):
insts = [
@ -2598,7 +2642,19 @@ def test_for_iter(self):
("LOAD_CONST", 0, 7),
("RETURN_VALUE", None, 8),
]
self.cfg_optimization_test(insts, insts, consts=[None])
expected = [
("LOAD_FAST", 0, 1),
top := self.Label(),
("FOR_ITER", end := self.Label(), 2),
("STORE_FAST", 2, 3),
("JUMP", top, 4),
end,
("END_FOR", None, 5),
("POP_TOP", None, 6),
("LOAD_COMMON_CONSTANT", 7, 7),
("RETURN_VALUE", None, 8),
]
self.cfg_optimization_test(insts, expected, consts=[None])
def test_load_attr(self):
insts = [
@ -2667,10 +2723,10 @@ def test_send(self):
("LOAD_FAST", 0, 1),
("LOAD_FAST_BORROW", 1, 2),
("SEND", end := self.Label(), 3),
("LOAD_CONST", 0, 4),
("LOAD_COMMON_CONSTANT", 7, 4),
("RETURN_VALUE", None, 5),
end,
("LOAD_CONST", 0, 6),
("LOAD_COMMON_CONSTANT", 7, 6),
("RETURN_VALUE", None, 7)
]
self.cfg_optimization_test(insts, expected, consts=[None])
@ -2708,7 +2764,15 @@ def test_set_function_attribute(self):
("LOAD_CONST", 0, 5),
("RETURN_VALUE", None, 6)
]
self.cfg_optimization_test(insts, insts, consts=[None])
expected = [
("LOAD_COMMON_CONSTANT", 7, 1),
("LOAD_FAST", 0, 2),
("SET_FUNCTION_ATTRIBUTE", 2, 3),
("STORE_FAST", 1, 4),
("LOAD_COMMON_CONSTANT", 7, 5),
("RETURN_VALUE", None, 6)
]
self.cfg_optimization_test(insts, expected, consts=[None])
insts = [
("LOAD_CONST", 0, 1),
@ -2717,7 +2781,7 @@ def test_set_function_attribute(self):
("RETURN_VALUE", None, 4)
]
expected = [
("LOAD_CONST", 0, 1),
("LOAD_COMMON_CONSTANT", 7, 1),
("LOAD_FAST_BORROW", 0, 2),
("SET_FUNCTION_ATTRIBUTE", 2, 3),
("RETURN_VALUE", None, 4)
@ -2740,7 +2804,22 @@ def test_get_yield_from_iter(self):
("LOAD_CONST", 0, 11),
("RETURN_VALUE", None, 12),
]
self.cfg_optimization_test(insts, insts, consts=[None])
expected = [
("LOAD_FAST", 0, 1),
("GET_ITER", 1, 2),
("PUSH_NULL", None, 3),
("LOAD_COMMON_CONSTANT", 7, 4),
send := self.Label(),
("SEND", end := self.Label(), 6),
("YIELD_VALUE", 1, 7),
("RESUME", 2, 8),
("JUMP", send, 9),
end,
("END_SEND", None, 10),
("LOAD_COMMON_CONSTANT", 7, 11),
("RETURN_VALUE", None, 12),
]
self.cfg_optimization_test(insts, expected, consts=[None])
def test_push_exc_info(self):
insts = [


@ -1,4 +1,5 @@
import os
import sys
import sysconfig
import unittest
@ -17,6 +18,9 @@ def supports_trampoline_profiling():
raise unittest.SkipTest("perf trampoline profiling not supported")
class TestPerfMapWriting(unittest.TestCase):
def tearDown(self):
perf_map_state_teardown()
def test_write_perf_map_entry(self):
self.assertEqual(write_perf_map_entry(0x1234, 5678, "entry1"), 0)
self.assertEqual(write_perf_map_entry(0x2345, 6789, "entry2"), 0)
@ -24,4 +28,15 @@ def test_write_perf_map_entry(self):
perf_file_contents = f.read()
self.assertIn("1234 162e entry1", perf_file_contents)
self.assertIn("2345 1a85 entry2", perf_file_contents)
perf_map_state_teardown()
@unittest.skipIf(sys.maxsize <= 2**32, "requires size_t wider than unsigned int")
def test_write_perf_map_entry_large_size(self):
code_addr = 0x3456
code_size = 1 << 33
entry_name = "entry_big"
self.assertEqual(write_perf_map_entry(code_addr, code_size, entry_name), 0)
with open(f"/tmp/perf-{os.getpid()}.map") as f:
perf_file_contents = f.read()
self.assertIn(f"{code_addr:x} {code_size:x} {entry_name}",
perf_file_contents)
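The substring these tests grep for follows the perf map line format: start address and size in lowercase hex, then the entry name. A tiny sketch of how the expected strings are derived:

```python
# Format of one /tmp/perf-$pid.map line, matching what the tests check:
# "<addr-hex> <size-hex> <name>" with lowercase hex and no 0x prefix.
code_addr, code_size, entry_name = 0x1234, 5678, "entry1"
line = f"{code_addr:x} {code_size:x} {entry_name}"
print(line)  # 1234 162e entry1
```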


@ -1075,6 +1075,12 @@ def test_avg_std(self):
msg='%s%r' % (variate.__name__, args))
self.assertAlmostEqual(s2/(N-1), sigmasqrd, places=2,
msg='%s%r' % (variate.__name__, args))
def test_binomialvariate_log_zero(self):
# gh-149222: random() returning 0.0 must not raise an error
with unittest.mock.patch.object(random.Random, 'random', side_effect=[0.0] + [0.5] * 20):
result = random.binomialvariate(10, 0.5)
self.assertIsInstance(result, int)
self.assertIn(result, range(11))
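The bug class behind gh-149222 is that `math.log(0.0)` raises `ValueError`, and `random.random()` can legitimately return 0.0. A sketch of the general pitfall and one common guard, sampling from the half-open interval (0, 1] (illustrative only, not necessarily the exact fix the commit applies):

```python
import math
import random

# math.log(0.0) raises, so any sampler computing log(random()) needs a guard
try:
    math.log(0.0)
    saw_error = False
except ValueError:
    saw_error = True

# one common guard: random() is in [0.0, 1.0), so 1.0 - random() is in (0.0, 1.0]
u = 1.0 - random.random()
value = math.log(u)  # never raises for u in (0.0, 1.0]
print(saw_error, u > 0.0)
```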
def test_constant(self):
g = random.Random()


@ -217,6 +217,25 @@ def test_invalid_names(self):
# Package without __main__.py
self.expect_import_error("multiprocessing")
def test_invalid_names_set_name_attribute(self):
cases = [
# (mod_name, expected_name) -- comment indicates raise site
("nonexistent_runpy_test_module",
"nonexistent_runpy_test_module"), # spec is None
("sys.imp.eric", "sys.imp.eric"), # find_spec error
(".relative_name", ".relative_name"), # relative name rejected
("sys", "sys"), # builtin: no code object
("multiprocessing", "multiprocessing"), # package without __main__
]
for mod_name, expected_name in cases:
with self.subTest(mod_name=mod_name):
try:
run_module(mod_name)
except ImportError as exc:
self.assertEqual(exc.name, expected_name)
else:
self.fail("Expected ImportError for %r" % mod_name)
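For a plain failed `import` statement, CPython already populates `ImportError.name`; the tests above extend that guarantee to `runpy`'s entry points. A quick illustration of the attribute (the module name below is hypothetical and assumed not to exist):

```python
# ModuleNotFoundError (a subclass of ImportError) carries the failing
# module's name in its .name attribute.
try:
    import nonexistent_runpy_demo_module  # hypothetical missing module
except ImportError as exc:
    failed_name = exc.name

print(failed_name)  # nonexistent_runpy_demo_module
```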
def test_library_module(self):
self.assertEqual(run_module("runpy")["__name__"], "runpy")
@ -714,6 +733,17 @@ def test_directory_error(self):
msg = "can't find '__main__' module in %r" % script_dir
self._check_import_error(script_dir, msg)
def test_directory_error_sets_name_attribute(self):
with temp_dir() as script_dir:
self._make_test_script(script_dir, 'not_main')
try:
run_path(script_dir)
except ImportError as exc:
self.assertEqual(exc.name, '__main__')
else:
self.fail("Expected ImportError for directory without "
"__main__.py")
def test_zipfile(self):
with temp_dir() as script_dir:
mod_name = '__main__'


@ -1908,7 +1908,7 @@ def test_pythontypes(self):
check = self.check_sizeof
# _ast.AST
import _ast
check(_ast.AST(), size('P'))
check(_ast.Module(), size('3P'))
try:
raise TypeError
except TypeError as e:


@ -10,6 +10,7 @@
import re
import warnings
import stat
import time
import unittest
import unittest.mock
@ -1828,6 +1829,19 @@ def test_source_directory_not_leaked(self):
payload = pathlib.Path(tmpname).read_text(encoding='latin-1')
assert os.path.dirname(tmpname) not in payload
def test_create_with_mtime(self):
tarfile.open(tmpname, self.mode, mtime=0).close()
with self.open(tmpname, 'r') as fobj:
fobj.read()
self.assertEqual(fobj.mtime, 0)
def test_create_without_mtime(self):
before = int(time.time())
tarfile.open(tmpname, self.mode).close()
after = int(time.time())
with self.open(tmpname, 'r') as fobj:
fobj.read()
self.assertTrue(before <= fobj.mtime <= after)
class Bz2StreamWriteTest(Bz2Test, StreamWriteTest):
decompressor = bz2.BZ2Decompressor if bz2 else None
@ -2134,6 +2148,19 @@ def test_create_with_compresslevel(self):
with tarfile.open(tmpname, 'r:gz', compresslevel=1) as tobj:
pass
def test_create_with_mtime(self):
tarfile.open(tmpname, self.mode, mtime=0).close()
with self.open(tmpname, 'rb') as fobj:
fobj.read()
self.assertEqual(fobj.mtime, 0)
def test_create_without_mtime(self):
before = int(time.time())
tarfile.open(tmpname, self.mode).close()
after = int(time.time())
with self.open(tmpname, 'r') as fobj:
fobj.read()
self.assertTrue(before <= fobj.mtime <= after)
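The `mtime` keyword these tests cover rides on `gzip.GzipFile`, which already accepts an `mtime` argument; the patch forwards it through `tarfile.open()`. A sketch of the underlying round trip using `gzip` directly:

```python
import gzip
import io

# Write a gzip member with mtime=0, then read it back; GzipFile sets
# .mtime once the member header has been parsed during reading.
buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode="wb", mtime=0) as f:
    f.write(b"payload")

buf.seek(0)
with gzip.GzipFile(fileobj=buf, mode="rb") as f:
    f.read()
    mtime = f.mtime

print(mtime)  # 0
```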
class Bz2CreateTest(Bz2Test, CreateTest):


@ -2368,6 +2368,231 @@ class BarrierTests(lock_tests.BarrierTests):
barriertype = staticmethod(threading.Barrier)
## Test Synchronization tools for iterators ################
class ThreadingIteratorToolsTests(BaseTestCase):
def test_serialize_serializes_concurrent_iteration(self):
limit = 10_000
workers_count = 10
result = 0
result_lock = threading.Lock()
start = threading.Event()
def producer(limit):
for x in range(limit):
yield x
def consumer(iterator):
nonlocal result
start.wait()
total = 0
for x in iterator:
total += x
with result_lock:
result += total
iterator = threading.serialize_iterator(producer(limit))
workers = [
threading.Thread(target=consumer, args=(iterator,))
for _ in range(workers_count)
]
with threading_helper.wait_threads_exit():
for worker in workers:
worker.start()
for worker in workers:
# Wait for the worker thread to actually start.
while worker.ident is None:
time.sleep(0.1)
start.set()
for worker in workers:
worker.join()
self.assertEqual(result, limit * (limit - 1) // 2)
def test_serialize_generator_methods(self):
# A generator that yields and receives
def echo():
try:
while True:
val = yield "ready"
yield f"received {val}"
except ValueError:
yield "caught"
it = threading.serialize_iterator(echo())
# Test __next__
self.assertEqual(next(it), "ready")
# Test send()
self.assertEqual(it.send("hello"), "received hello")
self.assertEqual(next(it), "ready")
# Test throw()
self.assertEqual(it.throw(ValueError), "caught")
# Test close()
it.close()
with self.assertRaises(StopIteration):
next(it)
def test_serialize_methods_attribute_error(self):
# A standard iterator that does not have send/throw/close
# should raise AttributeError when called.
standard_it = threading.serialize_iterator([1, 2, 3])
with self.assertRaises(AttributeError):
standard_it.send("foo")
with self.assertRaises(AttributeError):
standard_it.throw(ValueError)
with self.assertRaises(AttributeError):
standard_it.close()
def test_serialize_generator_methods_locking(self):
# Verifies that generator methods also acquire the lock.
# We can test this by checking if the lock is held during the call.
class LockCheckingGenerator:
def __init__(self, lock):
self.lock = lock
def __iter__(self):
return self
def send(self, value):
if not self.lock.locked():
raise RuntimeError("Lock not held during send()")
return value
def throw(self, *args):
if not self.lock.locked():
raise RuntimeError("Lock not held during throw()")
def close(self):
if not self.lock.locked():
raise RuntimeError("Lock not held during close()")
# Manually create the serialize object to inspect the lock
it = threading.serialize_iterator([])
mock_gen = LockCheckingGenerator(it._lock)
it._iterator = mock_gen
# These should not raise RuntimeError
it.send(1)
it.throw(ValueError)
it.close()
def test_serialize_next_exception(self):
# Verify exception pass through for calls to next()
def f():
raise RuntimeError
yield None
g = threading.serialize_iterator(f())
with self.assertRaises(RuntimeError):
next(g)
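The property these `serialize_iterator` tests check, that every `__next__` happens under one shared lock so concurrent consumers see disjoint items, can be sketched in pure Python. This class is only an illustration of the locking discipline, not the new API's implementation:

```python
import threading

class SerializedIterator:
    """Minimal sketch: each advance of the wrapped iterator holds a lock."""
    def __init__(self, iterable):
        self._iterator = iter(iterable)
        self._lock = threading.Lock()

    def __iter__(self):
        return self

    def __next__(self):
        with self._lock:
            return next(self._iterator)

totals = []
it = SerializedIterator(range(1000))

def consume():
    total = 0
    for x in it:
        total += x
    totals.append(total)  # list.append is thread-safe in CPython

threads = [threading.Thread(target=consume) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sum(totals))  # 499500, i.e. each item was consumed exactly once
```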
def test_synchronized_serializes_generator_instances(self):
unique = 10
repetitions = 5
limit = 100
start = threading.Event()
@threading.synchronized_iterator
def atomic_counter():
# The sleep widens the race window that would exist without
# synchronization between yielding a value and advancing state.
i = 0
while True:
yield i
time.sleep(0.0005)
i += 1
def consumer(counter):
start.wait()
for _ in range(limit):
next(counter)
unique_counters = [atomic_counter() for _ in range(unique)]
counters = unique_counters * repetitions
workers = [
threading.Thread(target=consumer, args=(counter,))
for counter in counters
]
with threading_helper.wait_threads_exit():
for worker in workers:
worker.start()
start.set()
for worker in workers:
worker.join()
self.assertEqual(
{next(counter) for counter in unique_counters},
{limit * repetitions},
)
def test_synchronized_preserves_wrapped_metadata(self):
def gen():
yield 1
wrapped = threading.synchronized_iterator(gen)
self.assertEqual(wrapped.__name__, gen.__name__)
self.assertIs(wrapped.__wrapped__, gen)
self.assertEqual(list(wrapped()), [1])
def test_concurrent_tee_supports_concurrent_consumers(self):
limit = 5_000
num_threads = 25
successes = 0
failures = []
result_lock = threading.Lock()
start = threading.Event()
expected = list(range(limit))
def producer(limit):
for x in range(limit):
yield x
def consumer(iterator):
nonlocal successes
start.wait()
items = list(iterator)
with result_lock:
if items == expected:
successes += 1
else:
failures.append(items[:20])
tees = threading.concurrent_tee(producer(limit), n=num_threads)
workers = [
threading.Thread(target=consumer, args=(iterator,))
for iterator in tees
]
with threading_helper.wait_threads_exit():
for worker in workers:
worker.start()
start.set()
for worker in workers:
worker.join()
self.assertEqual(failures, [])
self.assertEqual(successes, len(tees))
# Verify that locks are shared
self.assertEqual(len({id(t_obj.lock) for t_obj in tees}), 1)
def test_concurrent_tee_zero_iterators(self):
self.assertEqual(threading.concurrent_tee(range(10), n=0), ())
def test_concurrent_tee_negative_n(self):
with self.assertRaises(ValueError):
threading.concurrent_tee(range(10), n=-1)
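The shared-lock assertion at the end of the `concurrent_tee` tests suggests a shape like the following: each `itertools.tee` branch is wrapped so that advancing any branch holds one lock common to all of them. A hedged sketch (`concurrent_tee_sketch` is an illustration, not the patch's implementation):

```python
import itertools
import threading

def concurrent_tee_sketch(iterable, n=2):
    """Return n tee branches whose advances all serialize on one lock."""
    if n < 0:
        raise ValueError("n must be >= 0")
    lock = threading.Lock()

    class _LockedTee:
        def __init__(self, it):
            self._it = it
            self.lock = lock  # the same lock object is shared by every branch

        def __iter__(self):
            return self

        def __next__(self):
            with self.lock:
                return next(self._it)

    return tuple(_LockedTee(t) for t in itertools.tee(iterable, n))

a, b = concurrent_tee_sketch(range(5))
print(list(a) == list(b) == [0, 1, 2, 3, 4])  # True
```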
#################
class MiscTestCase(unittest.TestCase):
def test__all__(self):
restore_default_excepthook(self)


@ -372,6 +372,8 @@ def test_module(self):
mod_generics_cache.__name__)
self.assertEqual(mod_generics_cache.OldStyle.__module__,
mod_generics_cache.__name__)
Alias.__module__ = "ham.spam.eggs"
self.assertEqual(Alias.__module__, "ham.spam.eggs")
def test_unpack(self):
type Alias = tuple[int, int]


@ -406,11 +406,12 @@ def testAssertWarnsRegex(self):
# test warning raised but with wrong message
def raise_wrong_message():
warnings.warn('foo')
self.assertMessagesCM('assertWarnsRegex', (UserWarning, 'regex'),
raise_wrong_message,
['^"regex" does not match "foo"$', '^oops$',
'^"regex" does not match "foo"$',
'^"regex" does not match "foo" : oops$'])
with self.assertWarnsRegex(UserWarning, 'foo'):
self.assertMessagesCM('assertWarnsRegex', (UserWarning, 'regex'),
raise_wrong_message,
['^"regex" does not match "foo"$', '^oops$',
'^"regex" does not match "foo"$',
'^"regex" does not match "foo" : oops$'])
if __name__ == "__main__":


@ -1631,11 +1631,11 @@ def testAssertRaisesRegexNoExceptionType(self):
self.assertRaisesRegex((ValueError, object), 'expect')
def testAssertWarnsCallable(self):
def _runtime_warn():
warnings.warn("foo", RuntimeWarning)
def _runtime_warn(categories=(RuntimeWarning,)):
for category in categories:
warnings.warn("foo", category)
# Success when the right warning is triggered, even several times
self.assertWarns(RuntimeWarning, _runtime_warn)
self.assertWarns(RuntimeWarning, _runtime_warn)
self.assertWarns(RuntimeWarning, _runtime_warn, (RuntimeWarning, RuntimeWarning))
# A tuple of warning classes is accepted
self.assertWarns((DeprecationWarning, RuntimeWarning), _runtime_warn)
# *args and **kwargs also work
@ -1648,22 +1648,35 @@ def _runtime_warn():
with self.assertRaises(TypeError):
self.assertWarns(RuntimeWarning, None)
# Failure when another warning is triggered
with warnings.catch_warnings():
with warnings.catch_warnings(record=True) as log:
# Force default filter (in case tests are run with -We)
warnings.simplefilter("default", RuntimeWarning)
with self.assertRaises(self.failureException):
self.assertWarns(DeprecationWarning, _runtime_warn)
self.assertWarns(DeprecationWarning, _runtime_warn,
(RuntimeWarning, RuntimeWarning))
self.assertEqual(len(log), 1, log)
self.assertIsInstance(log[0].message, RuntimeWarning)
# Filters for other warnings are not modified
with warnings.catch_warnings():
warnings.simplefilter("error", RuntimeWarning)
with self.assertRaises(RuntimeWarning):
self.assertWarns(DeprecationWarning, _runtime_warn)
# Warnings that do not match the category are not swallowed.
with self.assertWarns(RuntimeWarning):
with self.assertRaises(self.failureException):
self.assertWarns(DeprecationWarning, _runtime_warn)
with self.assertWarns(RuntimeWarning):
self.assertWarns(DeprecationWarning, _runtime_warn,
(RuntimeWarning, DeprecationWarning))
with self.assertWarns(RuntimeWarning):
self.assertWarns(DeprecationWarning, _runtime_warn,
(DeprecationWarning, RuntimeWarning))
def testAssertWarnsContext(self):
# Believe it or not, it is preferable to duplicate all tests above,
# to make sure the __warningregistry__ $@ is circumvented correctly.
def _runtime_warn():
warnings.warn("foo", RuntimeWarning)
def _runtime_warn(category=RuntimeWarning):
warnings.warn("foo", category)
_runtime_warn_lineno = inspect.getsourcelines(_runtime_warn)[1]
with self.assertWarns(RuntimeWarning) as cm:
_runtime_warn()
@ -1694,18 +1707,58 @@ def _runtime_warn():
with self.assertWarns(RuntimeWarning, foobar=42):
pass
# Failure when another warning is triggered
with warnings.catch_warnings():
with warnings.catch_warnings(record=True) as log:
# Force default filter (in case tests are run with -We)
warnings.simplefilter("default", RuntimeWarning)
with self.assertRaises(self.failureException):
with self.assertWarns(DeprecationWarning):
_runtime_warn()
_runtime_warn()
self.assertEqual(len(log), 1, log)
self.assertIsInstance(log[0].message, RuntimeWarning)
with warnings.catch_warnings(record=True) as log:
# Force default filter (in case tests are run with -We)
warnings.simplefilter("error", RuntimeWarning)
warnings.filterwarnings("default", category=RuntimeWarning,
module=__name__)
with self.assertRaises(self.failureException):
with self.assertWarns(DeprecationWarning):
_runtime_warn()
_runtime_warn()
self.assertEqual(len(log), 1, log)
self.assertIsInstance(log[0].message, RuntimeWarning)
# Filters for other warnings are not modified
with warnings.catch_warnings():
warnings.simplefilter("error", RuntimeWarning)
with self.assertRaises(RuntimeWarning):
with self.assertWarns(DeprecationWarning):
_runtime_warn()
# Warnings that do not match the category are not swallowed.
with self.assertWarns(RuntimeWarning):
with self.assertRaises(self.failureException):
with self.assertWarns(DeprecationWarning):
_runtime_warn()
with self.assertWarns(RuntimeWarning):
with self.assertWarns(DeprecationWarning):
_runtime_warn()
_runtime_warn(DeprecationWarning)
with self.assertWarns(RuntimeWarning):
with self.assertWarns(DeprecationWarning):
_runtime_warn(DeprecationWarning)
_runtime_warn()
# Filters by module name work for other warnings.
with warnings.catch_warnings(record=True) as log:
warnings.filterwarnings("error", category=RuntimeWarning)
warnings.filterwarnings("default", category=RuntimeWarning,
module=re.escape(__name__))
warnings.filterwarnings("error", category=RuntimeWarning,
module='test_case')
with self.assertWarns(DeprecationWarning):
_runtime_warn(DeprecationWarning)
_runtime_warn()
_runtime_warn()
self.assertEqual(len(log), 1, log)
self.assertIsInstance(log[0].message, RuntimeWarning)
def testAssertWarnsNoExceptionType(self):
with self.assertRaises(TypeError):
@ -1722,8 +1775,9 @@ def testAssertWarnsNoExceptionType(self):
self.assertWarns((UserWarning, Exception))
def testAssertWarnsRegexCallable(self):
def _runtime_warn(msg):
warnings.warn(msg, RuntimeWarning)
def _runtime_warn(*msgs):
for msg in msgs:
warnings.warn(msg, RuntimeWarning)
self.assertWarnsRegex(RuntimeWarning, "o+",
_runtime_warn, "foox")
# Failure when no warning is triggered
@ -1734,16 +1788,26 @@ def _runtime_warn(msg):
with self.assertRaises(TypeError):
self.assertWarnsRegex(RuntimeWarning, "o+", None)
# Failure when another warning is triggered
with warnings.catch_warnings():
with warnings.catch_warnings(record=True) as log:
# Force default filter (in case tests are run with -We)
warnings.simplefilter("default", RuntimeWarning)
with self.assertRaises(self.failureException):
self.assertWarnsRegex(DeprecationWarning, "o+",
_runtime_warn, "foox")
# Failure when message doesn't match
with self.assertRaises(self.failureException):
_runtime_warn, "foox", "foox")
self.assertEqual(len(log), 1, log)
self.assertIsInstance(log[0].message, RuntimeWarning)
# Failure when message doesn't match.
# Warnings that do not match the regex are not swallowed.
with self.assertWarnsRegex(RuntimeWarning, "ar"):
with self.assertRaises(self.failureException):
self.assertWarnsRegex(RuntimeWarning, "o+",
_runtime_warn, "barz")
with self.assertWarnsRegex(RuntimeWarning, "ar"):
self.assertWarnsRegex(RuntimeWarning, "o+",
_runtime_warn, "barz")
_runtime_warn, "barz", "foox")
with self.assertWarnsRegex(RuntimeWarning, "ar"):
self.assertWarnsRegex(RuntimeWarning, "o+",
_runtime_warn, "foox", "barz")
# A little trickier: we ask RuntimeWarnings to be raised, and then
# check for some of them. It is implementation-defined whether
# non-matching RuntimeWarnings are simply re-raised, or produce a
@ -1778,16 +1842,29 @@ def _runtime_warn(msg):
with self.assertWarnsRegex(RuntimeWarning, 'o+', foobar=42):
pass
# Failure when another warning is triggered
with warnings.catch_warnings():
with warnings.catch_warnings(record=True) as log:
# Force default filter (in case tests are run with -We)
warnings.simplefilter("default", RuntimeWarning)
with self.assertRaises(self.failureException):
with self.assertWarnsRegex(DeprecationWarning, "o+"):
_runtime_warn("foox")
# Failure when message doesn't match
with self.assertRaises(self.failureException):
_runtime_warn("foox")
self.assertEqual(len(log), 1, log)
self.assertIsInstance(log[0].message, RuntimeWarning)
# Failure when message doesn't match.
# Warnings that do not match the regex are not swallowed.
with self.assertWarnsRegex(RuntimeWarning, "ar"):
with self.assertRaises(self.failureException):
with self.assertWarnsRegex(RuntimeWarning, "o+"):
_runtime_warn("barz")
with self.assertWarnsRegex(RuntimeWarning, "ar"):
with self.assertWarnsRegex(RuntimeWarning, "o+"):
_runtime_warn("barz")
_runtime_warn("foox")
with self.assertWarnsRegex(RuntimeWarning, "ar"):
with self.assertWarnsRegex(RuntimeWarning, "o+"):
_runtime_warn("foox")
_runtime_warn("barz")
# A little trickier: we ask RuntimeWarnings to be raised, and then
# check for some of them. It is implementation-defined whether
# non-matching RuntimeWarnings are simply re-raised, or produce a
@ -1797,6 +1874,19 @@ def _runtime_warn(msg):
with self.assertRaises((RuntimeWarning, self.failureException)):
with self.assertWarnsRegex(RuntimeWarning, "o+"):
_runtime_warn("barz")
# Filters by module name work for warnings with other message.
with warnings.catch_warnings(record=True) as log:
warnings.filterwarnings("error", category=RuntimeWarning)
warnings.filterwarnings("default", category=RuntimeWarning,
module=re.escape(__name__))
warnings.filterwarnings("error", category=RuntimeWarning,
module='test_case')
with self.assertWarnsRegex(RuntimeWarning, "ar"):
_runtime_warn("bar")
_runtime_warn("foox")
_runtime_warn("foox")
self.assertEqual(len(log), 1, log)
self.assertIsInstance(log[0].message, RuntimeWarning)
def testAssertWarnsRegexNoExceptionType(self):
with self.assertRaises(TypeError):

View file

@ -582,15 +582,19 @@ def test_warn_nonstandard_types(self):
# ``Warning() != Warning()``.
self.assertEqual(str(w[-1].message), str(UserWarning(ob)))
def test_filename(self):
def test_filename_module(self):
with warnings_state(self.module):
with self.module.catch_warnings(record=True) as w:
warning_tests.inner("spam1")
self.assertEqual(os.path.basename(w[-1].filename),
"stacklevel.py")
self.assertEqual(w[-1].module,
"test.test_warnings.data.stacklevel")
warning_tests.outer("spam2")
self.assertEqual(os.path.basename(w[-1].filename),
"stacklevel.py")
self.assertEqual(w[-1].module,
"test.test_warnings.data.stacklevel")
def test_stacklevel(self):
# Test stacklevel argument
@ -600,23 +604,32 @@ def test_stacklevel(self):
warning_tests.inner("spam3", stacklevel=1)
self.assertEqual(os.path.basename(w[-1].filename),
"stacklevel.py")
self.assertEqual(w[-1].module,
"test.test_warnings.data.stacklevel")
warning_tests.outer("spam4", stacklevel=1)
self.assertEqual(os.path.basename(w[-1].filename),
"stacklevel.py")
self.assertEqual(w[-1].module,
"test.test_warnings.data.stacklevel")
warning_tests.inner("spam5", stacklevel=2)
self.assertEqual(os.path.basename(w[-1].filename),
"__init__.py")
self.assertEqual(w[-1].module, __name__)
warning_tests.outer("spam6", stacklevel=2)
self.assertEqual(os.path.basename(w[-1].filename),
"stacklevel.py")
self.assertEqual(w[-1].module,
"test.test_warnings.data.stacklevel")
warning_tests.outer("spam6.5", stacklevel=3)
self.assertEqual(os.path.basename(w[-1].filename),
"__init__.py")
self.assertEqual(w[-1].module, __name__)
warning_tests.inner("spam7", stacklevel=9999)
self.assertEqual(os.path.basename(w[-1].filename),
"<sys>")
self.assertEqual(w[-1].module, "sys")
def test_stacklevel_import(self):
# Issue #24305: With stacklevel=2, module-level warnings should work.
@ -627,6 +640,7 @@ def test_stacklevel_import(self):
import test.test_warnings.data.import_warning # noqa: F401
self.assertEqual(len(w), 1)
self.assertEqual(w[0].filename, __file__)
self.assertEqual(w[0].module, __name__)
def test_skip_file_prefixes(self):
with warnings_state(self.module):
@ -638,20 +652,27 @@ def test_skip_file_prefixes(self):
"inner_api", stacklevel=2,
warnings_module=warning_tests.warnings)
self.assertEqual(w[-1].filename, __file__)
self.assertEqual(w[-1].module, __name__)
warning_tests.package("package api", stacklevel=2)
self.assertEqual(w[-1].filename, __file__)
self.assertEqual(w[-1].module, __name__)
self.assertEqual(w[-2].filename, w[-1].filename)
self.assertEqual(w[-2].module, w[-1].module)
# Low stacklevels are overridden to 2 behavior.
warning_tests.package("package api 1", stacklevel=1)
self.assertEqual(w[-1].filename, __file__)
self.assertEqual(w[-1].module, __name__)
warning_tests.package("package api 0", stacklevel=0)
self.assertEqual(w[-1].filename, __file__)
self.assertEqual(w[-1].module, __name__)
warning_tests.package("package api -99", stacklevel=-99)
self.assertEqual(w[-1].filename, __file__)
self.assertEqual(w[-1].module, __name__)
# The stacklevel still goes up out of the package.
warning_tests.package("prefix02", stacklevel=3)
self.assertIn("unittest", w[-1].filename)
self.assertStartsWith(w[-1].module, "unittest")
def test_skip_file_prefixes_file_path(self):
# see: gh-126209
@ -662,6 +683,8 @@ def test_skip_file_prefixes_file_path(self):
self.assertEqual(len(w), 1)
self.assertNotEqual(w[-1].filename, skipped)
self.assertEqual(w[-1].filename, __file__)
self.assertEqual(w[-1].module, __name__)
def test_skip_file_prefixes_type_errors(self):
with warnings_state(self.module):
@ -673,7 +696,7 @@ def test_skip_file_prefixes_type_errors(self):
with self.assertRaises(TypeError):
warn("msg", skip_file_prefixes="a sequence of strs")
def test_exec_filename(self):
def test_exec_filename_module(self):
filename = "<warnings-test>"
codeobj = compile(("import warnings\n"
"warnings.warn('hello', UserWarning)"),
@ -682,6 +705,12 @@ def test_exec_filename(self):
self.module.simplefilter("always", category=UserWarning)
exec(codeobj)
self.assertEqual(w[0].filename, filename)
self.assertEqual(w[0].module, __name__)
with self.module.catch_warnings(record=True) as w:
self.module.simplefilter("always", category=UserWarning)
exec(codeobj, {})
self.assertEqual(w[0].filename, filename)
self.assertEqual(w[0].module, '<string>')
def test_warn_explicit_non_ascii_filename(self):
with self.module.catch_warnings(record=True) as w:

View file

@ -3197,7 +3197,7 @@ def test_deeply_nested_deepcopy(self):
# This should raise a RecursionError and not crash.
# See https://github.com/python/cpython/issues/148801.
root = cur = ET.Element('s')
for _ in range(150_000):
for _ in range(500_000):
cur = ET.SubElement(cur, 'u')
with support.infinite_recursion():
with self.assertRaises(RecursionError):

View file

@ -29,6 +29,7 @@
'Barrier', 'BrokenBarrierError', 'Timer', 'ThreadError',
'setprofile', 'settrace', 'local', 'stack_size',
'excepthook', 'ExceptHookArgs', 'gettrace', 'getprofile',
'serialize_iterator', 'synchronized_iterator', 'concurrent_tee',
'setprofile_all_threads','settrace_all_threads']
# Rename some stuff so "from threading import *" is safe
@ -842,6 +843,148 @@ class BrokenBarrierError(RuntimeError):
pass
## Synchronization tools for iterators #####################
class serialize_iterator:
"""Wrap a non-concurrent iterator with a lock to enforce sequential access.
Applies a non-reentrant lock around calls to __next__. If the
wrapped iterator also defines send(), throw(), or close(), those
calls are serialized as well.
Allows iterator and generator instances to be shared by multiple consumer
threads.
For example, itertools.count does not make thread-safe instances,
but that is easily fixed with:
atomic_counter = serialize_iterator(itertools.count())
"""
__slots__ = ('_iterator', '_lock')
def __init__(self, iterable):
self._iterator = iter(iterable)
self._lock = Lock()
def __iter__(self):
return self
def __next__(self):
with self._lock:
return next(self._iterator)
def send(self, value, /):
"""Send a value to a generator.
Raises AttributeError if not a generator.
"""
with self._lock:
return self._iterator.send(value)
def throw(self, *args):
"""Call throw() on a generator.
Raises AttributeError if not a generator.
"""
with self._lock:
return self._iterator.throw(*args)
def close(self):
"""Call close() on a generator.
Raises AttributeError if not a generator.
"""
with self._lock:
return self._iterator.close()
def synchronized_iterator(func):
"""Wrap an iterator-returning callable to make its iterators thread-safe.
Existing itertools and more-itertools can be wrapped so that their
iterator instances are serialized.
For example, itertools.count does not make thread-safe instances,
but that is easily fixed with:
atomic_counter = synchronized_iterator(itertools.count)
Can also be used as a decorator for generator function definitions
so that the generator instances are serialized::
import time
@synchronized_iterator
def enumerate_and_timestamp(iterable):
for count, value in enumerate(iterable):
yield count, time.time_ns(), value
"""
from functools import wraps
@wraps(func)
def inner(*args, **kwargs):
iterator = func(*args, **kwargs)
return serialize_iterator(iterator)
return inner
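Because `synchronized_iterator` only exists in this change, the decorator idea can be sketched with a local equivalent (`locked_gen` is a hypothetical name): wrap whatever iterator the decorated callable returns so each instance serializes its `__next__`.

```python
import functools
import threading

def locked_gen(func):
    """Sketch of the synchronized_iterator decorator above: wrap the
    returned iterator so __next__ is guarded by a per-instance lock."""
    @functools.wraps(func)
    def inner(*args, **kwargs):
        iterator = func(*args, **kwargs)
        lock = threading.Lock()
        base_next = iterator.__next__

        class _Locked:
            def __iter__(self):
                return self

            def __next__(self):
                with lock:
                    return base_next()

        return _Locked()
    return inner

@locked_gen
def numbered(iterable):
    # Each call creates a fresh generator, wrapped with its own lock.
    for i, value in enumerate(iterable):
        yield i, value

it = numbered("abc")
assert list(it) == [(0, 'a'), (1, 'b'), (2, 'c')]
```

Each call to the decorated function gets its own lock, so independent iterator instances do not contend with one another.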
def concurrent_tee(iterable, n=2):
"""Variant of itertools.tee() but with guaranteed threading semantics.
Takes a non-threadsafe iterator as an input and creates concurrent
tee objects for other threads to have reliable independent copies of
the data stream.
The new iterators are only thread-safe if consumed within a single thread.
To share just one of the new iterators across multiple threads, wrap it
with threading.serialize_iterator().
"""
if n < 0:
raise ValueError("n must be a non-negative integer")
if n == 0:
return ()
iterator = _concurrent_tee(iterable)
result = [iterator]
for _ in range(n - 1):
result.append(_concurrent_tee(iterator))
return tuple(result)
class _concurrent_tee:
__slots__ = ('iterator', 'link', 'lock')
def __init__(self, iterable):
if isinstance(iterable, _concurrent_tee):
self.iterator = iterable.iterator
self.link = iterable.link
self.lock = iterable.lock
else:
self.iterator = iter(iterable)
self.link = [None, None]
self.lock = Lock()
def __iter__(self):
return self
def __next__(self):
link = self.link
if link[1] is None:
with self.lock:
if link[1] is None:
link[0] = next(self.iterator)
link[1] = [None, None]
value, self.link = link
return value
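The link-cell scheme above can be replayed standalone to see why copies stay independent: every tee object holds a reference into a shared, lazily built linked list of `[value, next_cell]` cells, and the double-checked lock ensures each cell is filled from the source iterator exactly once. `MiniTee` is an illustrative replica, not the proposed class:

```python
import threading

class MiniTee:
    """Sketch of the _concurrent_tee link-cell scheme above: all copies
    share a lazily built linked list of [value, next_cell] cells, so
    each copy can advance independently without re-pulling values."""

    def __init__(self, iterable):
        if isinstance(iterable, MiniTee):
            # Copies share the source iterator, the current cell,
            # and the lock that guards filling a cell.
            self.iterator = iterable.iterator
            self.link = iterable.link
            self.lock = iterable.lock
        else:
            self.iterator = iter(iterable)
            self.link = [None, None]
            self.lock = threading.Lock()

    def __iter__(self):
        return self

    def __next__(self):
        link = self.link
        if link[1] is None:
            with self.lock:
                if link[1] is None:  # double-checked: fill the cell once
                    link[0] = next(self.iterator)
                    link[1] = [None, None]
        value, self.link = link
        return value

a = MiniTee(iter(range(5)))
b = MiniTee(a)                       # independent copy sharing the cell chain
assert next(a) == 0
assert next(a) == 1
assert list(b) == [0, 1, 2, 3, 4]    # b starts from the shared head cell
assert list(a) == [2, 3, 4]          # a resumes from where it left off
```

Cells that every copy has moved past become unreachable and are reclaimed by the garbage collector, so memory use is bounded by how far apart the copies drift.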
############################################################
# Helper to generate new thread names
_counter = _count(1).__next__
def _newname(name_template):

View file

@ -301,7 +301,7 @@ def __enter__(self):
v.__warningregistry__ = {}
self.warnings_manager = warnings.catch_warnings(record=True)
self.warnings = self.warnings_manager.__enter__()
warnings.simplefilter("always", self.expected)
warnings.simplefilter("always")
return self
def __exit__(self, exc_type, exc_value, tb):
@ -314,19 +314,44 @@ def __exit__(self, exc_type, exc_value, tb):
except AttributeError:
exc_name = str(self.expected)
first_matching = None
matched = False
non_matching_warnings = []
for m in self.warnings:
w = m.message
if not isinstance(w, self.expected):
non_matching_warnings.append(m)
continue
if first_matching is None:
first_matching = w
if (self.expected_regex is not None and
not self.expected_regex.search(str(w))):
non_matching_warnings.append(m)
continue
if matched:
continue
matched = True
# store warning for later retrieval
self.warning = w
self.filename = m.filename
self.lineno = m.lineno
for m in non_matching_warnings:
module = m.module
module_globals = None
registry = None
if module is not None:
try:
module_globals = vars(sys.modules[module])
except (KeyError, TypeError):
# module == "<string>" or sys.modules[module] is None
pass
else:
registry = module_globals.setdefault("__warningregistry__", {})
warnings.warn_explicit(m.message, m.category, m.filename, m.lineno,
module=module,
registry=registry,
module_globals=module_globals,
source=m.source)
if matched:
return
# Now we simply try to choose a helpful failure message
if first_matching is not None:
@ -338,7 +363,6 @@ def __exit__(self, exc_type, exc_value, tb):
else:
self._raiseFailure("{} not triggered".format(exc_name))
class _AssertNotWarnsContext(_AssertWarnsContext):
def __exit__(self, exc_type, exc_value, tb):

View file

@ -246,9 +246,9 @@ def library_recipes():
result.extend([
dict(
name="OpenSSL 3.5.5",
url="https://github.com/openssl/openssl/releases/download/openssl-3.5.5/openssl-3.5.5.tar.gz",
checksum="b28c91532a8b65a1f983b4c28b7488174e4a01008e29ce8e69bd789f28bc2a89",
name="OpenSSL 3.5.6",
url="https://github.com/openssl/openssl/releases/download/openssl-3.5.6/openssl-3.5.6.tar.gz",
checksum="deae7c80cba99c4b4f940ecadb3c3338b13cb77418409238e57d7f31f2a3b736",
buildrecipe=build_universal_openssl,
configure=None,
install=None,

View file

@ -514,6 +514,7 @@ PYTHON_OBJS= \
Python/suggestions.o \
Python/perf_trampoline.o \
Python/perf_jit_trampoline.o \
Python/jit_unwind.o \
Python/remote_debugging.o \
Python/$(DYNLOADFILE) \
$(LIBOBJS) \
@ -3209,7 +3210,8 @@ Python/emscripten_trampoline_wasm.c: Python/emscripten_trampoline_inner.wasm
$(PYTHON_FOR_REGEN) $(srcdir)/Platforms/emscripten/prepare_external_wasm.py $< $@ getWasmTrampolineModule
JIT_SHIM_BUILD_OBJS= @JIT_SHIM_BUILD_O@
JIT_BUILD_TARGETS= jit_stencils.h @JIT_STENCILS_H@ $(JIT_SHIM_BUILD_OBJS)
JIT_UNWIND_INFO_H= $(if $(JIT_OBJS),jit_unwind_info.h $(patsubst jit_stencils-%.h,jit_unwind_info-%.h,@JIT_STENCILS_H@))
JIT_BUILD_TARGETS= jit_stencils.h @JIT_STENCILS_H@ $(JIT_UNWIND_INFO_H) $(JIT_SHIM_BUILD_OBJS)
JIT_TARGETS= $(JIT_BUILD_TARGETS) $(filter-out $(JIT_SHIM_BUILD_OBJS),$(JIT_OBJS))
JIT_GENERATED_STAMP= .jit-stamp
@ -3237,6 +3239,9 @@ jit_shim-universal2-apple-darwin.o: jit_shim-aarch64-apple-darwin.o jit_shim-x86
Python/jit.o: $(srcdir)/Python/jit.c @JIT_STENCILS_H@
$(CC) -c $(PY_CORE_CFLAGS) -o $@ $<
Python/jit_unwind.o: $(srcdir)/Python/jit_unwind.c $(JIT_UNWIND_INFO_H)
$(CC) -c $(PY_CORE_CFLAGS) -o $@ $<
.PHONY: regen-jit
regen-jit: $(JIT_TARGETS)
@ -3362,7 +3367,7 @@ clean-profile: clean-retain-profile clean-bolt
# gh-141808: The JIT stencils are deliberately kept in clean-profile
.PHONY: clean-jit-stencils
clean-jit-stencils:
-rm -f $(JIT_TARGETS) $(JIT_GENERATED_STAMP) jit_stencils*.h jit_shim*.o
-rm -f $(JIT_TARGETS) $(JIT_GENERATED_STAMP) jit_stencils*.h jit_unwind_info*.h jit_shim*.o
.PHONY: clean
clean: clean-profile clean-jit-stencils
@ -3491,7 +3496,7 @@ MODULE__DECIMAL_DEPS=@LIBMPDEC_INTERNAL@
MODULE__ELEMENTTREE_DEPS=$(srcdir)/Modules/pyexpat.c @LIBEXPAT_INTERNAL@
MODULE__HASHLIB_DEPS=$(srcdir)/Modules/hashlib.h
MODULE__IO_DEPS=$(srcdir)/Modules/_io/_iomodule.h
MODULE__REMOTE_DEBUGGING_DEPS=$(srcdir)/Modules/_remote_debugging/_remote_debugging.h
MODULE__REMOTE_DEBUGGING_DEPS=$(srcdir)/Modules/_remote_debugging/_remote_debugging.h $(srcdir)/Modules/_remote_debugging/gc_stats.h
# HACL*-based cryptographic primitives
MODULE__MD5_DEPS=$(srcdir)/Modules/hashlib.h $(LIBHACL_MD5_HEADERS) $(LIBHACL_MD5_LIB_@LIBHACL_LDEPS_LIBTYPE@)

View file

@ -0,0 +1,2 @@
Block Apple Clang from being used to build the JIT as it ships without
required LLVM tools.

View file

@ -0,0 +1 @@
Update to WASI SDK 33.

View file

@ -0,0 +1,2 @@
Support for creating instances of abstract AST nodes from the :mod:`ast` module
is deprecated and scheduled for removal in Python 3.20. Patch by Brian Schubert.

View file

@ -0,0 +1 @@
Add support for unwinding JIT frames using GDB. Patch by Diego Russo and Pablo Galindo.

View file

@ -0,0 +1,4 @@
Enable frame pointers by default for GCC-compatible CPython builds, including
``-mno-omit-leaf-frame-pointer`` when the compiler supports it, so profilers
and debuggers can unwind native interpreter frames more reliably. Users can pass
``--without-frame-pointers`` to opt out.

View file

@ -0,0 +1 @@
Repaired an undercount of bytes in the type-specific free lists reported by :func:`sys._debugmallocstats`. For types that participate in cyclic garbage collection, the count was missing the two per-object pointers used by the GC.

View file

@ -0,0 +1,5 @@
Add a ``GCMonitor`` class with a ``get_gc_stats`` method to the
:mod:`!_remote_debugging` module to allow reading GC statistics from an
external Python process without requiring the full ``RemoteUnwinder``
functionality.
Patch by Sergey Miryanov and Pablo Galindo.

View file

@ -0,0 +1 @@
Fix a memory sanitizer false positive in :func:`os.getrandom`.

View file

@ -0,0 +1,2 @@
Allow assignment to the ``__module__`` attribute of
:class:`typing.TypeAliasType` instances.

View file

@ -0,0 +1 @@
Check the recursion limit in the ``CALL_ALLOC_AND_ENTER_INIT`` opcode.

View file

@ -0,0 +1,8 @@
The :mod:`email` module no longer treats email addresses with non-ASCII
characters as defects when parsing a Unicode string or in the ``addr_spec``
parameter to :class:`email.headerregistry.Address`. :rfc:`5322` permits such
addresses, and they were already supported when parsing bytes and in the Address
``username`` parameter.
The (undocumented) :exc:`!email.errors.NonASCIILocalPartDefect` is no longer
used and should be considered deprecated.
