Loading small data that does not even involve arbitrary code execution
could consume an arbitrarily large amount of memory. There were three issues:
* PUT and LONG_BINPUT with a large argument (C implementation only).
Since the memo is implemented in C as a contiguous dynamic array, a single
opcode could cause it to be resized to an arbitrary size. Now the sparsity of
memo indices is limited.
* BINBYTES, BINBYTES8 and BYTEARRAY8 with a large argument. They allocated
a bytes or bytearray object of the specified size before reading into
it. Now they read very large data in chunks.
* BINSTRING, BINUNICODE, LONG4, BINUNICODE8 and FRAME with a large
argument. They read the whole data by calling the read() method of
the underlying file object, which usually allocates a bytes object of
the specified size before reading into it. Now they read very large data
in chunks, as sketched after this list.
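For illustration, a rough sketch of the chunked-reading idea behind the last two
items (the names here are illustrative, not the actual _pickle internals):

```python
CHUNK_SIZE = 1 << 20  # read at most 1 MiB per call instead of trusting the opcode's size field

def read_claimed(file, claimed_size):
    """Read claimed_size bytes in bounded chunks; fail early on truncated data."""
    parts = []
    remaining = claimed_size
    while remaining > 0:
        chunk = file.read(min(remaining, CHUNK_SIZE))
        if not chunk:
            raise EOFError("pickle data was truncated")
        parts.append(chunk)
        remaining -= len(chunk)
    return b"".join(parts)
```

The memory high-water mark is then bounded by the data actually present in the
stream, not by the size field an attacker writes into the opcode.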
Also add a comprehensive benchmark suite to measure the performance and memory
impact of the chunked-reading optimization in PR #119204.
Features:
- Normal mode: benchmarks legitimate pickles (time/memory metrics)
- Antagonistic mode: tests malicious pickles (DoS protection)
- Baseline comparison: side-by-side comparison of two Python builds
- Support for truncated data and sparse memo attack vectors (a hand-built example follows this list)
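For illustration only, a hand-built payload of the kind the antagonistic mode
exercises (this exact snippet is an assumption, not part of the suite); on
unpatched builds such input may attempt a very large allocation:

```python
import pickle
import struct

# BINBYTES8 claims a 1 TiB payload, but the stream ends right after the header,
# so the data is truncated. Sparse-memo payloads can be built similarly with
# LONG_BINPUT opcodes carrying huge memo indices.
evil = pickle.PROTO + bytes([5]) + pickle.BINBYTES8 + struct.pack("<Q", 1 << 40)

try:
    pickle.loads(evil)
except Exception as exc:  # expect UnpicklingError/EOFError, not a 1 TiB allocation
    print(type(exc).__name__, exc)
```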
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Co-authored-by: Gregory P. Smith <greg@krypto.org>
Add the PythonFinalizationError exception. This exception, derived from
RuntimeError, is raised when an operation is blocked during Python
finalization.
The following functions now raise PythonFinalizationError, instead of
RuntimeError:
* _thread.start_new_thread()
* subprocess.Popen
* os.fork()
* os.fork1()
* os.forkpty()
Moreover, the _winapi.Overlapped finalizer now logs an unraisable
PythonFinalizationError instead of an unraisable RuntimeError.
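A minimal sketch of how shutdown-time code can handle the new exception
(assumes Python 3.13+; the Worker class is purely illustrative):

```python
import _thread

class Worker:
    def __del__(self):
        # __del__ may run during interpreter finalization, when starting threads is refused.
        try:
            _thread.start_new_thread(print, ("late job",))
        except PythonFinalizationError:
            pass  # interpreter is shutting down; skip the background work

worker = Worker()  # finalized at interpreter exit, which can hit the branch above
```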
Previously, the C implementation of the pickle.Pickler and pickle.Unpickler
classes did not have such methods; they could only be used if they were
overloaded in subclasses or set as instance attributes.
Fixed calling super().persistent_id() and super().persistent_load() in
subclasses of the C implementation of the pickle.Pickler and pickle.Unpickler
classes. It no longer causes infinite recursion.
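A minimal, self-contained sketch of the persistent_id()/persistent_load()
pattern these two entries affect, including the super() delegation that no
longer recurses (the registry and class names are illustrative; assumes a
version where the C classes expose these methods):

```python
import io
import pickle

REGISTRY = {"cfg": object()}  # stand-in for objects stored outside the pickle

class RefPickler(pickle.Pickler):
    def persistent_id(self, obj):
        for key, value in REGISTRY.items():
            if value is obj:
                return key                     # pickle this object by reference
        return super().persistent_id(obj)      # delegate to the base implementation

class RefUnpickler(pickle.Unpickler):
    def persistent_load(self, pid):
        return REGISTRY[pid]

buf = io.BytesIO()
RefPickler(buf).dump({"settings": REGISTRY["cfg"]})
buf.seek(0)
assert RefUnpickler(buf).load()["settings"] is REGISTRY["cfg"]
```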
This starts the deprecation process. Users who don't specify their own start
method and use the default on platforms where it is 'fork' will see a
DeprecationWarning upon multiprocessing.Pool() construction, upon
multiprocessing.Process.start(), or upon concurrent.futures.ProcessPoolExecutor use.
See the related issue and documentation within this change for details.
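A minimal sketch of opting in to an explicit start method so the default-'fork'
deprecation does not apply (the choice of 'spawn' here is illustrative):

```python
import multiprocessing as mp

def square(x):
    return x * x

if __name__ == "__main__":
    ctx = mp.get_context("spawn")   # explicit start method, so no DeprecationWarning
    with ctx.Pool(2) as pool:
        print(pool.map(square, range(5)))
```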
See [PEP 597](https://www.python.org/dev/peps/pep-0597/).
* Add `-X warn_default_encoding` and `PYTHONWARNDEFAULTENCODING`.
* Add EncodingWarning
* Add io.text_encoding() (usage sketched after this list)
* open() and TextIOWrapper() emit EncodingWarning when encoding is omitted and warn_default_encoding is enabled.
* _pyio.TextIOWrapper() uses UTF-8 as the fallback default encoding when the locale module cannot be imported (which happens while building Python).
* bz2, configparser, gzip, lzma, pathlib, tempfile modules use io.text_encoding().
* Add a What's New entry
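A minimal sketch of the intended io.text_encoding() usage in a library function
(read_config is a hypothetical helper); run with `python -X warn_default_encoding`
to see EncodingWarning when callers omit the encoding:

```python
import io

def read_config(path, encoding=None):
    # io.text_encoding() returns encoding unchanged when given, otherwise
    # "locale", and emits EncodingWarning if -X warn_default_encoding is set.
    encoding = io.text_encoding(encoding)
    with open(path, encoding=encoding) as f:
        return f.read()
```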
Allow the pure Python implementation of pickle to work
even when the C _pickle module is unavailable.
Fix test_pickle when _pickle is missing: declare PyPicklerHookTests
outside the "if has_c_implementation:" block.
when serializing into a memory buffer with the C pickle implementation.
This optimization is already performed when serializing into memory
with the Python pickle implementation, or into a file with both
implementations.
Fixed ambiguous reverse mappings. Added many new mappings. Import mapping
is no longer applied to modules already mapped with full name mapping.
Added tests for compatible pickling and unpickling and for consistency of
_compat_pickle mappings.
Initial patch by Merlijn van Deen.
I've added a few unrelated docstring fixes in the patch while I was at
it, which make the pickle documentation a bit more consistent.