mmapmodule: remove unreachable code in Windows error path
Remove an unreachable `return NULL` after `PyErr_SetFromWindowsErr()` in
the Windows mmap resize error path.
Signed-off-by: Yongtao Huang <yongtaoh2022@gmail.com>
Make the attributes in the _bz2 module thread-safe on the free-threading build.
Attributes (eof, needs_input, unused_data) are now stored atomically or
accessed via mutex-protected getters.
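A minimal sketch of the kind of concurrent attribute access this makes safe; the payload and iteration counts are illustrative, not the actual test:

```python
import bz2
import threading

payload = bz2.compress(b"sample data " * 1000) + b"trailing"
decomp = bz2.BZ2Decompressor()

def poll_attributes():
    # These reads now go through atomic loads or mutex-protected getters.
    for _ in range(10_000):
        _ = decomp.eof, decomp.needs_input, decomp.unused_data

t = threading.Thread(target=poll_attributes)
t.start()
decomp.decompress(payload)  # the main thread mutates eof/unused_data here
t.join()
assert decomp.eof and decomp.unused_data == b"trailing"
```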
Make the zlib module thread-safe on the free-threading build. Even though operations
are protected by locks, attributes exposed via PyMemberDef (eof, needs_input,
unused_data, unconsumed_tail) should still be stored atomically within locked
sections, since they can be read without acquiring the lock.
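For zlib the same concern also covers unconsumed_tail, which is populated when max_length caps the output; a rough sketch of the concurrent reads in question (illustrative only):

```python
import threading
import zlib

data = zlib.compress(b"x" * 100_000)
d = zlib.decompressobj()

def poll_attributes():
    # PyMemberDef attributes can be read without taking the object's lock,
    # which is why the stores inside the locked sections must be atomic.
    for _ in range(10_000):
        _ = d.eof, d.unconsumed_tail, d.unused_data

t = threading.Thread(target=poll_attributes)
t.start()
out, chunk = b"", data
while chunk and not d.eof:
    out += d.decompress(chunk, 1024)  # max_length keeps unconsumed_tail populated
    chunk = d.unconsumed_tail
out += d.flush()
t.join()
assert out == b"x" * 100_000
```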
If there are many untracked tuples, the GC will run too often, resulting
in poor performance. The fix is to include untracked tuples in the
"long lived" object count. The number of frozen objects is also now
included since the free-threaded GC must scan those too.
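For context, a small demonstration of the two kinds of objects the count now includes; the counting change itself is internal to the collector and not directly observable from Python:

```python
import gc

# Freshly created tuples start out tracked; a collection untracks tuples
# that can never be part of a reference cycle.
t = tuple(range(3))
assert gc.is_tracked(t)
gc.collect()
assert not gc.is_tracked(t)   # now an "untracked tuple"

# gc.freeze() moves every currently tracked object into a permanent
# generation; the free-threaded GC still has to scan these objects.
gc.freeze()
print(gc.get_freeze_count())
gc.unfreeze()
```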
This PR implements frame caching in the RemoteUnwinder class to significantly reduce memory reads when profiling remote processes with deep call stacks.
When cache_frames=True, the unwinder stores the frame chain from each sample and reuses unchanged portions in subsequent samples. Since most profiling samples capture similar call stacks (especially the parent frames), this optimization avoids repeatedly reading the same frame data from the target process.
The implementation adds a last_profiled_frame field to the thread state that tracks where the previous sample stopped. On the next sample, if the current frame chain reaches this marker, the cached frames from that point onward are reused instead of being re-read from remote memory.
The sampling profiler now enables frame caching by default.
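A rough usage sketch of the new behavior; the `_remote_debugging` module path, the `get_stack_trace()` method name, and the target pid are assumptions for illustration, while `cache_frames` is the option described above:

```python
import time
from _remote_debugging import RemoteUnwinder  # assumed module path

TARGET_PID = 12345  # hypothetical process to profile

# With cache_frames=True the unwinder keeps the frame chain from the previous
# sample and re-reads only the frames above last_profiled_frame.
unwinder = RemoteUnwinder(TARGET_PID, cache_frames=True)

for _ in range(100):                     # simple fixed-rate sampling loop
    stacks = unwinder.get_stack_trace()  # assumed method name
    time.sleep(0.001)
```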
Loading a small amount of data, which does not even involve arbitrary code
execution, could consume an arbitrarily large amount of memory. There were
three issues:
* PUT and LONG_BINPUT with a large argument (the C implementation only).
Since the memo is implemented in C as a contiguous dynamic array, a single
opcode could cause it to be resized to an arbitrary size. Now the sparsity of
memo indices is limited.
* BINBYTES, BINBYTES8 and BYTEARRAY8 with a large argument. They allocated
a bytes or bytearray object of the specified size before reading into
it. Now they read very large data in chunks (see the sketch after this list).
* BINSTRING, BINUNICODE, LONG4, BINUNICODE8 and FRAME with a large
argument. They read all of the data by calling the read() method of
the underlying file object, which usually allocates a bytes object of
the specified size before reading into it. Now they read very large data
in chunks.
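A hand-built illustration of the first two issues (opcode encodings as documented in pickletools); the exact exception depends on the build, but with chunked reads and the memo sparsity limit no huge up-front allocation takes place:

```python
import pickle

# BINBYTES8 (0x8e) claiming a 1 TiB payload that is not actually present.
evil = b"\x80\x04" + b"\x8e" + (1 << 40).to_bytes(8, "little") + b"."

# LONG_BINPUT ('r') with a huge memo index; the C unpickler's memo is a
# contiguous array, so one opcode used to force a multi-gigabyte resize.
sparse = b"\x80\x02N" + b"r" + (0x7FFFFFFF).to_bytes(4, "little") + b"."

for name, payload in [("binbytes8", evil), ("sparse memo", sparse)]:
    try:
        print(name, "->", pickle.loads(payload))
    except Exception as exc:
        print(name, "->", type(exc).__name__, exc)
```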
Also add a comprehensive benchmark suite to measure the performance and memory
impact of the chunked reading optimization in PR #119204.
Features:
- Normal mode: benchmarks legitimate pickles (time/memory metrics)
- Antagonistic mode: tests malicious pickles (DoS protection)
- Baseline comparison: side-by-side comparison of two Python builds
- Support for truncated data and sparse memo attack vectors
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Co-authored-by: Gregory P. Smith <greg@krypto.org>
Add readline.get_pre_input_hook() to retrieve the current pre-input
hook. This allows applications to save and restore the hook without
overwriting user settings.
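A minimal save-and-restore sketch using the new accessor (the hook body is illustrative):

```python
import readline

def insert_prefix():
    readline.insert_text(">>> ")
    readline.redisplay()

previous = readline.get_pre_input_hook()   # save whatever is installed
readline.set_pre_input_hook(insert_prefix)
try:
    line = input("demo: ")
finally:
    readline.set_pre_input_hook(previous)  # restore the user's hook (or None)
```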
Added atomic operations to `scanner_begin()` and `scanner_end()` to prevent
race conditions on the `executing` flag in free-threaded builds. Also added
tests for concurrent usage of the `re` module.
Without the atomic operations, `test_scanner_concurrent_access()` triggers
`assert(self->executing)` failures, or a thread sanitizer run reports errors.
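A rough sketch of the kind of concurrent scanner use the new tests exercise; `Pattern.scanner()` is the undocumented API behind `finditer()`, and the actual test differs in detail:

```python
import re
import threading

pattern = re.compile(r"\w+")
scanner = pattern.scanner("one two three four " * 1000)

def consume():
    while True:
        try:
            if scanner.search() is None:   # races on the `executing` flag
                return
        except ValueError:
            pass  # concurrent use may be rejected instead of crashing

threads = [threading.Thread(target=consume) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```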