gh-115952: Fix a potential virtual memory allocation denial of service in pickle (GH-119204)

Loading small data that does not even involve arbitrary code execution
could consume an arbitrarily large amount of memory. There were three
issues (a short illustration follows the list):

* PUT and LONG_BINPUT with a large argument (the C implementation only).
  Since the memo is implemented in C as a contiguous dynamic array, a single
  opcode could cause it to be resized to an arbitrary size. Now the sparsity
  of memo indices is limited.
* BINBYTES, BINBYTES8 and BYTEARRAY8 with a large argument.  They allocated
  a bytes or bytearray object of the specified size before reading into it.
  Now they read very large data in chunks.
* BINSTRING, BINUNICODE, LONG4, BINUNICODE8 and FRAME with a large
  argument.  They read the whole data by calling the read() method of
  the underlying file object, which usually allocates a bytes object of
  the specified size before reading into it.  Now they read very large
  data in chunks.
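
As an illustration, a handful of bytes is enough to claim a multi-gigabyte
payload.  A minimal sketch, reusing the BINBYTES8 layout from the new tests
(the 4 GiB figure is arbitrary):

    import pickle, struct
    # BINBYTES8 (0x8e): an 8-byte little-endian length, then the payload;
    # '.' is the STOP opcode.  This pickle claims 4 GiB of data while only
    # 5 more bytes follow.
    evil = b'\x8e' + struct.pack('<Q', 4 << 30) + b'.' * 5
    try:
        pickle.loads(evil)   # now fails fast, reading in bounded chunks
    except Exception as exc:
        # UnpicklingError or EOFError, instead of a 4 GiB allocation
        print(type(exc).__name__)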

Also add a comprehensive benchmark suite to measure the performance and
memory impact of the chunked reading optimization in PR #119204.

Features:
- Normal mode: benchmarks legitimate pickles (time/memory metrics)
- Antagonistic mode: tests malicious pickles (DoS protection)
- Baseline comparison: side-by-side comparison of two Python builds
- Support for truncated data and sparse memo attack vectors

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Co-authored-by: Gregory P. Smith <greg@krypto.org>
Serhiy Storchaka, 2025-12-05 19:17:01 +02:00 (committed by GitHub)
commit 59f247e43b (parent 4085ff7b32)
7 changed files with 1767 additions and 177 deletions


@ -189,6 +189,11 @@ def __init__(self, value):
__all__.extend(x for x in dir() if x.isupper() and not x.startswith('_'))
# Data larger than this will be read in chunks, to prevent extreme
# overallocation.
_MIN_READ_BUF_SIZE = (1 << 20)
class _Framer:
_FRAME_SIZE_MIN = 4
@ -287,7 +292,7 @@ def read(self, n):
"pickle exhausted before end of frame")
return data
else:
return self.file_read(n)
return self._chunked_file_read(n)
def readline(self):
if self.current_frame:
@ -302,11 +307,23 @@ def readline(self):
else:
return self.file_readline()
def _chunked_file_read(self, size):
cursize = min(size, _MIN_READ_BUF_SIZE)
b = self.file_read(cursize)
while cursize < size and len(b) == cursize:
delta = min(cursize, size - cursize)
b += self.file_read(delta)
cursize += delta
return b
def load_frame(self, frame_size):
if self.current_frame and self.current_frame.read() != b'':
raise UnpicklingError(
"beginning of a new frame before end of current frame")
self.current_frame = io.BytesIO(self.file_read(frame_size))
data = self._chunked_file_read(frame_size)
if len(data) < frame_size:
raise EOFError
self.current_frame = io.BytesIO(data)
# Tools used for pickling.
@ -1496,12 +1513,17 @@ def load_binbytes8(self):
dispatch[BINBYTES8[0]] = load_binbytes8
def load_bytearray8(self):
len, = unpack('<Q', self.read(8))
if len > maxsize:
size, = unpack('<Q', self.read(8))
if size > maxsize:
raise UnpicklingError("BYTEARRAY8 exceeds system's maximum size "
"of %d bytes" % maxsize)
b = bytearray(len)
self.readinto(b)
cursize = min(size, _MIN_READ_BUF_SIZE)
b = bytearray(cursize)
if self.readinto(b) == cursize:
while cursize < size and len(b) == cursize:
delta = min(cursize, size - cursize)
b += self.read(delta)
cursize += delta
self.append(b)
dispatch[BYTEARRAY8[0]] = load_bytearray8


@ -74,6 +74,15 @@ def count_opcode(code, pickle):
def identity(x):
return x
def itersize(start, stop):
# Produce a geometrically increasing sequence from start to stop
# (inclusive) for tests.
size = start
while size < stop:
yield size
size <<= 1
yield stop
class UnseekableIO(io.BytesIO):
def peek(self, *args):
@ -853,9 +862,8 @@ def assert_is_copy(self, obj, objcopy, msg=None):
self.assertEqual(getattr(obj, slot, None),
getattr(objcopy, slot, None), msg=msg)
def check_unpickling_error(self, errors, data):
with self.subTest(data=data), \
self.assertRaises(errors):
def check_unpickling_error_strict(self, errors, data):
with self.assertRaises(errors):
try:
self.loads(data)
except BaseException as exc:
@ -864,6 +872,10 @@ def check_unpickling_error(self, errors, data):
(data, exc.__class__.__name__, exc))
raise
def check_unpickling_error(self, errors, data):
with self.subTest(data=data):
self.check_unpickling_error_strict(errors, data)
def test_load_from_data0(self):
self.assert_is_copy(self._testdata, self.loads(DATA0))
@ -1150,6 +1162,155 @@ def test_negative_32b_binput(self):
dumped = b'\x80\x03X\x01\x00\x00\x00ar\xff\xff\xff\xff.'
self.check_unpickling_error(ValueError, dumped)
def test_too_large_put(self):
# Test that PUT with a large id does not cause allocation of an
# overly large memo table. The C implementation uses a dict-based memo
# for sparse indices (when idx > memo_len * 2) instead of allocating
# a massive array. This test verifies large sparse indices work without
# causing memory exhaustion.
#
# The following simple pickle creates an empty list, memoizes it
# using a large index, then loads it back on the stack, builds
# a tuple containing 2 identical empty lists and returns it.
data = lambda n: (b'((lp' + str(n).encode() + b'\n' +
b'g' + str(n).encode() + b'\nt.')
# 0: ( MARK
# 1: ( MARK
# 2: l LIST (MARK at 1)
# 3: p PUT 1000000000000
# 18: g GET 1000000000000
# 33: t TUPLE (MARK at 0)
# 34: . STOP
for idx in [10**6, 10**9, 10**12]:
if idx > sys.maxsize:
continue
self.assertEqual(self.loads(data(idx)), ([],)*2)
def test_too_large_long_binput(self):
# Test that LONG_BINPUT with a large id does not cause allocation of an
# overly large memo table. The C implementation uses a dict-based memo
# for sparse indices (when idx > memo_len * 2) instead of allocating
# a massive array. This test verifies large sparse indices work without
# causing memory exhaustion.
#
# The following simple pickle creates an empty list, memoizes it
# using a large index, then loads it back on the stack, builds
# a tuple containing 2 identical empty lists and returns it.
data = lambda n: (b'(]r' + struct.pack('<I', n) +
b'j' + struct.pack('<I', n) + b't.')
# 0: ( MARK
# 1: ] EMPTY_LIST
# 2: r LONG_BINPUT 4294967295
# 7: j LONG_BINGET 4294967295
# 12: t TUPLE (MARK at 0)
# 13: . STOP
for idx in itersize(1 << 20, min(sys.maxsize, (1 << 32) - 1)):
self.assertEqual(self.loads(data(idx)), ([],)*2)
def _test_truncated_data(self, dumped, expected_error=None):
# Test that instructions to read large data, when that much data is
# not actually provided, do not cause large memory usage.
if expected_error is None:
expected_error = self.truncated_data_error
# BytesIO
with self.assertRaisesRegex(*expected_error):
self.loads(dumped)
if hasattr(self, 'unpickler'):
try:
with open(TESTFN, 'wb') as f:
f.write(dumped)
# buffered file
with open(TESTFN, 'rb') as f:
u = self.unpickler(f)
with self.assertRaisesRegex(*expected_error):
u.load()
# unbuffered file
with open(TESTFN, 'rb', buffering=0) as f:
u = self.unpickler(f)
with self.assertRaisesRegex(*expected_error):
u.load()
finally:
os_helper.unlink(TESTFN)
def test_truncated_large_binstring(self):
data = lambda size: b'T' + struct.pack('<I', size) + b'.' * 5
# 0: T BINSTRING '....'
# 9: . STOP
self.assertEqual(self.loads(data(4)), '....') # self-testing
for size in itersize(1 << 10, min(sys.maxsize - 5, (1 << 31) - 1)):
self._test_truncated_data(data(size))
self._test_truncated_data(data(1 << 31),
(pickle.UnpicklingError, 'truncated|exceeds|negative byte count'))
def test_truncated_large_binunicode(self):
data = lambda size: b'X' + struct.pack('<I', size) + b'.' * 5
# 0: X BINUNICODE '....'
# 9: . STOP
self.assertEqual(self.loads(data(4)), '....') # self-testing
for size in itersize(1 << 10, min(sys.maxsize - 5, (1 << 32) - 1)):
self._test_truncated_data(data(size))
def test_truncated_large_binbytes(self):
data = lambda size: b'B' + struct.pack('<I', size) + b'.' * 5
# 0: B BINBYTES b'....'
# 9: . STOP
self.assertEqual(self.loads(data(4)), b'....') # self-testing
for size in itersize(1 << 10, min(sys.maxsize, 1 << 31)):
self._test_truncated_data(data(size))
def test_truncated_large_long4(self):
data = lambda size: b'\x8b' + struct.pack('<I', size) + b'.' * 5
# 0: \x8b LONG4 0x2e2e2e2e
# 9: . STOP
self.assertEqual(self.loads(data(4)), 0x2e2e2e2e) # self-testing
for size in itersize(1 << 10, min(sys.maxsize - 5, (1 << 31) - 1)):
self._test_truncated_data(data(size))
self._test_truncated_data(data(1 << 31),
(pickle.UnpicklingError, 'LONG pickle has negative byte count'))
def test_truncated_large_frame(self):
data = lambda size: b'\x95' + struct.pack('<Q', size) + b'N.'
# 0: \x95 FRAME 2
# 9: N NONE
# 10: . STOP
self.assertIsNone(self.loads(data(2))) # self-testing
for size in itersize(1 << 10, sys.maxsize - 9):
self._test_truncated_data(data(size))
if sys.maxsize + 1 < 1 << 64:
self._test_truncated_data(data(sys.maxsize + 1),
((OverflowError, ValueError),
'FRAME length exceeds|frame size > sys.maxsize'))
def test_truncated_large_binunicode8(self):
data = lambda size: b'\x8d' + struct.pack('<Q', size) + b'.' * 5
# 0: \x8d BINUNICODE8 '....'
# 13: . STOP
self.assertEqual(self.loads(data(4)), '....') # self-testing
for size in itersize(1 << 10, sys.maxsize - 9):
self._test_truncated_data(data(size))
if sys.maxsize + 1 < 1 << 64:
self._test_truncated_data(data(sys.maxsize + 1), self.size_overflow_error)
def test_truncated_large_binbytes8(self):
data = lambda size: b'\x8e' + struct.pack('<Q', size) + b'.' * 5
# 0: \x8e BINBYTES8 b'....'
# 13: . STOP
self.assertEqual(self.loads(data(4)), b'....') # self-testing
for size in itersize(1 << 10, sys.maxsize):
self._test_truncated_data(data(size))
if sys.maxsize + 1 < 1 << 64:
self._test_truncated_data(data(sys.maxsize + 1), self.size_overflow_error)
def test_truncated_large_bytearray8(self):
data = lambda size: b'\x96' + struct.pack('<Q', size) + b'.' * 5
# 0: \x96 BYTEARRAY8 bytearray(b'....')
# 13: . STOP
self.assertEqual(self.loads(data(4)), bytearray(b'....')) # self-testing
for size in itersize(1 << 10, sys.maxsize):
self._test_truncated_data(data(size))
if sys.maxsize + 1 < 1 << 64:
self._test_truncated_data(data(sys.maxsize + 1), self.size_overflow_error)
def test_badly_escaped_string(self):
self.check_unpickling_error(ValueError, b"S'\\'\n.")


@ -59,6 +59,8 @@ class PyUnpicklerTests(AbstractUnpickleTests, unittest.TestCase):
truncated_errors = (pickle.UnpicklingError, EOFError,
AttributeError, ValueError,
struct.error, IndexError, ImportError)
truncated_data_error = (EOFError, '')
size_overflow_error = (pickle.UnpicklingError, 'exceeds')
def loads(self, buf, **kwds):
f = io.BytesIO(buf)
@ -103,6 +105,8 @@ class InMemoryPickleTests(AbstractPickleTests, AbstractUnpickleTests,
truncated_errors = (pickle.UnpicklingError, EOFError,
AttributeError, ValueError,
struct.error, IndexError, ImportError)
truncated_data_error = ((pickle.UnpicklingError, EOFError), '')
size_overflow_error = ((OverflowError, pickle.UnpicklingError), 'exceeds')
def dumps(self, arg, protocol=None, **kwargs):
return pickle.dumps(arg, protocol, **kwargs)
@ -375,6 +379,8 @@ class CUnpicklerTests(PyUnpicklerTests):
unpickler = _pickle.Unpickler
bad_stack_errors = (pickle.UnpicklingError,)
truncated_errors = (pickle.UnpicklingError,)
truncated_data_error = (pickle.UnpicklingError, 'truncated')
size_overflow_error = (OverflowError, 'exceeds')
class CPicklingErrorTests(PyPicklingErrorTests):
pickler = _pickle.Pickler
@ -478,7 +484,7 @@ def test_pickler(self):
0) # Write buffer is cleared after every dump().
def test_unpickler(self):
basesize = support.calcobjsize('2P2n2P 2P2n2i5P 2P3n8P2n2i')
basesize = support.calcobjsize('2P2n3P 2P2n2i5P 2P3n8P2n2i')
unpickler = _pickle.Unpickler
P = struct.calcsize('P') # Size of memo table entry.
n = struct.calcsize('n') # Size of mark table entry.


@ -0,0 +1,7 @@
Fix a potential memory denial of service in the :mod:`pickle` module.
When reading pickled data received from an untrusted source, an arbitrary
amount of memory could be allocated, even if the code that is
allowed to execute is restricted by overriding the
:meth:`~pickle.Unpickler.find_class` method.
This could have led to symptoms including a :exc:`MemoryError`, swapping, out
of memory (OOM) killed processes or containers, or even system crashes.


@ -155,6 +155,9 @@ enum {
/* Prefetch size when unpickling (disabled on unpeekable streams) */
PREFETCH = 8192 * 16,
/* Data larger than this will be read in chunks, to prevent extreme
overallocation. */
MIN_READ_BUF_SIZE = 1 << 20,
FRAME_SIZE_MIN = 4,
FRAME_SIZE_TARGET = 64 * 1024,
@ -647,10 +650,11 @@ typedef struct UnpicklerObject {
Pdata *stack; /* Pickle data stack, store unpickled objects. */
/* The unpickler memo is just an array of PyObject *s. Using a dict
is unnecessary, since the keys are contiguous ints. */
is unnecessary, since the keys usually are contiguous ints. */
PyObject **memo;
size_t memo_size; /* Capacity of the memo array */
size_t memo_len; /* Number of objects in the memo */
PyObject *memo_dict; /* The backup memo dict for non-contiguous keys. */
PyObject *persistent_load; /* persistent_load() method, can be NULL. */
PyObject *persistent_load_attr; /* instance attribute, can be NULL. */
@ -1247,141 +1251,13 @@ _Unpickler_SkipConsumed(UnpicklerObject *self)
static const Py_ssize_t READ_WHOLE_LINE = -1;
/* If reading from a file, we need to only pull the bytes we need, since there
may be multiple pickle objects arranged contiguously in the same input
buffer.
If `n` is READ_WHOLE_LINE, read a whole line. Otherwise, read up to `n`
bytes from the input stream/buffer.
Update the unpickler's input buffer with the newly-read data. Returns -1 on
failure; on success, returns the number of bytes read from the file.
On success, self->input_len will be 0; this is intentional so that when
unpickling from a file, the "we've run out of data" code paths will trigger,
causing the Unpickler to go back to the file for more data. Use the returned
size to tell you how much data you can process. */
/* Don't call it directly: use _Unpickler_ReadInto() */
static Py_ssize_t
_Unpickler_ReadFromFile(UnpicklerObject *self, Py_ssize_t n)
{
PyObject *data;
Py_ssize_t read_size;
assert(self->read != NULL);
if (_Unpickler_SkipConsumed(self) < 0)
return -1;
if (n == READ_WHOLE_LINE) {
data = PyObject_CallNoArgs(self->readline);
}
else {
PyObject *len;
/* Prefetch some data without advancing the file pointer, if possible */
if (self->peek && n < PREFETCH) {
len = PyLong_FromSsize_t(PREFETCH);
if (len == NULL)
return -1;
data = _Pickle_FastCall(self->peek, len);
if (data == NULL) {
if (!PyErr_ExceptionMatches(PyExc_NotImplementedError))
return -1;
/* peek() is probably not supported by the given file object */
PyErr_Clear();
Py_CLEAR(self->peek);
}
else {
read_size = _Unpickler_SetStringInput(self, data);
Py_DECREF(data);
if (read_size < 0) {
return -1;
}
self->prefetched_idx = 0;
if (n <= read_size)
return n;
}
}
len = PyLong_FromSsize_t(n);
if (len == NULL)
return -1;
data = _Pickle_FastCall(self->read, len);
}
if (data == NULL)
return -1;
read_size = _Unpickler_SetStringInput(self, data);
Py_DECREF(data);
return read_size;
}
/* Don't call it directly: use _Unpickler_Read() */
static Py_ssize_t
_Unpickler_ReadImpl(UnpicklerObject *self, PickleState *st, char **s, Py_ssize_t n)
{
Py_ssize_t num_read;
*s = NULL;
if (self->next_read_idx > PY_SSIZE_T_MAX - n) {
PyErr_SetString(st->UnpicklingError,
"read would overflow (invalid bytecode)");
return -1;
}
/* This case is handled by the _Unpickler_Read() macro for efficiency */
assert(self->next_read_idx + n > self->input_len);
if (!self->read)
return bad_readline(st);
/* Extend the buffer to satisfy desired size */
num_read = _Unpickler_ReadFromFile(self, n);
if (num_read < 0)
return -1;
if (num_read < n)
return bad_readline(st);
*s = self->input_buffer;
self->next_read_idx = n;
return n;
}
/* Read `n` bytes from the unpickler's data source, storing the result in `buf`.
*
* This should only be used for non-small data reads where potentially
* avoiding a copy is beneficial. This method does not try to prefetch
* more data into the input buffer.
*
* _Unpickler_Read() is recommended in most cases.
*/
static Py_ssize_t
_Unpickler_ReadInto(PickleState *state, UnpicklerObject *self, char *buf,
Py_ssize_t n)
_Unpickler_ReadIntoFromFile(PickleState *state, UnpicklerObject *self, char *buf,
Py_ssize_t n)
{
assert(n != READ_WHOLE_LINE);
/* Read from available buffer data, if any */
Py_ssize_t in_buffer = self->input_len - self->next_read_idx;
if (in_buffer > 0) {
Py_ssize_t to_read = Py_MIN(in_buffer, n);
memcpy(buf, self->input_buffer + self->next_read_idx, to_read);
self->next_read_idx += to_read;
buf += to_read;
n -= to_read;
if (n == 0) {
/* Entire read was satisfied from buffer */
return n;
}
}
/* Read from file */
if (!self->read) {
/* We're unpickling memory, this means the input is truncated */
return bad_readline(state);
}
if (_Unpickler_SkipConsumed(self) < 0) {
return -1;
}
if (!self->readinto) {
/* readinto() not supported on file-like object, fall back to read()
* and copy into destination buffer (bpo-39681) */
@ -1435,6 +1311,163 @@ _Unpickler_ReadInto(PickleState *state, UnpicklerObject *self, char *buf,
return n;
}
/* If reading from a file, we need to only pull the bytes we need, since there
may be multiple pickle objects arranged contiguously in the same input
buffer.
If `n` is READ_WHOLE_LINE, read a whole line. Otherwise, read up to `n`
bytes from the input stream/buffer.
Update the unpickler's input buffer with the newly-read data. Returns -1 on
failure; on success, returns the number of bytes read from the file.
On success, self->input_len will be 0; this is intentional so that when
unpickling from a file, the "we've run out of data" code paths will trigger,
causing the Unpickler to go back to the file for more data. Use the returned
size to tell you how much data you can process. */
static Py_ssize_t
_Unpickler_ReadFromFile(PickleState *state, UnpicklerObject *self, Py_ssize_t n)
{
PyObject *data;
Py_ssize_t read_size;
assert(self->read != NULL);
if (_Unpickler_SkipConsumed(self) < 0)
return -1;
if (n == READ_WHOLE_LINE) {
data = PyObject_CallNoArgs(self->readline);
if (data == NULL) {
return -1;
}
}
else {
PyObject *len;
/* Prefetch some data without advancing the file pointer, if possible */
if (self->peek && n < PREFETCH) {
len = PyLong_FromSsize_t(PREFETCH);
if (len == NULL)
return -1;
data = _Pickle_FastCall(self->peek, len);
if (data == NULL) {
if (!PyErr_ExceptionMatches(PyExc_NotImplementedError))
return -1;
/* peek() is probably not supported by the given file object */
PyErr_Clear();
Py_CLEAR(self->peek);
}
else {
read_size = _Unpickler_SetStringInput(self, data);
Py_DECREF(data);
if (read_size < 0) {
return -1;
}
self->prefetched_idx = 0;
if (n <= read_size)
return n;
}
}
Py_ssize_t cursize = Py_MIN(n, MIN_READ_BUF_SIZE);
len = PyLong_FromSsize_t(cursize);
if (len == NULL)
return -1;
data = _Pickle_FastCall(self->read, len);
if (data == NULL) {
return -1;
}
while (cursize < n) {
Py_ssize_t prevsize = cursize;
// geometrically double the chunk size to avoid CPU DoS
cursize += Py_MIN(cursize, n - cursize);
if (_PyBytes_Resize(&data, cursize) < 0) {
return -1;
}
if (_Unpickler_ReadIntoFromFile(state, self,
PyBytes_AS_STRING(data) + prevsize, cursize - prevsize) < 0)
{
Py_DECREF(data);
return -1;
}
}
}
read_size = _Unpickler_SetStringInput(self, data);
Py_DECREF(data);
return read_size;
}
/* Don't call it directly: use _Unpickler_Read() */
static Py_ssize_t
_Unpickler_ReadImpl(UnpicklerObject *self, PickleState *st, char **s, Py_ssize_t n)
{
Py_ssize_t num_read;
*s = NULL;
if (self->next_read_idx > PY_SSIZE_T_MAX - n) {
PyErr_SetString(st->UnpicklingError,
"read would overflow (invalid bytecode)");
return -1;
}
/* This case is handled by the _Unpickler_Read() macro for efficiency */
assert(self->next_read_idx + n > self->input_len);
if (!self->read)
return bad_readline(st);
/* Extend the buffer to satisfy desired size */
num_read = _Unpickler_ReadFromFile(st, self, n);
if (num_read < 0)
return -1;
if (num_read < n)
return bad_readline(st);
*s = self->input_buffer;
self->next_read_idx = n;
return n;
}
/* Read `n` bytes from the unpickler's data source, storing the result in `buf`.
*
* This should only be used for non-small data reads where potentially
* avoiding a copy is beneficial. This method does not try to prefetch
* more data into the input buffer.
*
* _Unpickler_Read() is recommended in most cases.
*/
static Py_ssize_t
_Unpickler_ReadInto(PickleState *state, UnpicklerObject *self, char *buf,
Py_ssize_t n)
{
assert(n != READ_WHOLE_LINE);
/* Read from available buffer data, if any */
Py_ssize_t in_buffer = self->input_len - self->next_read_idx;
if (in_buffer > 0) {
Py_ssize_t to_read = Py_MIN(in_buffer, n);
memcpy(buf, self->input_buffer + self->next_read_idx, to_read);
self->next_read_idx += to_read;
buf += to_read;
n -= to_read;
if (n == 0) {
/* Entire read was satisfied from buffer */
return n;
}
}
/* Read from file */
if (!self->read) {
/* We're unpickling memory, this means the input is truncated */
return bad_readline(state);
}
if (_Unpickler_SkipConsumed(self) < 0) {
return -1;
}
return _Unpickler_ReadIntoFromFile(state, self, buf, n);
}
/* Read `n` bytes from the unpickler's data source, storing the result in `*s`.
This should be used for all data reads, rather than accessing the unpickler's
@ -1492,7 +1525,7 @@ _Unpickler_Readline(PickleState *state, UnpicklerObject *self, char **result)
if (!self->read)
return bad_readline(state);
num_read = _Unpickler_ReadFromFile(self, READ_WHOLE_LINE);
num_read = _Unpickler_ReadFromFile(state, self, READ_WHOLE_LINE);
if (num_read < 0)
return -1;
if (num_read == 0 || self->input_buffer[num_read - 1] != '\n')
@ -1525,12 +1558,35 @@ _Unpickler_ResizeMemoList(UnpicklerObject *self, size_t new_size)
/* Returns NULL if idx is out of bounds. */
static PyObject *
_Unpickler_MemoGet(UnpicklerObject *self, size_t idx)
_Unpickler_MemoGet(PickleState *st, UnpicklerObject *self, size_t idx)
{
if (idx >= self->memo_size)
return NULL;
return self->memo[idx];
PyObject *value;
if (idx < self->memo_size) {
value = self->memo[idx];
if (value != NULL) {
return value;
}
}
if (self->memo_dict != NULL) {
PyObject *key = PyLong_FromSize_t(idx);
if (key == NULL) {
return NULL;
}
if (idx < self->memo_size) {
(void)PyDict_Pop(self->memo_dict, key, &value);
// Migrate dict entry to array for faster future access
self->memo[idx] = value;
}
else {
value = PyDict_GetItemWithError(self->memo_dict, key);
}
Py_DECREF(key);
if (value != NULL || PyErr_Occurred()) {
return value;
}
}
PyErr_Format(st->UnpicklingError, "Memo value not found at index %zd", idx);
return NULL;
}
/* Returns -1 (with an exception set) on failure, 0 on success.
@ -1541,6 +1597,27 @@ _Unpickler_MemoPut(UnpicklerObject *self, size_t idx, PyObject *value)
PyObject *old_item;
if (idx >= self->memo_size) {
if (idx > self->memo_len * 2) {
/* The memo keys are too sparse. Use a dict instead of
* a contiguous array for the memo. */
if (self->memo_dict == NULL) {
self->memo_dict = PyDict_New();
if (self->memo_dict == NULL) {
return -1;
}
}
PyObject *key = PyLong_FromSize_t(idx);
if (key == NULL) {
return -1;
}
if (PyDict_SetItem(self->memo_dict, key, value) < 0) {
Py_DECREF(key);
return -1;
}
Py_DECREF(key);
return 0;
}
if (_Unpickler_ResizeMemoList(self, idx * 2) < 0)
return -1;
assert(idx < self->memo_size);
@ -1610,6 +1687,7 @@ _Unpickler_New(PyObject *module)
self->memo = memo;
self->memo_size = MEMO_SIZE;
self->memo_len = 0;
self->memo_dict = NULL;
self->persistent_load = NULL;
self->persistent_load_attr = NULL;
memset(&self->buffer, 0, sizeof(Py_buffer));
@ -5582,13 +5660,28 @@ load_counted_binbytes(PickleState *state, UnpicklerObject *self, int nbytes)
return -1;
}
bytes = PyBytes_FromStringAndSize(NULL, size);
if (bytes == NULL)
return -1;
if (_Unpickler_ReadInto(state, self, PyBytes_AS_STRING(bytes), size) < 0) {
Py_DECREF(bytes);
Py_ssize_t cursize = Py_MIN(size, MIN_READ_BUF_SIZE);
Py_ssize_t prevsize = 0;
bytes = PyBytes_FromStringAndSize(NULL, cursize);
if (bytes == NULL) {
return -1;
}
while (1) {
if (_Unpickler_ReadInto(state, self,
PyBytes_AS_STRING(bytes) + prevsize, cursize - prevsize) < 0)
{
Py_DECREF(bytes);
return -1;
}
if (cursize >= size) {
break;
}
prevsize = cursize;
cursize += Py_MIN(cursize, size - cursize);
if (_PyBytes_Resize(&bytes, cursize) < 0) {
return -1;
}
}
PDATA_PUSH(self->stack, bytes, -1);
return 0;
@ -5613,14 +5706,27 @@ load_counted_bytearray(PickleState *state, UnpicklerObject *self)
return -1;
}
bytearray = PyByteArray_FromStringAndSize(NULL, size);
Py_ssize_t cursize = Py_MIN(size, MIN_READ_BUF_SIZE);
Py_ssize_t prevsize = 0;
bytearray = PyByteArray_FromStringAndSize(NULL, cursize);
if (bytearray == NULL) {
return -1;
}
char *str = PyByteArray_AS_STRING(bytearray);
if (_Unpickler_ReadInto(state, self, str, size) < 0) {
Py_DECREF(bytearray);
return -1;
while (1) {
if (_Unpickler_ReadInto(state, self,
PyByteArray_AS_STRING(bytearray) + prevsize,
cursize - prevsize) < 0) {
Py_DECREF(bytearray);
return -1;
}
if (cursize >= size) {
break;
}
prevsize = cursize;
cursize += Py_MIN(cursize, size - cursize);
if (PyByteArray_Resize(bytearray, cursize) < 0) {
return -1;
}
}
PDATA_PUSH(self->stack, bytearray, -1);
@ -6222,20 +6328,15 @@ load_get(PickleState *st, UnpicklerObject *self)
if (key == NULL)
return -1;
idx = PyLong_AsSsize_t(key);
Py_DECREF(key);
if (idx == -1 && PyErr_Occurred()) {
Py_DECREF(key);
return -1;
}
value = _Unpickler_MemoGet(self, idx);
value = _Unpickler_MemoGet(st, self, idx);
if (value == NULL) {
if (!PyErr_Occurred()) {
PyErr_Format(st->UnpicklingError, "Memo value not found at index %ld", idx);
}
Py_DECREF(key);
return -1;
}
Py_DECREF(key);
PDATA_APPEND(self->stack, value, -1);
return 0;
@ -6253,13 +6354,8 @@ load_binget(PickleState *st, UnpicklerObject *self)
idx = Py_CHARMASK(s[0]);
value = _Unpickler_MemoGet(self, idx);
value = _Unpickler_MemoGet(st, self, idx);
if (value == NULL) {
PyObject *key = PyLong_FromSsize_t(idx);
if (key != NULL) {
PyErr_Format(st->UnpicklingError, "Memo value not found at index %ld", idx);
Py_DECREF(key);
}
return -1;
}
@ -6279,13 +6375,8 @@ load_long_binget(PickleState *st, UnpicklerObject *self)
idx = calc_binsize(s, 4);
value = _Unpickler_MemoGet(self, idx);
value = _Unpickler_MemoGet(st, self, idx);
if (value == NULL) {
PyObject *key = PyLong_FromSsize_t(idx);
if (key != NULL) {
PyErr_Format(st->UnpicklingError, "Memo value not found at index %ld", idx);
Py_DECREF(key);
}
return -1;
}
@ -7250,6 +7341,7 @@ Unpickler_clear(PyObject *op)
self->buffer.buf = NULL;
}
Py_CLEAR(self->memo_dict);
_Unpickler_MemoCleanup(self);
PyMem_Free(self->marks);
self->marks = NULL;
@ -7286,6 +7378,7 @@ Unpickler_traverse(PyObject *op, visitproc visit, void *arg)
Py_VISIT(self->persistent_load);
Py_VISIT(self->persistent_load_attr);
Py_VISIT(self->buffers);
Py_VISIT(self->memo_dict);
PyObject **memo = self->memo;
if (memo) {
Py_ssize_t i = self->memo_size;

Tools/picklebench/README.md (new file, 232 lines)

@ -0,0 +1,232 @@
# Pickle Chunked Reading Benchmark
This benchmark measures the performance impact of the chunked reading optimization in GH PR #119204 for the pickle module.
## What This Tests
The PR adds chunked reading (1MB chunks) to prevent memory exhaustion when unpickling large objects (a simplified sketch of the read loop follows this list):
- **BINBYTES8** - Large bytes objects (protocol 4+)
- **BINUNICODE8** - Large strings (protocol 4+)
- **BYTEARRAY8** - Large bytearrays (protocol 5)
- **FRAME** - Large frames
- **LONG4** - Large integers
- An antagonistic mode that tests malicious pickles designed to induce memory denial of service.
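
The chunked read itself is small. The following is a simplified Python sketch
mirroring `_chunked_file_read()` in `Lib/pickle.py` (where `_MIN_READ_BUF_SIZE`
is 1 MiB); it is meant for orientation, not as the exact implementation:

```python
_MIN_READ_BUF_SIZE = 1 << 20  # 1 MiB

def chunked_read(file_read, size):
    # Cap the first request at 1 MiB, then double the amount requested on
    # each pass.  A short read (truncated input) ends the loop, so a pickle
    # that merely *claims* a huge size never forces a huge allocation.
    cursize = min(size, _MIN_READ_BUF_SIZE)
    b = file_read(cursize)
    while cursize < size and len(b) == cursize:
        delta = min(cursize, size - cursize)
        b += file_read(delta)
        cursize += delta
    return b
```

The C implementation follows the same pattern, growing the destination
`bytes`/`bytearray` object geometrically instead of concatenating.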
## Quick Start
```bash
# Run full benchmark suite (1MiB → 200MiB, takes several minutes)
build/python Tools/picklebench/memory_dos_impact.py
# Test just a few sizes (quick test: 1, 10, 50 MiB)
build/python Tools/picklebench/memory_dos_impact.py --sizes 1 10 50
# Test smaller range for faster results
build/python Tools/picklebench/memory_dos_impact.py --sizes 1 5 10
# Output as markdown for reports
build/python Tools/picklebench/memory_dos_impact.py --format markdown > results.md
# Test with protocol 4 instead of 5
build/python Tools/picklebench/memory_dos_impact.py --protocol 4
```
**Note:** Sizes are specified in MiB. Use `--sizes 1 2 5` for 1MiB, 2MiB, 5MiB objects.
## Antagonistic Mode (DoS Protection Test)
The `--antagonistic` flag tests **malicious pickles** that demonstrate the memory DoS protection:
```bash
# Quick DoS protection test (claims 10, 50, 100 MB but provides 1KB)
build/python Tools/picklebench/memory_dos_impact.py --antagonistic --sizes 10 50 100
# Full DoS test (default: 10, 50, 100, 500, 1000, 5000 MB claimed)
build/python Tools/picklebench/memory_dos_impact.py --antagonistic
```
### What This Tests
Unlike normal benchmarks that test **legitimate pickles**, antagonistic mode tests:
- **Truncated BINBYTES8**: Claims 100MB but provides only 1KB (will fail to unpickle)
- **Truncated BINUNICODE8**: Same for strings
- **Truncated BYTEARRAY8**: Same for bytearrays
- **Sparse memo attacks**: PUT at index 1 billion (would have allocated a huge array before this PR; see the sketch below)
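
Such payloads are only bytes to kilobytes in size. A hedged sketch of how they
can be built by hand (the benchmark's own generators may differ; the opcode
layouts match the tests added in `Lib/test/pickletester.py`):

```python
import struct

def truncated_binbytes8(claimed_size, provided=b'x' * 1024):
    # BINBYTES8 (0x8e) claims `claimed_size` bytes, but only `provided`
    # follows before the STOP opcode ('.').  Unpickling fails once the
    # stream runs dry, without allocating the claimed size up front.
    return b'\x8e' + struct.pack('<Q', claimed_size) + provided + b'.'

def sparse_memo_put(idx=10**9):
    # Build an empty list, memoize it at a huge index with PUT, fetch it
    # back with GET, and return a 2-tuple of the same list.  Before the PR
    # the C unpickler would grow its memo array to cover `idx` slots.
    n = str(idx).encode()
    return b'((lp' + n + b'\n' + b'g' + n + b'\nt.'
```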
**Key difference:**
- **Normal mode**: Tests real data, shows ~5% time overhead
- **Antagonistic mode**: Tests malicious data, shows ~99% memory savings
### Expected Results
```
100MB Claimed (actual: 1KB)
binbytes8_100MB_claim
Peak memory: 1.00 MB (claimed: 100 MB, saved: 99.00 MB, 99.0%)
Error: UnpicklingError ← Expected!
Summary:
Average claimed: 126.2 MB
Average peak: 0.54 MB
Average saved: 125.7 MB (99.6% reduction)
Protection Status: ✓ Memory DoS attacks mitigated by chunked reading
```
**Before PR**: Would allocate full claimed size (100MB+), potentially crash
**After PR**: Allocates 1MB chunks, fails fast with minimal memory
This demonstrates the **security improvement** - protection against memory exhaustion attacks.
## Before/After Comparison
The benchmark includes an automatic comparison feature that runs the same tests on both a baseline and current Python build.
### Option 1: Automatic Comparison (Recommended)
Build both versions, then use `--baseline` to automatically compare:
```bash
# Build the baseline (main branch without PR)
git checkout main
mkdir -p build-main
cd build-main && ../configure && make -j $(nproc) && cd ..
# Build the current version (with PR)
git checkout unpickle-overallocate
mkdir -p build
cd build && ../configure && make -j $(nproc) && cd ..
# Run automatic comparison (quick test with a few sizes)
build/python Tools/picklebench/memory_dos_impact.py \
--baseline build-main/python \
--sizes 1 10 50
# Full comparison (all default sizes)
build/python Tools/picklebench/memory_dos_impact.py \
--baseline build-main/python
```
The comparison output shows:
- Side-by-side metrics (Current vs Baseline)
- Percentage change for time and memory
- Overall summary statistics
### Interpreting Comparison Results
- **Time change**: Small positive % is expected (chunking adds overhead, typically 5-10%)
- **Memory change**: Negative % is good (chunking saves memory, especially for large objects)
- **Trade-off**: Slightly slower but much safer against memory exhaustion attacks
### Option 2: Manual Comparison
Save results separately and compare manually:
```bash
# Baseline results
build-main/python Tools/picklebench/memory_dos_impact.py --format json > baseline.json
# Current results
build/python Tools/picklebench/memory_dos_impact.py --format json > current.json
# Manual comparison
diff -y <(jq '.' baseline.json) <(jq '.' current.json)
```
## Understanding the Results
### Critical Sizes
The default test suite includes:
- **< 1MiB (999,000 bytes)**: No chunking, allocates full size upfront
- **= 1MiB (1,048,576 bytes)**: Threshold, chunking just starts
- **> 1MiB (1,048,577 bytes)**: Chunked reading engaged
- **1, 2, 5, 10MiB**: Show scaling behavior with chunking
- **20, 50, 100, 200MiB**: Stress test large object handling
**Note:** The full suite may require more than 16GiB of RAM.
### Key Metrics
- **Time (mean)**: Average unpickling time - should be similar before/after
- **Time (stdev)**: Consistency - lower is better
- **Peak Memory**: Maximum memory during unpickling - **expected to be LOWER after PR**
- **Pickle Size**: Size of the serialized data on disk
### Test Types
| Test | What It Stresses |
|------|------------------|
| `bytes_*` | BINBYTES8 opcode, raw binary data |
| `string_ascii_*` | BINUNICODE8 with simple ASCII |
| `string_utf8_*` | BINUNICODE8 with multibyte UTF-8 (€ chars) |
| `bytearray_*` | BYTEARRAY8 opcode (protocol 5) |
| `list_large_items_*` | Multiple chunked reads in sequence |
| `dict_large_values_*` | Chunking in dict deserialization |
| `nested_*` | Realistic mixed data structures |
| `tuple_*` | Immutable structures |
## Expected Results
### Before PR (main branch)
- Single large allocation per object
- Risk of memory exhaustion with malicious pickles
### After PR (unpickle-overallocate branch)
- Chunked allocation (1MB at a time)
- **Slightly higher CPU time** (multiple allocations + resizing)
- **Significantly lower peak memory** (no large pre-allocation)
- Protection against DoS via memory exhaustion
## Advanced Usage
### Test Specific Sizes
```bash
# Test only 5MiB and 10MiB objects
build/python Tools/picklebench/memory_dos_impact.py --sizes 5 10
# Test large objects: 50, 100, 200 MiB
build/python Tools/picklebench/memory_dos_impact.py --sizes 50 100 200
```
### More Iterations for Stable Timing
```bash
# Run 10 iterations per test for better statistics
build/python Tools/picklebench/memory_dos_impact.py --iterations 10 --sizes 1 10
```
### JSON Output for Analysis
```bash
# Generate JSON for programmatic analysis
build/python Tools/picklebench/memory_dos_impact.py --format json | python -m json.tool
```
## Interpreting Memory Results
The **peak memory** metric shows the maximum memory allocated during unpickling:
- **Without chunking**: Allocates full size immediately
- 10MB object → 10MB allocation upfront
- **With chunking**: Allocates in 1MB chunks, grows geometrically
- 10MB object → starts with 1MB, grows: 2MB, 4MB, 8MB (final: ~10MB total)
- Peak is lower because allocation is incremental (see the sketch below)
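
A small illustrative helper (not part of the benchmark) reproduces that growth
sequence:

```python
MIB = 1 << 20

def chunk_growth(size, start=MIB):
    # Buffer sizes the unpickler passes through: the first request is
    # capped at 1 MiB, then doubled (but never past `size`) until done.
    sizes = [min(size, start)]
    while sizes[-1] < size:
        sizes.append(min(size, sizes[-1] * 2))
    return sizes

print([s // MIB for s in chunk_growth(10 * MIB)])  # [1, 2, 4, 8, 10]
```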
## Typical Results
On a system with the PR applied, you should see:
```
1.00MiB Test Results
bytes_1.00MiB: ~0.3ms, 1.00MiB peak (just at threshold)
2.00MiB Test Results
bytes_2.00MiB: ~0.8ms, 2.00MiB peak (chunked: 1MiB → 2MiB)
10.00MiB Test Results
bytes_10.00MiB: ~3-5ms, 10.00MiB peak (chunked: 1→2→4→8→10 MiB)
```
Time overhead is minimal (~10-20% for very large objects), but memory safety is significantly improved.

File diff suppressed because it is too large.