In the free-threaded build, the `_PyObject_LookupSpecial()` call can lead to
reference count contention on the returned function object because it
doesn't use stackrefs. Refactor some of the callers to use
`_PyObject_MaybeCallSpecialNoArgs`, which uses stackrefs internally.
This fixes the scaling bottleneck in the "lookup_special" microbenchmark
in `ftscalingbench.py`. However, there are still some uses of
`_PyObject_LookupSpecial()` that need to be addressed in future PRs.
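A minimal before/after sketch of the refactor pattern (error handling is
elided, the exact signatures of these internal helpers are assumptions, and
`__index__` is just an illustrative special method):

```c
PyObject *res;

/* Before: the lookup returns a strong reference to the function object,
 * so concurrent callers contend on its reference count. */
PyObject *meth = _PyObject_LookupSpecial(obj, &_Py_ID(__index__));
if (meth == NULL) {
    res = NULL;  /* attribute missing or error; error check elided */
}
else {
    res = _PyObject_CallNoArgs(meth);
    Py_DECREF(meth);
}

/* After: look up and call in one step; stackrefs internally avoid
 * touching the function object's reference count on the common path. */
res = _PyObject_MaybeCallSpecialNoArgs(obj, &_Py_ID(__index__));
```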
Concurrent accesses from multiple threads to the same `cell` object did not
scale well in the free-threaded build. Use `_PyStackRef` and optimistically
avoid locking to improve scaling.
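A sketch of the optimistic pattern using CPython's internal
`_Py_TryIncrefCompare` helper (the real code produces a `_PyStackRef` rather
than a plain strong reference; the details here are simplified assumptions):

```c
static PyObject *
read_cell_optimistic(PyCellObject *cell)
{
    /* Read the cell's contents without taking its lock. */
    PyObject *value = _Py_atomic_load_ptr(&cell->ob_ref);
    if (value == NULL) {
        return NULL;
    }
    /* Try to acquire a strong reference; this fails if another thread
     * concurrently replaced or freed the value. */
    if (_Py_TryIncrefCompare(&cell->ob_ref, value)) {
        return value;
    }
    /* Slow path: retry under the cell's lock. */
    PyObject *result;
    Py_BEGIN_CRITICAL_SECTION(cell);
    result = Py_XNewRef(cell->ob_ref);
    Py_END_CRITICAL_SECTION();
    return result;
}
```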
With the locks around cell reads gone, some of the free threading tests were
prone to starvation: the readers were able to run in a tight loop and the
writer threads weren't scheduled frequently enough to make timely progress.
Adjust the tests to avoid this.
Co-authored-by: Donghee Na <donghee.na@python.org>
* Rename the 'defined' attribute to 'in_local' to more accurately reflect how it is used
* Make death of variables explicit, even for array variables
* Convert `in_memory` from a boolean to a stack offset
* Don't apply liveness analysis to optimizer generated code
* Fix RETURN_VALUE in optimizer
This tells TSAN not to sanitize `PyUnstable_InterpreterFrame_GetLine()`.
There's a possible data race on the access to the frame's `instr_ptr`
if the frame is currently executing. We don't really care about the
race. In theory, we could use relaxed atomics for every access to
`instr_ptr`, but that would create more code churn and current compilers
are overly conservative with optimizations around relaxed atomic
accesses.
We also don't sanitize `_PyFrame_IsIncomplete()` because it accesses
`instr_ptr` and is called from assertions within `PyFrame_GetCode()`.
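The suppressions use ThreadSanitizer's suppression-file syntax; CPython keeps
such entries in `Tools/tsan/suppressions_free_threading.txt` (treat the exact
entries below as a sketch):

```
# race_top matches only when the function is at the top of the
# reported stack, so races elsewhere are still caught.
race_top:PyUnstable_InterpreterFrame_GetLine
race_top:_PyFrame_IsIncomplete
```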
- Restore max field size to `sys.maxsize`, as in Python 3.13 and below
- PyCField: Split out bit/byte sizes/offsets (see the sketch below)
- Expose CField's size/offset data to Python code
- Add generic checks for all the test structs/unions, using the newly exposed attributes
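A sketch of the split layout data as it might appear on the C side; the field
names mirror the attributes exposed to Python, but the actual struct layout in
`Modules/_ctypes` is an assumption:

```c
typedef struct {
    PyObject_HEAD
    Py_ssize_t byte_offset;  /* offset of the field within the struct, in bytes */
    Py_ssize_t byte_size;    /* storage size of the field, in bytes */
    Py_ssize_t bit_offset;   /* for bit fields: offset within the storage unit */
    Py_ssize_t bit_size;     /* for bit fields: width in bits */
    /* ... */
} CFieldObject;
```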
* Move _Py_VISIT_STACKREF() from pycore_gc.h to pycore_stackref.h.
* Remove pycore_interpframe.h include from pycore_genobject.h.
* Remove now useless includes from C files.
* Add pycore_interpframe_structs.h to Makefile.pre.in and
pythoncore.vcxproj.
Move pycore_obmalloc.h include from pycore_interp_structs.h to
pycore_runtime_structs.h.
Also add comments explaining the purpose of each include in
pycore_interp_structs.h, pycore_runtime_structs.h and
pycore_structs.h.
Remove <stdbool.h> and <stddef.h> from pycore_structs.h.
Add free-threaded versions of the existing FOR_ITER specializations (for
lists, tuples, fast range iterators, and generators) without significantly
affecting their thread-safety. As before: iterating over shared
lists/tuples/ranges is fine, reusing iterators between threads is not,
and sharing generators between threads is a recipe for serious crashes.
The `PyThreadState` struct gains a reference count field to avoid
issues with `PyThreadState` being a dangling pointer to freed memory.
The refcount starts with a value of two: one reference is owned by the
interpreter's linked list of thread states and one reference is owned by
the OS thread. The reference count is decremented when the thread state
is removed from the interpreter's linked list and before the OS thread
calls `PyThread_hang_thread()`. The thread that decrements it to zero
frees the `PyThreadState` memory.
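A sketch of the decrement-and-free step, assuming CPython's internal atomics
(`_Py_atomic_add_ssize` returns the previous value; the function and field
names here are illustrative, not the actual internal API):

```c
static void
decref_threadstate(PyThreadState *tstate)
{
    /* Two owners: the interpreter's linked list and the OS thread.
     * Whichever drops the count to zero frees the memory. */
    if (_Py_atomic_add_ssize(&tstate->refcount, -1) == 1) {
        PyMem_RawFree(tstate);
    }
}
```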
The `holds_gil` field is moved out of the `_status` bit field to avoid
a data race where one thread calls `PyThreadState_Clear()`, modifying the
`_status` bit field, while the OS thread reads `holds_gil` when
attempting to acquire the GIL.
The `PyThreadState.state` field now has `_Py_THREAD_SHUTTING_DOWN` as a
possible value, corresponding to the `_PyThreadState_MustExit()` check.
This avoids race conditions in the free-threaded build when performing
that check.
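A sketch of what the check can now look like: a single atomic read of a
dedicated state value, rather than a racy flag check (the enum value comes
from the entry above; the load and surrounding code are assumptions):

```c
int
_PyThreadState_MustExit(PyThreadState *tstate)
{
    return _Py_atomic_load_int_relaxed(&tstate->state)
           == _Py_THREAD_SHUTTING_DOWN;
}
```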
* Fix a use-after-free in list objects
Set the items pointer in the list object to NULL after the items array
is freed during list deallocation. Otherwise, we can end up with a list
object added to the free list that contains a pointer to an already-freed
items array.
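A simplified sketch of the fix in list deallocation (the details differ from
the actual `Objects/listobject.c` code):

```c
if (op->ob_item != NULL) {
    Py_ssize_t i = Py_SIZE(op);
    while (--i >= 0) {
        Py_XDECREF(op->ob_item[i]);
    }
    PyMem_Free(op->ob_item);
    /* The object may be reused from the free list; a stale pointer
     * here would be a use-after-free. */
    op->ob_item = NULL;
}
```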
* Mark `_PyList_FromStackRefStealOnSuccess` as escaping
I think technically it's not escaping, because the only object that
can be decrefed if allocation fails is an exact list, which cannot
execute arbitrary code when it is destroyed. However, this seems less
intrusive than trying to special-case objects in the `_Py_Dealloc`
assertion that checks for non-null stack pointers, and it shouldn't
matter for performance.
* Add location information when accessing an already closed stackref
* Add a `#define` option to track closed stackrefs, providing precise information for use-after-free and double-free errors
* Combine _GUARD_GLOBALS_VERSION_PUSH_KEYS and _LOAD_GLOBAL_MODULE_FROM_KEYS into _LOAD_GLOBAL_MODULE
* Combine _GUARD_BUILTINS_VERSION_PUSH_KEYS and _LOAD_GLOBAL_BUILTINS_FROM_KEYS into _LOAD_GLOBAL_BUILTINS
* Combine _CHECK_ATTR_MODULE_PUSH_KEYS and _LOAD_ATTR_MODULE_FROM_KEYS into _LOAD_ATTR_MODULE
* Remove stack transient in LOAD_ATTR_WITH_HINT