The dataclass `__init__` method is generated dynamically by a call to `exec()`, so it doesn't have deferred reference counting enabled. Enable deferred reference counting on functions when they are assigned as attributes of type objects, avoiding reference count contention when creating dataclass instances.
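A minimal sketch of the approach, using the public `PyUnstable_Object_EnableDeferredRefcount()` API; the `set_type_attr()` helper is hypothetical, for illustration only:

```c
#include <Python.h>

/* Sketch: when a function object is stored as an attribute of a type,
 * opt it into deferred reference counting so that concurrent instance
 * creation on the free-threaded build does not contend on the function's
 * refcount. */
static int
set_type_attr(PyTypeObject *type, const char *name, PyObject *value)
{
    if (PyFunction_Check(value)) {
        /* Returns 0 (a no-op) on the default GIL-enabled build. */
        (void)PyUnstable_Object_EnableDeferredRefcount(value);
    }
    return PyObject_SetAttrString((PyObject *)type, name, value);
}
```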
Replace code that directly accesses `PyASCIIObject.hash` with `PyUnstable_Unicode_GET_CACHED_HASH()`.
Remove the redundant `assert(PyUnicode_Check(op))` from `PyUnstable_Unicode_GET_CACHED_HASH()`; `_PyASCIIObject_CAST()` already implements the check.
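A hedged usage sketch, relying on the documented behaviour that the macro returns -1 when the hash has not been cached yet:

```c
#include <Python.h>
#include <assert.h>

/* Sketch: read the cached hash through the accessor instead of poking at
 * PyASCIIObject.hash directly; fall back to PyObject_Hash(), which
 * computes and caches the value. */
static Py_hash_t
cached_or_computed_hash(PyObject *str)
{
    assert(PyUnicode_Check(str));  /* caller-side check; the macro no longer asserts */
    Py_hash_t hash = PyUnstable_Unicode_GET_CACHED_HASH(str);
    if (hash == -1) {
        hash = PyObject_Hash(str);
    }
    return hash;
}
```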
There are currently a few places where tp_mro could theoretically
become NULL but does not in practice. This commit adds defensive checks for
NULL values to ensure that future changes do not introduce a crash and that
state invariants are upheld.
The assertions added in this commit are all instances where a NULL value would be passed to code that does not expect NULL, so it is better to catch an assertion failure than to crash later on.
There are a few cases where it is OK for lookup_tp_mro to return NULL, such as when the result is passed to is_subtype_with_mro, which handles NULL explicitly.
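A sketch of the defensive pattern; `lookup_tp_mro()` is internal to typeobject.c, so this reads `tp_mro` directly for illustration:

```c
#include <Python.h>
#include <assert.h>

/* Check explicitly where NULL is a legal state, and assert the invariant
 * everywhere else so a violation fails fast instead of crashing later. */
static int
type_has_usable_mro(PyTypeObject *type)
{
    PyObject *mro = type->tp_mro;
    if (mro == NULL) {
        /* Possible before PyType_Ready(); callers must handle it. */
        return 0;
    }
    assert(PyTuple_Check(mro));  /* invariant once the MRO exists */
    return 1;
}
```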
This partially reverts #137047, keeping the tests for GC collectability of the
original class that dataclass adds `__slots__` to.
The reference leaks fixed there are instead addressed by no longer tying the
`__dict__` and `__weakref__` descriptors to (and having them reference) their class.
Instead, the descriptors are shared between all classes that need them (within
an interpreter).
The `__objclass__` of the descriptors is set to `object`, since these
descriptors work with *any* object. (The appropriate checks were already
made in the get/set code, so the `__objclass__` check was redundant.)
The repr of these descriptors (and of any others whose `__objclass__` is `object`)
no longer mentions the objclass.
This change required adjustment of introspection code that checks
`__objclass__` to determine an object's “own” (i.e. not inherited) `__dict__`.
Third-party code that does similar introspection of the internals will also
need adjusting.
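A rough sketch of the shared-descriptor idea, using the public `PyDescr_NewGetSet()` with the generic dict accessors standing in for CPython's internal helpers:

```c
#include <Python.h>

/* Sketch: a getset descriptor created against PyBaseObject_Type (`object`)
 * rather than a concrete class, so it holds no reference to that class and
 * a single instance can be shared by every class that needs it. */
static PyGetSetDef shared_dict_getset = {
    "__dict__", PyObject_GenericGetDict, PyObject_GenericSetDict, NULL, NULL
};

static PyObject *
make_shared_dict_descriptor(void)
{
    /* The resulting descriptor's __objclass__ is `object`. */
    return PyDescr_NewGetSet(&PyBaseObject_Type, &shared_dict_getset);
}
```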
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
This makes the following APIs public:
* `Py_BEGIN_CRITICAL_SECTION_MUTEX(mutex)`
* `Py_BEGIN_CRITICAL_SECTION2_MUTEX(mutex1, mutex2)`
* `void PyCriticalSection_BeginMutex(PyCriticalSection *c, PyMutex *mutex)`
* `void PyCriticalSection2_BeginMutex(PyCriticalSection2 *c, PyMutex *mutex1, PyMutex *mutex2)`
The macros are identical to the corresponding `Py_BEGIN_CRITICAL_SECTION` and
`Py_BEGIN_CRITICAL_SECTION2` macros (e.g., they include braces), but they
accept a `PyMutex` instead of an object.
The new macros are still paired with the existing END macros
(`Py_END_CRITICAL_SECTION`, `Py_END_CRITICAL_SECTION2`).
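A usage sketch with a hypothetical module-level counter guarded by a `PyMutex`:

```c
#include <Python.h>

/* Hypothetical state protected by a plain PyMutex rather than the
 * per-object lock of some Python object. */
static PyMutex counter_mutex;
static Py_ssize_t counter;

static PyObject *
increment(PyObject *Py_UNUSED(self), PyObject *Py_UNUSED(args))
{
    /* Like the object-based variants, a no-op on the default build. */
    Py_BEGIN_CRITICAL_SECTION_MUTEX(&counter_mutex);
    counter++;
    Py_END_CRITICAL_SECTION();
    Py_RETURN_NONE;
}
```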
This fixes data races in typeobject.c in subinterpreters under free-threading. Type flags and slots are only modified in the main interpreter, since all static types are first initialized there.
In the free-threaded build, avoid data races caused by updating type slots
or type flags after the type was initially created. For those (typically
rare) cases, use the stop-the-world mechanism. Remove the use of atomics
when reading or writing type flags. The use of atomics is not sufficient to
avoid races (since flags are sometimes read without a lock and without
atomics) and is no longer required.
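A sketch of the stop-the-world pattern; `_PyEval_StopTheWorld()` and `_PyEval_StartTheWorld()` are internal-only (`Py_BUILD_CORE`) APIs, shown here purely for illustration:

```c
/* Pause every other thread so no reader can observe a torn or
 * half-applied update; plain (non-atomic) stores are then safe. */
static void
set_type_flags_stw(PyInterpreterState *interp, PyTypeObject *type,
                   unsigned long new_flags)
{
#ifdef Py_GIL_DISABLED
    _PyEval_StopTheWorld(interp);
#endif
    type->tp_flags = new_flags;
#ifdef Py_GIL_DISABLED
    _PyEval_StartTheWorld(interp);
#endif
}
```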
Fix two races related to the type lookup cache in the free-threaded build.
These caused test_opcache to sometimes fail, as well as other
hard-to-reproduce failures.
In the free-threaded build, the `_PyObject_LookupSpecial()` call can lead to
reference count contention on the returned function object because it
doesn't use stackrefs. Refactor some of the callers to use
`_PyObject_MaybeCallSpecialNoArgs`, which uses stackrefs internally.
This fixes the scaling bottleneck in the "lookup_special" microbenchmark
in `ftscalingbench.py`. However, there are still some uses of
`_PyObject_LookupSpecial()` that need to be addressed in future PRs.
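A before/after sketch of the refactor; both helpers and `_Py_ID()` are CPython-internal, and the exact signatures shown are assumptions based on this description:

```c
/* Before: materialize the bound method, call it, release it -- the
 * temporary function object's refcount becomes a contention point. */
static PyObject *
call_trunc_before(PyObject *number)
{
    PyObject *meth = _PyObject_LookupSpecial(number, &_Py_ID(__trunc__));
    if (meth == NULL) {
        return NULL;  /* missing method or error */
    }
    PyObject *result = PyObject_CallNoArgs(meth);
    Py_DECREF(meth);
    return result;
}

/* After: the helper uses stackrefs internally, so no counted reference
 * to the function object escapes. */
static PyObject *
call_trunc_after(PyObject *number)
{
    return _PyObject_MaybeCallSpecialNoArgs(number, &_Py_ID(__trunc__));
}
```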
The `PyType_HasFeature()` function reads the flags with a relaxed atomic
load and without holding the type lock. To avoid data races, use atomic
stores if `PyType_Ready()` has already been called.
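For reference, the lock-free read has this shape (mirroring `PyType_HasFeature()` in `Include/object.h`):

```c
static inline int
has_feature(PyTypeObject *type, unsigned long feature)
{
    unsigned long flags;
#ifdef Py_GIL_DISABLED
    /* No type lock held: a relaxed atomic load avoids a data race with
     * any concurrent (atomic) store of tp_flags. */
    flags = _Py_atomic_load_ulong_relaxed(&type->tp_flags);
#else
    flags = type->tp_flags;
#endif
    return (flags & feature) != 0;
}
```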
Fix UBSan failures for `PyTypeObject`.
Introduce a macro cast for `superobject` and remove redundant casts.
Rename the unused parameter in getter/setter methods to `closure`,
matching its role in the `PyGetSetDef` calling convention.
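A sketch of the cast-macro pattern; the struct layout is a stand-in for the `superobject` defined in typeobject.c:

```c
#include <Python.h>

/* Stand-in for the superobject struct internal to typeobject.c. */
typedef struct {
    PyObject_HEAD
    PyTypeObject *type;
    PyObject *obj;
    PyTypeObject *obj_type;
} superobject;

/* One named macro documents the downcast and replaces ad-hoc
 * `(superobject *)` casts scattered through the file. */
#define superobject_CAST(op)    ((superobject *)(op))

/* Getter with its unused parameter named `closure`, matching the
 * PyGetSetDef calling convention. */
static PyObject *
super_obj_get(PyObject *self, void *closure)
{
    (void)closure;  /* unused */
    superobject *su = superobject_CAST(self);
    return Py_NewRef(su->obj != NULL ? su->obj : Py_None);
}
```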
The reference count fields, such as `ob_tid` and `ob_ref_shared`, may be
accessed concurrently in the free-threaded build by a `_Py_TryXGetRef`
or similar operation. The PyObject header fields will be initialized by
`_PyObject_Init`, so only call `memset()` to zero-initialize the remainder
of the allocation.
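A sketch under these assumptions, with the public `PyObject_Init()` standing in for the internal `_PyObject_Init()`:

```c
#include <Python.h>
#include <string.h>

/* Sketch: initialize the header, then zero only the memory past it.
 * On the free-threaded build the header fields (ob_tid, ob_ref_shared,
 * ...) may already be read concurrently by a _Py_TryXGetRef-style
 * operation, so they must not be blindly zeroed by memset(). */
static void
init_allocation(PyObject *op, PyTypeObject *tp, size_t total_size)
{
    PyObject_Init(op, tp);
    memset((char *)op + sizeof(PyObject), 0,
           total_size - sizeof(PyObject));
}
```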
Use a relaxed atomic load in the free-threaded build in
`PyType_Modified()`, because it is called without the type lock held.
It's not necessary to use atomics in `type_modified_unlocked()`.
Also use `FT_ATOMIC_STORE_UINT_RELAXED()` instead of the
`UINT32` variant, because `tp_version_tag` is declared as `unsigned int`.
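A sketch using the internal `FT_ATOMIC_*` wrappers from `pycore_pyatomic_ft_wrappers.h` (no-ops on the default build); the `UINT` load variant is assumed to exist alongside the store variant named above:

```c
/* PyType_Modified() may run without the type lock held, so the version
 * tag is read and cleared with relaxed atomics on the free-threaded
 * build. tp_version_tag is an `unsigned int`, hence the UINT variants. */
static void
modified_without_lock(PyTypeObject *type)
{
    if (FT_ATOMIC_LOAD_UINT_RELAXED(type->tp_version_tag) == 0) {
        return;  /* already invalidated */
    }
    FT_ATOMIC_STORE_UINT_RELAXED(type->tp_version_tag, 0);
}

/* type_modified_unlocked() runs with the type lock held, so plain
 * (non-atomic) accesses suffice. */
static void
modified_with_lock_held(PyTypeObject *type)
{
    type->tp_version_tag = 0;
}
```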