- Unify `get_unicode` and `get_string` into a single function.
- Allow retrieving the underlying `object` attribute, its
size, and the adjusted `start` and `end`, all at once.
Add a new `_PyUnicodeError_GetParams` internal function for this.
(In `exceptions.c`, it's somewhat common to not need all the attributes,
but the compiler has the opportunity to inline the function and optimize
the unneeded work away. Outside that file, we'll usually need all or
most of them at once, as sketched below.)
- Use a common implementation for the following functions:
- `PyUnicode{Decode,Encode}Error_GetEncoding`
- `PyUnicode{Decode,Encode,Translate}Error_GetObject`
- `PyUnicode{Decode,Encode,Translate}Error_{Get,Set}Reason`
- `PyUnicode{Decode,Encode,Translate}Error_{Get,Set}{Start,End}`
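For context, here is a minimal sketch of how a caller outside `exceptions.c`
can fetch all of these at once through the public getters that now share one
implementation; the reporting helper itself is hypothetical:

```c
#include <Python.h>

/* Hypothetical consumer: fetch everything a caller usually wants from a
 * UnicodeDecodeError via the public getters.  Start/end come back already
 * adjusted to lie within the object. */
static int
report_decode_error(PyObject *exc)
{
    PyObject *encoding = PyUnicodeDecodeError_GetEncoding(exc);
    PyObject *object = PyUnicodeDecodeError_GetObject(exc);  /* bytes */
    PyObject *reason = PyUnicodeDecodeError_GetReason(exc);
    Py_ssize_t start, end;
    if (encoding == NULL || object == NULL || reason == NULL
        || PyUnicodeDecodeError_GetStart(exc, &start) < 0
        || PyUnicodeDecodeError_GetEnd(exc, &end) < 0)
    {
        Py_XDECREF(encoding);
        Py_XDECREF(object);
        Py_XDECREF(reason);
        return -1;
    }
    PySys_FormatStdout("%S: cannot decode bytes %zd..%zd of %zd: %S\n",
                       encoding, start, end,
                       PyBytes_GET_SIZE(object), reason);
    Py_DECREF(encoding);
    Py_DECREF(object);
    Py_DECREF(reason);
    return 0;
}
```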
There was a data race on the `utf8` field between `PyUnicode_SET_UTF8` and
`_PyUnicode_CheckConsistency`. Use the `_PyUnicode_UTF8()` accessor,
which uses an atomic load internally, to avoid the data race.
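A sketch of the pattern (illustrative only; the exact internal macros used in
`unicodeobject.c` may differ): the lazily-filled cache field is read with an
atomic acquire load that pairs with the store performed when the cache is set.

```c
#include <Python.h>   /* non-limited API; atomics come from pyatomic.h (3.13+) */

/* Illustrative only: read a lazily-initialized pointer field with an atomic
 * acquire load so a concurrent setter (e.g. PyUnicode_SET_UTF8) cannot race
 * with readers such as _PyUnicode_CheckConsistency. */
static const char *
read_utf8_cache(PyCompactUnicodeObject *u)
{
    return (const char *)_Py_atomic_load_ptr_acquire(&u->utf8);
}
```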
Methods (functions defined in class scope) are likely to be cleaned
up by the GC anyway.
Add a new code flag, `CO_METHOD`, that is set for functions defined
in a class scope. Use it when deciding whether to defer functions.
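A minimal sketch of querying the flag (illustrative only; the actual deferral
condition in the interpreter may differ):

```c
#include <Python.h>

/* Illustrative only: CO_METHOD is set on code objects compiled in a class
 * body, so a function wrapping such a code object is a method. */
static int
function_is_method(PyFunctionObject *func)
{
    PyCodeObject *co = (PyCodeObject *)func->func_code;
    return (co->co_flags & CO_METHOD) != 0;
}
```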
* Add `_PyDictKeys_StringLookupSplit`, which locks the dict keys, and use
it in place of `_PyDictKeys_StringLookup`.
* Change `_PyObject_TryGetInstanceAttribute` to use that function
in the case of split keys.
* Add `unicodekeys_lookup_split` helper which allows code sharing
between `_Py_dict_lookup` and `_PyDictKeys_StringLookupSplit`.
* Fix locking for `STORE_ATTR_INSTANCE_VALUE`. Create
`_GUARD_TYPE_VERSION_AND_LOCK` uop so that the object stays locked and
`tp_version_tag` cannot change.
* Pass `tp_version_tag` to `specialize_dict_access()`, ensuring
the version we store in the cache is the correct one (in case it
changes during the specialization analysis).
* Split `analyze_descriptor` into `analyze_descriptor_load` and
`analyze_descriptor_store` since those don't share much logic.
Add `descriptor_is_class` helper function.
* In `specialize_dict_access`, double-check `_PyObject_GetManagedDict()`
in case we race and the dict was materialized before the lock was taken
(see the sketch after this list).
* Avoid borrowed references in `_Py_Specialize_StoreAttr()`.
* Use `specialize()` and `unspecialize()` helpers.
* Add unit tests to ensure specializing happens as expected in FT builds.
* Add unit tests to attempt to trigger data races (useful for running under TSAN).
* Add `has_split_table` function to `_testinternalcapi`.
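The re-check mentioned above, as a condensed sketch (simplified logic,
internal header assumed; not the actual `specialize_dict_access()` code):

```c
#include <Python.h>
#include "pycore_object.h"   /* _PyObject_GetManagedDict(); internal, needs Py_BUILD_CORE */

/* Illustrative only: another thread may materialize the managed dict between
 * an unlocked check and taking the per-object lock, so the state is
 * re-checked under the lock before committing to the inline-values path. */
static int
can_specialize_inline_values(PyObject *owner)
{
    int ok;
    Py_BEGIN_CRITICAL_SECTION(owner);
    ok = (_PyObject_GetManagedDict(owner) == NULL);
    Py_END_CRITICAL_SECTION();
    return ok;
}
```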
The `PyWeakref_IsDead()` function tests if a weak reference is dead
without any side effects. Although you can also detect if a weak
reference is dead using `PyWeakref_GetRef()`, that function returns a
strong reference that must be `Py_DECREF()`'d, which can introduce side
effects if the last reference is concurrently dropped (at least in the
free threading build).
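For example, a caller that only needs a liveness check can avoid creating a
strong reference at all (assuming the 1/0/-1 return convention of both
functions):

```c
#include <Python.h>

/* Returns 1 if the referent is alive, 0 if dead, -1 on error (not a weak
 * reference).  No strong reference is created, so no destructor can run as
 * a side effect of this check. */
static int
is_alive(PyObject *weakref)
{
    int dead = PyWeakref_IsDead(weakref);
    if (dead < 0) {
        return -1;
    }
    return dead ? 0 : 1;
}
```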
In some cases where the result was previously computed as (nan+nanj), we
can now recover meaningful component values; see e.g. the C11 Annex G.5.1
routine `_Cmultd()`:
>>> z = 1e300+1j
>>> z*(nan+infj) # was (nan+nanj)
(-inf+infj)
This also fixes some complex powers for small integer exponents, which are
computed with an optimized algorithm (exponentiation by squaring):
>>> z**5 # was (nan+nanj)
Traceback (most recent call last):
File "<python-input-1>", line 1, in <module>
z**5
~^^~
OverflowError: complex exponentiation
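For reference, a condensed transcription of the recovery step from the C11
Annex G.5.1 `_Cmultd()` example (this is the standard's illustration, not
CPython's multiplication code):

```c
#include <math.h>
#include <complex.h>

/* Condensed from C11 Annex G.5.1: multiply z*w, recovering infinities when
 * the naive partial products give NaN+NaNi. */
static double complex
_Cmultd(double complex z, double complex w)
{
    double a = creal(z), b = cimag(z), c = creal(w), d = cimag(w);
    double ac = a*c, bd = b*d, ad = a*d, bc = b*c;
    double x = ac - bd, y = ad + bc;
    if (isnan(x) && isnan(y)) {
        int recalc = 0;
        if (isinf(a) || isinf(b)) {                  /* z is infinite */
            a = copysign(isinf(a) ? 1.0 : 0.0, a);   /* "box" the infinity */
            b = copysign(isinf(b) ? 1.0 : 0.0, b);
            if (isnan(c)) c = copysign(0.0, c);      /* NaNs in the other */
            if (isnan(d)) d = copysign(0.0, d);      /* factor become 0   */
            recalc = 1;
        }
        if (isinf(c) || isinf(d)) {                  /* w is infinite */
            c = copysign(isinf(c) ? 1.0 : 0.0, c);
            d = copysign(isinf(d) ? 1.0 : 0.0, d);
            if (isnan(a)) a = copysign(0.0, a);
            if (isnan(b)) b = copysign(0.0, b);
            recalc = 1;
        }
        if (!recalc && (isinf(ac) || isinf(bd) || isinf(ad) || isinf(bc))) {
            /* recover infinities from overflow of the partial products */
            if (isnan(a)) a = copysign(0.0, a);
            if (isnan(b)) b = copysign(0.0, b);
            if (isnan(c)) c = copysign(0.0, c);
            if (isnan(d)) d = copysign(0.0, d);
            recalc = 1;
        }
        if (recalc) {
            x = INFINITY * (a*c - b*d);
            y = INFINITY * (a*d + b*c);
        }
    }
    return CMPLX(x, y);   /* C11 macro: build the complex value directly */
}
```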
Objects may be temporarily "resurrected" in destructors when calling
finalizers or watcher callbacks. We previously undid the resurrection
by decrementing the reference count using `Py_SET_REFCNT`. This was not
thread-safe because other threads might be accessing the object
(modifying its reference count) if it was exposed by the finalizer or
watcher callback, or was temporarily accessed by a racy dictionary or
list access.
This adds internal-only thread-safe functions for temporary object
resurrection during destructors.
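To make the race concrete, here is a simplified illustration of the previous
pattern (not the exact CPython code); the new internal helpers replace these
plain reference-count writes with atomic operations:

```c
#include <Python.h>

/* Illustrative only: the plain writes via Py_SET_REFCNT() are
 * read-modify-write sequences that can race with Py_INCREF/Py_DECREF from
 * another thread once the finalizer has made `op` reachable again. */
static void
finalize_with_resurrection(PyObject *op)
{
    Py_SET_REFCNT(op, 1);                    /* temporary resurrection */
    PyObject_CallFinalizer(op);              /* may publish op to other threads */
    Py_SET_REFCNT(op, Py_REFCNT(op) - 1);    /* racy under free threading */
    if (Py_REFCNT(op) == 0) {
        /* no new references were taken: continue tearing the object down */
    }
}
```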