Merge branch 'main' into gh-63882-doc_strings

This commit is contained in:
Srinivas Reddy Thatiparthy (తాటిపర్తి శ్రీనివాస్ రెడ్డి) 2025-05-13 09:45:59 +05:30 committed by GitHub
commit c68cb4d6f6
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
1840 changed files with 181870 additions and 79215 deletions

View file

@ -1,4 +1,4 @@
trigger: ['main', '3.13', '3.12', '3.11', '3.10', '3.9', '3.8'] trigger: ['main', '3.*']
jobs: jobs:
- job: Prebuild - job: Prebuild

View file

@ -1,6 +1,6 @@
root = true root = true
[*.{py,c,cpp,h,js,rst,md,yml}] [*.{py,c,cpp,h,js,rst,md,yml,yaml}]
trim_trailing_whitespace = true trim_trailing_whitespace = true
insert_final_newline = true insert_final_newline = true
indent_style = space indent_style = space
@ -11,5 +11,5 @@ indent_size = 4
[*.rst] [*.rst]
indent_size = 3 indent_size = 3
[*.{js,yml}] [*.{js,yml,yaml}]
indent_size = 2 indent_size = 2

55
.github/CODEOWNERS vendored
View file

@ -5,11 +5,11 @@
# https://git-scm.com/docs/gitignore#_pattern_format # https://git-scm.com/docs/gitignore#_pattern_format
# GitHub # GitHub
.github/** @ezio-melotti @hugovk .github/** @ezio-melotti @hugovk @AA-Turner
# pre-commit # pre-commit
.pre-commit-config.yaml @hugovk @AlexWaygood .pre-commit-config.yaml @hugovk @AlexWaygood
.ruff.toml @hugovk @AlexWaygood .ruff.toml @hugovk @AlexWaygood @AA-Turner
# Build system # Build system
configure* @erlend-aasland @corona10 configure* @erlend-aasland @corona10
@ -30,6 +30,7 @@ Modules/Setup* @erlend-aasland
Objects/set* @rhettinger Objects/set* @rhettinger
Objects/dict* @methane @markshannon Objects/dict* @methane @markshannon
Objects/typevarobject.c @JelleZijlstra Objects/typevarobject.c @JelleZijlstra
Objects/unionobject.c @JelleZijlstra
Objects/type* @markshannon Objects/type* @markshannon
Objects/codeobject.c @markshannon Objects/codeobject.c @markshannon
Objects/frameobject.c @markshannon Objects/frameobject.c @markshannon
@ -56,6 +57,14 @@ Tools/c-analyzer/ @ericsnowcurrently
# dbm # dbm
**/*dbm* @corona10 @erlend-aasland @serhiy-storchaka **/*dbm* @corona10 @erlend-aasland @serhiy-storchaka
# Doc/ tools
Doc/conf.py @AA-Turner @hugovk
Doc/Makefile @AA-Turner @hugovk
Doc/make.bat @AA-Turner @hugovk
Doc/requirements.txt @AA-Turner @hugovk
Doc/_static/** @AA-Turner @hugovk
Doc/tools/** @AA-Turner @hugovk
# runtime state/lifecycle # runtime state/lifecycle
**/*pylifecycle* @ericsnowcurrently **/*pylifecycle* @ericsnowcurrently
**/*pystate* @ericsnowcurrently **/*pystate* @ericsnowcurrently
@ -98,13 +107,17 @@ Objects/exceptions.c @iritkatriel
# Hashing & cryptographic primitives # Hashing & cryptographic primitives
**/*hashlib* @gpshead @tiran @picnixz **/*hashlib* @gpshead @tiran @picnixz
**/*pyhash* @gpshead @tiran **/*hashopenssl* @gpshead @tiran @picnixz
**/sha* @gpshead @tiran @picnixz **/*pyhash* @gpshead @tiran @picnixz
Modules/md5* @gpshead @tiran @picnixz Modules/*blake* @gpshead @tiran @picnixz
**/*blake* @gpshead @tiran @picnixz Modules/*md5* @gpshead @tiran @picnixz
Modules/_hacl/** @gpshead Modules/*sha* @gpshead @tiran @picnixz
Modules/_hacl/** @gpshead @picnixz
**/*hmac* @gpshead @picnixz **/*hmac* @gpshead @picnixz
# libssl
**/*ssl* @gpshead @picnixz
# logging # logging
**/*logging* @vsajip **/*logging* @vsajip
@ -155,6 +168,9 @@ Include/internal/pycore_time.h @pganssle @abalkin
**/*imap* @python/email-team **/*imap* @python/email-team
**/*poplib* @python/email-team **/*poplib* @python/email-team
# Exclude .mailmap from being owned by @python/email-team
/.mailmap
# Garbage collector # Garbage collector
/Modules/gcmodule.c @pablogsal /Modules/gcmodule.c @pablogsal
/Doc/library/gc.rst @pablogsal /Doc/library/gc.rst @pablogsal
@ -172,10 +188,11 @@ Include/internal/pycore_time.h @pganssle @abalkin
# AST # AST
Python/ast.c @isidentical @JelleZijlstra @eclips4 Python/ast.c @isidentical @JelleZijlstra @eclips4
Python/ast_opt.c @isidentical @eclips4 Python/ast_preprocess.c @isidentical @eclips4
Parser/asdl.py @isidentical @JelleZijlstra @eclips4 Parser/asdl.py @isidentical @JelleZijlstra @eclips4
Parser/asdl_c.py @isidentical @JelleZijlstra @eclips4 Parser/asdl_c.py @isidentical @JelleZijlstra @eclips4
Lib/ast.py @isidentical @JelleZijlstra @eclips4 Lib/ast.py @isidentical @JelleZijlstra @eclips4
Lib/_ast_unparse.py @isidentical @JelleZijlstra @eclips4
Lib/test/test_ast/ @eclips4 Lib/test/test_ast/ @eclips4
# Mock # Mock
@ -281,7 +298,12 @@ Lib/test/test_interpreters/ @ericsnowcurrently
**/*-ios* @freakboy3742 **/*-ios* @freakboy3742
# WebAssembly # WebAssembly
/Tools/wasm/ @brettcannon @freakboy3742 Tools/wasm/config.site-wasm32-emscripten @freakboy3742
/Tools/wasm/README.md @brettcannon @freakboy3742
/Tools/wasm/wasi-env @brettcannon
/Tools/wasm/wasi.py @brettcannon
/Tools/wasm/emscripten @freakboy3742
/Tools/wasm/wasi @brettcannon
# SBOM # SBOM
/Misc/externals.spdx.json @sethmlarson /Misc/externals.spdx.json @sethmlarson
@ -293,6 +315,19 @@ Lib/configparser.py @jaraco
Lib/test/test_configparser.py @jaraco Lib/test/test_configparser.py @jaraco
# Doc sections # Doc sections
Doc/reference/ @willingc Doc/reference/ @willingc @AA-Turner
**/*weakref* @kumaraditya303 **/*weakref* @kumaraditya303
# Colorize
Lib/_colorize.py @hugovk
Lib/test/test__colorize.py @hugovk
# Fuzzing
Modules/_xxtestfuzz/ @ammaraskar
# t-strings
**/*interpolationobject* @lysnikolaou
**/*templateobject* @lysnikolaou
**/*templatelib* @lysnikolaou
**/*tstring* @lysnikolaou

View file

@ -40,6 +40,7 @@ body:
- "3.12" - "3.12"
- "3.13" - "3.13"
- "3.14" - "3.14"
- "3.15"
- "CPython main branch" - "CPython main branch"
validations: validations:
required: true required: true

View file

@ -33,6 +33,7 @@ body:
- "3.12" - "3.12"
- "3.13" - "3.13"
- "3.14" - "3.14"
- "3.15"
- "CPython main branch" - "CPython main branch"
validations: validations:
required: true required: true

View file

@ -7,10 +7,10 @@ # Pull Request title
It should be in the following format: It should be in the following format:
``` ```
gh-NNNNN: Summary of the changes made gh-NNNNNN: Summary of the changes made
``` ```
Where: gh-NNNNN refers to the GitHub issue number. Where: gh-NNNNNN refers to the GitHub issue number.
Most PRs will require an issue number. Trivial changes, like fixing a typo, do not need an issue. Most PRs will require an issue number. Trivial changes, like fixing a typo, do not need an issue.
@ -20,11 +20,11 @@ # Backport Pull Request title
please ensure that the PR title is in the following format: please ensure that the PR title is in the following format:
``` ```
[X.Y] <title from the original PR> (GH-NNNN) [X.Y] <title from the original PR> (GH-NNNNNN)
``` ```
Where: [X.Y] is the branch name, e.g. [3.6]. Where: [X.Y] is the branch name, for example: [3.13].
GH-NNNN refers to the PR number from `main`. GH-NNNNNN refers to the PR number from `main`.
--> -->

View file

@ -1,5 +1,6 @@
self-hosted-runner: self-hosted-runner:
labels: ["ubuntu-24.04-aarch64", "windows-aarch64"] # Pending https://github.com/rhysd/actionlint/issues/533
labels: ["windows-11-arm"]
config-variables: null config-variables: null
@ -7,4 +8,4 @@ paths:
.github/workflows/**/*.yml: .github/workflows/**/*.yml:
ignore: ignore:
- 1st argument of function call is not assignable - 1st argument of function call is not assignable
- SC2(015|038|086|091|097|098|129|155) - SC2(015|038|086|091|097|098|129|155)

View file

@ -18,6 +18,7 @@ jobs:
runs-on: ubuntu-latest runs-on: ubuntu-latest
permissions: permissions:
issues: write issues: write
timeout-minutes: 5
steps: steps:
- uses: actions/github-script@v7 - uses: actions/github-script@v7
with: with:

View file

@ -18,29 +18,32 @@ concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}-reusable group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}-reusable
cancel-in-progress: true cancel-in-progress: true
env:
FORCE_COLOR: 1
jobs: jobs:
check_source: build-context:
name: Change detection name: Change detection
# To use boolean outputs from this job, parse them as JSON. # To use boolean outputs from this job, parse them as JSON.
# Here's some examples: # Here's some examples:
# #
# if: fromJSON(needs.check_source.outputs.run-docs) # if: fromJSON(needs.build-context.outputs.run-docs)
# #
# ${{ # ${{
# fromJSON(needs.check_source.outputs.run_tests) # fromJSON(needs.build-context.outputs.run-tests)
# && 'truthy-branch' # && 'truthy-branch'
# || 'falsy-branch' # || 'falsy-branch'
# }} # }}
# #
uses: ./.github/workflows/reusable-change-detection.yml uses: ./.github/workflows/reusable-context.yml
check-docs: check-docs:
name: Docs name: Docs
needs: check_source needs: build-context
if: fromJSON(needs.check_source.outputs.run-docs) if: fromJSON(needs.build-context.outputs.run-docs)
uses: ./.github/workflows/reusable-docs.yml uses: ./.github/workflows/reusable-docs.yml
check_autoconf_regen: check-autoconf-regen:
name: 'Check if Autoconf files are up to date' name: 'Check if Autoconf files are up to date'
# Don't use ubuntu-latest but a specific version to make the job # Don't use ubuntu-latest but a specific version to make the job
# reproducible: to get the same tools versions (autoconf, aclocal, ...) # reproducible: to get the same tools versions (autoconf, aclocal, ...)
@ -48,8 +51,8 @@ jobs:
container: container:
image: ghcr.io/python/autoconf:2025.01.02.12581854023 image: ghcr.io/python/autoconf:2025.01.02.12581854023
timeout-minutes: 60 timeout-minutes: 60
needs: check_source needs: build-context
if: needs.check_source.outputs.run_tests == 'true' if: needs.build-context.outputs.run-tests == 'true'
steps: steps:
- name: Install Git - name: Install Git
run: | run: |
@ -59,8 +62,6 @@ jobs:
with: with:
fetch-depth: 1 fetch-depth: 1
persist-credentials: false persist-credentials: false
- name: Runner image version
run: echo "IMAGE_VERSION=${ImageVersion}" >> "$GITHUB_ENV"
- name: Check Autoconf and aclocal versions - name: Check Autoconf and aclocal versions
run: | run: |
grep "Generated by GNU Autoconf 2.72" configure grep "Generated by GNU Autoconf 2.72" configure
@ -85,14 +86,14 @@ jobs:
exit 1 exit 1
fi fi
check_generated_files: check-generated-files:
name: 'Check if generated files are up to date' name: 'Check if generated files are up to date'
# Don't use ubuntu-latest but a specific version to make the job # Don't use ubuntu-latest but a specific version to make the job
# reproducible: to get the same tools versions (autoconf, aclocal, ...) # reproducible: to get the same tools versions (autoconf, aclocal, ...)
runs-on: ubuntu-24.04 runs-on: ubuntu-24.04
timeout-minutes: 60 timeout-minutes: 60
needs: check_source needs: build-context
if: needs.check_source.outputs.run_tests == 'true' if: needs.build-context.outputs.run-tests == 'true'
steps: steps:
- uses: actions/checkout@v4 - uses: actions/checkout@v4
with: with:
@ -101,14 +102,14 @@ jobs:
with: with:
python-version: '3.x' python-version: '3.x'
- name: Runner image version - name: Runner image version
run: echo "IMAGE_VERSION=${ImageVersion}" >> "$GITHUB_ENV" run: echo "IMAGE_OS_VERSION=${ImageOS}-${ImageVersion}" >> "$GITHUB_ENV"
- name: Restore config.cache - name: Restore config.cache
uses: actions/cache@v4 uses: actions/cache@v4
with: with:
path: config.cache path: config.cache
# Include env.pythonLocation in key to avoid changes in environment when setup-python updates Python # Include env.pythonLocation in key to avoid changes in environment when setup-python updates Python
key: ${{ github.job }}-${{ runner.os }}-${{ env.IMAGE_VERSION }}-${{ needs.check_source.outputs.config_hash }}-${{ env.pythonLocation }} key: ${{ github.job }}-${{ env.IMAGE_OS_VERSION }}-${{ needs.build-context.outputs.config-hash }}-${{ env.pythonLocation }}
- name: Install Dependencies - name: Install dependencies
run: sudo ./.github/workflows/posix-deps-apt.sh run: sudo ./.github/workflows/posix-deps-apt.sh
- name: Add ccache to PATH - name: Add ccache to PATH
run: echo "PATH=/usr/lib/ccache:$PATH" >> "$GITHUB_ENV" run: echo "PATH=/usr/lib/ccache:$PATH" >> "$GITHUB_ENV"
@ -146,44 +147,37 @@ jobs:
if: github.event_name == 'pull_request' # $GITHUB_EVENT_NAME if: github.event_name == 'pull_request' # $GITHUB_EVENT_NAME
run: make check-c-globals run: make check-c-globals
build_windows: build-windows:
name: >- name: >-
Windows Windows
${{ fromJSON(matrix.free-threading) && '(free-threading)' || '' }} ${{ fromJSON(matrix.free-threading) && '(free-threading)' || '' }}
needs: check_source needs: build-context
if: fromJSON(needs.check_source.outputs.run_tests) if: fromJSON(needs.build-context.outputs.run-windows-tests)
strategy: strategy:
fail-fast: false fail-fast: false
matrix: matrix:
os:
- windows-latest
arch: arch:
- x64 - x64
- Win32
- arm64
free-threading: free-threading:
- false - false
- true - true
include: exclude:
- os: windows-latest # FIXME(diegorusso): change to os: windows-aarch64 # Skip Win32 on free-threaded builds
arch: arm64 - { arch: Win32, free-threading: true }
free-threading: false
- os: windows-latest # FIXME(diegorusso): change to os: windows-aarch64
arch: arm64
free-threading: true
- os: windows-latest
arch: Win32
free-threading: false
uses: ./.github/workflows/reusable-windows.yml uses: ./.github/workflows/reusable-windows.yml
with: with:
os: ${{ matrix.os }}
arch: ${{ matrix.arch }} arch: ${{ matrix.arch }}
free-threading: ${{ matrix.free-threading }} free-threading: ${{ matrix.free-threading }}
build_windows_msi: build-windows-msi:
name: >- # ${{ '' } is a hack to nest jobs under the same sidebar category name: >- # ${{ '' } is a hack to nest jobs under the same sidebar category
Windows MSI${{ '' }} Windows MSI${{ '' }}
needs: check_source needs: build-context
if: fromJSON(needs.check_source.outputs.run-win-msi) if: fromJSON(needs.build-context.outputs.run-windows-msi)
strategy: strategy:
fail-fast: false
matrix: matrix:
arch: arch:
- x86 - x86
@ -193,12 +187,12 @@ jobs:
with: with:
arch: ${{ matrix.arch }} arch: ${{ matrix.arch }}
build_macos: build-macos:
name: >- name: >-
macOS macOS
${{ fromJSON(matrix.free-threading) && '(free-threading)' || '' }} ${{ fromJSON(matrix.free-threading) && '(free-threading)' || '' }}
needs: check_source needs: build-context
if: needs.check_source.outputs.run_tests == 'true' if: needs.build-context.outputs.run-tests == 'true'
strategy: strategy:
fail-fast: false fail-fast: false
matrix: matrix:
@ -223,46 +217,54 @@ jobs:
free-threading: true free-threading: true
uses: ./.github/workflows/reusable-macos.yml uses: ./.github/workflows/reusable-macos.yml
with: with:
config_hash: ${{ needs.check_source.outputs.config_hash }} config_hash: ${{ needs.build-context.outputs.config-hash }}
free-threading: ${{ matrix.free-threading }} free-threading: ${{ matrix.free-threading }}
os: ${{ matrix.os }} os: ${{ matrix.os }}
build_ubuntu: build-ubuntu:
name: >- name: >-
Ubuntu Ubuntu
${{ fromJSON(matrix.free-threading) && '(free-threading)' || '' }} ${{ fromJSON(matrix.free-threading) && '(free-threading)' || '' }}
needs: check_source ${{ fromJSON(matrix.bolt) && '(bolt)' || '' }}
if: needs.check_source.outputs.run_tests == 'true' needs: build-context
if: needs.build-context.outputs.run-tests == 'true'
strategy: strategy:
fail-fast: false
matrix: matrix:
bolt:
- false
- true
free-threading: free-threading:
- false - false
- true - true
os: os:
- ubuntu-24.04 - ubuntu-24.04
- ubuntu-24.04-aarch64 - ubuntu-24.04-arm
is-fork: # only used for the exclusion trick
- ${{ github.repository_owner != 'python' }}
exclude: exclude:
- os: ubuntu-24.04-aarch64 # Do not test BOLT with free-threading, to conserve resources
is-fork: true - bolt: true
free-threading: true
# BOLT currently crashes during instrumentation on aarch64
- os: ubuntu-24.04-arm
bolt: true
uses: ./.github/workflows/reusable-ubuntu.yml uses: ./.github/workflows/reusable-ubuntu.yml
with: with:
config_hash: ${{ needs.check_source.outputs.config_hash }} config_hash: ${{ needs.build-context.outputs.config-hash }}
bolt-optimizations: ${{ matrix.bolt }}
free-threading: ${{ matrix.free-threading }} free-threading: ${{ matrix.free-threading }}
os: ${{ matrix.os }} os: ${{ matrix.os }}
build_ubuntu_ssltests: build-ubuntu-ssltests:
name: 'Ubuntu SSL tests with OpenSSL' name: 'Ubuntu SSL tests with OpenSSL'
runs-on: ${{ matrix.os }} runs-on: ${{ matrix.os }}
timeout-minutes: 60 timeout-minutes: 60
needs: check_source needs: build-context
if: needs.check_source.outputs.run_tests == 'true' if: needs.build-context.outputs.run-tests == 'true'
strategy: strategy:
fail-fast: false fail-fast: false
matrix: matrix:
os: [ubuntu-24.04] os: [ubuntu-24.04]
openssl_ver: [3.0.15, 3.1.7, 3.2.3, 3.3.2, 3.4.0] openssl_ver: [3.0.16, 3.1.8, 3.2.4, 3.3.3, 3.4.1]
# See Tools/ssl/make_ssl_data.py for notes on adding a new version # See Tools/ssl/make_ssl_data.py for notes on adding a new version
env: env:
OPENSSL_VER: ${{ matrix.openssl_ver }} OPENSSL_VER: ${{ matrix.openssl_ver }}
@ -274,15 +276,15 @@ jobs:
with: with:
persist-credentials: false persist-credentials: false
- name: Runner image version - name: Runner image version
run: echo "IMAGE_VERSION=${ImageVersion}" >> "$GITHUB_ENV" run: echo "IMAGE_OS_VERSION=${ImageOS}-${ImageVersion}" >> "$GITHUB_ENV"
- name: Restore config.cache - name: Restore config.cache
uses: actions/cache@v4 uses: actions/cache@v4
with: with:
path: config.cache path: config.cache
key: ${{ github.job }}-${{ runner.os }}-${{ env.IMAGE_VERSION }}-${{ needs.check_source.outputs.config_hash }} key: ${{ github.job }}-${{ env.IMAGE_OS_VERSION }}-${{ needs.build-context.outputs.config-hash }}
- name: Register gcc problem matcher - name: Register gcc problem matcher
run: echo "::add-matcher::.github/problem-matchers/gcc.json" run: echo "::add-matcher::.github/problem-matchers/gcc.json"
- name: Install Dependencies - name: Install dependencies
run: sudo ./.github/workflows/posix-deps-apt.sh run: sudo ./.github/workflows/posix-deps-apt.sh
- name: Configure OpenSSL env vars - name: Configure OpenSSL env vars
run: | run: |
@ -314,22 +316,22 @@ jobs:
- name: SSL tests - name: SSL tests
run: ./python Lib/test/ssltests.py run: ./python Lib/test/ssltests.py
build_wasi: build-wasi:
name: 'WASI' name: 'WASI'
needs: check_source needs: build-context
if: needs.check_source.outputs.run_tests == 'true' if: needs.build-context.outputs.run-tests == 'true'
uses: ./.github/workflows/reusable-wasi.yml uses: ./.github/workflows/reusable-wasi.yml
with: with:
config_hash: ${{ needs.check_source.outputs.config_hash }} config_hash: ${{ needs.build-context.outputs.config-hash }}
test_hypothesis: test-hypothesis:
name: "Hypothesis tests on Ubuntu" name: "Hypothesis tests on Ubuntu"
runs-on: ubuntu-24.04 runs-on: ubuntu-24.04
timeout-minutes: 60 timeout-minutes: 60
needs: check_source needs: build-context
if: needs.check_source.outputs.run_tests == 'true' && needs.check_source.outputs.run_hypothesis == 'true' if: needs.build-context.outputs.run-tests == 'true'
env: env:
OPENSSL_VER: 3.0.15 OPENSSL_VER: 3.0.16
PYTHONSTRICTEXTENSIONBUILD: 1 PYTHONSTRICTEXTENSIONBUILD: 1
steps: steps:
- uses: actions/checkout@v4 - uses: actions/checkout@v4
@ -337,7 +339,7 @@ jobs:
persist-credentials: false persist-credentials: false
- name: Register gcc problem matcher - name: Register gcc problem matcher
run: echo "::add-matcher::.github/problem-matchers/gcc.json" run: echo "::add-matcher::.github/problem-matchers/gcc.json"
- name: Install Dependencies - name: Install dependencies
run: sudo ./.github/workflows/posix-deps-apt.sh run: sudo ./.github/workflows/posix-deps-apt.sh
- name: Configure OpenSSL env vars - name: Configure OpenSSL env vars
run: | run: |
@ -369,12 +371,12 @@ jobs:
- name: Bind mount sources read-only - name: Bind mount sources read-only
run: sudo mount --bind -o ro "$GITHUB_WORKSPACE" "$CPYTHON_RO_SRCDIR" run: sudo mount --bind -o ro "$GITHUB_WORKSPACE" "$CPYTHON_RO_SRCDIR"
- name: Runner image version - name: Runner image version
run: echo "IMAGE_VERSION=${ImageVersion}" >> "$GITHUB_ENV" run: echo "IMAGE_OS_VERSION=${ImageOS}-${ImageVersion}" >> "$GITHUB_ENV"
- name: Restore config.cache - name: Restore config.cache
uses: actions/cache@v4 uses: actions/cache@v4
with: with:
path: ${{ env.CPYTHON_BUILDDIR }}/config.cache path: ${{ env.CPYTHON_BUILDDIR }}/config.cache
key: ${{ github.job }}-${{ runner.os }}-${{ env.IMAGE_VERSION }}-${{ needs.check_source.outputs.config_hash }} key: ${{ github.job }}-${{ env.IMAGE_OS_VERSION }}-${{ needs.build-context.outputs.config-hash }}
- name: Configure CPython out-of-tree - name: Configure CPython out-of-tree
working-directory: ${{ env.CPYTHON_BUILDDIR }} working-directory: ${{ env.CPYTHON_BUILDDIR }}
run: | run: |
@ -420,8 +422,9 @@ jobs:
# failing when executed from inside a virtual environment. # failing when executed from inside a virtual environment.
"${VENV_PYTHON}" -m test \ "${VENV_PYTHON}" -m test \
-W \ -W \
-o \ --slowest \
-j4 \ -j4 \
--timeout 900 \
-x test_asyncio \ -x test_asyncio \
-x test_multiprocessing_fork \ -x test_multiprocessing_fork \
-x test_multiprocessing_forkserver \ -x test_multiprocessing_forkserver \
@ -437,18 +440,18 @@ jobs:
name: hypothesis-example-db name: hypothesis-example-db
path: ${{ env.CPYTHON_BUILDDIR }}/.hypothesis/examples/ path: ${{ env.CPYTHON_BUILDDIR }}/.hypothesis/examples/
build-asan:
build_asan:
name: 'Address sanitizer' name: 'Address sanitizer'
runs-on: ${{ matrix.os }} runs-on: ${{ matrix.os }}
timeout-minutes: 60 timeout-minutes: 60
needs: check_source needs: build-context
if: needs.check_source.outputs.run_tests == 'true' if: needs.build-context.outputs.run-tests == 'true'
strategy: strategy:
fail-fast: false
matrix: matrix:
os: [ubuntu-24.04] os: [ubuntu-24.04]
env: env:
OPENSSL_VER: 3.0.15 OPENSSL_VER: 3.0.16
PYTHONSTRICTEXTENSIONBUILD: 1 PYTHONSTRICTEXTENSIONBUILD: 1
ASAN_OPTIONS: detect_leaks=0:allocator_may_return_null=1:handle_segv=0 ASAN_OPTIONS: detect_leaks=0:allocator_may_return_null=1:handle_segv=0
steps: steps:
@ -456,15 +459,15 @@ jobs:
with: with:
persist-credentials: false persist-credentials: false
- name: Runner image version - name: Runner image version
run: echo "IMAGE_VERSION=${ImageVersion}" >> "$GITHUB_ENV" run: echo "IMAGE_OS_VERSION=${ImageOS}-${ImageVersion}" >> "$GITHUB_ENV"
- name: Restore config.cache - name: Restore config.cache
uses: actions/cache@v4 uses: actions/cache@v4
with: with:
path: config.cache path: config.cache
key: ${{ github.job }}-${{ runner.os }}-${{ env.IMAGE_VERSION }}-${{ needs.check_source.outputs.config_hash }} key: ${{ github.job }}-${{ env.IMAGE_OS_VERSION }}-${{ needs.build-context.outputs.config-hash }}
- name: Register gcc problem matcher - name: Register gcc problem matcher
run: echo "::add-matcher::.github/problem-matchers/gcc.json" run: echo "::add-matcher::.github/problem-matchers/gcc.json"
- name: Install Dependencies - name: Install dependencies
run: sudo ./.github/workflows/posix-deps-apt.sh run: sudo ./.github/workflows/posix-deps-apt.sh
- name: Set up GCC-10 for ASAN - name: Set up GCC-10 for ASAN
uses: egor-tensin/setup-gcc@v1 uses: egor-tensin/setup-gcc@v1
@ -501,35 +504,70 @@ jobs:
- name: Tests - name: Tests
run: xvfb-run make ci run: xvfb-run make ci
build_tsan: build-tsan:
name: 'Thread sanitizer' name: >-
needs: check_source Thread sanitizer
if: needs.check_source.outputs.run_tests == 'true' ${{ fromJSON(matrix.free-threading) && '(free-threading)' || '' }}
needs: build-context
if: needs.build-context.outputs.run-tests == 'true'
strategy:
fail-fast: false
matrix:
free-threading:
- false
- true
uses: ./.github/workflows/reusable-tsan.yml uses: ./.github/workflows/reusable-tsan.yml
with: with:
config_hash: ${{ needs.check_source.outputs.config_hash }} config_hash: ${{ needs.build-context.outputs.config-hash }}
options: ./configure --config-cache --with-thread-sanitizer --with-pydebug free-threading: ${{ matrix.free-threading }}
suppressions_path: Tools/tsan/supressions.txt
tsan_logs_artifact_name: tsan-logs-default
build_tsan_free_threading: cross-build-linux:
name: 'Thread sanitizer (free-threading)' name: Cross build Linux
needs: check_source runs-on: ubuntu-latest
if: needs.check_source.outputs.run_tests == 'true' timeout-minutes: 60
uses: ./.github/workflows/reusable-tsan.yml needs: build-context
with: if: needs.build-context.outputs.run-tests == 'true'
config_hash: ${{ needs.check_source.outputs.config_hash }} steps:
options: ./configure --config-cache --disable-gil --with-thread-sanitizer --with-pydebug - uses: actions/checkout@v4
suppressions_path: Tools/tsan/suppressions_free_threading.txt with:
tsan_logs_artifact_name: tsan-logs-free-threading persist-credentials: false
- name: Runner image version
run: echo "IMAGE_OS_VERSION=${ImageOS}-${ImageVersion}" >> "$GITHUB_ENV"
- name: Restore config.cache
uses: actions/cache@v4
with:
path: config.cache
key: ${{ github.job }}-${{ env.IMAGE_OS_VERSION }}-${{ needs.build-context.outputs.config-hash }}
- name: Register gcc problem matcher
run: echo "::add-matcher::.github/problem-matchers/gcc.json"
- name: Set build dir
run:
# an absolute path outside of the working directoy
echo "BUILD_DIR=$(realpath ${{ github.workspace }}/../build)" >> "$GITHUB_ENV"
- name: Install dependencies
run: sudo ./.github/workflows/posix-deps-apt.sh
- name: Configure host build
run: ./configure --prefix="$BUILD_DIR/host-python"
- name: Install host Python
run: make -j8 install
- name: Run test subset with host build
run: |
"$BUILD_DIR/host-python/bin/python3" -m test test_sysconfig test_site test_embed
- name: Configure cross build
run: ./configure --prefix="$BUILD_DIR/cross-python" --with-build-python="$BUILD_DIR/host-python/bin/python3"
- name: Install cross Python
run: make -j8 install
- name: Run test subset with host build
run: |
"$BUILD_DIR/cross-python/bin/python3" -m test test_sysconfig test_site test_embed
# CIFuzz job based on https://google.github.io/oss-fuzz/getting-started/continuous-integration/ # CIFuzz job based on https://google.github.io/oss-fuzz/getting-started/continuous-integration/
cifuzz: cifuzz:
name: CIFuzz name: CIFuzz
runs-on: ubuntu-latest runs-on: ubuntu-latest
timeout-minutes: 60 timeout-minutes: 60
needs: check_source needs: build-context
if: needs.check_source.outputs.run_cifuzz == 'true' if: needs.build-context.outputs.run-ci-fuzz == 'true'
permissions: permissions:
security-events: write security-events: write
strategy: strategy:
@ -551,8 +589,8 @@ jobs:
output-sarif: true output-sarif: true
sanitizer: ${{ matrix.sanitizer }} sanitizer: ${{ matrix.sanitizer }}
- name: Upload crash - name: Upload crash
uses: actions/upload-artifact@v4
if: failure() && steps.build.outcome == 'success' if: failure() && steps.build.outcome == 'success'
uses: actions/upload-artifact@v4
with: with:
name: ${{ matrix.sanitizer }}-artifacts name: ${{ matrix.sanitizer }}-artifacts
path: ./out/artifacts path: ./out/artifacts
@ -565,72 +603,71 @@ jobs:
all-required-green: # This job does nothing and is only used for the branch protection all-required-green: # This job does nothing and is only used for the branch protection
name: All required checks pass name: All required checks pass
if: always()
needs:
- check_source # Transitive dependency, needed to access `run_tests` value
- check-docs
- check_autoconf_regen
- check_generated_files
- build_macos
- build_ubuntu
- build_ubuntu_ssltests
- build_wasi
- build_windows
- build_windows_msi
- test_hypothesis
- build_asan
- build_tsan
- build_tsan_free_threading
- cifuzz
runs-on: ubuntu-latest runs-on: ubuntu-latest
timeout-minutes: 5
needs:
- build-context # Transitive dependency, needed to access `run-tests` value
- check-docs
- check-autoconf-regen
- check-generated-files
- build-windows
- build-windows-msi
- build-macos
- build-ubuntu
- build-ubuntu-ssltests
- build-wasi
- test-hypothesis
- build-asan
- build-tsan
- cross-build-linux
- cifuzz
if: always()
steps: steps:
- name: Check whether the needed jobs succeeded or failed - name: Check whether the needed jobs succeeded or failed
uses: re-actors/alls-green@05ac9388f0aebcb5727afa17fcccfecd6f8ec5fe uses: re-actors/alls-green@05ac9388f0aebcb5727afa17fcccfecd6f8ec5fe
with: with:
allowed-failures: >- allowed-failures: >-
build_ubuntu_ssltests, build-windows-msi,
build_windows_msi, build-ubuntu-ssltests,
test-hypothesis,
cifuzz, cifuzz,
test_hypothesis,
allowed-skips: >- allowed-skips: >-
${{ ${{
!fromJSON(needs.check_source.outputs.run-docs) !fromJSON(needs.build-context.outputs.run-docs)
&& ' && '
check-docs, check-docs,
' '
|| '' || ''
}} }}
${{ ${{
needs.check_source.outputs.run_tests != 'true' needs.build-context.outputs.run-tests != 'true'
&& ' && '
check_autoconf_regen, check-autoconf-regen,
check_generated_files, check-generated-files,
build_macos, build-macos,
build_ubuntu, build-ubuntu,
build_ubuntu_ssltests, build-ubuntu-ssltests,
build_wasi, build-wasi,
build_windows, test-hypothesis,
build_asan, build-asan,
build_tsan, build-tsan,
build_tsan_free_threading, cross-build-linux,
' '
|| '' || ''
}} }}
${{ ${{
!fromJSON(needs.check_source.outputs.run_cifuzz) !fromJSON(needs.build-context.outputs.run-windows-tests)
&& '
build-windows,
'
|| ''
}}
${{
!fromJSON(needs.build-context.outputs.run-ci-fuzz)
&& ' && '
cifuzz, cifuzz,
' '
|| '' || ''
}} }}
${{
!fromJSON(needs.check_source.outputs.run_hypothesis)
&& '
test_hypothesis,
'
|| ''
}}
jobs: ${{ toJSON(needs) }} jobs: ${{ toJSON(needs) }}

View file

@ -19,6 +19,7 @@ jobs:
runs-on: ubuntu-latest runs-on: ubuntu-latest
permissions: permissions:
pull-requests: write pull-requests: write
timeout-minutes: 5
steps: steps:
- uses: readthedocs/actions/preview@v1 - uses: readthedocs/actions/preview@v1

View file

@ -25,6 +25,9 @@ concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }} group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true cancel-in-progress: true
env:
FORCE_COLOR: 1
jobs: jobs:
interpreter: interpreter:
name: Interpreter (Debug) name: Interpreter (Debug)
@ -71,7 +74,7 @@ jobs:
runner: windows-latest runner: windows-latest
- target: aarch64-pc-windows-msvc/msvc - target: aarch64-pc-windows-msvc/msvc
architecture: ARM64 architecture: ARM64
runner: windows-latest runner: windows-11-arm
- target: x86_64-apple-darwin/clang - target: x86_64-apple-darwin/clang
architecture: x86_64 architecture: x86_64
runner: macos-13 runner: macos-13
@ -83,8 +86,7 @@ jobs:
runner: ubuntu-24.04 runner: ubuntu-24.04
- target: aarch64-unknown-linux-gnu/gcc - target: aarch64-unknown-linux-gnu/gcc
architecture: aarch64 architecture: aarch64
# Forks don't have access to our paid AArch64 runners. These jobs are skipped below: runner: ubuntu-24.04-arm
runner: ${{ github.repository_owner == 'python' && 'ubuntu-24.04-aarch64' || 'ubuntu-24.04' }}
steps: steps:
- uses: actions/checkout@v4 - uses: actions/checkout@v4
with: with:
@ -93,38 +95,30 @@ jobs:
with: with:
python-version: '3.11' python-version: '3.11'
- name: Native Windows # PCbuild downloads LLVM automatically:
if: runner.os == 'Windows' && matrix.architecture != 'ARM64' - name: Windows
if: runner.os == 'Windows'
run: | run: |
choco install llvm --allow-downgrade --no-progress --version ${{ matrix.llvm }}.1.0
./PCbuild/build.bat --experimental-jit ${{ matrix.debug && '-d' || '' }} -p ${{ matrix.architecture }} ./PCbuild/build.bat --experimental-jit ${{ matrix.debug && '-d' || '' }} -p ${{ matrix.architecture }}
./PCbuild/rt.bat ${{ matrix.debug && '-d' || '' }} -p ${{ matrix.architecture }} -q --multiprocess 0 --timeout 4500 --verbose2 --verbose3 ./PCbuild/rt.bat ${{ matrix.debug && '-d' || '' }} -p ${{ matrix.architecture }} -q --multiprocess 0 --timeout 4500 --verbose2 --verbose3
# No tests (yet):
- name: Emulated Windows
if: runner.os == 'Windows' && matrix.architecture == 'ARM64'
run: |
choco install llvm --allow-downgrade --no-progress --version ${{ matrix.llvm }}.1.0
./PCbuild/build.bat --experimental-jit ${{ matrix.debug && '-d' || '' }} -p ${{ matrix.architecture }}
# The `find` line is required as a result of https://github.com/actions/runner-images/issues/9966. # The `find` line is required as a result of https://github.com/actions/runner-images/issues/9966.
# This is a bug in the macOS runner image where the pre-installed Python is installed in the same # This is a bug in the macOS runner image where the pre-installed Python is installed in the same
# directory as the Homebrew Python, which causes the build to fail for macos-13. This line removes # directory as the Homebrew Python, which causes the build to fail for macos-13. This line removes
# the symlink to the pre-installed Python so that the Homebrew Python is used instead. # the symlink to the pre-installed Python so that the Homebrew Python is used instead.
- name: Native macOS - name: macOS
if: runner.os == 'macOS' if: runner.os == 'macOS'
run: | run: |
brew update brew update
find /usr/local/bin -lname '*/Library/Frameworks/Python.framework/*' -delete find /usr/local/bin -lname '*/Library/Frameworks/Python.framework/*' -delete
brew install llvm@${{ matrix.llvm }} brew install llvm@${{ matrix.llvm }}
export SDKROOT="$(xcrun --show-sdk-path)" export SDKROOT="$(xcrun --show-sdk-path)"
./configure --enable-experimental-jit ${{ matrix.debug && '--with-pydebug' || '' }} ./configure --enable-experimental-jit --enable-universalsdk --with-universal-archs=universal2 ${{ matrix.debug && '--with-pydebug' || '' }}
make all --jobs 4 make all --jobs 4
./python.exe -m test --multiprocess 0 --timeout 4500 --verbose2 --verbose3 ./python.exe -m test --multiprocess 0 --timeout 4500 --verbose2 --verbose3
- name: Native Linux - name: Linux
# Forks don't have access to our paid AArch64 runners. Skip those: if: runner.os == 'Linux'
if: runner.os == 'Linux' && (matrix.architecture == 'x86_64' || github.repository_owner == 'python')
run: | run: |
sudo bash -c "$(wget -O - https://apt.llvm.org/llvm.sh)" ./llvm.sh ${{ matrix.llvm }} sudo bash -c "$(wget -O - https://apt.llvm.org/llvm.sh)" ./llvm.sh ${{ matrix.llvm }}
export PATH="$(llvm-config-${{ matrix.llvm }} --bindir):$PATH" export PATH="$(llvm-config-${{ matrix.llvm }} --bindir):$PATH"
@ -132,27 +126,30 @@ jobs:
make all --jobs 4 make all --jobs 4
./python -m test --multiprocess 0 --timeout 4500 --verbose2 --verbose3 ./python -m test --multiprocess 0 --timeout 4500 --verbose2 --verbose3
jit-with-disabled-gil: # XXX: GH-133171
name: Free-Threaded (Debug) # jit-with-disabled-gil:
needs: interpreter # name: Free-Threaded (Debug)
runs-on: ubuntu-24.04 # needs: interpreter
strategy: # runs-on: ubuntu-24.04
matrix: # timeout-minutes: 90
llvm: # strategy:
- 19 # fail-fast: false
steps: # matrix:
- uses: actions/checkout@v4 # llvm:
with: # - 19
persist-credentials: false # steps:
- uses: actions/setup-python@v5 # - uses: actions/checkout@v4
with: # with:
python-version: '3.11' # persist-credentials: false
- name: Build with JIT enabled and GIL disabled # - uses: actions/setup-python@v5
run: | # with:
sudo bash -c "$(wget -O - https://apt.llvm.org/llvm.sh)" ./llvm.sh ${{ matrix.llvm }} # python-version: '3.11'
export PATH="$(llvm-config-${{ matrix.llvm }} --bindir):$PATH" # - name: Build with JIT enabled and GIL disabled
./configure --enable-experimental-jit --with-pydebug --disable-gil # run: |
make all --jobs 4 # sudo bash -c "$(wget -O - https://apt.llvm.org/llvm.sh)" ./llvm.sh ${{ matrix.llvm }}
- name: Run tests # export PATH="$(llvm-config-${{ matrix.llvm }} --bindir):$PATH"
run: | # ./configure --enable-experimental-jit --with-pydebug --disable-gil
./python -m test --multiprocess 0 --timeout 4500 --verbose2 --verbose3 # make all --jobs 4
# - name: Run tests
# run: |
# ./python -m test --multiprocess 0 --timeout 4500 --verbose2 --verbose3

View file

@ -8,15 +8,21 @@ on:
pull_request: pull_request:
paths: paths:
- ".github/workflows/mypy.yml" - ".github/workflows/mypy.yml"
- "Lib/_colorize.py"
- "Lib/_pyrepl/**" - "Lib/_pyrepl/**"
- "Lib/test/libregrtest/**" - "Lib/test/libregrtest/**"
- "Lib/tomllib/**"
- "Misc/mypy/**"
- "Tools/build/compute-changes.py"
- "Tools/build/generate_sbom.py" - "Tools/build/generate_sbom.py"
- "Tools/build/generate-build-details.py"
- "Tools/build/verify_ensurepip_wheels.py"
- "Tools/build/update_file.py"
- "Tools/cases_generator/**" - "Tools/cases_generator/**"
- "Tools/clinic/**" - "Tools/clinic/**"
- "Tools/jit/**" - "Tools/jit/**"
- "Tools/peg_generator/**" - "Tools/peg_generator/**"
- "Tools/requirements-dev.txt" - "Tools/requirements-dev.txt"
- "Tools/wasm/**"
workflow_dispatch: workflow_dispatch:
permissions: permissions:
@ -33,22 +39,22 @@ concurrency:
jobs: jobs:
mypy: mypy:
name: Run mypy on ${{ matrix.target }}
runs-on: ubuntu-latest
timeout-minutes: 10
strategy: strategy:
fail-fast: false fail-fast: false
matrix: matrix:
target: [ target: [
"Lib/_pyrepl", "Lib/_pyrepl",
"Lib/test/libregrtest", "Lib/test/libregrtest",
"Lib/tomllib",
"Tools/build", "Tools/build",
"Tools/cases_generator", "Tools/cases_generator",
"Tools/clinic", "Tools/clinic",
"Tools/jit", "Tools/jit",
"Tools/peg_generator", "Tools/peg_generator",
"Tools/wasm",
] ]
name: Run mypy on ${{ matrix.target }}
runs-on: ubuntu-latest
timeout-minutes: 10
steps: steps:
- uses: actions/checkout@v4 - uses: actions/checkout@v4
with: with:
@ -59,4 +65,5 @@ jobs:
cache: pip cache: pip
cache-dependency-path: Tools/requirements-dev.txt cache-dependency-path: Tools/requirements-dev.txt
- run: pip install -r Tools/requirements-dev.txt - run: pip install -r Tools/requirements-dev.txt
- run: python3 Misc/mypy/make_symlinks.py --symlink
- run: mypy --config-file ${{ matrix.target }}/mypy.ini - run: mypy --config-file ${{ matrix.target }}/mypy.ini

View file

@ -15,6 +15,7 @@ jobs:
runs-on: ubuntu-latest runs-on: ubuntu-latest
timeout-minutes: 10 timeout-minutes: 10
strategy: strategy:
fail-fast: false
matrix: matrix:
include: include:
# if an issue has any of these labels, it will be added # if an issue has any of these labels, it will be added

View file

@ -10,8 +10,7 @@ jobs:
if: github.repository_owner == 'python' if: github.repository_owner == 'python'
runs-on: ubuntu-latest runs-on: ubuntu-latest
permissions: permissions:
issues: write pull-requests: read
pull-requests: write
timeout-minutes: 10 timeout-minutes: 10
steps: steps:
@ -28,8 +27,7 @@ jobs:
if: github.repository_owner == 'python' if: github.repository_owner == 'python'
runs-on: ubuntu-latest runs-on: ubuntu-latest
permissions: permissions:
issues: write pull-requests: read
pull-requests: write
timeout-minutes: 10 timeout-minutes: 10
steps: steps:

View file

@ -1,173 +0,0 @@
name: Reusable change detection
on: # yamllint disable-line rule:truthy
workflow_call:
outputs:
# Some of the referenced steps set outputs conditionally and there may be
# cases when referencing them evaluates to empty strings. It is nice to
# work with proper booleans so they have to be evaluated through JSON
# conversion in the expressions. However, empty strings used like that
# may trigger all sorts of undefined and hard-to-debug behaviors in
# GitHub Actions CI/CD. To help with this, all of the outputs set here
# that are meant to be used as boolean flags (and not arbitrary strings),
# MUST have fallbacks with default values set. A common pattern would be
# to add ` || false` to all such expressions here, in the output
# definitions. They can then later be safely used through the following
# idiom in job conditionals and other expressions. Here's some examples:
#
# if: fromJSON(needs.change-detection.outputs.run-docs)
#
# ${{
# fromJSON(needs.change-detection.outputs.run-tests)
# && 'truthy-branch'
# || 'falsy-branch'
# }}
#
config_hash:
description: Config hash value for use in cache keys
value: ${{ jobs.compute-changes.outputs.config-hash }} # str
run-docs:
description: Whether to build the docs
value: ${{ jobs.compute-changes.outputs.run-docs || false }} # bool
run_tests:
description: Whether to run the regular tests
value: ${{ jobs.compute-changes.outputs.run-tests || false }} # bool
run-win-msi:
description: Whether to run the MSI installer smoke tests
value: >- # bool
${{ jobs.compute-changes.outputs.run-win-msi || false }}
run_hypothesis:
description: Whether to run the Hypothesis tests
value: >- # bool
${{ jobs.compute-changes.outputs.run-hypothesis || false }}
run_cifuzz:
description: Whether to run the CIFuzz job
value: >- # bool
${{ jobs.compute-changes.outputs.run-cifuzz || false }}
jobs:
compute-changes:
name: Compute changed files
runs-on: ubuntu-latest
timeout-minutes: 10
outputs:
config-hash: ${{ steps.config-hash.outputs.hash }}
run-cifuzz: ${{ steps.check.outputs.run-cifuzz }}
run-docs: ${{ steps.docs-changes.outputs.run-docs }}
run-hypothesis: ${{ steps.check.outputs.run-hypothesis }}
run-tests: ${{ steps.check.outputs.run-tests }}
run-win-msi: ${{ steps.win-msi-changes.outputs.run-win-msi }}
steps:
- run: >-
echo '${{ github.event_name }}'
- uses: actions/checkout@v4
with:
persist-credentials: false
- name: Check for source changes
id: check
run: |
if [ -z "$GITHUB_BASE_REF" ]; then
echo "run-tests=true" >> "$GITHUB_OUTPUT"
else
git fetch origin "$GITHUB_BASE_REF" --depth=1
# git diff "origin/$GITHUB_BASE_REF..." (3 dots) may be more
# reliable than git diff "origin/$GITHUB_BASE_REF.." (2 dots),
# but it requires to download more commits (this job uses
# "git fetch --depth=1").
#
# git diff "origin/$GITHUB_BASE_REF..." (3 dots) works with Git
# 2.26, but Git 2.28 is stricter and fails with "no merge base".
#
# git diff "origin/$GITHUB_BASE_REF.." (2 dots) should be enough on
# GitHub, since GitHub starts by merging origin/$GITHUB_BASE_REF
# into the PR branch anyway.
#
# https://github.com/python/core-workflow/issues/373
grep_ignore_args=(
# file extensions
-e '\.md$'
-e '\.rst$'
# top-level folders
-e '^Doc/'
-e '^Misc/'
# configuration files
-e '^\.github/CODEOWNERS$'
-e '^\.pre-commit-config\.yaml$'
-e '\.ruff\.toml$'
-e 'mypy\.ini$'
)
git diff --name-only "origin/$GITHUB_BASE_REF.." \
| grep -qvE "${grep_ignore_args[@]}" \
&& echo "run-tests=true" >> "$GITHUB_OUTPUT" || true
fi
# Check if we should run hypothesis tests
GIT_BRANCH=${GITHUB_BASE_REF:-${GITHUB_REF#refs/heads/}}
echo "$GIT_BRANCH"
if $(echo "$GIT_BRANCH" | grep -q -w '3\.\(8\|9\|10\|11\)'); then
echo "Branch too old for hypothesis tests"
echo "run-hypothesis=false" >> "$GITHUB_OUTPUT"
else
echo "Run hypothesis tests"
echo "run-hypothesis=true" >> "$GITHUB_OUTPUT"
fi
# oss-fuzz maintains a configuration for fuzzing the main branch of
# CPython, so CIFuzz should be run only for code that is likely to be
# merged into the main branch; compatibility with older branches may
# be broken.
FUZZ_RELEVANT_FILES='(\.c$|\.h$|\.cpp$|^configure$|^\.github/workflows/build\.yml$|^Modules/_xxtestfuzz)'
if [ "$GITHUB_BASE_REF" = "main" ] && [ "$(git diff --name-only "origin/$GITHUB_BASE_REF.." | grep -qE $FUZZ_RELEVANT_FILES; echo $?)" -eq 0 ]; then
# The tests are pretty slow so they are executed only for PRs
# changing relevant files.
echo "Run CIFuzz tests"
echo "run-cifuzz=true" >> "$GITHUB_OUTPUT"
else
echo "Branch too old for CIFuzz tests; or no C files were changed"
echo "run-cifuzz=false" >> "$GITHUB_OUTPUT"
fi
- name: Compute hash for config cache key
id: config-hash
run: |
echo "hash=${{ hashFiles('configure', 'configure.ac', '.github/workflows/build.yml') }}" >> "$GITHUB_OUTPUT"
- name: Get a list of the changed documentation-related files
if: github.event_name == 'pull_request'
id: changed-docs-files
uses: Ana06/get-changed-files@v2.3.0
with:
filter: |
Doc/**
Misc/**
.github/workflows/reusable-docs.yml
format: csv # works for paths with spaces
- name: Check for docs changes
# We only want to run this on PRs when related files are changed,
# or when user triggers manual workflow run.
if: >-
(
github.event_name == 'pull_request'
&& steps.changed-docs-files.outputs.added_modified_renamed != ''
) || github.event_name == 'workflow_dispatch'
id: docs-changes
run: |
echo "run-docs=true" >> "${GITHUB_OUTPUT}"
- name: Get a list of the MSI installer-related files
if: github.event_name == 'pull_request'
id: changed-win-msi-files
uses: Ana06/get-changed-files@v2.3.0
with:
filter: |
Tools/msi/**
.github/workflows/reusable-windows-msi.yml
format: csv # works for paths with spaces
- name: Check for changes in MSI installer-related files
# We only want to run this on PRs when related files are changed,
# or when user triggers manual workflow run.
if: >-
(
github.event_name == 'pull_request'
&& steps.changed-win-msi-files.outputs.added_modified_renamed != ''
) || github.event_name == 'workflow_dispatch'
id: win-msi-changes
run: |
echo "run-win-msi=true" >> "${GITHUB_OUTPUT}"

106
.github/workflows/reusable-context.yml vendored Normal file
View file

@ -0,0 +1,106 @@
name: Reusable build context
on: # yamllint disable-line rule:truthy
workflow_call:
outputs:
# Every referenced step MUST always set its output variable,
# either via ``Tools/build/compute-changes.py`` or in this workflow file.
# Boolean outputs (generally prefixed ``run-``) can then later be used
# safely through the following idiom in job conditionals and other
# expressions. Here's some examples:
#
# if: fromJSON(needs.build-context.outputs.run-tests)
#
# ${{
# fromJSON(needs.build-context.outputs.run-tests)
# && 'truthy-branch'
# || 'falsy-branch'
# }}
#
config-hash:
description: Config hash value for use in cache keys
value: ${{ jobs.compute-changes.outputs.config-hash }} # str
run-docs:
description: Whether to build the docs
value: ${{ jobs.compute-changes.outputs.run-docs }} # bool
run-tests:
description: Whether to run the regular tests
value: ${{ jobs.compute-changes.outputs.run-tests }} # bool
run-windows-tests:
description: Whether to run the Windows tests
value: ${{ jobs.compute-changes.outputs.run-windows-tests }} # bool
run-windows-msi:
description: Whether to run the MSI installer smoke tests
value: ${{ jobs.compute-changes.outputs.run-windows-msi }} # bool
run-ci-fuzz:
description: Whether to run the CIFuzz job
value: ${{ jobs.compute-changes.outputs.run-ci-fuzz }} # bool
jobs:
compute-changes:
name: Create context from changed files
runs-on: ubuntu-latest
timeout-minutes: 10
outputs:
config-hash: ${{ steps.config-hash.outputs.hash }}
run-ci-fuzz: ${{ steps.changes.outputs.run-ci-fuzz }}
run-docs: ${{ steps.changes.outputs.run-docs }}
run-tests: ${{ steps.changes.outputs.run-tests }}
run-windows-msi: ${{ steps.changes.outputs.run-windows-msi }}
run-windows-tests: ${{ steps.changes.outputs.run-windows-tests }}
steps:
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3"
- run: >-
echo '${{ github.event_name }}'
- uses: actions/checkout@v4
with:
persist-credentials: false
ref: >-
${{
github.event_name == 'pull_request'
&& github.event.pull_request.head.sha
|| ''
}}
# Adapted from https://github.com/actions/checkout/issues/520#issuecomment-1167205721
- name: Fetch commits to get branch diff
if: github.event_name == 'pull_request'
run: |
set -eux
# Fetch enough history to find a common ancestor commit (aka merge-base):
git fetch origin "${refspec_pr}" --depth=$(( commits + 1 )) \
--no-tags --prune --no-recurse-submodules
# This should get the oldest commit in the local fetched history (which may not be the commit the PR branched from):
COMMON_ANCESTOR=$( git rev-list --first-parent --max-parents=0 --max-count=1 "${branch_pr}" )
DATE=$( git log --date=iso8601 --format=%cd "${COMMON_ANCESTOR}" )
# Get all commits since that commit date from the base branch (eg: main):
git fetch origin "${refspec_base}" --shallow-since="${DATE}" \
--no-tags --prune --no-recurse-submodules
env:
branch_pr: 'origin/${{ github.event.pull_request.head.ref }}'
commits: ${{ github.event.pull_request.commits }}
refspec_base: '+${{ github.event.pull_request.base.sha }}:remotes/origin/${{ github.event.pull_request.base.ref }}'
refspec_pr: '+${{ github.event.pull_request.head.sha }}:remotes/origin/${{ github.event.pull_request.head.ref }}'
# We only want to run tests on PRs when related files are changed,
# or when someone triggers a manual workflow run.
- name: Compute changed files
id: changes
run: python Tools/build/compute-changes.py
env:
GITHUB_DEFAULT_BRANCH: ${{ github.event.repository.default_branch }}
CCF_TARGET_REF: ${{ github.base_ref || github.event.repository.default_branch }}
CCF_HEAD_REF: ${{ github.event.pull_request.head.sha || github.sha }}
- name: Compute hash for config cache key
id: config-hash
run: |
echo "hash=${{ hashFiles('configure', 'configure.ac', '.github/workflows/build.yml') }}" >> "$GITHUB_OUTPUT"

View file

@ -15,7 +15,7 @@ env:
FORCE_COLOR: 1 FORCE_COLOR: 1
jobs: jobs:
build_doc: build-doc:
name: 'Docs' name: 'Docs'
runs-on: ubuntu-latest runs-on: ubuntu-latest
timeout-minutes: 60 timeout-minutes: 60
@ -65,8 +65,8 @@ jobs:
continue-on-error: true continue-on-error: true
run: | run: |
set -Eeuo pipefail set -Eeuo pipefail
# Build docs with the '-n' (nit-picky) option; write warnings to file # Build docs with the nit-picky option; write warnings to file
make -C Doc/ PYTHON=../python SPHINXOPTS="-q -n -W --keep-going -w sphinx-warnings.txt" html make -C Doc/ PYTHON=../python SPHINXOPTS="--quiet --nitpicky --fail-on-warning --warning-file sphinx-warnings.txt" html
- name: 'Check warnings' - name: 'Check warnings'
if: github.event_name == 'pull_request' if: github.event_name == 'pull_request'
run: | run: |
@ -76,26 +76,6 @@ jobs:
--fail-if-improved \ --fail-if-improved \
--fail-if-new-news-nit --fail-if-new-news-nit
# This build doesn't use problem matchers or check annotations
build_doc_oldest_supported_sphinx:
name: 'Docs (Oldest Sphinx)'
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- uses: actions/checkout@v4
with:
persist-credentials: false
- name: 'Set up Python'
uses: actions/setup-python@v5
with:
python-version: '3.13' # known to work with Sphinx 7.2.6
cache: 'pip'
cache-dependency-path: 'Doc/requirements-oldest-sphinx.txt'
- name: 'Install build dependencies'
run: make -C Doc/ venv REQUIREMENTS="requirements-oldest-sphinx.txt"
- name: 'Build HTML documentation'
run: make -C Doc/ SPHINXOPTS="-q" SPHINXERRORHANDLING="-W --keep-going" html
# Run "doctest" on HEAD as new syntax doesn't exist in the latest stable release # Run "doctest" on HEAD as new syntax doesn't exist in the latest stable release
doctest: doctest:
name: 'Doctest' name: 'Doctest'
@ -121,4 +101,4 @@ jobs:
run: make -C Doc/ PYTHON=../python venv run: make -C Doc/ PYTHON=../python venv
# Use "xvfb-run" since some doctest tests open GUI windows # Use "xvfb-run" since some doctest tests open GUI windows
- name: 'Run documentation doctest' - name: 'Run documentation doctest'
run: xvfb-run make -C Doc/ PYTHON=../python SPHINXERRORHANDLING="-W --keep-going" doctest run: xvfb-run make -C Doc/ PYTHON=../python SPHINXERRORHANDLING="--fail-on-warning" doctest

View file

@ -15,9 +15,13 @@ on:
required: true required: true
type: string type: string
env:
FORCE_COLOR: 1
jobs: jobs:
build_macos: build-macos:
name: build and test (${{ inputs.os }}) name: build and test (${{ inputs.os }})
runs-on: ${{ inputs.os }}
timeout-minutes: 60 timeout-minutes: 60
env: env:
HOMEBREW_NO_ANALYTICS: 1 HOMEBREW_NO_ANALYTICS: 1
@ -26,18 +30,17 @@ jobs:
HOMEBREW_NO_INSTALLED_DEPENDENTS_CHECK: 1 HOMEBREW_NO_INSTALLED_DEPENDENTS_CHECK: 1
PYTHONSTRICTEXTENSIONBUILD: 1 PYTHONSTRICTEXTENSIONBUILD: 1
TERM: linux TERM: linux
runs-on: ${{ inputs.os }}
steps: steps:
- uses: actions/checkout@v4 - uses: actions/checkout@v4
with: with:
persist-credentials: false persist-credentials: false
- name: Runner image version - name: Runner image version
run: echo "IMAGE_VERSION=${ImageVersion}" >> "$GITHUB_ENV" run: echo "IMAGE_OS_VERSION=${ImageOS}-${ImageVersion}" >> "$GITHUB_ENV"
- name: Restore config.cache - name: Restore config.cache
uses: actions/cache@v4 uses: actions/cache@v4
with: with:
path: config.cache path: config.cache
key: ${{ github.job }}-${{ inputs.os }}-${{ env.IMAGE_VERSION }}-${{ inputs.config_hash }} key: ${{ github.job }}-${{ env.IMAGE_OS_VERSION }}-${{ inputs.config_hash }}
- name: Install Homebrew dependencies - name: Install Homebrew dependencies
run: | run: |
brew install pkg-config openssl@3.0 xz gdbm tcl-tk@8 make brew install pkg-config openssl@3.0 xz gdbm tcl-tk@8 make

View file

@ -6,38 +6,32 @@ on:
config_hash: config_hash:
required: true required: true
type: string type: string
options: free-threading:
required: true description: Whether to use free-threaded mode
type: string required: false
suppressions_path: type: boolean
description: 'A repo relative path to the suppressions file' default: false
required: true
type: string env:
tsan_logs_artifact_name: FORCE_COLOR: 1
description: 'Name of the TSAN logs artifact. Must be unique for each job.'
required: true
type: string
jobs: jobs:
build_tsan_reusable: build-tsan-reusable:
name: 'Thread sanitizer' name: 'Thread sanitizer'
runs-on: ubuntu-24.04 runs-on: ubuntu-24.04
timeout-minutes: 60 timeout-minutes: 60
env:
OPTIONS: ${{ inputs.options }}
SUPPRESSIONS_PATH: ${{ inputs.suppressions_path }}
steps: steps:
- uses: actions/checkout@v4 - uses: actions/checkout@v4
with: with:
persist-credentials: false persist-credentials: false
- name: Runner image version - name: Runner image version
run: echo "IMAGE_VERSION=${ImageVersion}" >> "$GITHUB_ENV" run: echo "IMAGE_OS_VERSION=${ImageOS}-${ImageVersion}" >> "$GITHUB_ENV"
- name: Restore config.cache - name: Restore config.cache
uses: actions/cache@v4 uses: actions/cache@v4
with: with:
path: config.cache path: config.cache
key: ${{ github.job }}-${{ runner.os }}-${{ env.IMAGE_VERSION }}-${{ inputs.config_hash }} key: ${{ github.job }}-${{ env.IMAGE_OS_VERSION }}-${{ inputs.config_hash }}
- name: Install Dependencies - name: Install dependencies
run: | run: |
sudo ./.github/workflows/posix-deps-apt.sh sudo ./.github/workflows/posix-deps-apt.sh
# Install clang-18 # Install clang-18
@ -50,9 +44,13 @@ jobs:
sudo update-alternatives --set clang++ /usr/bin/clang++-17 sudo update-alternatives --set clang++ /usr/bin/clang++-17
# Reduce ASLR to avoid TSAN crashing # Reduce ASLR to avoid TSAN crashing
sudo sysctl -w vm.mmap_rnd_bits=28 sudo sysctl -w vm.mmap_rnd_bits=28
- name: TSAN Option Setup - name: TSAN option setup
run: | run: |
echo "TSAN_OPTIONS=log_path=${GITHUB_WORKSPACE}/tsan_log suppressions=${GITHUB_WORKSPACE}/${SUPPRESSIONS_PATH} handle_segv=0" >> "$GITHUB_ENV" echo "TSAN_OPTIONS=log_path=${GITHUB_WORKSPACE}/tsan_log suppressions=${GITHUB_WORKSPACE}/Tools/tsan/suppressions${{
fromJSON(inputs.free-threading)
&& '_free_threading'
|| ''
}}.txt handle_segv=0" >> "$GITHUB_ENV"
echo "CC=clang" >> "$GITHUB_ENV" echo "CC=clang" >> "$GITHUB_ENV"
echo "CXX=clang++" >> "$GITHUB_ENV" echo "CXX=clang++" >> "$GITHUB_ENV"
- name: Add ccache to PATH - name: Add ccache to PATH
@ -64,13 +62,21 @@ jobs:
save: ${{ github.event_name == 'push' }} save: ${{ github.event_name == 'push' }}
max-size: "200M" max-size: "200M"
- name: Configure CPython - name: Configure CPython
run: "${OPTIONS}" run: >-
./configure
--config-cache
--with-thread-sanitizer
--with-pydebug
${{ fromJSON(inputs.free-threading) && '--disable-gil' || '' }}
- name: Build CPython - name: Build CPython
run: make -j4 run: make -j4
- name: Display build info - name: Display build info
run: make pythoninfo run: make pythoninfo
- name: Tests - name: Tests
run: ./python -m test --tsan -j4 run: ./python -m test --tsan -j4
- name: Parallel tests
if: fromJSON(inputs.free-threading)
run: ./python -m test --tsan-parallel --parallel-threads=4 -j4
- name: Display TSAN logs - name: Display TSAN logs
if: always() if: always()
run: find "${GITHUB_WORKSPACE}" -name 'tsan_log.*' | xargs head -n 1000 run: find "${GITHUB_WORKSPACE}" -name 'tsan_log.*' | xargs head -n 1000
@ -78,6 +84,11 @@ jobs:
if: always() if: always()
uses: actions/upload-artifact@v4 uses: actions/upload-artifact@v4
with: with:
name: ${{ inputs.tsan_logs_artifact_name }} name: >-
tsan-logs-${{
fromJSON(inputs.free-threading)
&& 'free-threading'
|| 'default'
}}
path: tsan_log.* path: tsan_log.*
if-no-files-found: ignore if-no-files-found: ignore

View file

@ -6,6 +6,11 @@ on:
config_hash: config_hash:
required: true required: true
type: string type: string
bolt-optimizations:
description: Whether to enable BOLT optimizations
required: false
type: boolean
default: false
free-threading: free-threading:
description: Whether to use free-threaded mode description: Whether to use free-threaded mode
required: false required: false
@ -16,13 +21,15 @@ on:
required: true required: true
type: string type: string
env:
FORCE_COLOR: 1
jobs: jobs:
build_ubuntu_reusable: build-ubuntu-reusable:
name: build and test (${{ inputs.os }}) name: build and test (${{ inputs.os }})
timeout-minutes: 60
runs-on: ${{ inputs.os }} runs-on: ${{ inputs.os }}
timeout-minutes: 60
env: env:
FORCE_COLOR: 1
OPENSSL_VER: 3.0.15 OPENSSL_VER: 3.0.15
PYTHONSTRICTEXTENSIONBUILD: 1 PYTHONSTRICTEXTENSIONBUILD: 1
TERM: linux TERM: linux
@ -34,6 +41,12 @@ jobs:
run: echo "::add-matcher::.github/problem-matchers/gcc.json" run: echo "::add-matcher::.github/problem-matchers/gcc.json"
- name: Install dependencies - name: Install dependencies
run: sudo ./.github/workflows/posix-deps-apt.sh run: sudo ./.github/workflows/posix-deps-apt.sh
- name: Install Clang and BOLT
if: ${{ fromJSON(inputs.bolt-optimizations) }}
run: |
sudo bash -c "$(wget -O - https://apt.llvm.org/llvm.sh)" ./llvm.sh 19
sudo apt-get install bolt-19
echo PATH="$(llvm-config-19 --bindir):$PATH" >> $GITHUB_ENV
- name: Configure OpenSSL env vars - name: Configure OpenSSL env vars
run: | run: |
echo "MULTISSL_DIR=${GITHUB_WORKSPACE}/multissl" >> "$GITHUB_ENV" echo "MULTISSL_DIR=${GITHUB_WORKSPACE}/multissl" >> "$GITHUB_ENV"
@ -65,15 +78,18 @@ jobs:
- name: Bind mount sources read-only - name: Bind mount sources read-only
run: sudo mount --bind -o ro "$GITHUB_WORKSPACE" "$CPYTHON_RO_SRCDIR" run: sudo mount --bind -o ro "$GITHUB_WORKSPACE" "$CPYTHON_RO_SRCDIR"
- name: Runner image version - name: Runner image version
run: echo "IMAGE_VERSION=${ImageVersion}" >> "$GITHUB_ENV" run: echo "IMAGE_OS_VERSION=${ImageOS}-${ImageVersion}" >> "$GITHUB_ENV"
- name: Restore config.cache - name: Restore config.cache
uses: actions/cache@v4 uses: actions/cache@v4
with: with:
path: ${{ env.CPYTHON_BUILDDIR }}/config.cache path: ${{ env.CPYTHON_BUILDDIR }}/config.cache
key: ${{ github.job }}-${{ runner.os }}-${{ env.IMAGE_VERSION }}-${{ inputs.config_hash }} key: ${{ github.job }}-${{ env.IMAGE_OS_VERSION }}-${{ inputs.config_hash }}
- name: Configure CPython out-of-tree - name: Configure CPython out-of-tree
working-directory: ${{ env.CPYTHON_BUILDDIR }} working-directory: ${{ env.CPYTHON_BUILDDIR }}
# `test_unpickle_module_race` writes to the source directory, which is
# read-only during builds — so we exclude it from profiling with BOLT.
run: >- run: >-
PROFILE_TASK='-m test --pgo --ignore test_unpickle_module_race'
../cpython-ro-srcdir/configure ../cpython-ro-srcdir/configure
--config-cache --config-cache
--with-pydebug --with-pydebug
@ -81,6 +97,7 @@ jobs:
--enable-safety --enable-safety
--with-openssl="$OPENSSL_DIR" --with-openssl="$OPENSSL_DIR"
${{ fromJSON(inputs.free-threading) && '--disable-gil' || '' }} ${{ fromJSON(inputs.free-threading) && '--disable-gil' || '' }}
${{ fromJSON(inputs.bolt-optimizations) && '--enable-bolt' || '' }}
- name: Build CPython out-of-tree - name: Build CPython out-of-tree
if: ${{ inputs.free-threading }} if: ${{ inputs.free-threading }}
working-directory: ${{ env.CPYTHON_BUILDDIR }} working-directory: ${{ env.CPYTHON_BUILDDIR }}


@ -7,11 +7,14 @@ on:
required: true required: true
type: string type: string
env:
FORCE_COLOR: 1
jobs: jobs:
build_wasi_reusable: build-wasi-reusable:
name: 'build and test' name: 'build and test'
timeout-minutes: 60
runs-on: ubuntu-24.04 runs-on: ubuntu-24.04
timeout-minutes: 60
env: env:
WASMTIME_VERSION: 22.0.0 WASMTIME_VERSION: 22.0.0
WASI_SDK_VERSION: 24 WASI_SDK_VERSION: 24
@ -50,6 +53,8 @@ jobs:
uses: actions/setup-python@v5 uses: actions/setup-python@v5
with: with:
python-version: '3.x' python-version: '3.x'
- name: "Runner image version"
run: echo "IMAGE_OS_VERSION=${ImageOS}-${ImageVersion}" >> "$GITHUB_ENV"
- name: "Restore Python build config.cache" - name: "Restore Python build config.cache"
uses: actions/cache@v4 uses: actions/cache@v4
with: with:
@ -57,7 +62,7 @@ jobs:
# Include env.pythonLocation in key to avoid changes in environment when setup-python updates Python. # Include env.pythonLocation in key to avoid changes in environment when setup-python updates Python.
# Include the hash of `Tools/wasm/wasi.py` as it may change the environment variables. # Include the hash of `Tools/wasm/wasi.py` as it may change the environment variables.
# (Make sure to keep the key in sync with the other config.cache step below.) # (Make sure to keep the key in sync with the other config.cache step below.)
key: ${{ github.job }}-${{ runner.os }}-${{ env.IMAGE_VERSION }}-${{ env.WASI_SDK_VERSION }}-${{ env.WASMTIME_VERSION }}-${{ inputs.config_hash }}-${{ hashFiles('Tools/wasm/wasi.py') }}-${{ env.pythonLocation }} key: ${{ github.job }}-${{ env.IMAGE_OS_VERSION }}-${{ env.WASI_SDK_VERSION }}-${{ env.WASMTIME_VERSION }}-${{ inputs.config_hash }}-${{ hashFiles('Tools/wasm/wasi.py') }}-${{ env.pythonLocation }}
- name: "Configure build Python" - name: "Configure build Python"
run: python3 Tools/wasm/wasi.py configure-build-python -- --config-cache --with-pydebug run: python3 Tools/wasm/wasi.py configure-build-python -- --config-cache --with-pydebug
- name: "Make build Python" - name: "Make build Python"
@ -67,7 +72,7 @@ jobs:
with: with:
path: ${{ env.CROSS_BUILD_WASI }}/config.cache path: ${{ env.CROSS_BUILD_WASI }}/config.cache
# Should be kept in sync with the other config.cache step above. # Should be kept in sync with the other config.cache step above.
key: ${{ github.job }}-${{ runner.os }}-${{ env.IMAGE_VERSION }}-${{ env.WASI_SDK_VERSION }}-${{ env.WASMTIME_VERSION }}-${{ inputs.config_hash }}-${{ hashFiles('Tools/wasm/wasi.py') }}-${{ env.pythonLocation }} key: ${{ github.job }}-${{ env.IMAGE_OS_VERSION }}-${{ env.WASI_SDK_VERSION }}-${{ env.WASMTIME_VERSION }}-${{ inputs.config_hash }}-${{ hashFiles('Tools/wasm/wasi.py') }}-${{ env.pythonLocation }}
- name: "Configure host" - name: "Configure host"
# `--with-pydebug` inferred from configure-build-python # `--with-pydebug` inferred from configure-build-python
run: python3 Tools/wasm/wasi.py configure-host -- --config-cache run: python3 Tools/wasm/wasi.py configure-host -- --config-cache


@@ -11,10 +11,13 @@ on:
permissions:
  contents: read

env:
  FORCE_COLOR: 1

jobs:
  build:
    name: installer for ${{ inputs.arch }}
    runs-on: ${{ inputs.arch == 'arm64' && 'windows-11-arm' || 'windows-latest' }}
    timeout-minutes: 60
    env:
      ARCH: ${{ inputs.arch }}


@ -3,10 +3,6 @@ name: Reusable Windows
on: on:
workflow_call: workflow_call:
inputs: inputs:
os:
description: OS to run on
required: true
type: string
arch: arch:
description: CPU architecture description: CPU architecture
required: true required: true
@ -18,13 +14,14 @@ on:
default: false default: false
env: env:
FORCE_COLOR: 1
IncludeUwp: >- IncludeUwp: >-
true true
jobs: jobs:
build: build:
name: 'build and test (${{ inputs.arch }})' name: Build and test (${{ inputs.arch }})
runs-on: ${{ inputs.os }} runs-on: ${{ inputs.arch == 'arm64' && 'windows-11-arm' || 'windows-latest' }}
timeout-minutes: 60 timeout-minutes: 60
env: env:
ARCH: ${{ inputs.arch }} ARCH: ${{ inputs.arch }}
@ -42,11 +39,9 @@ jobs:
-p "${ARCH}" -p "${ARCH}"
${{ fromJSON(inputs.free-threading) && '--disable-gil' || '' }} ${{ fromJSON(inputs.free-threading) && '--disable-gil' || '' }}
shell: bash shell: bash
- name: Display build info # FIXME(diegorusso): remove the `if` - name: Display build info
if: inputs.arch != 'arm64'
run: .\\python.bat -m test.pythoninfo run: .\\python.bat -m test.pythoninfo
- name: Tests # FIXME(diegorusso): remove the `if` - name: Tests
if: inputs.arch != 'arm64'
run: >- run: >-
.\\PCbuild\\rt.bat .\\PCbuild\\rt.bat
-p "${ARCH}" -p "${ARCH}"


@@ -7,7 +7,6 @@ on:
jobs:
  stale:
    if: github.repository_owner == 'python'
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write

.github/workflows/tail-call.yml (new file)

@ -0,0 +1,140 @@
name: Tail calling interpreter
on:
pull_request:
paths:
- '.github/workflows/tail-call.yml'
- 'Python/bytecodes.c'
- 'Python/ceval.c'
- 'Python/ceval_macros.h'
- 'Python/generated_cases.c.h'
push:
paths:
- '.github/workflows/tail-call.yml'
- 'Python/bytecodes.c'
- 'Python/ceval.c'
- 'Python/ceval_macros.h'
- 'Python/generated_cases.c.h'
workflow_dispatch:
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
env:
FORCE_COLOR: 1
jobs:
tail-call:
name: ${{ matrix.target }}
runs-on: ${{ matrix.runner }}
timeout-minutes: 90
strategy:
fail-fast: false
matrix:
target:
# Un-comment as we add support for more platforms for tail-calling interpreters.
# - i686-pc-windows-msvc/msvc
- x86_64-pc-windows-msvc/msvc
# - aarch64-pc-windows-msvc/msvc
- x86_64-apple-darwin/clang
- aarch64-apple-darwin/clang
- x86_64-unknown-linux-gnu/gcc
- aarch64-unknown-linux-gnu/gcc
- free-threading
llvm:
- 20
include:
# - target: i686-pc-windows-msvc/msvc
# architecture: Win32
# runner: windows-latest
- target: x86_64-pc-windows-msvc/msvc
architecture: x64
runner: windows-latest
# - target: aarch64-pc-windows-msvc/msvc
# architecture: ARM64
# runner: windows-latest
- target: x86_64-apple-darwin/clang
architecture: x86_64
runner: macos-13
- target: aarch64-apple-darwin/clang
architecture: aarch64
runner: macos-14
- target: x86_64-unknown-linux-gnu/gcc
architecture: x86_64
runner: ubuntu-24.04
- target: aarch64-unknown-linux-gnu/gcc
architecture: aarch64
runner: ubuntu-24.04-arm
- target: free-threading
architecture: x86_64
runner: ubuntu-24.04
steps:
- uses: actions/checkout@v4
with:
persist-credentials: false
- uses: actions/setup-python@v5
with:
python-version: '3.11'
- name: Native Windows (debug)
if: runner.os == 'Windows' && matrix.architecture != 'ARM64'
shell: cmd
run: |
choco install llvm --allow-downgrade --no-progress --version ${{ matrix.llvm }}.1.0
set PlatformToolset=clangcl
set LLVMToolsVersion=${{ matrix.llvm }}.1.0
set LLVMInstallDir=C:\Program Files\LLVM
call ./PCbuild/build.bat --tail-call-interp -d -p ${{ matrix.architecture }}
call ./PCbuild/rt.bat -d -p ${{ matrix.architecture }} -q --multiprocess 0 --timeout 4500 --verbose2 --verbose3
# No tests (yet):
- name: Emulated Windows (release)
if: runner.os == 'Windows' && matrix.architecture == 'ARM64'
shell: cmd
run: |
choco install llvm --allow-downgrade --no-progress --version ${{ matrix.llvm }}.1.0
set PlatformToolset=clangcl
set LLVMToolsVersion=${{ matrix.llvm }}.1.0
set LLVMInstallDir=C:\Program Files\LLVM
./PCbuild/build.bat --tail-call-interp -p ${{ matrix.architecture }}
# The `find` line is required as a result of https://github.com/actions/runner-images/issues/9966.
# This is a bug in the macOS runner image where the pre-installed Python is installed in the same
# directory as the Homebrew Python, which causes the build to fail for macos-13. This line removes
# the symlink to the pre-installed Python so that the Homebrew Python is used instead.
# Note: when a new LLVM is released, the homebrew installation directory changes, so the builds will fail.
# We either need to upgrade LLVM or change the directory being pointed to.
- name: Native macOS (release)
if: runner.os == 'macOS'
run: |
brew update
find /usr/local/bin -lname '*/Library/Frameworks/Python.framework/*' -delete
brew install llvm@${{ matrix.llvm }}
export SDKROOT="$(xcrun --show-sdk-path)"
export PATH="/usr/local/opt/llvm/bin:$PATH"
export PATH="/opt/homebrew/opt/llvm/bin:$PATH"
CC=clang-20 ./configure --with-tail-call-interp
make all --jobs 4
./python.exe -m test --multiprocess 0 --timeout 4500 --verbose2 --verbose3
- name: Native Linux (debug)
if: runner.os == 'Linux' && matrix.target != 'free-threading'
run: |
sudo bash -c "$(wget -O - https://apt.llvm.org/llvm.sh)" ./llvm.sh ${{ matrix.llvm }}
export PATH="$(llvm-config-${{ matrix.llvm }} --bindir):$PATH"
CC=clang-20 ./configure --with-tail-call-interp --with-pydebug
make all --jobs 4
./python -m test --multiprocess 0 --timeout 4500 --verbose2 --verbose3
- name: Native Linux with free-threading (release)
if: matrix.target == 'free-threading'
run: |
sudo bash -c "$(wget -O - https://apt.llvm.org/llvm.sh)" ./llvm.sh ${{ matrix.llvm }}
export PATH="$(llvm-config-${{ matrix.llvm }} --bindir):$PATH"
CC=clang-20 ./configure --with-tail-call-interp --disable-gil
make all --jobs 4
./python -m test --multiprocess 0 --timeout 4500 --verbose2 --verbose3

.github/zizmor.yml

@@ -4,3 +4,7 @@ rules:
  dangerous-triggers:
    ignore:
      - documentation-links.yml
  unpinned-uses:
    config:
      policies:
        "*": ref-pin

.gitignore

@@ -38,6 +38,7 @@ tags
TAGS
.vs/
.vscode/
.cache/
gmon.out
.coverage
.mypy_cache/

@@ -137,11 +138,12 @@ Tools/unicode/data/
# hendrikmuhs/ccache-action@v1
/.ccache
/cross-build/
/jit_stencils*.h
/platform
/profile-clean-stamp
/profile-run-stamp
/profile-bolt-stamp
/profile-gen-stamp
/pybuilddir.txt
/pyconfig.h
/python-config

View file

@@ -1,3 +1,4 @@
# This file sets the canonical name for contributors to the repository.
# Documentation: https://git-scm.com/docs/gitmailmap
Willow Chargin <wchargin@gmail.com>
Amethyst Reese <amethyst@n7.gg> <john@noswap.com>

View file

@ -1,6 +1,6 @@
repos: repos:
- repo: https://github.com/astral-sh/ruff-pre-commit - repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.8.2 rev: v0.11.8
hooks: hooks:
- id: ruff - id: ruff
name: Run Ruff (lint) on Doc/ name: Run Ruff (lint) on Doc/
@ -11,9 +11,9 @@ repos:
args: [--exit-non-zero-on-fix] args: [--exit-non-zero-on-fix]
files: ^Lib/test/ files: ^Lib/test/
- id: ruff - id: ruff
name: Run Ruff (lint) on Tools/build/check_warnings.py name: Run Ruff (lint) on Tools/build/
args: [--exit-non-zero-on-fix, --config=Tools/build/.ruff.toml] args: [--exit-non-zero-on-fix, --config=Tools/build/.ruff.toml]
files: ^Tools/build/check_warnings.py files: ^Tools/build/
- id: ruff - id: ruff
name: Run Ruff (lint) on Argument Clinic name: Run Ruff (lint) on Argument Clinic
args: [--exit-non-zero-on-fix, --config=Tools/clinic/.ruff.toml] args: [--exit-non-zero-on-fix, --config=Tools/clinic/.ruff.toml]
@ -22,19 +22,17 @@ repos:
name: Run Ruff (format) on Doc/ name: Run Ruff (format) on Doc/
args: [--check] args: [--check]
files: ^Doc/ files: ^Doc/
- id: ruff-format
name: Run Ruff (format) on Tools/build/check_warnings.py
args: [--check, --config=Tools/build/.ruff.toml]
files: ^Tools/build/check_warnings.py
- repo: https://github.com/psf/black-pre-commit-mirror - repo: https://github.com/psf/black-pre-commit-mirror
rev: 24.10.0 rev: 25.1.0
hooks: hooks:
- id: black
name: Run Black on Tools/build/check_warnings.py
files: ^Tools/build/check_warnings.py
language_version: python3.12
args: [--line-length=79]
- id: black - id: black
name: Run Black on Tools/jit/ name: Run Black on Tools/jit/
files: ^Tools/jit/ files: ^Tools/jit/
language_version: python3.12
- repo: https://github.com/pre-commit/pre-commit-hooks - repo: https://github.com/pre-commit/pre-commit-hooks
rev: v5.0.0 rev: v5.0.0
@ -51,19 +49,19 @@ repos:
types_or: [c, inc, python, rst] types_or: [c, inc, python, rst]
- repo: https://github.com/python-jsonschema/check-jsonschema - repo: https://github.com/python-jsonschema/check-jsonschema
rev: 0.30.0 rev: 0.33.0
hooks: hooks:
- id: check-dependabot - id: check-dependabot
- id: check-github-workflows - id: check-github-workflows
- id: check-readthedocs - id: check-readthedocs
- repo: https://github.com/rhysd/actionlint - repo: https://github.com/rhysd/actionlint
rev: v1.7.4 rev: v1.7.7
hooks: hooks:
- id: actionlint - id: actionlint
- repo: https://github.com/woodruffw/zizmor-pre-commit - repo: https://github.com/woodruffw/zizmor-pre-commit
rev: v0.8.0 rev: v1.6.0
hooks: hooks:
- id: zizmor - id: zizmor

.ruff.toml (new file)

@ -0,0 +1,12 @@
# Default settings for Ruff in CPython
# PYTHON_FOR_REGEN
target-version = "py310"
# PEP 8
line-length = 79
# Enable automatic fixes by default.
# To override this, use ``fix = false`` in a subdirectory's config file
# or ``--no-fix`` on the command line.
fix = true


@@ -1,19 +1,22 @@
# Python for Android

If you obtained this README as part of a release package, then the only
applicable sections are "Prerequisites", "Testing", and "Using in your own app".

If you obtained this README as part of the CPython source tree, then you can
also follow the other sections to compile Python for Android yourself.
However, most app developers should not need to do any of these things manually.
Instead, use one of the tools listed
[here](https://docs.python.org/3/using/android.html), which will provide a much
easier experience.

## Prerequisites

If you already have an Android SDK installed, export the `ANDROID_HOME`
environment variable to point at its location. Otherwise, here's how to install
it:

* Download the "Command line tools" from <https://developer.android.com/studio>.
* Create a directory `android-sdk/cmdline-tools`, and unzip the command line

@@ -22,20 +25,23 @@ ## Prerequisites
  `android-sdk/cmdline-tools/latest`.
* `export ANDROID_HOME=/path/to/android-sdk`

The `android.py` script will automatically use the SDK's `sdkmanager` to install
any packages it needs.

The script also requires the following commands to be on the `PATH`:

* `curl`
* `java` (or set the `JAVA_HOME` environment variable)
## Building

Python can be built for Android on any POSIX platform supported by the Android
development tools, which currently means Linux or macOS.

First we'll make a "build" Python (for your development machine), then use it to
help produce a "host" Python for Android. So make sure you have all the usual
tools and libraries needed to build Python for your development machine.

The easiest way to do a build is to use the `android.py` script. You can either
have it perform the entire build process from start to finish in one step, or

@@ -60,8 +66,8 @@ ## Building
./android.py build HOST
```

In the end you should have a build Python in `cross-build/build`, and a host
Python in `cross-build/HOST`.

You can use `--` as a separator for any of the `configure`-related commands
including `build` itself to pass arguments to the underlying `configure`

@@ -73,14 +79,27 @@ ## Building
```

## Packaging

After building an architecture as described in the section above, you can
package it for release with this command:

```sh
./android.py package HOST
```

`HOST` is defined in the section above.

This will generate a tarball in `cross-build/HOST/dist`, whose structure is
similar to the `Android` directory of the CPython source tree.
## Testing

The Python test suite can be run on Linux, macOS, or Windows:

* On Linux, the emulator needs access to the KVM virtualization interface, and
  a DISPLAY environment variable pointing at an X server. Xvfb is acceptable.

The test suite can usually be run on a device with 2 GB of RAM, but this is
borderline, so you may need to increase it to 4 GB. As of Android

@@ -90,9 +109,16 @@ ## Testing
manually to the same value, or use the Android Studio Device Manager, which will
update both files.

You can run the test suite either:

* Within the CPython repository, after doing a build as described above. On
  Windows, you won't be able to do the build on the same machine, so you'll have
  to copy the `cross-build/HOST/prefix` directory from somewhere else.

* Or by taking a release package built using the `package` command, extracting
  it wherever you want, and using its own copy of `android.py`.

The test script supports the following modes:

* In `--connected` mode, it runs on a device or emulator you have already
  connected to the build machine. List the available devices with

@@ -119,10 +145,10 @@ ## Testing
messages.

Any other arguments on the `android.py test` command line will be passed through
to `python -m test` - use `--` to separate them from android.py's own options.
See the [Python Developer's
Guide](https://devguide.python.org/testing/run-write-tests/) for common options -
most of them will work on Android, except for those that involve subprocesses,
such as `-j`.

Every time you run `android.py test`, changes in pure-Python files in the

@@ -133,4 +159,4 @@ ## Testing
## Using in your own app

See https://docs.python.org/3/using/android.html.


@ -1,10 +1,10 @@
# This script must be sourced with the following variables already set: # This script must be sourced with the following variables already set:
: ${ANDROID_HOME:?} # Path to Android SDK : "${ANDROID_HOME:?}" # Path to Android SDK
: ${HOST:?} # GNU target triplet : "${HOST:?}" # GNU target triplet
# You may also override the following: # You may also override the following:
: ${api_level:=24} # Minimum Android API level the build will run on : "${api_level:=24}" # Minimum Android API level the build will run on
: ${PREFIX:-} # Path in which to find required libraries : "${PREFIX:-}" # Path in which to find required libraries
# Print all messages on stderr so they're visible when running within build-wheel. # Print all messages on stderr so they're visible when running within build-wheel.
@ -27,20 +27,20 @@ fail() {
ndk_version=27.1.12297006 ndk_version=27.1.12297006
ndk=$ANDROID_HOME/ndk/$ndk_version ndk=$ANDROID_HOME/ndk/$ndk_version
if ! [ -e $ndk ]; then if ! [ -e "$ndk" ]; then
log "Installing NDK - this may take several minutes" log "Installing NDK - this may take several minutes"
yes | $ANDROID_HOME/cmdline-tools/latest/bin/sdkmanager "ndk;$ndk_version" yes | "$ANDROID_HOME/cmdline-tools/latest/bin/sdkmanager" "ndk;$ndk_version"
fi fi
if [ $HOST = "arm-linux-androideabi" ]; then if [ "$HOST" = "arm-linux-androideabi" ]; then
clang_triplet=armv7a-linux-androideabi clang_triplet=armv7a-linux-androideabi
else else
clang_triplet=$HOST clang_triplet="$HOST"
fi fi
# These variables are based on BuildSystemMaintainers.md above, and # These variables are based on BuildSystemMaintainers.md above, and
# $ndk/build/cmake/android.toolchain.cmake. # $ndk/build/cmake/android.toolchain.cmake.
toolchain=$(echo $ndk/toolchains/llvm/prebuilt/*) toolchain=$(echo "$ndk"/toolchains/llvm/prebuilt/*)
export AR="$toolchain/bin/llvm-ar" export AR="$toolchain/bin/llvm-ar"
export AS="$toolchain/bin/llvm-as" export AS="$toolchain/bin/llvm-as"
export CC="$toolchain/bin/${clang_triplet}${api_level}-clang" export CC="$toolchain/bin/${clang_triplet}${api_level}-clang"
@ -72,12 +72,12 @@ LDFLAGS="$LDFLAGS -lm"
# -mstackrealign is included where necessary in the clang launcher scripts which are # -mstackrealign is included where necessary in the clang launcher scripts which are
# pointed to by $CC, so we don't need to include it here. # pointed to by $CC, so we don't need to include it here.
if [ $HOST = "arm-linux-androideabi" ]; then if [ "$HOST" = "arm-linux-androideabi" ]; then
CFLAGS="$CFLAGS -march=armv7-a -mthumb" CFLAGS="$CFLAGS -march=armv7-a -mthumb"
fi fi
if [ -n "${PREFIX:-}" ]; then if [ -n "${PREFIX:-}" ]; then
abs_prefix=$(realpath $PREFIX) abs_prefix="$(realpath "$PREFIX")"
CFLAGS="$CFLAGS -I$abs_prefix/include" CFLAGS="$CFLAGS -I$abs_prefix/include"
LDFLAGS="$LDFLAGS -L$abs_prefix/lib" LDFLAGS="$LDFLAGS -L$abs_prefix/lib"
@ -87,11 +87,13 @@ fi
# When compiling C++, some build systems will combine CFLAGS and CXXFLAGS, and some will # When compiling C++, some build systems will combine CFLAGS and CXXFLAGS, and some will
# use CXXFLAGS alone. # use CXXFLAGS alone.
export CXXFLAGS=$CFLAGS export CXXFLAGS="$CFLAGS"
# Use the same variable name as conda-build # Use the same variable name as conda-build
if [ $(uname) = "Darwin" ]; then if [ "$(uname)" = "Darwin" ]; then
export CPU_COUNT=$(sysctl -n hw.ncpu) CPU_COUNT="$(sysctl -n hw.ncpu)"
export CPU_COUNT
else else
export CPU_COUNT=$(nproc) CPU_COUNT="$(nproc)"
export CPU_COUNT
fi fi


@ -2,7 +2,6 @@
import asyncio import asyncio
import argparse import argparse
from glob import glob
import os import os
import re import re
import shlex import shlex
@ -13,6 +12,8 @@
import sysconfig import sysconfig
from asyncio import wait_for from asyncio import wait_for
from contextlib import asynccontextmanager from contextlib import asynccontextmanager
from datetime import datetime, timezone
from glob import glob
from os.path import basename, relpath from os.path import basename, relpath
from pathlib import Path from pathlib import Path
from subprocess import CalledProcessError from subprocess import CalledProcessError
@ -20,11 +21,12 @@
SCRIPT_NAME = Path(__file__).name SCRIPT_NAME = Path(__file__).name
CHECKOUT = Path(__file__).resolve().parent.parent ANDROID_DIR = Path(__file__).resolve().parent
ANDROID_DIR = CHECKOUT / "Android" CHECKOUT = ANDROID_DIR.parent
TESTBED_DIR = ANDROID_DIR / "testbed" TESTBED_DIR = ANDROID_DIR / "testbed"
CROSS_BUILD_DIR = CHECKOUT / "cross-build" CROSS_BUILD_DIR = CHECKOUT / "cross-build"
HOSTS = ["aarch64-linux-android", "x86_64-linux-android"]
APP_ID = "org.python.testbed" APP_ID = "org.python.testbed"
DECODE_ARGS = ("UTF-8", "backslashreplace") DECODE_ARGS = ("UTF-8", "backslashreplace")
@ -58,12 +60,10 @@ def delete_glob(pattern):
path.unlink() path.unlink()
def subdir(name, *, clean=None): def subdir(*parts, create=False):
path = CROSS_BUILD_DIR / name path = CROSS_BUILD_DIR.joinpath(*parts)
if clean:
delete_glob(path)
if not path.exists(): if not path.exists():
if clean is None: if not create:
sys.exit( sys.exit(
f"{path} does not exist. Create it by running the appropriate " f"{path} does not exist. Create it by running the appropriate "
f"`configure` subcommand of {SCRIPT_NAME}.") f"`configure` subcommand of {SCRIPT_NAME}.")
@ -123,7 +123,9 @@ def build_python_path():
def configure_build_python(context): def configure_build_python(context):
os.chdir(subdir("build", clean=context.clean)) if context.clean:
clean("build")
os.chdir(subdir("build", create=True))
command = [relpath(CHECKOUT / "configure")] command = [relpath(CHECKOUT / "configure")]
if context.args: if context.args:
@ -136,35 +138,33 @@ def make_build_python(context):
run(["make", "-j", str(os.cpu_count())]) run(["make", "-j", str(os.cpu_count())])
def unpack_deps(host): def unpack_deps(host, prefix_dir):
deps_url = "https://github.com/beeware/cpython-android-source-deps/releases/download" deps_url = "https://github.com/beeware/cpython-android-source-deps/releases/download"
for name_ver in ["bzip2-1.0.8-2", "libffi-3.4.4-3", "openssl-3.0.15-4", for name_ver in ["bzip2-1.0.8-2", "libffi-3.4.4-3", "openssl-3.0.15-4",
"sqlite-3.45.3-3", "xz-5.4.6-1"]: "sqlite-3.49.1-0", "xz-5.4.6-1"]:
filename = f"{name_ver}-{host}.tar.gz" filename = f"{name_ver}-{host}.tar.gz"
download(f"{deps_url}/{name_ver}/{filename}") download(f"{deps_url}/{name_ver}/{filename}")
run(["tar", "-xf", filename]) shutil.unpack_archive(filename, prefix_dir)
os.remove(filename) os.remove(filename)
def download(url, target_dir="."): def download(url, target_dir="."):
out_path = f"{target_dir}/{basename(url)}" out_path = f"{target_dir}/{basename(url)}"
run(["curl", "-Lf", "-o", out_path, url]) run(["curl", "-Lf", "--retry", "5", "--retry-all-errors", "-o", out_path, url])
return out_path return out_path
def configure_host_python(context): def configure_host_python(context):
host_dir = subdir(context.host, clean=context.clean) if context.clean:
clean(context.host)
host_dir = subdir(context.host, create=True)
prefix_dir = host_dir / "prefix" prefix_dir = host_dir / "prefix"
if not prefix_dir.exists(): if not prefix_dir.exists():
prefix_dir.mkdir() prefix_dir.mkdir()
os.chdir(prefix_dir) unpack_deps(context.host, prefix_dir)
unpack_deps(context.host)
build_dir = host_dir / "build"
build_dir.mkdir(exist_ok=True)
os.chdir(build_dir)
os.chdir(host_dir)
command = [ command = [
# Basic cross-compiling configuration # Basic cross-compiling configuration
relpath(CHECKOUT / "configure"), relpath(CHECKOUT / "configure"),
@ -193,11 +193,10 @@ def make_host_python(context):
# the build. # the build.
host_dir = subdir(context.host) host_dir = subdir(context.host)
prefix_dir = host_dir / "prefix" prefix_dir = host_dir / "prefix"
delete_glob(f"{prefix_dir}/include/python*") for pattern in ("include/python*", "lib/libpython*", "lib/python*"):
delete_glob(f"{prefix_dir}/lib/libpython*") delete_glob(f"{prefix_dir}/{pattern}")
delete_glob(f"{prefix_dir}/lib/python*")
os.chdir(host_dir / "build") os.chdir(host_dir)
run(["make", "-j", str(os.cpu_count())], host=context.host) run(["make", "-j", str(os.cpu_count())], host=context.host)
run(["make", "install", f"prefix={prefix_dir}"], host=context.host) run(["make", "install", f"prefix={prefix_dir}"], host=context.host)
@ -209,8 +208,13 @@ def build_all(context):
step(context) step(context)
def clean(host):
delete_glob(CROSS_BUILD_DIR / host)
def clean_all(context): def clean_all(context):
delete_glob(CROSS_BUILD_DIR) for host in HOSTS + ["build"]:
clean(host)
def setup_sdk(): def setup_sdk():
@ -234,31 +238,26 @@ def setup_sdk():
# To avoid distributing compiled artifacts without corresponding source code, # To avoid distributing compiled artifacts without corresponding source code,
# the Gradle wrapper is not included in the CPython repository. Instead, we # the Gradle wrapper is not included in the CPython repository. Instead, we
# extract it from the Gradle release. # extract it from the Gradle GitHub repository.
def setup_testbed(): def setup_testbed():
if all((TESTBED_DIR / path).exists() for path in [ paths = ["gradlew", "gradlew.bat", "gradle/wrapper/gradle-wrapper.jar"]
"gradlew", "gradlew.bat", "gradle/wrapper/gradle-wrapper.jar", if all((TESTBED_DIR / path).exists() for path in paths):
]):
return return
ver_long = "8.7.0" # The wrapper version isn't important, as any version of the wrapper can
ver_short = ver_long.removesuffix(".0") # download any version of Gradle. The Gradle version actually used for the
# build is specified in testbed/gradle/wrapper/gradle-wrapper.properties.
version = "8.9.0"
for filename in ["gradlew", "gradlew.bat"]: for path in paths:
out_path = download( out_path = TESTBED_DIR / path
f"https://raw.githubusercontent.com/gradle/gradle/v{ver_long}/{filename}", out_path.parent.mkdir(exist_ok=True)
TESTBED_DIR) download(
f"https://raw.githubusercontent.com/gradle/gradle/v{version}/{path}",
out_path.parent,
)
os.chmod(out_path, 0o755) os.chmod(out_path, 0o755)
with TemporaryDirectory(prefix=SCRIPT_NAME) as temp_dir:
bin_zip = download(
f"https://services.gradle.org/distributions/gradle-{ver_short}-bin.zip",
temp_dir)
outer_jar = f"gradle-{ver_short}/lib/plugins/gradle-wrapper-{ver_short}.jar"
run(["unzip", "-d", temp_dir, bin_zip, outer_jar])
run(["unzip", "-o", "-d", f"{TESTBED_DIR}/gradle/wrapper",
f"{temp_dir}/{outer_jar}", "gradle-wrapper.jar"])
# run_testbed will build the app automatically, but it's useful to have this as # run_testbed will build the app automatically, but it's useful to have this as
# a separate command to allow running the app outside of this script. # a separate command to allow running the app outside of this script.
@ -538,6 +537,73 @@ async def run_testbed(context):
raise e.exceptions[0] raise e.exceptions[0]
def package_version(prefix_dir):
patchlevel_glob = f"{prefix_dir}/include/python*/patchlevel.h"
patchlevel_paths = glob(patchlevel_glob)
if len(patchlevel_paths) != 1:
sys.exit(f"{patchlevel_glob} matched {len(patchlevel_paths)} paths.")
for line in open(patchlevel_paths[0]):
if match := re.fullmatch(r'\s*#define\s+PY_VERSION\s+"(.+)"\s*', line):
version = match[1]
break
else:
sys.exit(f"Failed to find Python version in {patchlevel_paths[0]}.")
# If not building against a tagged commit, add a timestamp to the version.
# Follow the PyPA version number rules, as this will make it easier to
# process with other tools.
if version.endswith("+"):
version += datetime.now(timezone.utc).strftime("%Y%m%d.%H%M%S")
return version
def package(context):
prefix_dir = subdir(context.host, "prefix")
version = package_version(prefix_dir)
with TemporaryDirectory(prefix=SCRIPT_NAME) as temp_dir:
temp_dir = Path(temp_dir)
# Include all tracked files from the Android directory.
for line in run(
["git", "ls-files"],
cwd=ANDROID_DIR, capture_output=True, text=True, log=False,
).stdout.splitlines():
src = ANDROID_DIR / line
dst = temp_dir / line
dst.parent.mkdir(parents=True, exist_ok=True)
shutil.copy2(src, dst, follow_symlinks=False)
# Include anything from the prefix directory which could be useful
# either for embedding Python in an app, or building third-party
# packages against it.
for rel_dir, patterns in [
("include", ["openssl*", "python*", "sqlite*"]),
("lib", ["engines-3", "libcrypto*.so", "libpython*", "libsqlite*",
"libssl*.so", "ossl-modules", "python*"]),
("lib/pkgconfig", ["*crypto*", "*ssl*", "*python*", "*sqlite*"]),
]:
for pattern in patterns:
for src in glob(f"{prefix_dir}/{rel_dir}/{pattern}"):
dst = temp_dir / relpath(src, prefix_dir.parent)
dst.parent.mkdir(parents=True, exist_ok=True)
if Path(src).is_dir():
shutil.copytree(
src, dst, symlinks=True,
ignore=lambda *args: ["__pycache__"]
)
else:
shutil.copy2(src, dst, follow_symlinks=False)
dist_dir = subdir(context.host, "dist", create=True)
package_path = shutil.make_archive(
f"{dist_dir}/python-{version}-{context.host}", "gztar", temp_dir
)
print(f"Wrote {package_path}")
# Handle SIGTERM the same way as SIGINT. This ensures that if we're terminated # Handle SIGTERM the same way as SIGINT. This ensures that if we're terminated
# by the buildbot worker, we'll make an attempt to clean up our subprocesses. # by the buildbot worker, we'll make an attempt to clean up our subprocesses.
def install_signal_handler(): def install_signal_handler():
@ -550,6 +616,8 @@ def signal_handler(*args):
def parse_args(): def parse_args():
parser = argparse.ArgumentParser() parser = argparse.ArgumentParser()
subcommands = parser.add_subparsers(dest="subcommand") subcommands = parser.add_subparsers(dest="subcommand")
# Subcommands
build = subcommands.add_parser("build", help="Build everything") build = subcommands.add_parser("build", help="Build everything")
configure_build = subcommands.add_parser("configure-build", configure_build = subcommands.add_parser("configure-build",
help="Run `configure` for the " help="Run `configure` for the "
@ -561,25 +629,27 @@ def parse_args():
make_host = subcommands.add_parser("make-host", make_host = subcommands.add_parser("make-host",
help="Run `make` for Android") help="Run `make` for Android")
subcommands.add_parser( subcommands.add_parser(
"clean", help="Delete the cross-build directory") "clean", help="Delete all build and prefix directories")
subcommands.add_parser(
"build-testbed", help="Build the testbed app")
test = subcommands.add_parser(
"test", help="Run the test suite")
package = subcommands.add_parser("package", help="Make a release package")
# Common arguments
for subcommand in build, configure_build, configure_host: for subcommand in build, configure_build, configure_host:
subcommand.add_argument( subcommand.add_argument(
"--clean", action="store_true", default=False, dest="clean", "--clean", action="store_true", default=False, dest="clean",
help="Delete any relevant directories before building") help="Delete the relevant build and prefix directories first")
for subcommand in build, configure_host, make_host: for subcommand in [build, configure_host, make_host, package]:
subcommand.add_argument( subcommand.add_argument(
"host", metavar="HOST", "host", metavar="HOST", choices=HOSTS,
choices=["aarch64-linux-android", "x86_64-linux-android"],
help="Host triplet: choices=[%(choices)s]") help="Host triplet: choices=[%(choices)s]")
for subcommand in build, configure_build, configure_host: for subcommand in build, configure_build, configure_host:
subcommand.add_argument("args", nargs="*", subcommand.add_argument("args", nargs="*",
help="Extra arguments to pass to `configure`") help="Extra arguments to pass to `configure`")
subcommands.add_parser( # Test arguments
"build-testbed", help="Build the testbed app")
test = subcommands.add_parser(
"test", help="Run the test suite")
test.add_argument( test.add_argument(
"-v", "--verbose", action="count", default=0, "-v", "--verbose", action="count", default=0,
help="Show Gradle output, and non-Python logcat messages. " help="Show Gradle output, and non-Python logcat messages. "
@ -608,14 +678,17 @@ def main():
stream.reconfigure(line_buffering=True) stream.reconfigure(line_buffering=True)
context = parse_args() context = parse_args()
dispatch = {"configure-build": configure_build_python, dispatch = {
"make-build": make_build_python, "configure-build": configure_build_python,
"configure-host": configure_host_python, "make-build": make_build_python,
"make-host": make_host_python, "configure-host": configure_host_python,
"build": build_all, "make-host": make_host_python,
"clean": clean_all, "build": build_all,
"build-testbed": build_testbed, "clean": clean_all,
"test": run_testbed} "build-testbed": build_testbed,
"test": run_testbed,
"package": package,
}
try: try:
result = dispatch[context.subcommand](context) result = dispatch[context.subcommand](context)


@ -1,18 +1,19 @@
# The Gradle wrapper should be downloaded by running `../android.py setup-testbed`. # The Gradle wrapper can be downloaded by running the `test` or `build-testbed`
# commands of android.py.
/gradlew /gradlew
/gradlew.bat /gradlew.bat
/gradle/wrapper/gradle-wrapper.jar /gradle/wrapper/gradle-wrapper.jar
# The repository's top-level .gitignore file ignores all .idea directories, but
# we want to keep any files which can't be regenerated from the Gradle
# configuration.
!.idea/
/.idea/*
!/.idea/inspectionProfiles
*.iml *.iml
.gradle .gradle
/local.properties /local.properties
/.idea/caches
/.idea/deploymentTargetDropdown.xml
/.idea/libraries
/.idea/modules.xml
/.idea/workspace.xml
/.idea/navEditor.xml
/.idea/assetWizardSettings.xml
.DS_Store .DS_Store
/build /build
/captures /captures


@ -0,0 +1,8 @@
<component name="InspectionProjectProfileManager">
<profile version="1.0">
<option name="myName" value="Project Default" />
<inspection_tool class="AndroidLintGradleDependency" enabled="true" level="WEAK WARNING" enabled_by_default="true" editorAttributes="INFO_ATTRIBUTES" />
<inspection_tool class="AndroidLintOldTargetApi" enabled="true" level="WEAK WARNING" enabled_by_default="true" editorAttributes="INFO_ATTRIBUTES" />
<inspection_tool class="UnstableApiUsage" enabled="true" level="WEAK WARNING" enabled_by_default="true" editorAttributes="INFO_ATTRIBUTES" />
</profile>
</component>


@ -6,28 +6,71 @@ plugins {
id("org.jetbrains.kotlin.android") id("org.jetbrains.kotlin.android")
} }
val PYTHON_DIR = file("../../..").canonicalPath val ANDROID_DIR = file("../..")
val PYTHON_CROSS_DIR = "$PYTHON_DIR/cross-build" val PYTHON_DIR = ANDROID_DIR.parentFile!!
val PYTHON_CROSS_DIR = file("$PYTHON_DIR/cross-build")
val inSourceTree = (
ANDROID_DIR.name == "Android" && file("$PYTHON_DIR/pyconfig.h.in").exists()
)
val ABIS = mapOf( val KNOWN_ABIS = mapOf(
"arm64-v8a" to "aarch64-linux-android", "aarch64-linux-android" to "arm64-v8a",
"x86_64" to "x86_64-linux-android", "x86_64-linux-android" to "x86_64",
).filter { file("$PYTHON_CROSS_DIR/${it.value}").exists() } )
if (ABIS.isEmpty()) {
// Discover prefixes.
val prefixes = ArrayList<File>()
if (inSourceTree) {
for ((triplet, _) in KNOWN_ABIS.entries) {
val prefix = file("$PYTHON_CROSS_DIR/$triplet/prefix")
if (prefix.exists()) {
prefixes.add(prefix)
}
}
} else {
// Testbed is inside a release package.
val prefix = file("$ANDROID_DIR/prefix")
if (prefix.exists()) {
prefixes.add(prefix)
}
}
if (prefixes.isEmpty()) {
throw GradleException( throw GradleException(
"No Android ABIs found in $PYTHON_CROSS_DIR: see Android/README.md " + "No Android prefixes found: see README.md for testing instructions"
"for building instructions."
) )
} }
val PYTHON_VERSION = file("$PYTHON_DIR/Include/patchlevel.h").useLines { // Detect Python versions and ABIs.
for (line in it) { lateinit var pythonVersion: String
val match = """#define PY_VERSION\s+"(\d+\.\d+)""".toRegex().find(line) var abis = HashMap<File, String>()
if (match != null) { for ((i, prefix) in prefixes.withIndex()) {
return@useLines match.groupValues[1] val libDir = file("$prefix/lib")
val version = run {
for (filename in libDir.list()!!) {
"""python(\d+\.\d+)""".toRegex().matchEntire(filename)?.let {
return@run it.groupValues[1]
}
} }
throw GradleException("Failed to find Python version in $libDir")
} }
throw GradleException("Failed to find Python version") if (i == 0) {
pythonVersion = version
} else if (pythonVersion != version) {
throw GradleException(
"${prefixes[0]} is Python $pythonVersion, but $prefix is Python $version"
)
}
val libPythonDir = file("$libDir/python$pythonVersion")
val triplet = run {
for (filename in libPythonDir.list()!!) {
"""_sysconfigdata__android_(.+).py""".toRegex().matchEntire(filename)?.let {
return@run it.groupValues[1]
}
}
throw GradleException("Failed to find Python triplet in $libPythonDir")
}
abis[prefix] = KNOWN_ABIS[triplet]!!
} }
@ -53,10 +96,16 @@ android {
versionCode = 1 versionCode = 1
versionName = "1.0" versionName = "1.0"
ndk.abiFilters.addAll(ABIS.keys) ndk.abiFilters.addAll(abis.values)
externalNativeBuild.cmake.arguments( externalNativeBuild.cmake.arguments(
"-DPYTHON_CROSS_DIR=$PYTHON_CROSS_DIR", "-DPYTHON_PREFIX_DIR=" + if (inSourceTree) {
"-DPYTHON_VERSION=$PYTHON_VERSION", // AGP uses the ${} syntax for its own purposes, so use a Jinja style
// placeholder.
"$PYTHON_CROSS_DIR/{{triplet}}/prefix"
} else {
prefixes[0]
},
"-DPYTHON_VERSION=$pythonVersion",
"-DANDROID_SUPPORT_FLEXIBLE_PAGE_SIZES=ON", "-DANDROID_SUPPORT_FLEXIBLE_PAGE_SIZES=ON",
) )
@ -133,24 +182,25 @@ dependencies {
// Create some custom tasks to copy Python and its standard library from // Create some custom tasks to copy Python and its standard library from
// elsewhere in the repository. // elsewhere in the repository.
androidComponents.onVariants { variant -> androidComponents.onVariants { variant ->
val pyPlusVer = "python$PYTHON_VERSION" val pyPlusVer = "python$pythonVersion"
generateTask(variant, variant.sources.assets!!) { generateTask(variant, variant.sources.assets!!) {
into("python") { into("python") {
// Include files such as pyconfig.h are used by some of the tests.
into("include/$pyPlusVer") { into("include/$pyPlusVer") {
for (triplet in ABIS.values) { for (prefix in prefixes) {
from("$PYTHON_CROSS_DIR/$triplet/prefix/include/$pyPlusVer") from("$prefix/include/$pyPlusVer")
} }
duplicatesStrategy = DuplicatesStrategy.EXCLUDE duplicatesStrategy = DuplicatesStrategy.EXCLUDE
} }
into("lib/$pyPlusVer") { into("lib/$pyPlusVer") {
// To aid debugging, the source directory takes priority. // To aid debugging, the source directory takes priority when
from("$PYTHON_DIR/Lib") // running inside a CPython source tree.
if (inSourceTree) {
// The cross-build directory provides ABI-specific files such as from("$PYTHON_DIR/Lib")
// sysconfigdata. }
for (triplet in ABIS.values) { for (prefix in prefixes) {
from("$PYTHON_CROSS_DIR/$triplet/prefix/lib/$pyPlusVer") from("$prefix/lib/$pyPlusVer")
} }
into("site-packages") { into("site-packages") {
@ -164,9 +214,9 @@ androidComponents.onVariants { variant ->
} }
generateTask(variant, variant.sources.jniLibs!!) { generateTask(variant, variant.sources.jniLibs!!) {
for ((abi, triplet) in ABIS.entries) { for ((prefix, abi) in abis.entries) {
into(abi) { into(abi) {
from("$PYTHON_CROSS_DIR/$triplet/prefix/lib") from("$prefix/lib")
include("libpython*.*.so") include("libpython*.*.so")
include("lib*_python.so") include("lib*_python.so")
} }


@@ -1,9 +1,14 @@
cmake_minimum_required(VERSION 3.4.1)
project(testbed)

# Resolve variables from the command line.
string(
    REPLACE {{triplet}} ${CMAKE_LIBRARY_ARCHITECTURE}
    PYTHON_PREFIX_DIR ${PYTHON_PREFIX_DIR}
)
include_directories(${PYTHON_PREFIX_DIR}/include/python${PYTHON_VERSION})
link_directories(${PYTHON_PREFIX_DIR}/lib)
link_libraries(log python${PYTHON_VERSION})
add_library(main_activity SHARED main_activity.c)


@@ -1,7 +1,6 @@
extend = "../.ruff.toml"  # Inherit the project-wide settings
target-version = "py312"  # Align with the version in oldest_supported_sphinx

extend-exclude = [
    "includes/*",
    # Temporary exclusions:


@ -14,15 +14,15 @@ PAPER =
SOURCES = SOURCES =
DISTVERSION = $(shell $(PYTHON) tools/extensions/patchlevel.py) DISTVERSION = $(shell $(PYTHON) tools/extensions/patchlevel.py)
REQUIREMENTS = requirements.txt REQUIREMENTS = requirements.txt
SPHINXERRORHANDLING = -W SPHINXERRORHANDLING = --fail-on-warning
# Internal variables. # Internal variables.
PAPEROPT_a4 = -D latex_elements.papersize=a4paper PAPEROPT_a4 = --define latex_elements.papersize=a4paper
PAPEROPT_letter = -D latex_elements.papersize=letterpaper PAPEROPT_letter = --define latex_elements.papersize=letterpaper
ALLSPHINXOPTS = -b $(BUILDER) \ ALLSPHINXOPTS = --builder $(BUILDER) \
-d build/doctrees \ --doctree-dir build/doctrees \
-j $(JOBS) \ --jobs $(JOBS) \
$(PAPEROPT_$(PAPER)) \ $(PAPEROPT_$(PAPER)) \
$(SPHINXOPTS) $(SPHINXERRORHANDLING) \ $(SPHINXOPTS) $(SPHINXERRORHANDLING) \
. build/$(BUILDER) $(SOURCES) . build/$(BUILDER) $(SOURCES)
@ -144,7 +144,7 @@ pydoc-topics: build
.PHONY: gettext .PHONY: gettext
gettext: BUILDER = gettext gettext: BUILDER = gettext
gettext: override SPHINXOPTS := -d build/doctrees-gettext $(SPHINXOPTS) gettext: override SPHINXOPTS := --doctree-dir build/doctrees-gettext $(SPHINXOPTS)
gettext: build gettext: build
.PHONY: htmlview .PHONY: htmlview
@ -172,7 +172,7 @@ venv:
else \ else \
echo "Creating venv in $(VENVDIR)"; \ echo "Creating venv in $(VENVDIR)"; \
if $(UV) --version >/dev/null 2>&1; then \ if $(UV) --version >/dev/null 2>&1; then \
$(UV) venv $(VENVDIR); \ $(UV) venv --python=$(PYTHON) $(VENVDIR); \
VIRTUAL_ENV=$(VENVDIR) $(UV) pip install -r $(REQUIREMENTS); \ VIRTUAL_ENV=$(VENVDIR) $(UV) pip install -r $(REQUIREMENTS); \
else \ else \
$(PYTHON) -m venv $(VENVDIR); \ $(PYTHON) -m venv $(VENVDIR); \
@ -204,6 +204,7 @@ dist-html:
find dist -name 'python-$(DISTVERSION)-docs-html*' -exec rm -rf {} \; find dist -name 'python-$(DISTVERSION)-docs-html*' -exec rm -rf {} \;
$(MAKE) html $(MAKE) html
cp -pPR build/html dist/python-$(DISTVERSION)-docs-html cp -pPR build/html dist/python-$(DISTVERSION)-docs-html
rm -rf dist/python-$(DISTVERSION)-docs-html/_images/social_previews/
tar -C dist -cf dist/python-$(DISTVERSION)-docs-html.tar python-$(DISTVERSION)-docs-html tar -C dist -cf dist/python-$(DISTVERSION)-docs-html.tar python-$(DISTVERSION)-docs-html
bzip2 -9 -k dist/python-$(DISTVERSION)-docs-html.tar bzip2 -9 -k dist/python-$(DISTVERSION)-docs-html.tar
(cd dist; zip -q -r -9 python-$(DISTVERSION)-docs-html.zip python-$(DISTVERSION)-docs-html) (cd dist; zip -q -r -9 python-$(DISTVERSION)-docs-html.zip python-$(DISTVERSION)-docs-html)
@ -300,20 +301,20 @@ serve:
# By default, Sphinx only rebuilds pages where the page content has changed. # By default, Sphinx only rebuilds pages where the page content has changed.
# This means it doesn't always pick up changes to preferred link targets, etc # This means it doesn't always pick up changes to preferred link targets, etc
# To ensure such changes are picked up, we build the published docs with # To ensure such changes are picked up, we build the published docs with
# `-E` (to ignore the cached environment) and `-a` (to ignore already existing # ``--fresh-env`` (to ignore the cached environment) and ``--write-all``
# output files) # (to ignore already existing output files)
# for development releases: always build # for development releases: always build
.PHONY: autobuild-dev .PHONY: autobuild-dev
autobuild-dev: DISTVERSION = $(shell $(PYTHON) tools/extensions/patchlevel.py --short) autobuild-dev: DISTVERSION = $(shell $(PYTHON) tools/extensions/patchlevel.py --short)
autobuild-dev: autobuild-dev:
$(MAKE) dist-no-html SPHINXOPTS='$(SPHINXOPTS) -Ea -A daily=1' DISTVERSION=$(DISTVERSION) $(MAKE) dist-no-html SPHINXOPTS='$(SPHINXOPTS) --fresh-env --write-all --html-define daily=1' DISTVERSION=$(DISTVERSION)
# for HTML-only rebuilds # for HTML-only rebuilds
.PHONY: autobuild-dev-html .PHONY: autobuild-dev-html
autobuild-dev-html: DISTVERSION = $(shell $(PYTHON) tools/extensions/patchlevel.py --short) autobuild-dev-html: DISTVERSION = $(shell $(PYTHON) tools/extensions/patchlevel.py --short)
autobuild-dev-html: autobuild-dev-html:
$(MAKE) dist-html SPHINXOPTS='$(SPHINXOPTS) -Ea -A daily=1' DISTVERSION=$(DISTVERSION) $(MAKE) dist-html SPHINXOPTS='$(SPHINXOPTS) --fresh-env --write-all --html-define daily=1' DISTVERSION=$(DISTVERSION)
# for stable releases: only build if not in pre-release stage (alpha, beta) # for stable releases: only build if not in pre-release stage (alpha, beta)
# release candidate downloads are okay, since the stable tree can be in that stage # release candidate downloads are okay, since the stable tree can be in that stage


@@ -35,6 +35,10 @@ Allocating Objects on the Heap
The size of the memory allocation is determined from the
:c:member:`~PyTypeObject.tp_basicsize` field of the type object.

Note that this function is unsuitable if *typeobj* has
:c:macro:`Py_TPFLAGS_HAVE_GC` set. For such objects,
use :c:func:`PyObject_GC_New` instead.
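To make the note concrete, here is a minimal sketch (placeholder names, not taken from the patch) of allocating an instance of a type that has :c:macro:`Py_TPFLAGS_HAVE_GC` set:

```c
/* Minimal sketch with placeholder names: allocating an instance of a
   GC-enabled type.  PyObject_New would skip the GC bookkeeping, which is
   why the note above points to the GC variant instead. */
#include <Python.h>

typedef struct {
    PyObject_HEAD
    PyObject *payload;   /* may reference other objects, hence the GC flag */
} MyObject;

static MyObject *
my_object_alloc(PyTypeObject *type)
{
    MyObject *op = PyObject_GC_New(MyObject, type);
    if (op == NULL) {
        return NULL;
    }
    op->payload = NULL;
    PyObject_GC_Track((PyObject *)op);   /* track only once fields are valid */
    return op;
}
```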
.. c:macro:: PyObject_NewVar(TYPE, typeobj, size)

@@ -49,6 +53,10 @@ Allocating Objects on the Heap
fields into the same allocation decreases the number of allocations,
improving the memory management efficiency.

Note that this function is unsuitable if *typeobj* has
:c:macro:`Py_TPFLAGS_HAVE_GC` set. For such objects,
use :c:func:`PyObject_GC_NewVar` instead.

.. c:function:: void PyObject_Del(void *op)


@@ -5,7 +5,7 @@
Parsing arguments and building values
=====================================

These functions are useful when creating your own extension functions and
methods. Additional information and examples are available in
:ref:`extending-index`.
@@ -113,14 +113,18 @@ There are three ways strings and buffers can be converted to C:
``z`` (:class:`str` or ``None``) [const char \*]
   Like ``s``, but the Python object may also be ``None``, in which case the C
   pointer is set to ``NULL``.
   It is the same as ``s?`` with the C pointer initialized to ``NULL``.

``z*`` (:class:`str`, :term:`bytes-like object` or ``None``) [Py_buffer]
   Like ``s*``, but the Python object may also be ``None``, in which case the
   ``buf`` member of the :c:type:`Py_buffer` structure is set to ``NULL``.
   It is the same as ``s*?`` with the ``buf`` member of the :c:type:`Py_buffer`
   structure initialized to ``NULL``.

``z#`` (:class:`str`, read-only :term:`bytes-like object` or ``None``) [const char \*, :c:type:`Py_ssize_t`]
   Like ``s#``, but the Python object may also be ``None``, in which case the C
   pointer is set to ``NULL``.
   It is the same as ``s#?`` with the C pointer initialized to ``NULL``.
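As a quick illustration of the ``z`` family (a hypothetical function, not taken from this patch): when the caller passes ``None``, the C pointer is set to ``NULL``, so the callee has to handle that case.

```c
/* Hypothetical extension function: "z" accepts a str or None. */
#include <Python.h>

static PyObject *
greet(PyObject *self, PyObject *args)
{
    const char *name;   /* set to NULL when the argument is None */

    if (!PyArg_ParseTuple(args, "z", &name)) {
        return NULL;
    }
    if (name == NULL) {
        return PyUnicode_FromString("Hello, anonymous");
    }
    return PyUnicode_FromFormat("Hello, %s", name);
}
```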
``y`` (read-only :term:`bytes-like object`) [const char \*]
   This format converts a bytes-like object to a C pointer to a

@@ -270,6 +274,9 @@ small to receive the value.
   Convert a Python integer to a C :c:expr:`unsigned long` without
   overflow checking.

   .. versionchanged:: 3.14
      Use :meth:`~object.__index__` if available.

``L`` (:class:`int`) [long long]
   Convert a Python integer to a C :c:expr:`long long`.

@@ -277,6 +284,9 @@ small to receive the value.
   Convert a Python integer to a C :c:expr:`unsigned long long`
   without overflow checking.

   .. versionchanged:: 3.14
      Use :meth:`~object.__index__` if available.

``n`` (:class:`int`) [:c:type:`Py_ssize_t`]
   Convert a Python integer to a C :c:type:`Py_ssize_t`.
@ -357,11 +367,37 @@ Other objects
.. versionadded:: 3.3 .. versionadded:: 3.3
``(items)`` (:class:`tuple`) [*matching-items*] ``(items)`` (sequence) [*matching-items*]
The object must be a Python sequence whose length is the number of format units The object must be a Python sequence (except :class:`str`, :class:`bytes`
or :class:`bytearray`) whose length is the number of format units
in *items*. The C arguments must correspond to the individual format units in in *items*. The C arguments must correspond to the individual format units in
*items*. Format units for sequences may be nested. *items*. Format units for sequences may be nested.
If *items* contains format units which store a :ref:`borrowed buffer
<c-arg-borrowed-buffer>` (``s``, ``s#``, ``z``, ``z#``, ``y``, or ``y#``)
or a :term:`borrowed reference` (``S``, ``Y``, ``U``, ``O``, or ``O!``),
the object must be a Python tuple.
The *converter* for the ``O&`` format unit in *items* must not store
a borrowed buffer or a borrowed reference.
.. versionchanged:: 3.14
:class:`str` and :class:`bytearray` objects are no longer accepted as a sequence.
.. deprecated:: 3.14
Non-tuple sequences are deprecated if *items* contains format units
which store a borrowed buffer or a borrowed reference.
``unit?`` (anything or ``None``) [*matching-variable(s)*]
``?`` modifies the behavior of the preceding format unit.
The C variable(s) corresponding to that parameter should be initialized
to their default value --- when the argument is ``None``,
:c:func:`PyArg_ParseTuple` does not touch the contents of the corresponding
C variable(s).
If the argument is not ``None``, it is parsed according to the specified
format unit.
.. versionadded:: 3.14
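As an illustration, here is a minimal sketch of how the ``?`` modifier described
above might be used, assuming the 3.14 behaviour documented here; the method and
its default value are hypothetical::

   static PyObject *
   example_set_timeout(PyObject *self, PyObject *args)
   {
       int timeout = 30;  /* default, left untouched when the argument is None */

       /* "i?" accepts an int or None; on None the C variable keeps its value. */
       if (!PyArg_ParseTuple(args, "i?", &timeout)) {
           return NULL;
       }
       return PyLong_FromLong(timeout);
   }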
A few other characters have a meaning in a format string. These may not occur A few other characters have a meaning in a format string. These may not occur
inside nested parentheses. They are: inside nested parentheses. They are:
@ -639,12 +675,18 @@ Building values
``L`` (:class:`int`) [long long] ``L`` (:class:`int`) [long long]
Convert a C :c:expr:`long long` to a Python integer object. Convert a C :c:expr:`long long` to a Python integer object.
.. _capi-py-buildvalue-format-K:
``K`` (:class:`int`) [unsigned long long] ``K`` (:class:`int`) [unsigned long long]
Convert a C :c:expr:`unsigned long long` to a Python integer object. Convert a C :c:expr:`unsigned long long` to a Python integer object.
``n`` (:class:`int`) [:c:type:`Py_ssize_t`] ``n`` (:class:`int`) [:c:type:`Py_ssize_t`]
Convert a C :c:type:`Py_ssize_t` to a Python integer. Convert a C :c:type:`Py_ssize_t` to a Python integer.
``p`` (:class:`bool`) [int]
Convert a C :c:expr:`int` to a Python :class:`bool` object.
.. versionadded:: 3.14
``c`` (:class:`bytes` of length 1) [char] ``c`` (:class:`bytes` of length 1) [char]
Convert a C :c:expr:`int` representing a byte to a Python :class:`bytes` object of Convert a C :c:expr:`int` representing a byte to a Python :class:`bytes` object of
length 1. length 1.
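A minimal sketch of the new ``p`` format unit shown above (the helper name is
hypothetical); it wraps a C truth value in a Python :class:`bool`::

   static PyObject *
   example_is_positive(long value)
   {
       /* "p" converts a C int (0 or 1 here) to Python True/False. */
       return Py_BuildValue("p", value > 0 ? 1 : 0);
   }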
@ -26,17 +26,19 @@ characteristic of being backed by a possibly large memory buffer. It is
then desirable, in some situations, to access that buffer directly and then desirable, in some situations, to access that buffer directly and
without intermediate copying. without intermediate copying.
Python provides such a facility at the C level in the form of the :ref:`buffer Python provides such a facility at the C and Python level in the form of the
protocol <bufferobjects>`. This protocol has two sides: :ref:`buffer protocol <bufferobjects>`. This protocol has two sides:
.. index:: single: PyBufferProcs (C type) .. index:: single: PyBufferProcs (C type)
- on the producer side, a type can export a "buffer interface" which allows - on the producer side, a type can export a "buffer interface" which allows
objects of that type to expose information about their underlying buffer. objects of that type to expose information about their underlying buffer.
This interface is described in the section :ref:`buffer-structs`; This interface is described in the section :ref:`buffer-structs`; for
Python see :ref:`python-buffer-protocol`.
- on the consumer side, several means are available to obtain a pointer to - on the consumer side, several means are available to obtain a pointer to
the raw underlying data of an object (for example a method parameter). the raw underlying data of an object (for example a method parameter). For
Python see :class:`memoryview`.
Simple objects such as :class:`bytes` and :class:`bytearray` expose their Simple objects such as :class:`bytes` and :class:`bytearray` expose their
underlying buffer in byte-oriented form. Other forms are possible; for example, underlying buffer in byte-oriented form. Other forms are possible; for example,
@ -62,6 +64,10 @@ In both cases, :c:func:`PyBuffer_Release` must be called when the buffer
isn't needed anymore. Failure to do so could lead to various issues such as isn't needed anymore. Failure to do so could lead to various issues such as
resource leaks. resource leaks.
.. versionadded:: 3.12
The buffer protocol is now accessible in Python, see
:ref:`python-buffer-protocol` and :class:`memoryview`.
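As an illustration of the consumer side described above, a minimal sketch that
borrows a byte-oriented buffer from an arbitrary exporter and releases it
afterwards (the helper name is hypothetical)::

   static Py_ssize_t
   example_buffer_length(PyObject *exporter)
   {
       Py_buffer view;

       if (PyObject_GetBuffer(exporter, &view, PyBUF_SIMPLE) < 0) {
           return -1;  /* exception already set */
       }
       Py_ssize_t length = view.len;
       PyBuffer_Release(&view);  /* always release, or resources leak */
       return length;
   }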
.. _buffer-structure: .. _buffer-structure:
@ -74,6 +74,11 @@ Direct API functions
.. c:function:: int PyByteArray_Resize(PyObject *bytearray, Py_ssize_t len) .. c:function:: int PyByteArray_Resize(PyObject *bytearray, Py_ssize_t len)
Resize the internal buffer of *bytearray* to *len*. Resize the internal buffer of *bytearray* to *len*.
Failure is a ``-1`` return with an exception set.
.. versionchanged:: 3.14
A negative *len* will now result in an exception being set and -1 returned.
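A minimal sketch of checking the documented ``-1``-and-exception failure mode;
the helper name is hypothetical::

   static int
   example_grow_bytearray(PyObject *bytearray, Py_ssize_t extra)
   {
       Py_ssize_t new_len = PyByteArray_Size(bytearray) + extra;

       if (PyByteArray_Resize(bytearray, new_len) < 0) {
           return -1;  /* exception already set, e.g. for a negative length */
       }
       return 0;
   }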
Macros Macros
^^^^^^ ^^^^^^
@ -44,36 +44,12 @@ pointers. This is consistent throughout the API.
representation. representation.
.. c:function:: Py_complex _Py_cr_sum(Py_complex left, double right)
Return the sum of a complex number and a real number, using the C :c:type:`Py_complex`
representation.
.. versionadded:: 3.14
.. c:function:: Py_complex _Py_c_diff(Py_complex left, Py_complex right) .. c:function:: Py_complex _Py_c_diff(Py_complex left, Py_complex right)
Return the difference between two complex numbers, using the C Return the difference between two complex numbers, using the C
:c:type:`Py_complex` representation. :c:type:`Py_complex` representation.
.. c:function:: Py_complex _Py_cr_diff(Py_complex left, double right)
Return the difference between a complex number and a real number, using the C
:c:type:`Py_complex` representation.
.. versionadded:: 3.14
.. c:function:: Py_complex _Py_rc_diff(double left, Py_complex right)
Return the difference between a real number and a complex number, using the C
:c:type:`Py_complex` representation.
.. versionadded:: 3.14
.. c:function:: Py_complex _Py_c_neg(Py_complex num) .. c:function:: Py_complex _Py_c_neg(Py_complex num)
Return the negation of the complex number *num*, using the C Return the negation of the complex number *num*, using the C
@ -86,14 +62,6 @@ pointers. This is consistent throughout the API.
representation. representation.
.. c:function:: Py_complex _Py_cr_prod(Py_complex left, double right)
Return the product of a complex number and a real number, using the C
:c:type:`Py_complex` representation.
.. versionadded:: 3.14
.. c:function:: Py_complex _Py_c_quot(Py_complex dividend, Py_complex divisor) .. c:function:: Py_complex _Py_c_quot(Py_complex dividend, Py_complex divisor)
Return the quotient of two complex numbers, using the C :c:type:`Py_complex` Return the quotient of two complex numbers, using the C :c:type:`Py_complex`
@ -103,28 +71,6 @@ pointers. This is consistent throughout the API.
:c:data:`errno` to :c:macro:`!EDOM`. :c:data:`errno` to :c:macro:`!EDOM`.
.. c:function:: Py_complex _Py_cr_quot(Py_complex dividend, double divisor)
Return the quotient of a complex number and a real number, using the C
:c:type:`Py_complex` representation.
If *divisor* is zero, this method returns zero and sets
:c:data:`errno` to :c:macro:`!EDOM`.
.. versionadded:: 3.14
.. c:function:: Py_complex _Py_rc_quot(double dividend, Py_complex divisor)
Return the quotient of a real number and a complex number, using the C
:c:type:`Py_complex` representation.
If *divisor* is zero, this method returns zero and sets
:c:data:`errno` to :c:macro:`!EDOM`.
.. versionadded:: 3.14
.. c:function:: Py_complex _Py_c_pow(Py_complex num, Py_complex exp) .. c:function:: Py_complex _Py_c_pow(Py_complex num, Py_complex exp)
Return the exponentiation of *num* by *exp*, using the C :c:type:`Py_complex` Return the exponentiation of *num* by *exp*, using the C :c:type:`Py_complex`
@ -127,7 +127,7 @@ Dictionary Objects
Prefer the :c:func:`PyDict_GetItemWithError` function instead. Prefer the :c:func:`PyDict_GetItemWithError` function instead.
.. versionchanged:: 3.10 .. versionchanged:: 3.10
Calling this API without :term:`GIL` held had been allowed for historical Calling this API without an :term:`attached thread state` had been allowed for historical
reason. It is no longer allowed. reason. It is no longer allowed.
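A minimal sketch of the preferred :c:func:`PyDict_GetItemWithError` pattern,
which distinguishes a missing key from a lookup error (the helper name is
hypothetical)::

   static PyObject *
   example_lookup(PyObject *dict, PyObject *key)
   {
       PyObject *value = PyDict_GetItemWithError(dict, key);  /* borrowed */

       if (value == NULL) {
           if (PyErr_Occurred()) {
               return NULL;       /* propagate the error */
           }
           Py_RETURN_NONE;        /* key simply not present */
       }
       return Py_NewRef(value);   /* hand back a strong reference */
   }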
@ -413,7 +413,7 @@ Querying the error indicator
own a reference to the return value, so you do not need to :c:func:`Py_DECREF` own a reference to the return value, so you do not need to :c:func:`Py_DECREF`
it. it.
The caller must hold the GIL. The caller must have an :term:`attached thread state`.
.. note:: .. note::
@ -675,7 +675,7 @@ Signal Handling
.. note:: .. note::
This function is async-signal-safe. It can be called without This function is async-signal-safe. It can be called without
the :term:`GIL` and from a C signal handler. an :term:`attached thread state` and from a C signal handler.
.. c:function:: int PyErr_SetInterruptEx(int signum) .. c:function:: int PyErr_SetInterruptEx(int signum)
@ -702,7 +702,7 @@ Signal Handling
.. note:: .. note::
This function is async-signal-safe. It can be called without This function is async-signal-safe. It can be called without
the :term:`GIL` and from a C signal handler. an :term:`attached thread state` and from a C signal handler.
.. versionadded:: 3.10 .. versionadded:: 3.10
@ -921,11 +921,7 @@ because the :ref:`call protocol <call>` takes care of recursion handling.
Marks a point where a recursive C-level call is about to be performed. Marks a point where a recursive C-level call is about to be performed.
If :c:macro:`!USE_STACKCHECK` is defined, this function checks if the OS The function then checks if the stack limit is reached. If this is the
stack overflowed using :c:func:`PyOS_CheckStack`. If this is the case, it
sets a :exc:`MemoryError` and returns a nonzero value.
The function then checks if the recursion limit is reached. If this is the
case, a :exc:`RecursionError` is set and a nonzero value is returned. case, a :exc:`RecursionError` is set and a nonzero value is returned.
Otherwise, zero is returned. Otherwise, zero is returned.
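As a rough sketch of the recursion guard described above, using
:c:func:`Py_EnterRecursiveCall`; the walker itself is hypothetical::

   static int
   example_walk(PyObject *obj)
   {
       if (Py_EnterRecursiveCall(" while walking objects")) {
           return -1;  /* RecursionError already set */
       }
       /* ... recurse into sub-objects of obj here ... */
       Py_LeaveRecursiveCall();
       return 0;
   }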
@ -96,6 +96,9 @@ NaNs (if such things exist on the platform) isn't handled correctly, and
attempting to unpack a bytes string containing an IEEE INF or NaN will raise an attempting to unpack a bytes string containing an IEEE INF or NaN will raise an
exception. exception.
Note that the NaN type may not be preserved on IEEE platforms (a signaling
NaN may become a quiet NaN), for example on x86 systems in 32-bit mode.
On non-IEEE platforms with more precision, or larger dynamic range, than IEEE On non-IEEE platforms with more precision, or larger dynamic range, than IEEE
754 supports, not all values can be packed; on non-IEEE platforms with less 754 supports, not all values can be packed; on non-IEEE platforms with less
precision, or smaller dynamic range, not all values can be unpacked. What precision, or smaller dynamic range, not all values can be unpacked. What
@ -145,12 +145,13 @@ There are a few functions specific to Python functions.
.. c:type:: PyFunction_WatchEvent .. c:type:: PyFunction_WatchEvent
Enumeration of possible function watcher events: Enumeration of possible function watcher events:
- ``PyFunction_EVENT_CREATE``
- ``PyFunction_EVENT_DESTROY`` - ``PyFunction_EVENT_CREATE``
- ``PyFunction_EVENT_MODIFY_CODE`` - ``PyFunction_EVENT_DESTROY``
- ``PyFunction_EVENT_MODIFY_DEFAULTS`` - ``PyFunction_EVENT_MODIFY_CODE``
- ``PyFunction_EVENT_MODIFY_KWDEFAULTS`` - ``PyFunction_EVENT_MODIFY_DEFAULTS``
- ``PyFunction_EVENT_MODIFY_KWDEFAULTS``
.. versionadded:: 3.12 .. versionadded:: 3.12
@ -277,7 +277,7 @@ the garbage collector.
Type of the visitor function to be passed to :c:func:`PyUnstable_GC_VisitObjects`. Type of the visitor function to be passed to :c:func:`PyUnstable_GC_VisitObjects`.
*arg* is the same as the *arg* passed to ``PyUnstable_GC_VisitObjects``. *arg* is the same as the *arg* passed to ``PyUnstable_GC_VisitObjects``.
Return ``0`` to continue iteration, return ``1`` to stop iteration. Other return Return ``1`` to continue iteration, return ``0`` to stop iteration. Other return
values are reserved for now so behavior on returning anything else is undefined. values are reserved for now so behavior on returning anything else is undefined.
.. versionadded:: 3.12 .. versionadded:: 3.12
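A minimal sketch of a visitor following the corrected return convention above
(``1`` to continue, ``0`` to stop); the counting helpers are hypothetical::

   static int
   example_count_visitor(PyObject *obj, void *arg)
   {
       (void)obj;
       ++*(Py_ssize_t *)arg;
       return 1;   /* keep iterating */
   }

   static Py_ssize_t
   example_count_tracked(void)
   {
       Py_ssize_t count = 0;
       PyUnstable_GC_VisitObjects(example_count_visitor, &count);
       return count;
   }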
@ -16,19 +16,6 @@ Importing Modules
This is a wrapper around :c:func:`PyImport_Import()` which takes a This is a wrapper around :c:func:`PyImport_Import()` which takes a
:c:expr:`const char *` as an argument instead of a :c:expr:`PyObject *`. :c:expr:`const char *` as an argument instead of a :c:expr:`PyObject *`.
.. c:function:: PyObject* PyImport_ImportModuleNoBlock(const char *name)
This function is a deprecated alias of :c:func:`PyImport_ImportModule`.
.. versionchanged:: 3.3
This function used to fail immediately when the import lock was held
by another thread. In Python 3.3 though, the locking scheme switched
to per-module locks for most purposes, so this function's special
behaviour isn't needed anymore.
.. deprecated-removed:: 3.13 3.15
Use :c:func:`PyImport_ImportModule` instead.
.. c:function:: PyObject* PyImport_ImportModuleEx(const char *name, PyObject *globals, PyObject *locals, PyObject *fromlist) .. c:function:: PyObject* PyImport_ImportModuleEx(const char *name, PyObject *globals, PyObject *locals, PyObject *fromlist)
@ -325,3 +312,24 @@ Importing Modules
If Python is initialized multiple times, :c:func:`PyImport_AppendInittab` or If Python is initialized multiple times, :c:func:`PyImport_AppendInittab` or
:c:func:`PyImport_ExtendInittab` must be called before each Python :c:func:`PyImport_ExtendInittab` must be called before each Python
initialization. initialization.
.. c:function:: PyObject* PyImport_ImportModuleAttr(PyObject *mod_name, PyObject *attr_name)
Import the module *mod_name* and get its attribute *attr_name*.
Names must be Python :class:`str` objects.
Helper function combining :c:func:`PyImport_Import` and
:c:func:`PyObject_GetAttr`. For example, it can raise :exc:`ImportError` if
the module is not found, and :exc:`AttributeError` if the attribute doesn't
exist.
.. versionadded:: 3.14
.. c:function:: PyObject* PyImport_ImportModuleAttrString(const char *mod_name, const char *attr_name)
Similar to :c:func:`PyImport_ImportModuleAttr`, but names are UTF-8 encoded
strings instead of Python :class:`str` objects.
.. versionadded:: 3.14
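A minimal sketch using the UTF-8 variant above (new in 3.14) to fetch and call
:func:`os.getpid`; the helper name is hypothetical::

   static PyObject *
   example_current_pid(void)
   {
       PyObject *getpid = PyImport_ImportModuleAttrString("os", "getpid");

       if (getpid == NULL) {
           return NULL;  /* ImportError or AttributeError already set */
       }
       PyObject *pid = PyObject_CallNoArgs(getpid);
       Py_DECREF(getpid);
       return pid;
   }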
@ -77,10 +77,7 @@ The following functions can be safely called before Python is initialized:
Despite their apparent similarity to some of the functions listed above, Despite their apparent similarity to some of the functions listed above,
the following functions **should not be called** before the interpreter has the following functions **should not be called** before the interpreter has
been initialized: :c:func:`Py_EncodeLocale`, :c:func:`Py_GetPath`, been initialized: :c:func:`Py_EncodeLocale`, :c:func:`PyEval_InitThreads`, and
:c:func:`Py_GetPrefix`, :c:func:`Py_GetExecPrefix`,
:c:func:`Py_GetProgramFullPath`, :c:func:`Py_GetPythonHome`,
:c:func:`Py_GetProgramName`, :c:func:`PyEval_InitThreads`, and
:c:func:`Py_RunMain`. :c:func:`Py_RunMain`.
@ -109,7 +106,7 @@ to 1 and ``-bb`` sets :c:data:`Py_BytesWarningFlag` to 2.
Set by the :option:`-b` option. Set by the :option:`-b` option.
.. deprecated-removed:: 3.12 3.14 .. deprecated-removed:: 3.12 3.15
.. c:var:: int Py_DebugFlag .. c:var:: int Py_DebugFlag
@ -123,7 +120,7 @@ to 1 and ``-bb`` sets :c:data:`Py_BytesWarningFlag` to 2.
Set by the :option:`-d` option and the :envvar:`PYTHONDEBUG` environment Set by the :option:`-d` option and the :envvar:`PYTHONDEBUG` environment
variable. variable.
.. deprecated-removed:: 3.12 3.14 .. deprecated-removed:: 3.12 3.15
.. c:var:: int Py_DontWriteBytecodeFlag .. c:var:: int Py_DontWriteBytecodeFlag
@ -137,7 +134,7 @@ to 1 and ``-bb`` sets :c:data:`Py_BytesWarningFlag` to 2.
Set by the :option:`-B` option and the :envvar:`PYTHONDONTWRITEBYTECODE` Set by the :option:`-B` option and the :envvar:`PYTHONDONTWRITEBYTECODE`
environment variable. environment variable.
.. deprecated-removed:: 3.12 3.14 .. deprecated-removed:: 3.12 3.15
.. c:var:: int Py_FrozenFlag .. c:var:: int Py_FrozenFlag
@ -145,12 +142,9 @@ to 1 and ``-bb`` sets :c:data:`Py_BytesWarningFlag` to 2.
:c:member:`PyConfig.pathconfig_warnings` should be used instead, see :c:member:`PyConfig.pathconfig_warnings` should be used instead, see
:ref:`Python Initialization Configuration <init-config>`. :ref:`Python Initialization Configuration <init-config>`.
Suppress error messages when calculating the module search path in
:c:func:`Py_GetPath`.
Private flag used by ``_freeze_module`` and ``frozenmain`` programs. Private flag used by ``_freeze_module`` and ``frozenmain`` programs.
.. deprecated-removed:: 3.12 3.14 .. deprecated-removed:: 3.12 3.15
.. c:var:: int Py_HashRandomizationFlag .. c:var:: int Py_HashRandomizationFlag
@ -165,7 +159,7 @@ to 1 and ``-bb`` sets :c:data:`Py_BytesWarningFlag` to 2.
If the flag is non-zero, read the :envvar:`PYTHONHASHSEED` environment If the flag is non-zero, read the :envvar:`PYTHONHASHSEED` environment
variable to initialize the secret hash seed. variable to initialize the secret hash seed.
.. deprecated-removed:: 3.12 3.14 .. deprecated-removed:: 3.12 3.15
.. c:var:: int Py_IgnoreEnvironmentFlag .. c:var:: int Py_IgnoreEnvironmentFlag
@ -178,7 +172,7 @@ to 1 and ``-bb`` sets :c:data:`Py_BytesWarningFlag` to 2.
Set by the :option:`-E` and :option:`-I` options. Set by the :option:`-E` and :option:`-I` options.
.. deprecated-removed:: 3.12 3.14 .. deprecated-removed:: 3.12 3.15
.. c:var:: int Py_InspectFlag .. c:var:: int Py_InspectFlag
@ -193,7 +187,7 @@ to 1 and ``-bb`` sets :c:data:`Py_BytesWarningFlag` to 2.
Set by the :option:`-i` option and the :envvar:`PYTHONINSPECT` environment Set by the :option:`-i` option and the :envvar:`PYTHONINSPECT` environment
variable. variable.
.. deprecated-removed:: 3.12 3.14 .. deprecated-removed:: 3.12 3.15
.. c:var:: int Py_InteractiveFlag .. c:var:: int Py_InteractiveFlag
@ -203,7 +197,7 @@ to 1 and ``-bb`` sets :c:data:`Py_BytesWarningFlag` to 2.
Set by the :option:`-i` option. Set by the :option:`-i` option.
.. deprecated:: 3.12 .. deprecated-removed:: 3.12 3.15
.. c:var:: int Py_IsolatedFlag .. c:var:: int Py_IsolatedFlag
@ -218,7 +212,7 @@ to 1 and ``-bb`` sets :c:data:`Py_BytesWarningFlag` to 2.
.. versionadded:: 3.4 .. versionadded:: 3.4
.. deprecated-removed:: 3.12 3.14 .. deprecated-removed:: 3.12 3.15
.. c:var:: int Py_LegacyWindowsFSEncodingFlag .. c:var:: int Py_LegacyWindowsFSEncodingFlag
@ -237,7 +231,7 @@ to 1 and ``-bb`` sets :c:data:`Py_BytesWarningFlag` to 2.
.. availability:: Windows. .. availability:: Windows.
.. deprecated-removed:: 3.12 3.14 .. deprecated-removed:: 3.12 3.15
.. c:var:: int Py_LegacyWindowsStdioFlag .. c:var:: int Py_LegacyWindowsStdioFlag
@ -255,7 +249,7 @@ to 1 and ``-bb`` sets :c:data:`Py_BytesWarningFlag` to 2.
.. availability:: Windows. .. availability:: Windows.
.. deprecated-removed:: 3.12 3.14 .. deprecated-removed:: 3.12 3.15
.. c:var:: int Py_NoSiteFlag .. c:var:: int Py_NoSiteFlag
@ -270,7 +264,7 @@ to 1 and ``-bb`` sets :c:data:`Py_BytesWarningFlag` to 2.
Set by the :option:`-S` option. Set by the :option:`-S` option.
.. deprecated-removed:: 3.12 3.14 .. deprecated-removed:: 3.12 3.15
.. c:var:: int Py_NoUserSiteDirectory .. c:var:: int Py_NoUserSiteDirectory
@ -284,7 +278,7 @@ to 1 and ``-bb`` sets :c:data:`Py_BytesWarningFlag` to 2.
Set by the :option:`-s` and :option:`-I` options, and the Set by the :option:`-s` and :option:`-I` options, and the
:envvar:`PYTHONNOUSERSITE` environment variable. :envvar:`PYTHONNOUSERSITE` environment variable.
.. deprecated-removed:: 3.12 3.14 .. deprecated-removed:: 3.12 3.15
.. c:var:: int Py_OptimizeFlag .. c:var:: int Py_OptimizeFlag
@ -295,7 +289,7 @@ to 1 and ``-bb`` sets :c:data:`Py_BytesWarningFlag` to 2.
Set by the :option:`-O` option and the :envvar:`PYTHONOPTIMIZE` environment Set by the :option:`-O` option and the :envvar:`PYTHONOPTIMIZE` environment
variable. variable.
.. deprecated-removed:: 3.12 3.14 .. deprecated-removed:: 3.12 3.15
.. c:var:: int Py_QuietFlag .. c:var:: int Py_QuietFlag
@ -309,7 +303,7 @@ to 1 and ``-bb`` sets :c:data:`Py_BytesWarningFlag` to 2.
.. versionadded:: 3.2 .. versionadded:: 3.2
.. deprecated-removed:: 3.12 3.14 .. deprecated-removed:: 3.12 3.15
.. c:var:: int Py_UnbufferedStdioFlag .. c:var:: int Py_UnbufferedStdioFlag
@ -322,7 +316,7 @@ to 1 and ``-bb`` sets :c:data:`Py_BytesWarningFlag` to 2.
Set by the :option:`-u` option and the :envvar:`PYTHONUNBUFFERED` Set by the :option:`-u` option and the :envvar:`PYTHONUNBUFFERED`
environment variable. environment variable.
.. deprecated-removed:: 3.12 3.14 .. deprecated-removed:: 3.12 3.15
.. c:var:: int Py_VerboseFlag .. c:var:: int Py_VerboseFlag
@ -338,7 +332,7 @@ to 1 and ``-bb`` sets :c:data:`Py_BytesWarningFlag` to 2.
Set by the :option:`-v` option and the :envvar:`PYTHONVERBOSE` environment Set by the :option:`-v` option and the :envvar:`PYTHONVERBOSE` environment
variable. variable.
.. deprecated-removed:: 3.12 3.14 .. deprecated-removed:: 3.12 3.15
Initializing and finalizing the interpreter Initializing and finalizing the interpreter
@ -573,7 +567,7 @@ Initializing and finalizing the interpreter
This is similar to :c:func:`Py_AtExit`, but takes an explicit interpreter and This is similar to :c:func:`Py_AtExit`, but takes an explicit interpreter and
data pointer for the callback. data pointer for the callback.
The :term:`GIL` must be held for *interp*. There must be an :term:`attached thread state` for *interp*.
.. versionadded:: 3.13 .. versionadded:: 3.13
@ -586,7 +580,6 @@ Process-wide parameters
.. index:: .. index::
single: Py_Initialize() single: Py_Initialize()
single: main() single: main()
single: Py_GetPath()
This API is kept for backward compatibility: setting This API is kept for backward compatibility: setting
:c:member:`PyConfig.program_name` should be used instead, see :ref:`Python :c:member:`PyConfig.program_name` should be used instead, see :ref:`Python
@ -596,7 +589,7 @@ Process-wide parameters
the first time, if it is called at all. It tells the interpreter the value the first time, if it is called at all. It tells the interpreter the value
of the ``argv[0]`` argument to the :c:func:`main` function of the program of the ``argv[0]`` argument to the :c:func:`main` function of the program
(converted to wide characters). (converted to wide characters).
This is used by :c:func:`Py_GetPath` and some other functions below to find This is used by some other functions below to find
the Python run-time libraries relative to the interpreter executable. The the Python run-time libraries relative to the interpreter executable. The
default value is ``'python'``. The argument should point to a default value is ``'python'``. The argument should point to a
zero-terminated wide character string in static storage whose contents will not zero-terminated wide character string in static storage whose contents will not
@ -604,143 +597,9 @@ Process-wide parameters
interpreter will change the contents of this storage. interpreter will change the contents of this storage.
Use :c:func:`Py_DecodeLocale` to decode a bytes string to get a Use :c:func:`Py_DecodeLocale` to decode a bytes string to get a
:c:expr:`wchar_*` string. :c:expr:`wchar_t*` string.
.. deprecated:: 3.11 .. deprecated-removed:: 3.11 3.15
.. c:function:: wchar_t* Py_GetProgramName()
Return the program name set with :c:member:`PyConfig.program_name`, or the default.
The returned string points into static storage; the caller should not modify its
value.
This function should not be called before :c:func:`Py_Initialize`, otherwise
it returns ``NULL``.
.. versionchanged:: 3.10
It now returns ``NULL`` if called before :c:func:`Py_Initialize`.
.. deprecated-removed:: 3.13 3.15
Get :data:`sys.executable` instead.
.. c:function:: wchar_t* Py_GetPrefix()
Return the *prefix* for installed platform-independent files. This is derived
through a number of complicated rules from the program name set with
:c:member:`PyConfig.program_name` and some environment variables; for example, if the
program name is ``'/usr/local/bin/python'``, the prefix is ``'/usr/local'``. The
returned string points into static storage; the caller should not modify its
value. This corresponds to the :makevar:`prefix` variable in the top-level
:file:`Makefile` and the :option:`--prefix` argument to the :program:`configure`
script at build time. The value is available to Python code as ``sys.base_prefix``.
It is only useful on Unix. See also the next function.
This function should not be called before :c:func:`Py_Initialize`, otherwise
it returns ``NULL``.
.. versionchanged:: 3.10
It now returns ``NULL`` if called before :c:func:`Py_Initialize`.
.. deprecated-removed:: 3.13 3.15
Get :data:`sys.base_prefix` instead, or :data:`sys.prefix` if
:ref:`virtual environments <venv-def>` need to be handled.
.. c:function:: wchar_t* Py_GetExecPrefix()
Return the *exec-prefix* for installed platform-*dependent* files. This is
derived through a number of complicated rules from the program name set with
:c:member:`PyConfig.program_name` and some environment variables; for example, if the
program name is ``'/usr/local/bin/python'``, the exec-prefix is
``'/usr/local'``. The returned string points into static storage; the caller
should not modify its value. This corresponds to the :makevar:`exec_prefix`
variable in the top-level :file:`Makefile` and the ``--exec-prefix``
argument to the :program:`configure` script at build time. The value is
available to Python code as ``sys.base_exec_prefix``. It is only useful on
Unix.
Background: The exec-prefix differs from the prefix when platform dependent
files (such as executables and shared libraries) are installed in a different
directory tree. In a typical installation, platform dependent files may be
installed in the :file:`/usr/local/plat` subtree while platform independent may
be installed in :file:`/usr/local`.
Generally speaking, a platform is a combination of hardware and software
families, e.g. Sparc machines running the Solaris 2.x operating system are
considered the same platform, but Intel machines running Solaris 2.x are another
platform, and Intel machines running Linux are yet another platform. Different
major revisions of the same operating system generally also form different
platforms. Non-Unix operating systems are a different story; the installation
strategies on those systems are so different that the prefix and exec-prefix are
meaningless, and set to the empty string. Note that compiled Python bytecode
files are platform independent (but not independent from the Python version by
which they were compiled!).
System administrators will know how to configure the :program:`mount` or
:program:`automount` programs to share :file:`/usr/local` between platforms
while having :file:`/usr/local/plat` be a different filesystem for each
platform.
This function should not be called before :c:func:`Py_Initialize`, otherwise
it returns ``NULL``.
.. versionchanged:: 3.10
It now returns ``NULL`` if called before :c:func:`Py_Initialize`.
.. deprecated-removed:: 3.13 3.15
Get :data:`sys.base_exec_prefix` instead, or :data:`sys.exec_prefix` if
:ref:`virtual environments <venv-def>` need to be handled.
.. c:function:: wchar_t* Py_GetProgramFullPath()
.. index::
single: executable (in module sys)
Return the full program name of the Python executable; this is computed as a
side-effect of deriving the default module search path from the program name
(set by :c:member:`PyConfig.program_name`). The returned string points into
static storage; the caller should not modify its value. The value is available
to Python code as ``sys.executable``.
This function should not be called before :c:func:`Py_Initialize`, otherwise
it returns ``NULL``.
.. versionchanged:: 3.10
It now returns ``NULL`` if called before :c:func:`Py_Initialize`.
.. deprecated-removed:: 3.13 3.15
Get :data:`sys.executable` instead.
.. c:function:: wchar_t* Py_GetPath()
.. index::
triple: module; search; path
single: path (in module sys)
Return the default module search path; this is computed from the program name
(set by :c:member:`PyConfig.program_name`) and some environment variables.
The returned string consists of a series of directory names separated by a
platform dependent delimiter character. The delimiter character is ``':'``
on Unix and macOS, ``';'`` on Windows. The returned string points into
static storage; the caller should not modify its value. The list
:data:`sys.path` is initialized with this value on interpreter startup; it
can be (and usually is) modified later to change the search path for loading
modules.
This function should not be called before :c:func:`Py_Initialize`, otherwise
it returns ``NULL``.
.. XXX should give the exact rules
.. versionchanged:: 3.10
It now returns ``NULL`` if called before :c:func:`Py_Initialize`.
.. deprecated-removed:: 3.13 3.15
Get :data:`sys.path` instead.
.. c:function:: const char* Py_GetVersion() .. c:function:: const char* Py_GetVersion()
@ -846,7 +705,7 @@ Process-wide parameters
directory (``"."``). directory (``"."``).
Use :c:func:`Py_DecodeLocale` to decode a bytes string to get a Use :c:func:`Py_DecodeLocale` to decode a bytes string to get a
:c:expr:`wchar_*` string. :c:expr:`wchar_t*` string.
See also :c:member:`PyConfig.orig_argv` and :c:member:`PyConfig.argv` See also :c:member:`PyConfig.orig_argv` and :c:member:`PyConfig.argv`
members of the :ref:`Python Initialization Configuration <init-config>`. members of the :ref:`Python Initialization Configuration <init-config>`.
@ -868,7 +727,7 @@ Process-wide parameters
.. XXX impl. doesn't seem consistent in allowing ``0``/``NULL`` for the params; .. XXX impl. doesn't seem consistent in allowing ``0``/``NULL`` for the params;
check w/ Guido. check w/ Guido.
.. deprecated:: 3.11 .. deprecated-removed:: 3.11 3.15
.. c:function:: void PySys_SetArgv(int argc, wchar_t **argv) .. c:function:: void PySys_SetArgv(int argc, wchar_t **argv)
@ -882,14 +741,14 @@ Process-wide parameters
:option:`-I`. :option:`-I`.
Use :c:func:`Py_DecodeLocale` to decode a bytes string to get a Use :c:func:`Py_DecodeLocale` to decode a bytes string to get a
:c:expr:`wchar_*` string. :c:expr:`wchar_t*` string.
See also :c:member:`PyConfig.orig_argv` and :c:member:`PyConfig.argv` See also :c:member:`PyConfig.orig_argv` and :c:member:`PyConfig.argv`
members of the :ref:`Python Initialization Configuration <init-config>`. members of the :ref:`Python Initialization Configuration <init-config>`.
.. versionchanged:: 3.4 The *updatepath* value depends on :option:`-I`. .. versionchanged:: 3.4 The *updatepath* value depends on :option:`-I`.
.. deprecated:: 3.11 .. deprecated-removed:: 3.11 3.15
.. c:function:: void Py_SetPythonHome(const wchar_t *home) .. c:function:: void Py_SetPythonHome(const wchar_t *home)
@ -908,26 +767,9 @@ Process-wide parameters
this storage. this storage.
Use :c:func:`Py_DecodeLocale` to decode a bytes string to get a Use :c:func:`Py_DecodeLocale` to decode a bytes string to get a
:c:expr:`wchar_*` string. :c:expr:`wchar_t*` string.
.. deprecated:: 3.11 .. deprecated-removed:: 3.11 3.15
.. c:function:: wchar_t* Py_GetPythonHome()
Return the default "home", that is, the value set by
:c:member:`PyConfig.home`, or the value of the :envvar:`PYTHONHOME`
environment variable if it is set.
This function should not be called before :c:func:`Py_Initialize`, otherwise
it returns ``NULL``.
.. versionchanged:: 3.10
It now returns ``NULL`` if called before :c:func:`Py_Initialize`.
.. deprecated-removed:: 3.13 3.15
Get :c:member:`PyConfig.home` or :envvar:`PYTHONHOME` environment
variable instead.
.. _threads: .. _threads:
@ -940,7 +782,8 @@ Thread State and the Global Interpreter Lock
single: interpreter lock single: interpreter lock
single: lock, interpreter single: lock, interpreter
The Python interpreter is not fully thread-safe. In order to support Unless on a :term:`free-threaded <free threading>` build of :term:`CPython`,
the Python interpreter is not fully thread-safe. In order to support
multi-threaded Python programs, there's a global lock, called the :term:`global multi-threaded Python programs, there's a global lock, called the :term:`global
interpreter lock` or :term:`GIL`, that must be held by the current thread before interpreter lock` or :term:`GIL`, that must be held by the current thread before
it can safely access Python objects. Without the lock, even the simplest it can safely access Python objects. Without the lock, even the simplest
@ -961,20 +804,30 @@ a file, so that other Python threads can run in the meantime.
single: PyThreadState (C type) single: PyThreadState (C type)
The Python interpreter keeps some thread-specific bookkeeping information The Python interpreter keeps some thread-specific bookkeeping information
inside a data structure called :c:type:`PyThreadState`. There's also one inside a data structure called :c:type:`PyThreadState`, known as a :term:`thread state`.
global variable pointing to the current :c:type:`PyThreadState`: it can Each OS thread has a thread-local pointer to a :c:type:`PyThreadState`; a thread state
be retrieved using :c:func:`PyThreadState_Get`. referenced by this pointer is considered to be :term:`attached <attached thread state>`.
Releasing the GIL from extension code A thread can only have one :term:`attached thread state` at a time. An attached
------------------------------------- thread state is typically analogous to holding the :term:`GIL`, except on
:term:`free-threaded <free threading>` builds. On builds with the :term:`GIL` enabled,
:term:`attaching <attached thread state>` a thread state will block until the :term:`GIL`
can be acquired. However, even on builds with the :term:`GIL` disabled, it is still required
to have an attached thread state to call most of the C API.
Most extension code manipulating the :term:`GIL` has the following simple In general, there will always be an :term:`attached thread state` when using Python's C API.
Only in some specific cases (such as in a :c:macro:`Py_BEGIN_ALLOW_THREADS` block) will the
thread not have an attached thread state. If uncertain, check if :c:func:`PyThreadState_GetUnchecked` returns
``NULL``.
Detaching the thread state from extension code
----------------------------------------------
Most extension code manipulating the :term:`thread state` has the following simple
structure:: structure::
Save the thread state in a local variable. Save the thread state in a local variable.
Release the global interpreter lock.
... Do some blocking I/O operation ... ... Do some blocking I/O operation ...
Reacquire the global interpreter lock.
Restore the thread state from the local variable. Restore the thread state from the local variable.
This is so common that a pair of macros exists to simplify it:: This is so common that a pair of macros exists to simplify it::
@ -1003,21 +856,30 @@ The block above expands to the following code::
single: PyEval_RestoreThread (C function) single: PyEval_RestoreThread (C function)
single: PyEval_SaveThread (C function) single: PyEval_SaveThread (C function)
Here is how these functions work: the global interpreter lock is used to protect the pointer to the Here is how these functions work:
current thread state. When releasing the lock and saving the thread state,
the current thread state pointer must be retrieved before the lock is released The :term:`attached thread state` holds the :term:`GIL` for the entire interpreter. When detaching
(since another thread could immediately acquire the lock and store its own thread the :term:`attached thread state`, the :term:`GIL` is released, allowing other threads to attach
state in the global variable). Conversely, when acquiring the lock and restoring a thread state to their own thread, thus acquiring the :term:`GIL` and starting to execute.
the thread state, the lock must be acquired before storing the thread state The pointer to the prior :term:`attached thread state` is stored as a local variable.
pointer. Upon reaching :c:macro:`Py_END_ALLOW_THREADS`, the thread state that was
previously :term:`attached <attached thread state>` is passed to :c:func:`PyEval_RestoreThread`.
This function will block until another releases its :term:`thread state <attached thread state>`,
thus allowing the old :term:`thread state <attached thread state>` to get re-attached and the
C API can be called again.
For :term:`free-threaded <free threading>` builds, the :term:`GIL` is normally
out of the question, but detaching the :term:`thread state <attached thread state>` is still required
for blocking I/O and long operations. The difference is that threads don't have to wait for the :term:`GIL`
to be released to attach their thread state, allowing true multi-core parallelism.
.. note:: .. note::
Calling system I/O functions is the most common use case for releasing Calling system I/O functions is the most common use case for detaching
the GIL, but it can also be useful before calling long-running computations the :term:`thread state <attached thread state>`, but it can also be useful before calling
which don't need access to Python objects, such as compression or long-running computations which don't need access to Python objects, such
cryptographic functions operating over memory buffers. For example, the as compression or cryptographic functions operating over memory buffers.
standard :mod:`zlib` and :mod:`hashlib` modules release the GIL when For example, the standard :mod:`zlib` and :mod:`hashlib` modules detach the
compressing or hashing data. :term:`thread state <attached thread state>` when compressing or hashing data.
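A minimal sketch of the detach-around-blocking-work pattern described above;
``do_blocking_io()`` is a hypothetical C function that touches no Python
objects::

   extern int do_blocking_io(void);  /* hypothetical, no Python C API inside */

   static PyObject *
   example_read(PyObject *self, PyObject *Py_UNUSED(args))
   {
       int status;

       Py_BEGIN_ALLOW_THREADS
       status = do_blocking_io();    /* thread state is detached here */
       Py_END_ALLOW_THREADS

       if (status < 0) {
           PyErr_SetString(PyExc_OSError, "blocking operation failed");
           return NULL;
       }
       Py_RETURN_NONE;
   }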
.. _gilstate: .. _gilstate:
@ -1029,16 +891,15 @@ When threads are created using the dedicated Python APIs (such as the
:mod:`threading` module), a thread state is automatically associated to them :mod:`threading` module), a thread state is automatically associated to them
and the code shown above is therefore correct. However, when threads are and the code shown above is therefore correct. However, when threads are
created from C (for example by a third-party library with its own thread created from C (for example by a third-party library with its own thread
management), they don't hold the GIL, nor is there a thread state structure management), they don't hold the :term:`GIL`, because they don't have an
for them. :term:`attached thread state`.
If you need to call Python code from these threads (often this will be part If you need to call Python code from these threads (often this will be part
of a callback API provided by the aforementioned third-party library), of a callback API provided by the aforementioned third-party library),
you must first register these threads with the interpreter by you must first register these threads with the interpreter by
creating a thread state data structure, then acquiring the GIL, and finally creating an :term:`attached thread state` before you can start using the Python/C
storing their thread state pointer, before you can start using the Python/C API. When you are done, you should detach the :term:`thread state <attached thread state>`, and
API. When you are done, you should reset the thread state pointer, release finally free it.
the GIL, and finally free the thread state data structure.
The :c:func:`PyGILState_Ensure` and :c:func:`PyGILState_Release` functions do The :c:func:`PyGILState_Ensure` and :c:func:`PyGILState_Release` functions do
all of the above automatically. The typical idiom for calling into Python all of the above automatically. The typical idiom for calling into Python
@ -1106,26 +967,23 @@ Cautions regarding runtime finalization
In the late stage of :term:`interpreter shutdown`, after attempting to wait for In the late stage of :term:`interpreter shutdown`, after attempting to wait for
non-daemon threads to exit (though this can be interrupted by non-daemon threads to exit (though this can be interrupted by
:class:`KeyboardInterrupt`) and running the :mod:`atexit` functions, the runtime :class:`KeyboardInterrupt`) and running the :mod:`atexit` functions, the runtime
is marked as *finalizing*: :c:func:`_Py_IsFinalizing` and is marked as *finalizing*: :c:func:`Py_IsFinalizing` and
:func:`sys.is_finalizing` return true. At this point, only the *finalization :func:`sys.is_finalizing` return true. At this point, only the *finalization
thread* that initiated finalization (typically the main thread) is allowed to thread* that initiated finalization (typically the main thread) is allowed to
acquire the :term:`GIL`. acquire the :term:`GIL`.
If any thread, other than the finalization thread, attempts to acquire the GIL If any thread, other than the finalization thread, attempts to attach a :term:`thread state`
during finalization, either explicitly via :c:func:`PyGILState_Ensure`, during finalization, either explicitly or
:c:macro:`Py_END_ALLOW_THREADS`, :c:func:`PyEval_AcquireThread`, or implicitly, the thread enters **a permanently blocked state**
:c:func:`PyEval_AcquireLock`, or implicitly when the interpreter attempts to where it remains until the program exits. In most cases this is harmless, but this can result
reacquire it after having yielded it, the thread enters **a permanently blocked in deadlock if a later stage of finalization attempts to acquire a lock owned by the
state** where it remains until the program exits. In most cases this is blocked thread, or otherwise waits on the blocked thread.
harmless, but this can result in deadlock if a later stage of finalization
attempts to acquire a lock owned by the blocked thread, or otherwise waits on
the blocked thread.
Gross? Yes. This prevents random crashes and/or unexpectedly skipped C++ Gross? Yes. This prevents random crashes and/or unexpectedly skipped C++
finalizations further up the call stack when such threads were forcibly exited finalizations further up the call stack when such threads were forcibly exited
here in CPython 3.13 and earlier. The CPython runtime GIL acquiring C APIs here in CPython 3.13 and earlier. The CPython runtime :term:`thread state` C APIs
have never had any error reporting or handling expectations at GIL acquisition have never had any error reporting or handling expectations at :term:`thread state`
time that would've allowed for graceful exit from this situation. Changing that attachment time that would've allowed for graceful exit from this situation. Changing that
would require new stable C APIs and rewriting the majority of C code in the would require new stable C APIs and rewriting the majority of C code in the
CPython ecosystem to use those with error handling. CPython ecosystem to use those with error handling.
@ -1188,18 +1046,15 @@ code, or when embedding the Python interpreter:
.. c:function:: PyThreadState* PyEval_SaveThread() .. c:function:: PyThreadState* PyEval_SaveThread()
Release the global interpreter lock (if it has been created) and reset the Detach the :term:`attached thread state` and return it.
thread state to ``NULL``, returning the previous thread state (which is not The thread will have no :term:`thread state` upon returning.
``NULL``). If the lock has been created, the current thread must have
acquired it.
.. c:function:: void PyEval_RestoreThread(PyThreadState *tstate) .. c:function:: void PyEval_RestoreThread(PyThreadState *tstate)
Acquire the global interpreter lock (if it has been created) and set the Set the :term:`attached thread state` to *tstate*.
thread state to *tstate*, which must not be ``NULL``. If the lock has been The passed :term:`thread state` **should not** be :term:`attached <attached thread state>`,
created, the current thread must not have acquired it, otherwise deadlock otherwise deadlock ensues. *tstate* will be attached upon returning.
ensues.
.. note:: .. note::
Calling this function from a thread when the runtime is finalizing will Calling this function from a thread when the runtime is finalizing will
@ -1213,13 +1068,13 @@ code, or when embedding the Python interpreter:
.. c:function:: PyThreadState* PyThreadState_Get() .. c:function:: PyThreadState* PyThreadState_Get()
Return the current thread state. The global interpreter lock must be held. Return the :term:`attached thread state`. If the thread has no attached
When the current thread state is ``NULL``, this issues a fatal error (so that thread state (such as when inside of a :c:macro:`Py_BEGIN_ALLOW_THREADS`
the caller needn't check for ``NULL``). block), then this issues a fatal error (so that the caller needn't check
for ``NULL``).
See also :c:func:`PyThreadState_GetUnchecked`. See also :c:func:`PyThreadState_GetUnchecked`.
.. c:function:: PyThreadState* PyThreadState_GetUnchecked() .. c:function:: PyThreadState* PyThreadState_GetUnchecked()
Similar to :c:func:`PyThreadState_Get`, but don't kill the process with a Similar to :c:func:`PyThreadState_Get`, but don't kill the process with a
@ -1233,9 +1088,14 @@ code, or when embedding the Python interpreter:
.. c:function:: PyThreadState* PyThreadState_Swap(PyThreadState *tstate) .. c:function:: PyThreadState* PyThreadState_Swap(PyThreadState *tstate)
Swap the current thread state with the thread state given by the argument Set the :term:`attached thread state` to *tstate*, and return the
*tstate*, which may be ``NULL``. The global interpreter lock must be held :term:`thread state` that was attached prior to calling.
and is not released.
This function is safe to call without an :term:`attached thread state`; it
will simply return ``NULL`` indicating that there was no prior thread state.
.. seealso::
:c:func:`PyEval_ReleaseThread`
The following functions use thread-local storage, and are not compatible The following functions use thread-local storage, and are not compatible
@ -1244,7 +1104,7 @@ with sub-interpreters:
.. c:function:: PyGILState_STATE PyGILState_Ensure() .. c:function:: PyGILState_STATE PyGILState_Ensure()
Ensure that the current thread is ready to call the Python C API regardless Ensure that the current thread is ready to call the Python C API regardless
of the current state of Python, or of the global interpreter lock. This may of the current state of Python, or of the :term:`attached thread state`. This may
be called as many times as desired by a thread as long as each call is be called as many times as desired by a thread as long as each call is
matched with a call to :c:func:`PyGILState_Release`. In general, other matched with a call to :c:func:`PyGILState_Release`. In general, other
thread-related APIs may be used between :c:func:`PyGILState_Ensure` and thread-related APIs may be used between :c:func:`PyGILState_Ensure` and
@ -1253,15 +1113,15 @@ with sub-interpreters:
:c:macro:`Py_BEGIN_ALLOW_THREADS` and :c:macro:`Py_END_ALLOW_THREADS` macros is :c:macro:`Py_BEGIN_ALLOW_THREADS` and :c:macro:`Py_END_ALLOW_THREADS` macros is
acceptable. acceptable.
The return value is an opaque "handle" to the thread state when The return value is an opaque "handle" to the :term:`attached thread state` when
:c:func:`PyGILState_Ensure` was called, and must be passed to :c:func:`PyGILState_Ensure` was called, and must be passed to
:c:func:`PyGILState_Release` to ensure Python is left in the same state. Even :c:func:`PyGILState_Release` to ensure Python is left in the same state. Even
though recursive calls are allowed, these handles *cannot* be shared - each though recursive calls are allowed, these handles *cannot* be shared - each
unique call to :c:func:`PyGILState_Ensure` must save the handle for its call unique call to :c:func:`PyGILState_Ensure` must save the handle for its call
to :c:func:`PyGILState_Release`. to :c:func:`PyGILState_Release`.
When the function returns, the current thread will hold the GIL and be able When the function returns, there will be an :term:`attached thread state`
to call arbitrary Python code. Failure is a fatal error. and the thread will be able to call arbitrary Python code. Failure is a fatal error.
.. note:: .. note::
Calling this function from a thread when the runtime is finalizing will Calling this function from a thread when the runtime is finalizing will
@ -1286,21 +1146,23 @@ with sub-interpreters:
.. c:function:: PyThreadState* PyGILState_GetThisThreadState() .. c:function:: PyThreadState* PyGILState_GetThisThreadState()
Get the current thread state for this thread. May return ``NULL`` if no Get the :term:`attached thread state` for this thread. May return ``NULL`` if no
GILState API has been used on the current thread. Note that the main thread GILState API has been used on the current thread. Note that the main thread
always has such a thread-state, even if no auto-thread-state call has been always has such a thread-state, even if no auto-thread-state call has been
made on the main thread. This is mainly a helper/diagnostic function. made on the main thread. This is mainly a helper/diagnostic function.
.. seealso:: :c:func:`PyThreadState_Get`
.. c:function:: int PyGILState_Check() .. c:function:: int PyGILState_Check()
Return ``1`` if the current thread is holding the GIL and ``0`` otherwise. Return ``1`` if the current thread is holding the :term:`GIL` and ``0`` otherwise.
This function can be called from any thread at any time. This function can be called from any thread at any time.
Only if it has had its Python thread state initialized and currently is Only if it has had its Python thread state initialized and currently is
holding the GIL will it return ``1``. holding the :term:`GIL` will it return ``1``.
This is mainly a helper/diagnostic function. It can be useful This is mainly a helper/diagnostic function. It can be useful
for example in callback contexts or memory allocation functions when for example in callback contexts or memory allocation functions when
knowing that the GIL is locked can allow the caller to perform sensitive knowing that the :term:`GIL` is locked can allow the caller to perform sensitive
actions or otherwise behave differently. actions or otherwise behave differently.
.. versionadded:: 3.4 .. versionadded:: 3.4
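As an illustration of the :c:func:`PyGILState_Ensure`/:c:func:`PyGILState_Release`
pairing discussed above, a minimal sketch of a callback running on a thread
created by C code; the callback name is hypothetical::

   static void
   example_callback_from_c_thread(const char *message)
   {
       PyGILState_STATE gstate = PyGILState_Ensure();  /* attach a thread state */

       /* The C API may be used freely between Ensure and Release. */
       PySys_WriteStdout("callback: %s\n", message);

       PyGILState_Release(gstate);                     /* restore previous state */
   }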
@ -1345,13 +1207,14 @@ Low-level API
All of the following functions must be called after :c:func:`Py_Initialize`. All of the following functions must be called after :c:func:`Py_Initialize`.
.. versionchanged:: 3.7 .. versionchanged:: 3.7
:c:func:`Py_Initialize()` now initializes the :term:`GIL`. :c:func:`Py_Initialize()` now initializes the :term:`GIL`
and sets an :term:`attached thread state`.
.. c:function:: PyInterpreterState* PyInterpreterState_New() .. c:function:: PyInterpreterState* PyInterpreterState_New()
Create a new interpreter state object. The global interpreter lock need not Create a new interpreter state object. An :term:`attached thread state` is not needed,
be held, but may be held if it is necessary to serialize calls to this but may optionally exist if it is necessary to serialize calls to this
function. function.
.. audit-event:: cpython.PyInterpreterState_New "" c.PyInterpreterState_New .. audit-event:: cpython.PyInterpreterState_New "" c.PyInterpreterState_New
@ -1359,30 +1222,28 @@ All of the following functions must be called after :c:func:`Py_Initialize`.
.. c:function:: void PyInterpreterState_Clear(PyInterpreterState *interp) .. c:function:: void PyInterpreterState_Clear(PyInterpreterState *interp)
Reset all information in an interpreter state object. The global interpreter Reset all information in an interpreter state object. There must be
lock must be held. an :term:`attached thread state` for the interpreter.
.. audit-event:: cpython.PyInterpreterState_Clear "" c.PyInterpreterState_Clear .. audit-event:: cpython.PyInterpreterState_Clear "" c.PyInterpreterState_Clear
.. c:function:: void PyInterpreterState_Delete(PyInterpreterState *interp) .. c:function:: void PyInterpreterState_Delete(PyInterpreterState *interp)
Destroy an interpreter state object. The global interpreter lock need not be Destroy an interpreter state object. There **should not** be an
held. The interpreter state must have been reset with a previous call to :term:`attached thread state` for the target interpreter. The interpreter
:c:func:`PyInterpreterState_Clear`. state must have been reset with a previous call to :c:func:`PyInterpreterState_Clear`.
.. c:function:: PyThreadState* PyThreadState_New(PyInterpreterState *interp) .. c:function:: PyThreadState* PyThreadState_New(PyInterpreterState *interp)
Create a new thread state object belonging to the given interpreter object. Create a new thread state object belonging to the given interpreter object.
The global interpreter lock need not be held, but may be held if it is An :term:`attached thread state` is not needed.
necessary to serialize calls to this function.
.. c:function:: void PyThreadState_Clear(PyThreadState *tstate) .. c:function:: void PyThreadState_Clear(PyThreadState *tstate)
Reset all information in a thread state object. The global interpreter lock Reset all information in a :term:`thread state` object. *tstate*
must be held. must be :term:`attached <attached thread state>`.
.. versionchanged:: 3.9 .. versionchanged:: 3.9
This function now calls the :c:member:`PyThreadState.on_delete` callback. This function now calls the :c:member:`PyThreadState.on_delete` callback.
@ -1394,18 +1255,19 @@ All of the following functions must be called after :c:func:`Py_Initialize`.
.. c:function:: void PyThreadState_Delete(PyThreadState *tstate) .. c:function:: void PyThreadState_Delete(PyThreadState *tstate)
Destroy a thread state object. The global interpreter lock need not be held. Destroy a :term:`thread state` object. *tstate* should not
The thread state must have been reset with a previous call to be :term:`attached <attached thread state>` to any thread.
*tstate* must have been reset with a previous call to
:c:func:`PyThreadState_Clear`. :c:func:`PyThreadState_Clear`.
.. c:function:: void PyThreadState_DeleteCurrent(void) .. c:function:: void PyThreadState_DeleteCurrent(void)
Destroy the current thread state and release the global interpreter lock. Detach the :term:`attached thread state` (which must have been reset
Like :c:func:`PyThreadState_Delete`, the global interpreter lock must with a previous call to :c:func:`PyThreadState_Clear`) and then destroy it.
be held. The thread state must have been reset with a previous call
to :c:func:`PyThreadState_Clear`.
No :term:`thread state` will be :term:`attached <attached thread state>` upon
returning.
.. c:function:: PyFrameObject* PyThreadState_GetFrame(PyThreadState *tstate) .. c:function:: PyFrameObject* PyThreadState_GetFrame(PyThreadState *tstate)
@ -1416,16 +1278,16 @@ All of the following functions must be called after :c:func:`Py_Initialize`.
See also :c:func:`PyEval_GetFrame`. See also :c:func:`PyEval_GetFrame`.
*tstate* must not be ``NULL``. *tstate* must not be ``NULL``, and must be :term:`attached <attached thread state>`.
.. versionadded:: 3.9 .. versionadded:: 3.9
.. c:function:: uint64_t PyThreadState_GetID(PyThreadState *tstate) .. c:function:: uint64_t PyThreadState_GetID(PyThreadState *tstate)
Get the unique thread state identifier of the Python thread state *tstate*. Get the unique :term:`thread state` identifier of the Python thread state *tstate*.
*tstate* must not be ``NULL``. *tstate* must not be ``NULL``, and must be :term:`attached <attached thread state>`.
.. versionadded:: 3.9 .. versionadded:: 3.9
@ -1434,7 +1296,7 @@ All of the following functions must be called after :c:func:`Py_Initialize`.
Get the interpreter of the Python thread state *tstate*. Get the interpreter of the Python thread state *tstate*.
*tstate* must not be ``NULL``. *tstate* must not be ``NULL``, and must be :term:`attached <attached thread state>`.
.. versionadded:: 3.9 .. versionadded:: 3.9
@ -1463,10 +1325,8 @@ All of the following functions must be called after :c:func:`Py_Initialize`.
Get the current interpreter. Get the current interpreter.
Issue a fatal error if there is no current Python thread state or no current Issue a fatal error if there is no :term:`attached thread state`.
interpreter. It cannot return NULL. It cannot return NULL.
The caller must hold the GIL.
.. versionadded:: 3.9 .. versionadded:: 3.9
@ -1476,7 +1336,7 @@ All of the following functions must be called after :c:func:`Py_Initialize`.
Return the interpreter's unique ID. If there was any error in doing Return the interpreter's unique ID. If there was any error in doing
so then ``-1`` is returned and an error is set. so then ``-1`` is returned and an error is set.
The caller must hold the GIL. The caller must have an :term:`attached thread state`.
.. versionadded:: 3.7 .. versionadded:: 3.7
@ -1493,16 +1353,6 @@ All of the following functions must be called after :c:func:`Py_Initialize`.
.. versionadded:: 3.8 .. versionadded:: 3.8
.. c:function:: PyObject* PyUnstable_InterpreterState_GetMainModule(PyInterpreterState *interp)
Return a :term:`strong reference` to the ``__main__`` `module object <moduleobjects>`_
for the given interpreter.
The caller must hold the GIL.
.. versionadded:: 3.13
.. c:type:: PyObject* (*_PyFrameEvalFunction)(PyThreadState *tstate, _PyInterpreterFrame *frame, int throwflag) .. c:type:: PyObject* (*_PyFrameEvalFunction)(PyThreadState *tstate, _PyInterpreterFrame *frame, int throwflag)
Type of a frame evaluation function. Type of a frame evaluation function.
@ -1537,9 +1387,10 @@ All of the following functions must be called after :c:func:`Py_Initialize`.
Return a dictionary in which extensions can store thread-specific state Return a dictionary in which extensions can store thread-specific state
information. Each extension should use a unique key to use to store state in information. Each extension should use a unique key to use to store state in
the dictionary. It is okay to call this function when no current thread state the dictionary. It is okay to call this function when no :term:`thread state`
is available. If this function returns ``NULL``, no exception has been raised and is :term:`attached <attached thread state>`. If this function returns
the caller should assume no current thread state is available. ``NULL``, no exception has been raised and the caller should assume no
thread state is attached.
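As an illustrative sketch, an extension might lazily create its per-thread entry under a key of its own choosing (the key name and stored value here are assumptions, not part of any API)::

   PyObject *dict = PyThreadState_GetDict();
   if (dict != NULL) {
       /* Borrowed reference, or NULL if the key is missing. */
       PyObject *counter = PyDict_GetItemString(dict, "my_extension.counter");
       if (counter == NULL) {
           counter = PyLong_FromLong(0);
           if (counter == NULL
               || PyDict_SetItemString(dict, "my_extension.counter", counter) < 0) {
               /* handle the error */
           }
           Py_XDECREF(counter);
       }
   }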
.. c:function:: int PyThreadState_SetAsyncExc(unsigned long id, PyObject *exc) .. c:function:: int PyThreadState_SetAsyncExc(unsigned long id, PyObject *exc)
@ -1547,7 +1398,7 @@ All of the following functions must be called after :c:func:`Py_Initialize`.
Asynchronously raise an exception in a thread. The *id* argument is the thread Asynchronously raise an exception in a thread. The *id* argument is the thread
id of the target thread; *exc* is the exception object to be raised. This id of the target thread; *exc* is the exception object to be raised. This
function does not steal any references to *exc*. To prevent naive misuse, you function does not steal any references to *exc*. To prevent naive misuse, you
must write your own C extension to call this. Must be called with the GIL held. must write your own C extension to call this. Must be called with an :term:`attached thread state`.
Returns the number of thread states modified; this is normally one, but will be Returns the number of thread states modified; this is normally one, but will be
zero if the thread id isn't found. If *exc* is ``NULL``, the pending zero if the thread id isn't found. If *exc* is ``NULL``, the pending
exception (if any) for the thread is cleared. This raises no exceptions. exception (if any) for the thread is cleared. This raises no exceptions.
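A sketch of typical use from such an extension, assuming ``thread_id`` holds the target thread's identifier (for example obtained from :func:`threading.get_ident`)::

   int n = PyThreadState_SetAsyncExc(thread_id, PyExc_KeyboardInterrupt);
   if (n == 0) {
       /* no thread with that id was found */
   }
   else if (n > 1) {
       /* more than one thread state matched: revoke the request */
       PyThreadState_SetAsyncExc(thread_id, NULL);
   }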
@ -1558,9 +1409,10 @@ All of the following functions must be called after :c:func:`Py_Initialize`.
.. c:function:: void PyEval_AcquireThread(PyThreadState *tstate) .. c:function:: void PyEval_AcquireThread(PyThreadState *tstate)
Acquire the global interpreter lock and set the current thread state to :term:`Attach <attached thread state>` *tstate* to the current thread,
*tstate*, which must not be ``NULL``. The lock must have been created earlier. which must not be ``NULL`` or already :term:`attached <attached thread state>`.
If this thread already has the lock, deadlock ensues.
The calling thread must not already have an :term:`attached thread state`.
.. note:: .. note::
Calling this function from a thread when the runtime is finalizing will Calling this function from a thread when the runtime is finalizing will
@ -1583,10 +1435,9 @@ All of the following functions must be called after :c:func:`Py_Initialize`.
.. c:function:: void PyEval_ReleaseThread(PyThreadState *tstate) .. c:function:: void PyEval_ReleaseThread(PyThreadState *tstate)
Reset the current thread state to ``NULL`` and release the global interpreter Detach the :term:`attached thread state`.
lock. The lock must have been created earlier and must be held by the current The *tstate* argument, which must not be ``NULL``, is only used to check
thread. The *tstate* argument, which must not be ``NULL``, is only used to check that it represents the :term:`attached thread state` --- if it isn't, a fatal error is
that it represents the current thread state --- if it isn't, a fatal error is
reported. reported.
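A sketch of pairing these calls from a thread that manages its own :term:`thread state` (``tstate`` is assumed to have been created earlier with :c:func:`PyThreadState_New`)::

   PyEval_AcquireThread(tstate);    /* attach *tstate* to this thread */
   /* ... call into the C API ... */
   PyEval_ReleaseThread(tstate);    /* detach it again */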
:c:func:`PyEval_SaveThread` is a higher-level function which is always :c:func:`PyEval_SaveThread` is a higher-level function which is always
@ -1726,23 +1577,23 @@ function. You can create and destroy them using the following functions:
The given *config* controls the options with which the interpreter The given *config* controls the options with which the interpreter
is initialized. is initialized.
Upon success, *tstate_p* will be set to the first thread state Upon success, *tstate_p* will be set to the first :term:`thread state`
created in the new created in the new sub-interpreter. This thread state is
sub-interpreter. This thread state is made in the current thread state. :term:`attached <attached thread state>`.
Note that no actual thread is created; see the discussion of thread states Note that no actual thread is created; see the discussion of thread states
below. If creation of the new interpreter is unsuccessful, below. If creation of the new interpreter is unsuccessful,
*tstate_p* is set to ``NULL``; *tstate_p* is set to ``NULL``;
no exception is set since the exception state is stored in the no exception is set since the exception state is stored in the
current thread state and there may not be a current thread state. :term:`attached thread state`, which might not exist.
Like all other Python/C API functions, the global interpreter lock Like all other Python/C API functions, an :term:`attached thread state`
must be held before calling this function and is still held when it must be present before calling this function, but it might be detached upon
returns. Likewise a current thread state must be set on entry. On returning. On success, the returned thread state will be :term:`attached <attached thread state>`.
success, the returned thread state will be set as current. If the If the sub-interpreter is created with its own :term:`GIL` then the
sub-interpreter is created with its own GIL then the GIL of the :term:`attached thread state` of the calling interpreter will be detached.
calling interpreter will be released. When the function returns, When the function returns, the new interpreter's :term:`thread state`
the new interpreter's GIL will be held by the current thread and will be :term:`attached <attached thread state>` to the current thread and
the previously interpreter's GIL will remain released here. the previous interpreter's :term:`attached thread state` will remain detached.
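A minimal sketch of creating and destroying an isolated sub-interpreter with its own GIL; re-attaching the previous thread state afterwards with :c:func:`PyThreadState_Swap` is one possible way to return to the calling interpreter::

   PyThreadState *prev = PyThreadState_Get();

   PyInterpreterConfig config = {
       .check_multi_interp_extensions = 1,
       .gil = PyInterpreterConfig_OWN_GIL,
   };
   PyThreadState *sub = NULL;
   PyStatus status = Py_NewInterpreterFromConfig(&sub, &config);
   if (PyStatus_Exception(status)) {
       /* creation failed; handle the error */
   }
   else {
       PyRun_SimpleString("print('hello from a sub-interpreter')");
       Py_EndInterpreter(sub);       /* destroys the sub-interpreter */
       PyThreadState_Swap(prev);     /* re-attach the previous thread state */
   }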
.. versionadded:: 3.12 .. versionadded:: 3.12
@ -1824,13 +1675,10 @@ function. You can create and destroy them using the following functions:
.. index:: single: Py_FinalizeEx (C function) .. index:: single: Py_FinalizeEx (C function)
Destroy the (sub-)interpreter represented by the given thread state. Destroy the (sub-)interpreter represented by the given :term:`thread state`.
The given thread state must be the current thread state. See the The given thread state must be :term:`attached <attached thread state>`.
discussion of thread states below. When the call returns, When the call returns, there will be no :term:`attached thread state`.
the current thread state is ``NULL``. All thread states associated All thread states associated with this interpreter are destroyed.
with this interpreter are destroyed. The global interpreter lock
used by the target interpreter must be held before calling this
function. No GIL is held when it returns.
:c:func:`Py_FinalizeEx` will destroy all sub-interpreters that :c:func:`Py_FinalizeEx` will destroy all sub-interpreters that
haven't been explicitly destroyed at that point. haven't been explicitly destroyed at that point.
@ -1924,20 +1772,17 @@ pointer and a void pointer argument.
both these conditions met: both these conditions met:
* on a :term:`bytecode` boundary; * on a :term:`bytecode` boundary;
* with the main thread holding the :term:`global interpreter lock` * with the main thread holding an :term:`attached thread state`
(*func* can therefore use the full C API). (*func* can therefore use the full C API).
*func* must return ``0`` on success, or ``-1`` on failure with an exception *func* must return ``0`` on success, or ``-1`` on failure with an exception
set. *func* won't be interrupted to perform another asynchronous set. *func* won't be interrupted to perform another asynchronous
notification recursively, but it can still be interrupted to switch notification recursively, but it can still be interrupted to switch
threads if the global interpreter lock is released. threads if the :term:`thread state <attached thread state>` is detached.
This function doesn't need a current thread state to run, and it doesn't This function doesn't need an :term:`attached thread state`. However, to call this
need the global interpreter lock. function in a subinterpreter, the caller must have an :term:`attached thread state`.
Otherwise, the function *func* can be scheduled to be called from the wrong interpreter.
To call this function in a subinterpreter, the caller must hold the GIL.
Otherwise, the function *func* can be scheduled to be called from the wrong
interpreter.
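A sketch of scheduling a notification from C code; the callback and the message object here are illustrative, not part of the API::

   static int
   pending_callback(void *arg)
   {
       /* Runs on a bytecode boundary with an attached thread state. */
       PyObject *msg = (PyObject *)arg;
       int rc = PyObject_Print(msg, stdout, 0);
       Py_DECREF(msg);
       return rc;    /* 0 on success, -1 with an exception set */
   }

   /* Elsewhere, with an attached thread state (a Python object is
      created here): */
   PyObject *msg = PyUnicode_FromString("work finished\n");
   if (msg == NULL || Py_AddPendingCall(pending_callback, msg) < 0) {
       Py_XDECREF(msg);    /* the call could not be scheduled */
   }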
.. warning:: .. warning::
This is a low-level function, only useful for very special cases. This is a low-level function, only useful for very special cases.
@ -2078,14 +1923,14 @@ Python-level trace functions in previous versions.
See also the :func:`sys.setprofile` function. See also the :func:`sys.setprofile` function.
The caller must hold the :term:`GIL`. The caller must have an :term:`attached thread state`.
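A sketch of a minimal profile function that only counts calls (``call_count`` is a hypothetical global)::

   static Py_ssize_t call_count = 0;

   static int
   profile_func(PyObject *obj, PyFrameObject *frame, int what, PyObject *arg)
   {
       if (what == PyTrace_CALL) {
           call_count++;
       }
       return 0;
   }

   /* ... later, with an attached thread state ... */
   PyEval_SetProfile(profile_func, NULL);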
.. c:function:: void PyEval_SetProfileAllThreads(Py_tracefunc func, PyObject *obj) .. c:function:: void PyEval_SetProfileAllThreads(Py_tracefunc func, PyObject *obj)
Like :c:func:`PyEval_SetProfile` but sets the profile function in all running threads Like :c:func:`PyEval_SetProfile` but sets the profile function in all running threads
belonging to the current interpreter instead of the setting it only on the current thread. belonging to the current interpreter instead of the setting it only on the current thread.
The caller must hold the :term:`GIL`. The caller must have an :term:`attached thread state`.
As :c:func:`PyEval_SetProfile`, this function ignores any exceptions raised while As :c:func:`PyEval_SetProfile`, this function ignores any exceptions raised while
setting the profile functions in all threads. setting the profile functions in all threads.
@ -2104,14 +1949,14 @@ Python-level trace functions in previous versions.
See also the :func:`sys.settrace` function. See also the :func:`sys.settrace` function.
The caller must hold the :term:`GIL`. The caller must have an :term:`attached thread state`.
.. c:function:: void PyEval_SetTraceAllThreads(Py_tracefunc func, PyObject *obj) .. c:function:: void PyEval_SetTraceAllThreads(Py_tracefunc func, PyObject *obj)
Like :c:func:`PyEval_SetTrace` but sets the tracing function in all running threads Like :c:func:`PyEval_SetTrace` but sets the tracing function in all running threads
belonging to the current interpreter instead of the setting it only on the current thread. belonging to the current interpreter instead of the setting it only on the current thread.
The caller must hold the :term:`GIL`. The caller must have an :term:`attached thread state`.
As :c:func:`PyEval_SetTrace`, this function ignores any exceptions raised while As :c:func:`PyEval_SetTrace`, this function ignores any exceptions raised while
setting the trace functions in all threads. setting the trace functions in all threads.
@ -2153,10 +1998,10 @@ Reference tracing
Note that tracer functions **must not** create Python objects inside or Note that tracer functions **must not** create Python objects inside or
otherwise the call will be re-entrant. The tracer also **must not** clear otherwise the call will be re-entrant. The tracer also **must not** clear
any existing exception or set an exception. The GIL will be held every time any existing exception or set an exception. A :term:`thread state` will be active
the tracer function is called. every time the tracer function is called.
The GIL must be held when calling this function. There must be an :term:`attached thread state` when calling this function.
.. versionadded:: 3.13 .. versionadded:: 3.13
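A sketch of a tracer registered with :c:func:`PyRefTracer_SetTracer` that respects these constraints by only updating plain C counters (the counter names are illustrative)::

   static Py_ssize_t created = 0, destroyed = 0;

   static int
   ref_tracer(PyObject *obj, PyRefTracerEvent event, void *data)
   {
       /* No Python objects are created and the exception state is untouched. */
       if (event == PyRefTracer_CREATE) {
           created++;
       }
       else if (event == PyRefTracer_DESTROY) {
           destroyed++;
       }
       return 0;
   }

   /* ... with an attached thread state ... */
   PyRefTracer_SetTracer(ref_tracer, NULL);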
@ -2167,7 +2012,7 @@ Reference tracing
If no tracer was registered this function will return NULL and will set the If no tracer was registered this function will return NULL and will set the
**data** pointer to NULL. **data** pointer to NULL.
The GIL must be held when calling this function. There must be an :term:`attached thread state` when calling this function.
.. versionadded:: 3.13 .. versionadded:: 3.13
@ -2224,8 +2069,8 @@ CPython C level APIs are similar to those offered by pthreads and Windows:
use a thread key and functions to associate a :c:expr:`void*` value per use a thread key and functions to associate a :c:expr:`void*` value per
thread. thread.
The GIL does *not* need to be held when calling these functions; they supply A :term:`thread state` does *not* need to be :term:`attached <attached thread state>`
their own locking. when calling these functions; they supply their own locking.
Note that :file:`Python.h` does not include the declaration of the TLS APIs, Note that :file:`Python.h` does not include the declaration of the TLS APIs,
you need to include :file:`pythread.h` to use thread-local storage. you need to include :file:`pythread.h` to use thread-local storage.
@ -2394,7 +2239,7 @@ The C-API provides a basic mutual exclusion lock.
Lock mutex *m*. If another thread has already locked it, the calling Lock mutex *m*. If another thread has already locked it, the calling
thread will block until the mutex is unlocked. While blocked, the thread thread will block until the mutex is unlocked. While blocked, the thread
will temporarily release the :term:`GIL` if it is held. will temporarily detach the :term:`thread state <attached thread state>` if one exists.
.. versionadded:: 3.13 .. versionadded:: 3.13
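A sketch of guarding extension-global data with a :c:type:`PyMutex` (the counter is illustrative); the mutex must be zero-initialized::

   static PyMutex mutex = {0};
   static int counter = 0;

   static void
   increment_counter(void)
   {
       /* May temporarily detach this thread's thread state while blocked. */
       PyMutex_Lock(&mutex);
       counter++;
       PyMutex_Unlock(&mutex);
   }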
File diff suppressed because it is too large

View file
@ -30,6 +30,16 @@ familiar with writing an extension before attempting to embed Python in a real
application. application.
Language version compatibility
==============================
Python's C API is compatible with C11 and C++11 versions of C and C++.
This is a lower limit: the C API does not require features from later
C/C++ versions.
You do *not* need to enable your compiler's "C11 mode".
Coding standards Coding standards
================ ================
@ -138,7 +148,7 @@ complete listing.
.. c:macro:: Py_ALWAYS_INLINE .. c:macro:: Py_ALWAYS_INLINE
Ask the compiler to always inline a static inline function. The compiler can Ask the compiler to always inline a static inline function. The compiler can
ignore it and decides to not inline the function. ignore it and decide to not inline the function.
It can be used to inline performance critical static inline functions when It can be used to inline performance critical static inline functions when
building Python in debug mode with function inlining disabled. For example, building Python in debug mode with function inlining disabled. For example,
@ -769,20 +779,11 @@ found along :envvar:`PATH`.) The user can override this behavior by setting the
environment variable :envvar:`PYTHONHOME`, or insert additional directories in environment variable :envvar:`PYTHONHOME`, or insert additional directories in
front of the standard path by setting :envvar:`PYTHONPATH`. front of the standard path by setting :envvar:`PYTHONPATH`.
.. index::
single: Py_GetPath (C function)
single: Py_GetPrefix (C function)
single: Py_GetExecPrefix (C function)
single: Py_GetProgramFullPath (C function)
The embedding application can steer the search by setting The embedding application can steer the search by setting
:c:member:`PyConfig.program_name` *before* calling :c:member:`PyConfig.program_name` *before* calling
:c:func:`Py_InitializeFromConfig`. Note that :c:func:`Py_InitializeFromConfig`. Note that
:envvar:`PYTHONHOME` still overrides this and :envvar:`PYTHONPATH` is still :envvar:`PYTHONHOME` still overrides this and :envvar:`PYTHONPATH` is still
inserted in front of the standard path. An application that requires total inserted in front of the standard path.
control has to provide its own implementation of :c:func:`Py_GetPath`,
:c:func:`Py_GetPrefix`, :c:func:`Py_GetExecPrefix`, and
:c:func:`Py_GetProgramFullPath` (all defined in :file:`Modules/getpath.c`).
.. index:: single: Py_IsInitialized (C function) .. index:: single: Py_IsInitialized (C function)
@ -816,14 +817,17 @@ frequently used builds will be described in the remainder of this section.
Compiling the interpreter with the :c:macro:`!Py_DEBUG` macro defined produces Compiling the interpreter with the :c:macro:`!Py_DEBUG` macro defined produces
what is generally meant by :ref:`a debug build of Python <debug-build>`. what is generally meant by :ref:`a debug build of Python <debug-build>`.
:c:macro:`!Py_DEBUG` is enabled in the Unix build by adding
:option:`--with-pydebug` to the :file:`./configure` command. On Unix, :c:macro:`!Py_DEBUG` can be enabled by adding :option:`--with-pydebug`
It is also implied by the presence of the to the :file:`./configure` command. This will also disable compiler optimization.
not-Python-specific :c:macro:`!_DEBUG` macro. When :c:macro:`!Py_DEBUG` is enabled
in the Unix build, compiler optimization is disabled. On Windows, selecting a debug build (e.g., by passing the :option:`-d` option to
:file:`PCbuild/build.bat`) automatically enables :c:macro:`!Py_DEBUG`.
Additionally, the presence of the not-Python-specific :c:macro:`!_DEBUG` macro,
when defined by the compiler, will also implicitly enable :c:macro:`!Py_DEBUG`.
In addition to the reference count debugging described below, extra checks are In addition to the reference count debugging described below, extra checks are
performed, see :ref:`Python Debug Build <debug-build>`. performed. See :ref:`Python Debug Build <debug-build>` for more details.
Defining :c:macro:`Py_TRACE_REFS` enables reference tracing Defining :c:macro:`Py_TRACE_REFS` enables reference tracing
(see the :option:`configure --with-trace-refs option <--with-trace-refs>`). (see the :option:`configure --with-trace-refs option <--with-trace-refs>`).
View file
@ -824,6 +824,6 @@ The :c:type:`PyLongWriter` API can be used to import an integer.
Discard a :c:type:`PyLongWriter` created by :c:func:`PyLongWriter_Create`. Discard a :c:type:`PyLongWriter` created by :c:func:`PyLongWriter_Create`.
*writer* must not be ``NULL``. If *writer* is ``NULL``, no operation is performed.
The writer instance and the *digits* array are invalid after the call. The writer instance and the *digits* array are invalid after the call.
View file
@ -110,12 +110,12 @@ The three allocation domains are:
* Raw domain: intended for allocating memory for general-purpose memory * Raw domain: intended for allocating memory for general-purpose memory
buffers where the allocation *must* go to the system allocator or where the buffers where the allocation *must* go to the system allocator or where the
allocator can operate without the :term:`GIL`. The memory is requested directly allocator can operate without an :term:`attached thread state`. The memory
from the system. See :ref:`Raw Memory Interface <raw-memoryinterface>`. is requested directly from the system. See :ref:`Raw Memory Interface <raw-memoryinterface>`.
* "Mem" domain: intended for allocating memory for Python buffers and * "Mem" domain: intended for allocating memory for Python buffers and
general-purpose memory buffers where the allocation must be performed with general-purpose memory buffers where the allocation must be performed with
the :term:`GIL` held. The memory is taken from the Python private heap. an :term:`attached thread state`. The memory is taken from the Python private heap.
See :ref:`Memory Interface <memoryinterface>`. See :ref:`Memory Interface <memoryinterface>`.
* Object domain: intended for allocating memory for Python objects. The * Object domain: intended for allocating memory for Python objects. The
@ -139,8 +139,8 @@ Raw Memory Interface
==================== ====================
The following function sets are wrappers to the system allocator. These The following function sets are wrappers to the system allocator. These
functions are thread-safe, the :term:`GIL <global interpreter lock>` does not functions are thread-safe, so a :term:`thread state` does not
need to be held. need to be :term:`attached <attached thread state>`.
The :ref:`default raw memory allocator <default-memory-allocators>` uses The :ref:`default raw memory allocator <default-memory-allocators>` uses
the following functions: :c:func:`malloc`, :c:func:`calloc`, :c:func:`realloc` the following functions: :c:func:`malloc`, :c:func:`calloc`, :c:func:`realloc`
@ -213,8 +213,7 @@ The :ref:`default memory allocator <default-memory-allocators>` uses the
.. warning:: .. warning::
The :term:`GIL <global interpreter lock>` must be held when using these There must be an :term:`attached thread state` when using these functions.
functions.
.. versionchanged:: 3.6 .. versionchanged:: 3.6
@ -327,8 +326,7 @@ The :ref:`default object allocator <default-memory-allocators>` uses the
.. warning:: .. warning::
The :term:`GIL <global interpreter lock>` must be held when using these There must be an :term:`attached thread state` when using these functions.
functions.
.. c:function:: void* PyObject_Malloc(size_t n) .. c:function:: void* PyObject_Malloc(size_t n)
@ -485,12 +483,12 @@ Customize Memory Allocators
zero bytes. zero bytes.
For the :c:macro:`PYMEM_DOMAIN_RAW` domain, the allocator must be For the :c:macro:`PYMEM_DOMAIN_RAW` domain, the allocator must be
thread-safe: the :term:`GIL <global interpreter lock>` is not held when the thread-safe: a :term:`thread state` is not :term:`attached <attached thread state>`
allocator is called. when the allocator is called.
For the remaining domains, the allocator must also be thread-safe: For the remaining domains, the allocator must also be thread-safe:
the allocator may be called in different interpreters that do not the allocator may be called in different interpreters that do not
share a ``GIL``. share a :term:`GIL`.
If the new allocator is not a hook (does not call the previous allocator), If the new allocator is not a hook (does not call the previous allocator),
the :c:func:`PyMem_SetupDebugHooks` function must be called to reinstall the the :c:func:`PyMem_SetupDebugHooks` function must be called to reinstall the
@ -507,8 +505,8 @@ Customize Memory Allocators
:c:func:`Py_InitializeFromConfig` to install a custom memory :c:func:`Py_InitializeFromConfig` to install a custom memory
allocator. There are no restrictions over the installed allocator allocator. There are no restrictions over the installed allocator
other than the ones imposed by the domain (for instance, the Raw other than the ones imposed by the domain (for instance, the Raw
Domain allows the allocator to be called without the GIL held). See Domain allows the allocator to be called without an :term:`attached thread state`).
:ref:`the section on allocator domains <allocator-domains>` for more See :ref:`the section on allocator domains <allocator-domains>` for more
information. information.
* If called after Python has finished initializing (after * If called after Python has finished initializing (after
@ -555,7 +553,7 @@ Runtime checks:
called on a memory block allocated by :c:func:`PyMem_Malloc`. called on a memory block allocated by :c:func:`PyMem_Malloc`.
- Detect write before the start of the buffer (buffer underflow). - Detect write before the start of the buffer (buffer underflow).
- Detect write after the end of the buffer (buffer overflow). - Detect write after the end of the buffer (buffer overflow).
- Check that the :term:`GIL <global interpreter lock>` is held when - Check that there is an :term:`attached thread state` when
allocator functions of :c:macro:`PYMEM_DOMAIN_OBJ` (ex: allocator functions of :c:macro:`PYMEM_DOMAIN_OBJ` (ex:
:c:func:`PyObject_Malloc`) and :c:macro:`PYMEM_DOMAIN_MEM` (ex: :c:func:`PyObject_Malloc`) and :c:macro:`PYMEM_DOMAIN_MEM` (ex:
:c:func:`PyMem_Malloc`) domains are called. :c:func:`PyMem_Malloc`) domains are called.
@ -620,8 +618,8 @@ PYMEM_CLEANBYTE (meaning uninitialized memory is getting used).
The :c:func:`PyMem_SetupDebugHooks` function now also works on Python The :c:func:`PyMem_SetupDebugHooks` function now also works on Python
compiled in release mode. On error, the debug hooks now use compiled in release mode. On error, the debug hooks now use
:mod:`tracemalloc` to get the traceback where a memory block was allocated. :mod:`tracemalloc` to get the traceback where a memory block was allocated.
The debug hooks now also check if the GIL is held when functions of The debug hooks now also check if there is an :term:`attached thread state` when
:c:macro:`PYMEM_DOMAIN_OBJ` and :c:macro:`PYMEM_DOMAIN_MEM` domains are functions of :c:macro:`PYMEM_DOMAIN_OBJ` and :c:macro:`PYMEM_DOMAIN_MEM` domains are
called. called.
.. versionchanged:: 3.8 .. versionchanged:: 3.8
View file
@ -415,7 +415,7 @@ The available slot types are:
in one module definition. in one module definition.
If ``Py_mod_multiple_interpreters`` is not specified, the import If ``Py_mod_multiple_interpreters`` is not specified, the import
machinery defaults to ``Py_MOD_MULTIPLE_INTERPRETERS_NOT_SUPPORTED``. machinery defaults to ``Py_MOD_MULTIPLE_INTERPRETERS_SUPPORTED``.
.. versionadded:: 3.12 .. versionadded:: 3.12
@ -523,9 +523,6 @@ state:
On success, return ``0``. On error, raise an exception and return ``-1``. On success, return ``0``. On error, raise an exception and return ``-1``.
Return ``-1`` if *value* is ``NULL``. It must be called with an exception
raised in this case.
Example usage:: Example usage::
static int static int
@ -540,6 +537,10 @@ state:
return res; return res;
} }
As a convenience, the function accepts ``NULL`` *value* with an exception
set. In this case, return ``-1`` and leave the raised exception
unchanged.
The example can also be written without checking explicitly if *obj* is The example can also be written without checking explicitly if *obj* is
``NULL``:: ``NULL``::
@ -708,7 +709,7 @@ since multiple such modules can be created from a single definition.
mechanisms (either by calling it directly, or by referring to its mechanisms (either by calling it directly, or by referring to its
implementation for details of the required state updates). implementation for details of the required state updates).
The caller must hold the GIL. The caller must have an :term:`attached thread state`.
Return ``-1`` with an exception set on error, ``0`` on success. Return ``-1`` with an exception set on error, ``0`` on success.
@ -719,6 +720,6 @@ since multiple such modules can be created from a single definition.
Removes the module object created from *def* from the interpreter state. Removes the module object created from *def* from the interpreter state.
Return ``-1`` with an exception set on error, ``0`` on success. Return ``-1`` with an exception set on error, ``0`` on success.
The caller must hold the GIL. The caller must have an :term:`attached thread state`.
.. versionadded:: 3.3 .. versionadded:: 3.3
View file
@ -196,3 +196,15 @@ would typically correspond to a python function.
.. c:function:: int PyMonitoring_ExitScope(void) .. c:function:: int PyMonitoring_ExitScope(void)
Exit the last scope that was entered with :c:func:`!PyMonitoring_EnterScope`. Exit the last scope that was entered with :c:func:`!PyMonitoring_EnterScope`.
.. c:function:: int PY_MONITORING_IS_INSTRUMENTED_EVENT(uint8_t ev)
Return true if the event corresponding to the event ID *ev* is
a :ref:`local event <monitoring-event-local>`.
.. versionadded:: 3.13
.. deprecated:: 3.14
This function is :term:`soft deprecated`.
View file
@ -613,3 +613,145 @@ Object Protocol
.. versionadded:: 3.14 .. versionadded:: 3.14
.. c:function:: int PyUnstable_Object_IsUniqueReferencedTemporary(PyObject *obj)
Check if *obj* is a unique temporary object.
Returns ``1`` if *obj* is known to be a unique temporary object,
and ``0`` otherwise. This function cannot fail, but the check is
conservative, and may return ``0`` in some cases even if *obj* is a unique
temporary object.
If an object is a unique temporary, it is guaranteed that the current code
has the only reference to the object. For arguments to C functions, this
should be used instead of checking if the reference count is ``1``. Starting
with Python 3.14, the interpreter internally avoids some reference count
modifications when loading objects onto the operands stack by
:term:`borrowing <borrowed reference>` references when possible, which means
that a reference count of ``1`` by itself does not guarantee that a function
argument is uniquely referenced.
In the example below, ``my_func`` is called with a unique temporary object
as its argument::
my_func([1, 2, 3])
In the example below, ``my_func`` is **not** called with a unique temporary
object as its argument, even if its refcount is ``1``::
my_list = [1, 2, 3]
my_func(my_list)
See also the function :c:func:`Py_REFCNT`.
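As a sketch, a C function might use this check to decide whether its argument's storage can safely be reused in place (``my_func`` is illustrative)::

   static PyObject *
   my_func(PyObject *module, PyObject *arg)
   {
       if (PyList_CheckExact(arg)
           && PyUnstable_Object_IsUniqueReferencedTemporary(arg)) {
           /* No other code can observe *arg*: it may be mutated or
              recycled in place. */
       }
       /* ... regular processing ... */
       Py_RETURN_NONE;
   }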
.. versionadded:: 3.14
.. c:function:: int PyUnstable_IsImmortal(PyObject *obj)
This function returns non-zero if *obj* is :term:`immortal`, and zero
otherwise. This function cannot fail.
.. note::
Objects that are immortal in one CPython version are not guaranteed to
be immortal in another.
.. versionadded:: 3.14
.. c:function:: int PyUnstable_TryIncRef(PyObject *obj)
Increments the reference count of *obj* if it is not zero. Returns ``1``
if the object's reference count was successfully incremented. Otherwise,
this function returns ``0``.
:c:func:`PyUnstable_EnableTryIncRef` must have been called
earlier on *obj* or this function may spuriously return ``0`` in the
:term:`free threading` build.
This function is logically equivalent to the following C code, except that
it behaves atomically in the :term:`free threading` build::
if (Py_REFCNT(op) > 0) {
Py_INCREF(op);
return 1;
}
return 0;
This is intended as a building block for managing weak references
without the overhead of a Python :ref:`weak reference object <weakrefobjects>`.
Typically, correct use of this function requires support from *obj*'s
deallocator (:c:member:`~PyTypeObject.tp_dealloc`).
For example, the following sketch could be adapted to implement a
"weakmap" that works like a :py:class:`~weakref.WeakValueDictionary`
for a specific type:
.. code-block:: c
PyMutex mutex;
PyObject *
add_entry(weakmap_key_type *key, PyObject *value)
{
PyUnstable_EnableTryIncRef(value);
weakmap_type weakmap = ...;
PyMutex_Lock(&mutex);
weakmap_add_entry(weakmap, key, value);
PyMutex_Unlock(&mutex);
Py_RETURN_NONE;
}
PyObject *
get_value(weakmap_key_type *key)
{
weakmap_type weakmap = ...;
PyMutex_Lock(&mutex);
PyObject *result = weakmap_find(weakmap, key);
if (PyUnstable_TryIncRef(result)) {
// `result` is safe to use
PyMutex_Unlock(&mutex);
return result;
}
// if we get here, `result` is starting to be garbage-collected,
// but has not been removed from the weakmap yet
PyMutex_Unlock(&mutex);
return NULL;
}
// tp_dealloc function for weakmap values
void
value_dealloc(PyObject *value)
{
weakmap_type weakmap = ...;
PyMutex_Lock(&mutex);
weakmap_remove_value(weakmap, value);
...
PyMutex_Unlock(&mutex);
}
.. versionadded:: 3.14
.. c:function:: void PyUnstable_EnableTryIncRef(PyObject *obj)
Enables subsequent uses of :c:func:`PyUnstable_TryIncRef` on *obj*. The
caller must hold a :term:`strong reference` to *obj* when calling this.
.. versionadded:: 3.14
.. c:function:: int PyUnstable_Object_IsUniquelyReferenced(PyObject *op)
Determine if *op* only has one reference.
On GIL-enabled builds, this function is equivalent to
:c:expr:`Py_REFCNT(op) == 1`.
On a :term:`free threaded <free threading>` build, this checks if *op*'s
:term:`reference count` is equal to one and additionally checks if *op*
is only used by this thread. :c:expr:`Py_REFCNT(op) == 1` is **not**
thread-safe on free threaded builds; prefer this function.
The caller must hold an :term:`attached thread state`, despite the fact
that this function doesn't call into the Python interpreter. This function
cannot fail.
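A sketch of using this check instead of a raw reference count test (``append_in_place_if_unique`` is illustrative, not an existing API)::

   static PyObject *
   append_in_place_if_unique(PyObject *list, PyObject *item)
   {
       if (PyList_CheckExact(list)
           && PyUnstable_Object_IsUniquelyReferenced(list)) {
           /* Only this code and this thread can see *list*, so mutating
              it in place is safe. */
           if (PyList_Append(list, item) < 0) {
               return NULL;
           }
           return Py_NewRef(list);
       }
       /* Otherwise, leave the original untouched and build a copy. */
       PyObject *copy = PySequence_List(list);
       if (copy == NULL || PyList_Append(copy, item) < 0) {
           Py_XDECREF(copy);
           return NULL;
       }
       return copy;
   }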
.. versionadded:: 3.14
View file
@ -16,7 +16,7 @@ kernel/git/torvalds/linux.git/tree/tools/perf/Documentation/jit-interface.txt>`_
In Python, these helper APIs can be used by libraries and features that rely In Python, these helper APIs can be used by libraries and features that rely
on generating machine code on the fly. on generating machine code on the fly.
Note that holding the Global Interpreter Lock (GIL) is not required for these APIs. Note that holding an :term:`attached thread state` is not required for these APIs.
.. c:function:: int PyUnstable_PerfMapState_Init(void) .. c:function:: int PyUnstable_PerfMapState_Init(void)
View file
@ -23,6 +23,15 @@ of Python objects.
Use the :c:func:`Py_SET_REFCNT()` function to set an object reference count. Use the :c:func:`Py_SET_REFCNT()` function to set an object reference count.
.. note::
On :term:`free threaded <free threading>` builds of Python, a reference count of ``1``
isn't sufficient to determine whether it's safe to treat *o* as not being
accessed by other threads. Use :c:func:`PyUnstable_Object_IsUniquelyReferenced`
for that instead.
See also the function :c:func:`PyUnstable_Object_IsUniqueReferencedTemporary()`.
.. versionchanged:: 3.10 .. versionchanged:: 3.10
:c:func:`Py_REFCNT()` is changed to the inline static function. :c:func:`Py_REFCNT()` is changed to the inline static function.
View file
@ -55,7 +55,7 @@ Reflection
.. c:function:: PyFrameObject* PyEval_GetFrame(void) .. c:function:: PyFrameObject* PyEval_GetFrame(void)
Return the current thread state's frame, which is ``NULL`` if no frame is Return the :term:`attached thread state`'s frame, which is ``NULL`` if no frame is
currently executing. currently executing.
See also :c:func:`PyThreadState_GetFrame`. See also :c:func:`PyThreadState_GetFrame`.
View file
@ -118,6 +118,12 @@ Ellipsis Object
^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^
.. c:var:: PyTypeObject PyEllipsis_Type
The type of the Python :const:`Ellipsis` object. Same as :class:`types.EllipsisType`
in the Python layer.
.. c:var:: PyObject *Py_Ellipsis .. c:var:: PyObject *Py_Ellipsis
The Python ``Ellipsis`` object. This object has no methods. Like The Python ``Ellipsis`` object. This object has no methods. Like

View file
See documentation of :c:type:`PyVarObject` above. See documentation of :c:type:`PyVarObject` above.
.. c:var:: PyTypeObject PyBaseObject_Type
The base class of all other objects, the same as :class:`object` in Python.
.. c:function:: int Py_Is(PyObject *x, PyObject *y) .. c:function:: int Py_Is(PyObject *x, PyObject *y)
Test if the *x* object is the *y* object, the same as ``x is y`` in Python. Test if the *x* object is the *y* object, the same as ``x is y`` in Python.
View file
@ -232,7 +232,7 @@ Operating System Utilities
The file descriptor is created non-inheritable (:pep:`446`). The file descriptor is created non-inheritable (:pep:`446`).
The caller must hold the GIL. The caller must have an :term:`attached thread state`.
.. versionadded:: 3.14 .. versionadded:: 3.14
@ -378,8 +378,8 @@ accessible to C code. They all work with the current interpreter thread's
silently abort the operation by raising an error subclassed from silently abort the operation by raising an error subclassed from
:class:`Exception` (other errors will not be silenced). :class:`Exception` (other errors will not be silenced).
The hook function is always called with the GIL held by the Python The hook function is always called with an :term:`attached thread state` by
interpreter that raised the event. the Python interpreter that raised the event.
See :pep:`578` for a detailed description of auditing. Functions in the See :pep:`578` for a detailed description of auditing. Functions in the
runtime and standard library that raise events are listed in the runtime and standard library that raise events are listed in the

View file
system time.) system time.)
As any other C API (unless otherwise specified), the functions must be called As any other C API (unless otherwise specified), the functions must be called
with the :term:`GIL` held. with an :term:`attached thread state`.
.. c:function:: int PyTime_Monotonic(PyTime_t *result) .. c:function:: int PyTime_Monotonic(PyTime_t *result)
@ -78,29 +78,29 @@ Raw Clock Functions
------------------- -------------------
Similar to clock functions, but don't set an exception on error and don't Similar to clock functions, but don't set an exception on error and don't
require the caller to hold the GIL. require the caller to have an :term:`attached thread state`.
On success, the functions return ``0``. On success, the functions return ``0``.
On failure, they set ``*result`` to ``0`` and return ``-1``, *without* setting On failure, they set ``*result`` to ``0`` and return ``-1``, *without* setting
an exception. To get the cause of the error, acquire the GIL and call the an exception. To get the cause of the error, :term:`attach <attached thread state>` a :term:`thread state`,
regular (non-``Raw``) function. Note that the regular function may succeed after and call the regular (non-``Raw``) function. Note that the regular function may succeed after
the ``Raw`` one failed. the ``Raw`` one failed.
.. c:function:: int PyTime_MonotonicRaw(PyTime_t *result) .. c:function:: int PyTime_MonotonicRaw(PyTime_t *result)
Similar to :c:func:`PyTime_Monotonic`, Similar to :c:func:`PyTime_Monotonic`,
but don't set an exception on error and don't require holding the GIL. but don't set an exception on error and don't require an :term:`attached thread state`.
.. c:function:: int PyTime_PerfCounterRaw(PyTime_t *result) .. c:function:: int PyTime_PerfCounterRaw(PyTime_t *result)
Similar to :c:func:`PyTime_PerfCounter`, Similar to :c:func:`PyTime_PerfCounter`,
but don't set an exception on error and don't require holding the GIL. but don't set an exception on error and don't require an :term:`attached thread state`.
.. c:function:: int PyTime_TimeRaw(PyTime_t *result) .. c:function:: int PyTime_TimeRaw(PyTime_t *result)
Similar to :c:func:`PyTime_Time`, Similar to :c:func:`PyTime_Time`,
but don't set an exception on error and don't require holding the GIL. but don't set an exception on error and don't require an :term:`attached thread state`.
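For example, a sketch of reading the monotonic clock from code that may not have an :term:`attached thread state`::

   PyTime_t t;
   if (PyTime_MonotonicRaw(&t) < 0) {
       /* t is 0 and no exception is set; attach a thread state and call
          PyTime_Monotonic() to find out what went wrong. */
   }
   else {
       double seconds = PyTime_AsSecondsDouble(t);
       /* ... use seconds ... */
   }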
Conversion functions Conversion functions
View file
@ -82,6 +82,9 @@ Type Objects
error (e.g. no more watcher IDs available), return ``-1`` and set an error (e.g. no more watcher IDs available), return ``-1`` and set an
exception. exception.
In free-threaded builds, :c:func:`PyType_AddWatcher` is not thread-safe,
so it must be called at startup (before spawning the first thread).
.. versionadded:: 3.12 .. versionadded:: 3.12
@ -311,10 +314,6 @@ The following functions and structs are used to create
Metaclasses that override :c:member:`~PyTypeObject.tp_new` are not Metaclasses that override :c:member:`~PyTypeObject.tp_new` are not
supported, except if ``tp_new`` is ``NULL``. supported, except if ``tp_new`` is ``NULL``.
(For backwards compatibility, other ``PyType_From*`` functions allow
such metaclasses. They ignore ``tp_new``, which may result in incomplete
initialization. This is deprecated and in Python 3.14+ such metaclasses will
not be supported.)
The *bases* argument can be used to specify base classes; it can either The *bases* argument can be used to specify base classes; it can either
be only one class or a tuple of classes. be only one class or a tuple of classes.
@ -456,6 +455,9 @@ The following functions and structs are used to create
class need *in addition* to the superclass. class need *in addition* to the superclass.
Use :c:func:`PyObject_GetTypeData` to get a pointer to subclass-specific Use :c:func:`PyObject_GetTypeData` to get a pointer to subclass-specific
memory reserved this way. memory reserved this way.
For negative :c:member:`!basicsize`, Python will insert padding when
needed to meet :c:member:`~PyTypeObject.tp_basicsize`'s alignment
requirements.
.. versionchanged:: 3.12 .. versionchanged:: 3.12
View file
@ -2,8 +2,8 @@
.. _type-structs: .. _type-structs:
Type Objects Type Object Structures
============ ======================
Perhaps one of the most important structures of the Python object system is the Perhaps one of the most important structures of the Python object system is the
structure that defines a new type: the :c:type:`PyTypeObject` structure. Type structure that defines a new type: the :c:type:`PyTypeObject` structure. Type
@ -473,7 +473,7 @@ PyTypeObject Definition
----------------------- -----------------------
The structure definition for :c:type:`PyTypeObject` can be found in The structure definition for :c:type:`PyTypeObject` can be found in
:file:`Include/object.h`. For convenience of reference, this repeats the :file:`Include/cpython/object.h`. For convenience of reference, this repeats the
definition found there: definition found there:
.. XXX Drop this? .. XXX Drop this?
@ -537,6 +537,9 @@ PyVarObject Slots
initialized to zero. For :ref:`dynamically allocated type objects initialized to zero. For :ref:`dynamically allocated type objects
<heap-types>`, this field has a special internal meaning. <heap-types>`, this field has a special internal meaning.
This field should be accessed using the :c:func:`Py_SIZE()` and
:c:func:`Py_SET_SIZE()` macros.
**Inheritance:** **Inheritance:**
This field is not inherited by subtypes. This field is not inherited by subtypes.
@ -587,47 +590,86 @@ and :c:data:`PyType_Type` effectively act as defaults.)
.. c:member:: Py_ssize_t PyTypeObject.tp_basicsize .. c:member:: Py_ssize_t PyTypeObject.tp_basicsize
Py_ssize_t PyTypeObject.tp_itemsize Py_ssize_t PyTypeObject.tp_itemsize
These fields allow calculating the size in bytes of instances of the type. These fields allow calculating the size in bytes of instances of the type.
There are two kinds of types: types with fixed-length instances have a zero There are two kinds of types: types with fixed-length instances have a zero
:c:member:`~PyTypeObject.tp_itemsize` field, types with variable-length instances have a non-zero :c:member:`!tp_itemsize` field, types with variable-length instances have a non-zero
:c:member:`~PyTypeObject.tp_itemsize` field. For a type with fixed-length instances, all :c:member:`!tp_itemsize` field. For a type with fixed-length instances, all
instances have the same size, given in :c:member:`~PyTypeObject.tp_basicsize`. instances have the same size, given in :c:member:`!tp_basicsize`.
(Exceptions to this rule can be made using
:c:func:`PyUnstable_Object_GC_NewWithExtraData`.)
For a type with variable-length instances, the instances must have an For a type with variable-length instances, the instances must have an
:c:member:`~PyVarObject.ob_size` field, and the instance size is :c:member:`~PyTypeObject.tp_basicsize` plus N :c:member:`~PyVarObject.ob_size` field, and the instance size is
times :c:member:`~PyTypeObject.tp_itemsize`, where N is the "length" of the object. The value of :c:member:`!tp_basicsize` plus N times :c:member:`!tp_itemsize`,
N is typically stored in the instance's :c:member:`~PyVarObject.ob_size` field. There are where N is the "length" of the object.
exceptions: for example, ints use a negative :c:member:`~PyVarObject.ob_size` to indicate a
negative number, and N is ``abs(ob_size)`` there. Also, the presence of an
:c:member:`~PyVarObject.ob_size` field in the instance layout doesn't mean that the instance
structure is variable-length (for example, the structure for the list type has
fixed-length instances, yet those instances have a meaningful :c:member:`~PyVarObject.ob_size`
field).
The basic size includes the fields in the instance declared by the macro Functions like :c:func:`PyObject_NewVar` will take the value of N as an
:c:macro:`PyObject_HEAD` or :c:macro:`PyObject_VAR_HEAD` (whichever is used to argument, and store it in the instance's :c:member:`~PyVarObject.ob_size` field.
declare the instance struct) and this in turn includes the :c:member:`~PyObject._ob_prev` and Note that the :c:member:`~PyVarObject.ob_size` field may later be used for
:c:member:`~PyObject._ob_next` fields if they are present. This means that the only correct other purposes. For example, :py:type:`int` instances use the bits of
way to get an initializer for the :c:member:`~PyTypeObject.tp_basicsize` is to use the :c:member:`~PyVarObject.ob_size` in an implementation-defined
way; the underlying storage and its size should be accessed using
:c:func:`PyLong_Export`.
.. note::
The :c:member:`~PyVarObject.ob_size` field should be accessed using
the :c:func:`Py_SIZE()` and :c:func:`Py_SET_SIZE()` macros.
Also, the presence of an :c:member:`~PyVarObject.ob_size` field in the
instance layout doesn't mean that the instance structure is variable-length.
For example, the :py:type:`list` type has fixed-length instances, yet those
instances have a :c:member:`~PyVarObject.ob_size` field.
(As with :py:type:`int`, avoid reading lists' :c:member:`!ob_size` directly.
Call :c:func:`PyList_Size` instead.)
The :c:member:`!tp_basicsize` includes size needed for data of the type's
:c:member:`~PyTypeObject.tp_base`, plus any extra data needed
by each instance.
The correct way to set :c:member:`!tp_basicsize` is to use the
``sizeof`` operator on the struct used to declare the instance layout. ``sizeof`` operator on the struct used to declare the instance layout.
The basic size does not include the GC header size. This struct must include the struct used to declare the base type.
In other words, :c:member:`!tp_basicsize` must be greater than or equal
to the base's :c:member:`!tp_basicsize`.
A note about alignment: if the variable items require a particular alignment, Since every type is a subtype of :py:type:`object`, this struct must
this should be taken care of by the value of :c:member:`~PyTypeObject.tp_basicsize`. Example: include :c:type:`PyObject` or :c:type:`PyVarObject` (depending on
suppose a type implements an array of ``double``. :c:member:`~PyTypeObject.tp_itemsize` is whether :c:member:`~PyVarObject.ob_size` should be included). These are
``sizeof(double)``. It is the programmer's responsibility that usually defined by the macro :c:macro:`PyObject_HEAD` or
:c:member:`~PyTypeObject.tp_basicsize` is a multiple of ``sizeof(double)`` (assuming this is the :c:macro:`PyObject_VAR_HEAD`, respectively.
alignment requirement for ``double``).
For any type with variable-length instances, this field must not be ``NULL``. The basic size does not include the GC header size, as that header is not
part of :c:macro:`PyObject_HEAD`.
For cases where the struct used to declare the base type is unknown,
see :c:member:`PyType_Spec.basicsize` and :c:func:`PyType_FromMetaclass`.
Notes about alignment:
- :c:member:`!tp_basicsize` must be a multiple of ``_Alignof(PyObject)``.
When using ``sizeof`` on a ``struct`` that includes
:c:macro:`PyObject_HEAD`, as recommended, the compiler ensures this.
When not using a C ``struct``, or when using compiler
extensions like ``__attribute__((packed))``, it is up to you.
- If the variable items require a particular alignment,
:c:member:`!tp_basicsize` and :c:member:`!tp_itemsize` must each be a
multiple of that alignment.
For example, if a type's variable part stores a ``double``, it is
your responsibility that both fields are a multiple of
``_Alignof(double)``.
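For illustration, a sketch of a variable-length type storing ``double`` items (the type and names are hypothetical); on common platforms ``sizeof(arrayobject)`` already satisfies the alignment notes above, but verifying that remains the programmer's responsibility::

   typedef struct {
       PyObject_VAR_HEAD
       /* ob_size items of type double are allocated after the header */
   } arrayobject;

   static PyTypeObject array_type = {
       PyVarObject_HEAD_INIT(NULL, 0)
       .tp_name = "example.doublearray",
       .tp_basicsize = sizeof(arrayobject),
       .tp_itemsize = sizeof(double),
   };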
**Inheritance:** **Inheritance:**
These fields are inherited separately by subtypes. If the base type has a These fields are inherited separately by subtypes.
non-zero :c:member:`~PyTypeObject.tp_itemsize`, it is generally not safe to set (That is, if the field is set to zero, :c:func:`PyType_Ready` will copy
the value from the base type, indicating that the instances do not
need additional storage.)
If the base type has a non-zero :c:member:`~PyTypeObject.tp_itemsize`, it is generally not safe to set
:c:member:`~PyTypeObject.tp_itemsize` to a different non-zero value in a subtype (though this :c:member:`~PyTypeObject.tp_itemsize` to a different non-zero value in a subtype (though this
depends on the implementation of the base type). depends on the implementation of the base type).
@ -661,10 +703,13 @@ and :c:data:`PyType_Type` effectively act as defaults.)
.. code-block:: c .. code-block:: c
static void foo_dealloc(foo_object *self) { static void
foo_dealloc(PyObject *op)
{
foo_object *self = (foo_object *) op;
PyObject_GC_UnTrack(self); PyObject_GC_UnTrack(self);
Py_CLEAR(self->ref); Py_CLEAR(self->ref);
Py_TYPE(self)->tp_free((PyObject *)self); Py_TYPE(self)->tp_free(self);
} }
Finally, if the type is heap allocated (:c:macro:`Py_TPFLAGS_HEAPTYPE`), the Finally, if the type is heap allocated (:c:macro:`Py_TPFLAGS_HEAPTYPE`), the
@ -675,10 +720,12 @@ and :c:data:`PyType_Type` effectively act as defaults.)
.. code-block:: c .. code-block:: c
static void foo_dealloc(foo_object *self) { static void
PyTypeObject *tp = Py_TYPE(self); foo_dealloc(PyObject *op)
{
PyTypeObject *tp = Py_TYPE(op);
// free references and buffers here // free references and buffers here
tp->tp_free(self); tp->tp_free(op);
Py_DECREF(tp); Py_DECREF(tp);
} }
@ -689,7 +736,7 @@ and :c:data:`PyType_Type` effectively act as defaults.)
object becomes part of a refcount cycle, that cycle might be collected by object becomes part of a refcount cycle, that cycle might be collected by
a garbage collection on any thread). This is not a problem for Python a garbage collection on any thread). This is not a problem for Python
API calls, since the thread on which :c:member:`!tp_dealloc` is called API calls, since the thread on which :c:member:`!tp_dealloc` is called
will own the Global Interpreter Lock (GIL). However, if the object being with an :term:`attached thread state`. However, if the object being
destroyed in turn destroys objects from some other C or C++ library, care destroyed in turn destroys objects from some other C or C++ library, care
should be taken to ensure that destroying those objects on the thread should be taken to ensure that destroying those objects on the thread
which called :c:member:`!tp_dealloc` will not violate any assumptions of which called :c:member:`!tp_dealloc` will not violate any assumptions of
@ -1374,8 +1421,9 @@ and :c:data:`PyType_Type` effectively act as defaults.)
:mod:`!_thread` extension module:: :mod:`!_thread` extension module::
static int static int
local_traverse(localobject *self, visitproc visit, void *arg) local_traverse(PyObject *op, visitproc visit, void *arg)
{ {
localobject *self = (localobject *) op;
Py_VISIT(self->args); Py_VISIT(self->args);
Py_VISIT(self->kw); Py_VISIT(self->kw);
Py_VISIT(self->dict); Py_VISIT(self->dict);
@ -1469,8 +1517,9 @@ and :c:data:`PyType_Type` effectively act as defaults.)
members to ``NULL``, as in the following example:: members to ``NULL``, as in the following example::
static int static int
local_clear(localobject *self) local_clear(PyObject *op)
{ {
localobject *self = (localobject *) op;
Py_CLEAR(self->key); Py_CLEAR(self->key);
Py_CLEAR(self->args); Py_CLEAR(self->args);
Py_CLEAR(self->kw); Py_CLEAR(self->kw);
@ -1830,7 +1879,7 @@ and :c:data:`PyType_Type` effectively act as defaults.)
dictionary, so it may be more efficient to call :c:func:`PyObject_GetAttr` dictionary, so it may be more efficient to call :c:func:`PyObject_GetAttr`
when accessing an attribute on the object. when accessing an attribute on the object.
It is an error to set both the :c:macro:`Py_TPFLAGS_MANAGED_WEAKREF` bit and It is an error to set both the :c:macro:`Py_TPFLAGS_MANAGED_DICT` bit and
:c:member:`~PyTypeObject.tp_dictoffset`. :c:member:`~PyTypeObject.tp_dictoffset`.
**Inheritance:** **Inheritance:**
@ -2112,15 +2161,13 @@ and :c:data:`PyType_Type` effectively act as defaults.)
static void static void
local_finalize(PyObject *self) local_finalize(PyObject *self)
{ {
PyObject *error_type, *error_value, *error_traceback;
/* Save the current exception, if any. */ /* Save the current exception, if any. */
PyErr_Fetch(&error_type, &error_value, &error_traceback); PyObject *exc = PyErr_GetRaisedException();
/* ... */ /* ... */
/* Restore the saved exception. */ /* Restore the saved exception. */
PyErr_Restore(error_type, error_value, error_traceback); PyErr_SetRaisedException(exc);
} }
**Inheritance:** **Inheritance:**
View file
@ -31,6 +31,18 @@ Unicode Type
These are the basic Unicode object types used for the Unicode implementation in These are the basic Unicode object types used for the Unicode implementation in
Python: Python:
.. c:var:: PyTypeObject PyUnicode_Type
This instance of :c:type:`PyTypeObject` represents the Python Unicode type.
It is exposed to Python code as :py:class:`str`.
.. c:var:: PyTypeObject PyUnicodeIter_Type
This instance of :c:type:`PyTypeObject` represents the Python Unicode
iterator type. It is used to iterate over Unicode string objects.
.. c:type:: Py_UCS4 .. c:type:: Py_UCS4
Py_UCS2 Py_UCS2
Py_UCS1 Py_UCS1
@ -42,19 +54,6 @@ Python:
.. versionadded:: 3.3 .. versionadded:: 3.3
.. c:type:: Py_UNICODE
This is a typedef of :c:type:`wchar_t`, which is a 16-bit type or 32-bit type
depending on the platform.
.. versionchanged:: 3.3
In previous versions, this was a 16-bit type or a 32-bit type depending on
whether you selected a "narrow" or "wide" Unicode version of Python at
build time.
.. deprecated-removed:: 3.13 3.15
.. c:type:: PyASCIIObject .. c:type:: PyASCIIObject
PyCompactUnicodeObject PyCompactUnicodeObject
PyUnicodeObject PyUnicodeObject
@ -66,12 +65,6 @@ Python:
.. versionadded:: 3.3 .. versionadded:: 3.3
.. c:var:: PyTypeObject PyUnicode_Type
This instance of :c:type:`PyTypeObject` represents the Python Unicode type. It
is exposed to Python code as ``str``.
The following APIs are C macros and static inlined functions for fast checks and The following APIs are C macros and static inlined functions for fast checks and
access to internal read-only data of Unicode objects: access to internal read-only data of Unicode objects:
@ -87,16 +80,6 @@ access to internal read-only data of Unicode objects:
subtype. This function always succeeds. subtype. This function always succeeds.
.. c:function:: int PyUnicode_READY(PyObject *unicode)
Returns ``0``. This API is kept only for backward compatibility.
.. versionadded:: 3.3
.. deprecated:: 3.10
This API does nothing since Python 3.12.
.. c:function:: Py_ssize_t PyUnicode_GET_LENGTH(PyObject *unicode) .. c:function:: Py_ssize_t PyUnicode_GET_LENGTH(PyObject *unicode)
Return the length of the Unicode string, in code points. *unicode* has to be a Return the length of the Unicode string, in code points. *unicode* has to be a
@ -149,12 +132,16 @@ access to internal read-only data of Unicode objects:
.. c:function:: void PyUnicode_WRITE(int kind, void *data, \ .. c:function:: void PyUnicode_WRITE(int kind, void *data, \
Py_ssize_t index, Py_UCS4 value) Py_ssize_t index, Py_UCS4 value)
Write into a canonical representation *data* (as obtained with Write the code point *value* to the given zero-based *index* in a string.
:c:func:`PyUnicode_DATA`). This function performs no sanity checks, and is
intended for usage in loops. The caller should cache the *kind* value and The *kind* value and *data* pointer must have been obtained from a
*data* pointer as obtained from other calls. *index* is the index in string using :c:func:`PyUnicode_KIND` and :c:func:`PyUnicode_DATA`
the string (starts at 0) and *value* is the new code point value which should respectively. You must hold a reference to that string while calling
be written to that location. :c:func:`!PyUnicode_WRITE`. All requirements of
:c:func:`PyUnicode_WriteChar` also apply.
The function performs no checks for any of its requirements,
and is intended for usage in loops.
.. versionadded:: 3.3 .. versionadded:: 3.3
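
A minimal sketch of this loop pattern, assuming a string freshly created with :c:func:`PyUnicode_New` that has not been "used" yet (the function name is illustrative)::

   static PyObject *
   make_emoji_run(void)
   {
       PyObject *s = PyUnicode_New(4, 0x10FFFF);  /* room for any code point */
       if (s == NULL) {
           return NULL;
       }
       int kind = PyUnicode_KIND(s);
       void *data = PyUnicode_DATA(s);
       for (Py_ssize_t i = 0; i < 4; i++) {
           /* No bounds or range checks are performed here; see above. */
           PyUnicode_WRITE(kind, data, i, (Py_UCS4)(0x1F600 + i));
       }
       return s;  /* only now is the string safe to hash, convert or share */
   }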
@ -196,6 +183,14 @@ access to internal read-only data of Unicode objects:
is not ready. is not ready.
.. c:function:: unsigned int PyUnicode_IS_ASCII(PyObject *unicode)
Return true if the string only contains ASCII characters.
Equivalent to :py:meth:`str.isascii`.
.. versionadded:: 3.2
Unicode Character Properties Unicode Character Properties
"""""""""""""""""""""""""""" """"""""""""""""""""""""""""
@ -256,13 +251,8 @@ the Python configuration.
.. c:function:: int Py_UNICODE_ISPRINTABLE(Py_UCS4 ch) .. c:function:: int Py_UNICODE_ISPRINTABLE(Py_UCS4 ch)
Return ``1`` or ``0`` depending on whether *ch* is a printable character. Return ``1`` or ``0`` depending on whether *ch* is a printable character,
Nonprintable characters are those characters defined in the Unicode character in the sense of :meth:`str.isprintable`.
database as "Other" or "Separator", excepting the ASCII space (0x20) which is
considered printable. (Note that printable characters in this context are
those which should not be escaped when :func:`repr` is invoked on a string.
It has no bearing on the handling of strings written to :data:`sys.stdout` or
:data:`sys.stderr`.)
These APIs can be used for fast direct character conversions: These APIs can be used for fast direct character conversions:
@ -335,11 +325,29 @@ APIs:
to be placed in the string. As an approximation, it can be rounded up to the to be placed in the string. As an approximation, it can be rounded up to the
nearest value in the sequence 127, 255, 65535, 1114111. nearest value in the sequence 127, 255, 65535, 1114111.
This is the recommended way to allocate a new Unicode object. Objects
created using this function are not resizable.
On error, set an exception and return ``NULL``. On error, set an exception and return ``NULL``.
After creation, the string can be filled by :c:func:`PyUnicode_WriteChar`,
:c:func:`PyUnicode_CopyCharacters`, :c:func:`PyUnicode_Fill`,
:c:func:`PyUnicode_WRITE` or similar.
Since strings are supposed to be immutable, take care to not “use” the
result while it is being modified. In particular, before it's filled
with its final contents, a string:
- must not be hashed,
- must not be :c:func:`converted to UTF-8 <PyUnicode_AsUTF8AndSize>`,
or another non-"canonical" representation,
- must not have its reference count changed,
- must not be shared with code that might do one of the above.
This list is not exhaustive. Avoiding these uses is your responsibility;
Python does not always check these requirements.
To avoid accidentally exposing a partially-written string object, prefer
using the :c:type:`PyUnicodeWriter` API, or one of the ``PyUnicode_From*``
functions below.
.. versionadded:: 3.3 .. versionadded:: 3.3
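
For comparison, a hedged sketch of the :c:type:`PyUnicodeWriter` approach recommended above, which never exposes a partially-written string (error handling kept minimal)::

   static PyObject *
   build_greeting(void)
   {
       PyUnicodeWriter *writer = PyUnicodeWriter_Create(0);
       if (writer == NULL) {
           return NULL;
       }
       if (PyUnicodeWriter_WriteUTF8(writer, "hello", 5) < 0) {
           PyUnicodeWriter_Discard(writer);
           return NULL;
       }
       /* The result only becomes visible once the writer is finished. */
       return PyUnicodeWriter_Finish(writer);
   }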
@ -594,6 +602,14 @@ APIs:
Objects other than Unicode or its subtypes will cause a :exc:`TypeError`. Objects other than Unicode or its subtypes will cause a :exc:`TypeError`.
.. c:function:: PyObject* PyUnicode_FromOrdinal(int ordinal)
Create a Unicode object from the given Unicode code point *ordinal*.
The ordinal must be in ``range(0x110000)``; a :exc:`ValueError` is
raised if it is not.
.. c:function:: PyObject* PyUnicode_FromEncodedObject(PyObject *obj, \ .. c:function:: PyObject* PyUnicode_FromEncodedObject(PyObject *obj, \
const char *encoding, const char *errors) const char *encoding, const char *errors)
@ -612,6 +628,43 @@ APIs:
decref'ing the returned objects. decref'ing the returned objects.
.. c:function:: void PyUnicode_Append(PyObject **p_left, PyObject *right)
Append the string *right* to the end of *p_left*.
*p_left* must point to a :term:`strong reference` to a Unicode object;
:c:func:`!PyUnicode_Append` releases ("steals") this reference.
On error, set *\*p_left* to ``NULL`` and set an exception.
On success, set *\*p_left* to a new strong reference to the result.
.. c:function:: void PyUnicode_AppendAndDel(PyObject **p_left, PyObject *right)
The function is similar to :c:func:`PyUnicode_Append`, with the only
difference being that it decrements the reference count of *right* by one.
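
A short sketch of this reference-stealing contract, assumed to run inside a function returning ``PyObject *`` (the variable names are illustrative)::

   PyObject *result = PyUnicode_FromString("spam");
   PyObject *suffix = PyUnicode_FromString(", eggs");
   if (result == NULL || suffix == NULL) {
       Py_XDECREF(result);
       Py_XDECREF(suffix);
       return NULL;
   }
   /* Steals the reference held in result; on error, sets result to NULL. */
   PyUnicode_Append(&result, suffix);
   Py_DECREF(suffix);
   if (result == NULL) {
       return NULL;
   }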
.. c:function:: PyObject* PyUnicode_BuildEncodingMap(PyObject* string)
Return a mapping suitable for decoding a custom single-byte encoding.
Given a Unicode string *string* of up to 256 characters representing an encoding
table, return either a compact internal mapping object or a dictionary
mapping character ordinals to byte values. Raise a :exc:`TypeError` and
return ``NULL`` on invalid input.
.. versionadded:: 3.2
.. c:function:: const char* PyUnicode_GetDefaultEncoding(void)
Return the name of the default string encoding, ``"utf-8"``.
See :func:`sys.getdefaultencoding`.
The returned string does not need to be freed, and is valid
until interpreter shutdown.
.. c:function:: Py_ssize_t PyUnicode_GetLength(PyObject *unicode) .. c:function:: Py_ssize_t PyUnicode_GetLength(PyObject *unicode)
Return the length of the Unicode object, in code points. Return the length of the Unicode object, in code points.
@ -632,9 +685,27 @@ APIs:
possible. Returns ``-1`` and sets an exception on error, otherwise returns possible. Returns ``-1`` and sets an exception on error, otherwise returns
the number of copied characters. the number of copied characters.
The string must not have been “used” yet.
See :c:func:`PyUnicode_New` for details.
.. versionadded:: 3.3 .. versionadded:: 3.3
.. c:function:: int PyUnicode_Resize(PyObject **unicode, Py_ssize_t length);
Resize a Unicode object *\*unicode* to the new *length* in code points.
Try to resize the string in place (which is usually faster than allocating
a new string and copying characters), or create a new string.
*\*unicode* is modified to point to the new (resized) object and ``0`` is
returned on success. Otherwise, ``-1`` is returned and an exception is set,
and *\*unicode* is left untouched.
The function doesn't check string content; the result may not be a
string in canonical representation.
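
For example, a sketch of shrinking an over-allocated scratch string that has not been "used" yet (*unicode* and *written* are illustrative variables)::

   /* unicode was created with PyUnicode_New() and only partially filled;
      shrink it to the number of characters actually written. */
   if (PyUnicode_Resize(&unicode, written) < 0) {
       Py_DECREF(unicode);  /* on error, *unicode is left untouched */
       return NULL;
   }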
.. c:function:: Py_ssize_t PyUnicode_Fill(PyObject *unicode, Py_ssize_t start, \ .. c:function:: Py_ssize_t PyUnicode_Fill(PyObject *unicode, Py_ssize_t start, \
Py_ssize_t length, Py_UCS4 fill_char) Py_ssize_t length, Py_UCS4 fill_char)
@ -644,6 +715,9 @@ APIs:
Fail if *fill_char* is bigger than the string maximum character, or if the Fail if *fill_char* is bigger than the string maximum character, or if the
string has more than 1 reference. string has more than 1 reference.
The string must not have been “used” yet.
See :c:func:`PyUnicode_New` for details.
Return the number of written character, or return ``-1`` and raise an Return the number of written character, or return ``-1`` and raise an
exception on error. exception on error.
@ -653,15 +727,16 @@ APIs:
.. c:function:: int PyUnicode_WriteChar(PyObject *unicode, Py_ssize_t index, \ .. c:function:: int PyUnicode_WriteChar(PyObject *unicode, Py_ssize_t index, \
Py_UCS4 character) Py_UCS4 character)
Write a character to a string. The string must have been created through Write a *character* to the string *unicode* at the zero-based *index*.
:c:func:`PyUnicode_New`. Since Unicode strings are supposed to be immutable, Return ``0`` on success, ``-1`` on error with an exception set.
the string must not be shared, or have been hashed yet.
This function checks that *unicode* is a Unicode object, that the index is This function checks that *unicode* is a Unicode object, that the index is
not out of bounds, and that the object can be modified safely (i.e. that it not out of bounds, and that the object's reference count is one).
its reference count is one). See :c:func:`PyUnicode_WRITE` for a version that skips these checks,
making them your responsibility.
Return ``0`` on success, ``-1`` on error with an exception set. The string must not have been “used” yet.
See :c:func:`PyUnicode_New` for details.
.. versionadded:: 3.3 .. versionadded:: 3.3
@ -968,6 +1043,17 @@ generic ones are documented for simplicity.
Generic Codecs Generic Codecs
"""""""""""""" """"""""""""""
The following macro is provided:
.. c:macro:: Py_UNICODE_REPLACEMENT_CHARACTER
The Unicode code point ``U+FFFD`` (replacement character).
This Unicode character is used as the replacement character during
decoding if the *errors* argument is set to "replace".
These are the generic codec APIs: These are the generic codec APIs:
@ -1054,6 +1140,15 @@ These are the UTF-8 codec APIs:
As :c:func:`PyUnicode_AsUTF8AndSize`, but does not store the size. As :c:func:`PyUnicode_AsUTF8AndSize`, but does not store the size.
.. warning::
This function does not have any special behavior for
`null characters <https://en.wikipedia.org/wiki/Null_character>`_ embedded within
*unicode*. As a result, any embedded null characters will remain in the returned
string, which some C functions might interpret as the end of the string, leading to
truncation. If truncation is an issue, it is recommended to use :c:func:`PyUnicode_AsUTF8AndSize`
instead.
.. versionadded:: 3.3 .. versionadded:: 3.3
.. versionchanged:: 3.7 .. versionchanged:: 3.7
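
A small sketch of the size-aware pattern the warning suggests (*obj* is an arbitrary string object)::

   Py_ssize_t size;
   const char *utf8 = PyUnicode_AsUTF8AndSize(obj, &size);
   if (utf8 == NULL) {
       return NULL;
   }
   /* Write exactly `size` bytes; strlen(utf8) would stop at the first
      embedded null character. */
   fwrite(utf8, 1, (size_t)size, stdout);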
@ -1343,6 +1438,13 @@ the user settings on the machine running the codec.
in *consumed*. in *consumed*.
.. c:function:: PyObject* PyUnicode_DecodeCodePageStateful(int code_page, const char *str, \
Py_ssize_t size, const char *errors, Py_ssize_t *consumed)
Similar to :c:func:`PyUnicode_DecodeMBCSStateful`, except uses the code page
specified by *code_page*.
.. c:function:: PyObject* PyUnicode_AsMBCSString(PyObject *unicode) .. c:function:: PyObject* PyUnicode_AsMBCSString(PyObject *unicode)
Encode a Unicode object using MBCS and return the result as Python bytes Encode a Unicode object using MBCS and return the result as Python bytes
@ -1387,6 +1489,20 @@ They all return ``NULL`` or ``-1`` if an exception occurs.
separator. At most *maxsplit* splits will be done. If negative, no limit is separator. At most *maxsplit* splits will be done. If negative, no limit is
set. Separators are not included in the resulting list. set. Separators are not included in the resulting list.
On error, return ``NULL`` with an exception set.
Equivalent to :py:meth:`str.split`.
.. c:function:: PyObject* PyUnicode_RSplit(PyObject *unicode, PyObject *sep, Py_ssize_t maxsplit)
Similar to :c:func:`PyUnicode_Split`, but splitting will be done beginning
at the end of the string.
On error, return ``NULL`` with an exception set.
Equivalent to :py:meth:`str.rsplit`.
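
For instance, a hedged sketch of :c:func:`PyUnicode_Split`, equivalent to ``line.split(",", 1)`` (the *line* variable is illustrative)::

   PyObject *sep = PyUnicode_FromString(",");
   if (sep == NULL) {
       return NULL;
   }
   PyObject *parts = PyUnicode_Split(line, sep, 1);  /* list of str, or NULL */
   Py_DECREF(sep);
   if (parts == NULL) {
       return NULL;
   }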
.. c:function:: PyObject* PyUnicode_Splitlines(PyObject *unicode, int keepends) .. c:function:: PyObject* PyUnicode_Splitlines(PyObject *unicode, int keepends)
@ -1395,6 +1511,33 @@ They all return ``NULL`` or ``-1`` if an exception occurs.
characters are not included in the resulting strings. characters are not included in the resulting strings.
.. c:function:: PyObject* PyUnicode_Partition(PyObject *unicode, PyObject *sep)
Split a Unicode string at the first occurrence of *sep*, and return
a 3-tuple containing the part before the separator, the separator itself,
and the part after the separator. If the separator is not found,
return a 3-tuple containing the string itself, followed by two empty strings.
*sep* must not be empty.
On error, return ``NULL`` with an exception set.
Equivalent to :py:meth:`str.partition`.
.. c:function:: PyObject* PyUnicode_RPartition(PyObject *unicode, PyObject *sep)
Similar to :c:func:`PyUnicode_Partition`, but split a Unicode string at the
last occurrence of *sep*. If the separator is not found, return a 3-tuple
containing two empty strings, followed by the string itself.
*sep* must not be empty.
On error, return ``NULL`` with an exception set.
Equivalent to :py:meth:`str.rpartition`.
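
A brief sketch of splitting a ``key=value`` entry with the 3-tuple result described above (*entry* is an illustrative string object)::

   PyObject *eq = PyUnicode_FromString("=");
   if (eq == NULL) {
       return NULL;
   }
   PyObject *parts = PyUnicode_Partition(entry, eq);  /* (head, sep, tail) */
   Py_DECREF(eq);
   if (parts == NULL) {
       return NULL;
   }
   PyObject *key = PyTuple_GET_ITEM(parts, 0);    /* borrowed references */
   PyObject *value = PyTuple_GET_ITEM(parts, 2);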
.. c:function:: PyObject* PyUnicode_Join(PyObject *separator, PyObject *seq) .. c:function:: PyObject* PyUnicode_Join(PyObject *separator, PyObject *seq)
Join a sequence of strings using the given *separator* and return the resulting Join a sequence of strings using the given *separator* and return the resulting
@ -1588,6 +1731,20 @@ They all return ``NULL`` or ``-1`` if an exception occurs.
Strings interned this way are made :term:`immortal`. Strings interned this way are made :term:`immortal`.
.. c:function:: unsigned int PyUnicode_CHECK_INTERNED(PyObject *str)
Return a non-zero value if *str* is interned, zero if not.
The *str* argument must be a string; this is not checked.
This function always succeeds.
.. impl-detail::
A non-zero return value may carry additional information
about *how* the string is interned.
The meaning of such non-zero values, as well as each specific string's
intern-related details, may change between CPython versions.
PyUnicodeWriter PyUnicodeWriter
^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^
@ -1708,8 +1865,8 @@ object.
*size* is the string length in bytes. If *size* is equal to ``-1``, call *size* is the string length in bytes. If *size* is equal to ``-1``, call
``strlen(str)`` to get the string length. ``strlen(str)`` to get the string length.
*errors* is an error handler name, such as ``"replace"``. If *errors* is *errors* is an :ref:`error handler <error-handlers>` name, such as
``NULL``, use the strict error handler. ``"replace"``. If *errors* is ``NULL``, use the strict error handler.
If *consumed* is not ``NULL``, set *\*consumed* to the number of decoded If *consumed* is not ``NULL``, set *\*consumed* to the number of decoded
bytes on success. bytes on success.
@ -1720,3 +1877,49 @@ object.
On error, set an exception, leave the writer unchanged, and return ``-1``. On error, set an exception, leave the writer unchanged, and return ``-1``.
See also :c:func:`PyUnicodeWriter_WriteUTF8`. See also :c:func:`PyUnicodeWriter_WriteUTF8`.
Deprecated API
^^^^^^^^^^^^^^
The following API is deprecated.
.. c:type:: Py_UNICODE
This is a typedef of :c:type:`wchar_t`, which is a 16-bit type or 32-bit type
depending on the platform.
Please use :c:type:`wchar_t` directly instead.
.. versionchanged:: 3.3
In previous versions, this was a 16-bit type or a 32-bit type depending on
whether you selected a "narrow" or "wide" Unicode version of Python at
build time.
.. deprecated-removed:: 3.13 3.15
.. c:function:: int PyUnicode_READY(PyObject *unicode)
Do nothing and return ``0``.
This API is kept only for backward compatibility, but there are no plans
to remove it.
.. versionadded:: 3.3
.. deprecated:: 3.10
This API does nothing since Python 3.12.
Previously, this needed to be called for each string created using
the old API (:c:func:`!PyUnicode_FromUnicode` or similar).
.. c:function:: unsigned int PyUnicode_IS_READY(PyObject *unicode)
Do nothing and return ``1``.
This API is kept only for backward compatibility, but there are no plans
to remove it.
.. versionadded:: 3.3
.. deprecated:: 3.14
This API does nothing since Python 3.12.
Previously, this could be called to check if
:c:func:`PyUnicode_READY` is necessary.

View file

@ -348,8 +348,20 @@ the same library that the Python runtime is using.
.. versionchanged:: 3.8 .. versionchanged:: 3.8
Added *cf_feature_version* field. Added *cf_feature_version* field.
The available compiler flags are accessible as macros:
.. c:var:: int CO_FUTURE_DIVISION .. c:namespace:: NULL
This bit can be set in *flags* to cause division operator ``/`` to be .. c:macro:: PyCF_ALLOW_TOP_LEVEL_AWAIT
interpreted as "true division" according to :pep:`238`. PyCF_ONLY_AST
PyCF_OPTIMIZED_AST
PyCF_TYPE_COMMENTS
See :ref:`compiler flags <ast-compiler-flags>` in the documentation of the
:py:mod:`!ast` Python module, which exports these constants under
the same names.
.. c:var:: int CO_FUTURE_DIVISION
This bit can be set in *flags* to cause division operator ``/`` to be
interpreted as "true division" according to :pep:`238`.
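
As an illustration, a minimal sketch that forces true-division semantics for a snippet run from C (the helper name is hypothetical)::

   static int
   run_with_true_division(const char *src)
   {
       PyCompilerFlags flags = {0};
       flags.cf_flags = CO_FUTURE_DIVISION;  /* as if the __future__ import were active */
       return PyRun_SimpleStringFlags(src, &flags);
   }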

View file

@ -6,12 +6,10 @@
# The contents of this file are pickled, so don't put values in the namespace # The contents of this file are pickled, so don't put values in the namespace
# that aren't pickleable (module imports are okay, they're removed automatically). # that aren't pickleable (module imports are okay, they're removed automatically).
import importlib
import os import os
import sys import sys
import time from importlib import import_module
from importlib.util import find_spec
import sphinx
# Make our custom extensions available to Sphinx # Make our custom extensions available to Sphinx
sys.path.append(os.path.abspath('tools/extensions')) sys.path.append(os.path.abspath('tools/extensions'))
@ -28,8 +26,14 @@
'audit_events', 'audit_events',
'availability', 'availability',
'c_annotations', 'c_annotations',
'changes',
'glossary_search', 'glossary_search',
'grammar_snippet',
'implementation_detail',
'issue_role',
'lexers', 'lexers',
'misc_news',
'pydoc_topics',
'pyspecific', 'pyspecific',
'sphinx.ext.coverage', 'sphinx.ext.coverage',
'sphinx.ext.doctest', 'sphinx.ext.doctest',
@ -37,19 +41,17 @@
] ]
# Skip if downstream redistributors haven't installed them # Skip if downstream redistributors haven't installed them
try: _OPTIONAL_EXTENSIONS = (
import notfound.extension # noqa: F401 'notfound.extension',
except ImportError: 'sphinxext.opengraph',
pass )
else: for optional_ext in _OPTIONAL_EXTENSIONS:
extensions.append('notfound.extension') try:
try: if find_spec(optional_ext) is not None:
import sphinxext.opengraph # noqa: F401 extensions.append(optional_ext)
except ImportError: except (ImportError, ValueError):
pass pass
else: del _OPTIONAL_EXTENSIONS
extensions.append('sphinxext.opengraph')
doctest_global_setup = ''' doctest_global_setup = '''
try: try:
@ -72,7 +74,7 @@
# We look for the Include/patchlevel.h file in the current Python source tree # We look for the Include/patchlevel.h file in the current Python source tree
# and replace the values accordingly. # and replace the values accordingly.
# See Doc/tools/extensions/patchlevel.py # See Doc/tools/extensions/patchlevel.py
version, release = importlib.import_module('patchlevel').get_version_info() version, release = import_module('patchlevel').get_version_info()
rst_epilog = f""" rst_epilog = f"""
.. |python_version_literal| replace:: ``Python {version}`` .. |python_version_literal| replace:: ``Python {version}``
@ -97,7 +99,8 @@
highlight_language = 'python3' highlight_language = 'python3'
# Minimum version of sphinx required # Minimum version of sphinx required
needs_sphinx = '7.2.6' # Keep this version in sync with ``Doc/requirements.txt``.
needs_sphinx = '8.2.0'
# Create table of contents entries for domain objects (e.g. functions, classes, # Create table of contents entries for domain objects (e.g. functions, classes,
# attributes, etc.). Default is True. # attributes, etc.). Default is True.
@ -376,13 +379,7 @@
# This 'Last updated on:' timestamp is inserted at the bottom of every page. # This 'Last updated on:' timestamp is inserted at the bottom of every page.
html_last_updated_fmt = '%b %d, %Y (%H:%M UTC)' html_last_updated_fmt = '%b %d, %Y (%H:%M UTC)'
if sphinx.version_info[:2] >= (8, 1): html_last_updated_use_utc = True
html_last_updated_use_utc = True
else:
html_time = int(os.environ.get('SOURCE_DATE_EPOCH', time.time()))
html_last_updated_fmt = time.strftime(
html_last_updated_fmt, time.gmtime(html_time)
)
# Path to find HTML templates to override theme # Path to find HTML templates to override theme
templates_path = ['tools/templates'] templates_path = ['tools/templates']
@ -566,8 +563,6 @@
r'https://github.com/python/cpython/tree/.*': 'https://github.com/python/cpython/blob/.*', r'https://github.com/python/cpython/tree/.*': 'https://github.com/python/cpython/blob/.*',
# Intentional HTTP use at Misc/NEWS.d/3.5.0a1.rst # Intentional HTTP use at Misc/NEWS.d/3.5.0a1.rst
r'http://www.python.org/$': 'https://www.python.org/$', r'http://www.python.org/$': 'https://www.python.org/$',
# Used in license page, keep as is
r'https://www.zope.org/': r'https://www.zope.dev/',
# Microsoft's redirects to learn.microsoft.com # Microsoft's redirects to learn.microsoft.com
r'https://msdn.microsoft.com/.*': 'https://learn.microsoft.com/.*', r'https://msdn.microsoft.com/.*': 'https://learn.microsoft.com/.*',
r'https://docs.microsoft.com/.*': 'https://learn.microsoft.com/.*', r'https://docs.microsoft.com/.*': 'https://learn.microsoft.com/.*',
@ -619,16 +614,6 @@
} }
extlinks_detect_hardcoded_links = True extlinks_detect_hardcoded_links = True
if sphinx.version_info[:2] < (8, 1):
# Sphinx 8.1 has in-built CVE and CWE roles.
extlinks |= {
"cve": (
"https://www.cve.org/CVERecord?id=CVE-%s",
"CVE-%s",
),
"cwe": ("https://cwe.mitre.org/data/definitions/%s.html", "CWE-%s"),
}
# Options for c_annotations extension # Options for c_annotations extension
# ----------------------------------- # -----------------------------------
@ -639,11 +624,19 @@
# Options for sphinxext-opengraph # Options for sphinxext-opengraph
# ------------------------------- # -------------------------------
ogp_site_url = 'https://docs.python.org/3/' ogp_canonical_url = 'https://docs.python.org/3/'
ogp_site_name = 'Python documentation' ogp_site_name = 'Python documentation'
ogp_image = '_static/og-image.png' ogp_social_cards = { # Used when matplotlib is installed
'image': '_static/og-image.png',
'line_color': '#3776ab',
}
ogp_custom_meta_tags = [ ogp_custom_meta_tags = [
'<meta property="og:image:width" content="200" />', '<meta name="theme-color" content="#3776ab">',
'<meta property="og:image:height" content="200" />',
'<meta name="theme-color" content="#3776ab" />',
] ]
if 'create-social-cards' not in tags: # noqa: F821
# Define a static preview image when not creating social cards
ogp_image = '_static/og-image.png'
ogp_custom_meta_tags += [
'<meta property="og:image:width" content="200">',
'<meta property="og:image:height" content="200">',
]

View file

@ -13,14 +13,12 @@ packaging<25
Pygments<3 Pygments<3
requests<3 requests<3
snowballstemmer<3 snowballstemmer<3
# keep lower-bounds until Sphinx 8.1 is released sphinxcontrib-applehelp<3
# https://github.com/sphinx-doc/sphinx/pull/12756 sphinxcontrib-devhelp<3
sphinxcontrib-applehelp>=1.0.7,<3 sphinxcontrib-htmlhelp<3
sphinxcontrib-devhelp>=1.0.6,<3 sphinxcontrib-jsmath<2
sphinxcontrib-htmlhelp>=2.0.6,<3 sphinxcontrib-qthelp<3
sphinxcontrib-jsmath>=1.0.1,<2 sphinxcontrib-serializinghtml<3
sphinxcontrib-qthelp>=1.0.6,<3
sphinxcontrib-serializinghtml>=1.1.9,<3
# Direct dependencies of Jinja2 (Jinja is a dependency of Sphinx, see above) # Direct dependencies of Jinja2 (Jinja is a dependency of Sphinx, see above)
MarkupSafe<3 MarkupSafe<3

View file

@ -1093,9 +1093,6 @@ PyImport_ImportModuleLevelObject:PyObject*:locals:0:???
PyImport_ImportModuleLevelObject:PyObject*:fromlist:0:??? PyImport_ImportModuleLevelObject:PyObject*:fromlist:0:???
PyImport_ImportModuleLevelObject:int:level:: PyImport_ImportModuleLevelObject:int:level::
PyImport_ImportModuleNoBlock:PyObject*::+1:
PyImport_ImportModuleNoBlock:const char*:name::
PyImport_ReloadModule:PyObject*::+1: PyImport_ReloadModule:PyObject*::+1:
PyImport_ReloadModule:PyObject*:m:0: PyImport_ReloadModule:PyObject*:m:0:
@ -2636,6 +2633,13 @@ PyUnicode_DecodeMBCSStateful:Py_ssize_t:size::
PyUnicode_DecodeMBCSStateful:const char*:errors:: PyUnicode_DecodeMBCSStateful:const char*:errors::
PyUnicode_DecodeMBCSStateful:Py_ssize_t*:consumed:: PyUnicode_DecodeMBCSStateful:Py_ssize_t*:consumed::
PyUnicode_DecodeCodePageStateful:PyObject*::+1:
PyUnicode_DecodeCodePageStateful:int:code_page::
PyUnicode_DecodeCodePageStateful:const char*:s::
PyUnicode_DecodeCodePageStateful:Py_ssize_t:size::
PyUnicode_DecodeCodePageStateful:const char*:errors::
PyUnicode_DecodeCodePageStateful:Py_ssize_t*:consumed::
PyUnicode_EncodeCodePage:PyObject*::+1: PyUnicode_EncodeCodePage:PyObject*::+1:
PyUnicode_EncodeCodePage:int:code_page:: PyUnicode_EncodeCodePage:int:code_page::
PyUnicode_EncodeCodePage:PyObject*:unicode:0: PyUnicode_EncodeCodePage:PyObject*:unicode:0:
@ -2648,13 +2652,26 @@ PyUnicode_Concat:PyObject*::+1:
PyUnicode_Concat:PyObject*:left:0: PyUnicode_Concat:PyObject*:left:0:
PyUnicode_Concat:PyObject*:right:0: PyUnicode_Concat:PyObject*:right:0:
PyUnicode_Partition:PyObject*::+1:
PyUnicode_Partition:PyObject*:unicode:0:
PyUnicode_Partition:PyObject*:sep:0:
PyUnicode_RPartition:PyObject*::+1:
PyUnicode_RPartition:PyObject*:unicode:0:
PyUnicode_RPartition:PyObject*:sep:0:
PyUnicode_RSplit:PyObject*::+1:
PyUnicode_RSplit:PyObject*:unicode:0:
PyUnicode_RSplit:PyObject*:sep:0:
PyUnicode_RSplit:Py_ssize_t:maxsplit::
PyUnicode_Split:PyObject*::+1: PyUnicode_Split:PyObject*::+1:
PyUnicode_Split:PyObject*:left:0: PyUnicode_Split:PyObject*:unicode:0:
PyUnicode_Split:PyObject*:right:0: PyUnicode_Split:PyObject*:sep:0:
PyUnicode_Split:Py_ssize_t:maxsplit:: PyUnicode_Split:Py_ssize_t:maxsplit::
PyUnicode_Splitlines:PyObject*::+1: PyUnicode_Splitlines:PyObject*::+1:
PyUnicode_Splitlines:PyObject*:s:0: PyUnicode_Splitlines:PyObject*:unicode:0:
PyUnicode_Splitlines:int:keepend:: PyUnicode_Splitlines:int:keepend::
PyUnicode_Translate:PyObject*::+1: PyUnicode_Translate:PyObject*::+1:
@ -2750,6 +2767,23 @@ PyUnicode_FromFormatV:PyObject*::+1:
PyUnicode_FromFormatV:const char*:format:: PyUnicode_FromFormatV:const char*:format::
PyUnicode_FromFormatV:va_list:args:: PyUnicode_FromFormatV:va_list:args::
PyUnicode_FromOrdinal:PyObject*::+1:
PyUnicode_FromOrdinal:int:ordinal::
PyUnicode_Append:void:::
PyUnicode_Append:PyObject**:p_left:0:
PyUnicode_Append:PyObject*:right::
PyUnicode_AppendAndDel:void:::
PyUnicode_AppendAndDel:PyObject**:p_left:0:
PyUnicode_AppendAndDel:PyObject*:right:-1:
PyUnicode_BuildEncodingMap:PyObject*::+1:
PyUnicode_BuildEncodingMap:PyObject*:string:::
PyUnicode_GetDefaultEncoding:const char*:::
PyUnicode_GetDefaultEncoding::void::
PyUnicode_GetLength:Py_ssize_t::: PyUnicode_GetLength:Py_ssize_t:::
PyUnicode_GetLength:PyObject*:unicode:0: PyUnicode_GetLength:PyObject*:unicode:0:
@ -2760,6 +2794,10 @@ PyUnicode_CopyCharacters:PyObject*:from:0:
PyUnicode_CopyCharacters:Py_ssize_t:from_start:: PyUnicode_CopyCharacters:Py_ssize_t:from_start::
PyUnicode_CopyCharacters:Py_ssize_t:how_many:: PyUnicode_CopyCharacters:Py_ssize_t:how_many::
PyUnicode_Resize:int:::
PyUnicode_Resize:PyObject**:unicode:0:
PyUnicode_Resize:Py_ssize_t:length::
PyUnicode_Fill:Py_ssize_t::: PyUnicode_Fill:Py_ssize_t:::
PyUnicode_Fill:PyObject*:unicode:0: PyUnicode_Fill:PyObject*:unicode:0:
PyUnicode_Fill:Py_ssize_t:start:: PyUnicode_Fill:Py_ssize_t:start::
@ -2969,18 +3007,8 @@ Py_GetCompiler:const char*:::
Py_GetCopyright:const char*::: Py_GetCopyright:const char*:::
Py_GetExecPrefix:wchar_t*:::
Py_GetPath:wchar_t*:::
Py_GetPlatform:const char*::: Py_GetPlatform:const char*:::
Py_GetPrefix:wchar_t*:::
Py_GetProgramFullPath:wchar_t*:::
Py_GetProgramName:wchar_t*:::
Py_GetVersion:const char*::: Py_GetVersion:const char*:::
Py_INCREF:void::: Py_INCREF:void:::
@ -3052,3 +3080,11 @@ _Py_c_quot:Py_complex:divisor::
_Py_c_sum:Py_complex::: _Py_c_sum:Py_complex:::
_Py_c_sum:Py_complex:left:: _Py_c_sum:Py_complex:left::
_Py_c_sum:Py_complex:right:: _Py_c_sum:Py_complex:right::
PyImport_ImportModuleAttr:PyObject*::+1:
PyImport_ImportModuleAttr:PyObject*:mod_name:0:
PyImport_ImportModuleAttr:PyObject*:attr_name:0:
PyImport_ImportModuleAttrString:PyObject*::+1:
PyImport_ImportModuleAttrString:const char *:mod_name::
PyImport_ImportModuleAttrString:const char *:attr_name::

View file

@ -323,7 +323,6 @@ func,PyImport_ImportFrozenModuleObject,3.7,,
func,PyImport_ImportModule,3.2,, func,PyImport_ImportModule,3.2,,
func,PyImport_ImportModuleLevel,3.2,, func,PyImport_ImportModuleLevel,3.2,,
func,PyImport_ImportModuleLevelObject,3.7,, func,PyImport_ImportModuleLevelObject,3.7,,
func,PyImport_ImportModuleNoBlock,3.2,,
func,PyImport_ReloadModule,3.2,, func,PyImport_ReloadModule,3.2,,
func,PyIndex_Check,3.8,, func,PyIndex_Check,3.8,,
type,PyInterpreterState,3.2,,opaque type,PyInterpreterState,3.2,,opaque
@ -362,6 +361,7 @@ func,PyLong_AsLong,3.2,,
func,PyLong_AsLongAndOverflow,3.2,, func,PyLong_AsLongAndOverflow,3.2,,
func,PyLong_AsLongLong,3.2,, func,PyLong_AsLongLong,3.2,,
func,PyLong_AsLongLongAndOverflow,3.2,, func,PyLong_AsLongLongAndOverflow,3.2,,
func,PyLong_AsNativeBytes,3.14,,
func,PyLong_AsSize_t,3.2,, func,PyLong_AsSize_t,3.2,,
func,PyLong_AsSsize_t,3.2,, func,PyLong_AsSsize_t,3.2,,
func,PyLong_AsUInt32,3.14,, func,PyLong_AsUInt32,3.14,,
@ -376,6 +376,7 @@ func,PyLong_FromInt32,3.14,,
func,PyLong_FromInt64,3.14,, func,PyLong_FromInt64,3.14,,
func,PyLong_FromLong,3.2,, func,PyLong_FromLong,3.2,,
func,PyLong_FromLongLong,3.2,, func,PyLong_FromLongLong,3.2,,
func,PyLong_FromNativeBytes,3.14,,
func,PyLong_FromSize_t,3.2,, func,PyLong_FromSize_t,3.2,,
func,PyLong_FromSsize_t,3.2,, func,PyLong_FromSsize_t,3.2,,
func,PyLong_FromString,3.2,, func,PyLong_FromString,3.2,,
@ -383,6 +384,7 @@ func,PyLong_FromUInt32,3.14,,
func,PyLong_FromUInt64,3.14,, func,PyLong_FromUInt64,3.14,,
func,PyLong_FromUnsignedLong,3.2,, func,PyLong_FromUnsignedLong,3.2,,
func,PyLong_FromUnsignedLongLong,3.2,, func,PyLong_FromUnsignedLongLong,3.2,,
func,PyLong_FromUnsignedNativeBytes,3.14,,
func,PyLong_FromVoidPtr,3.2,, func,PyLong_FromVoidPtr,3.2,,
func,PyLong_GetInfo,3.2,, func,PyLong_GetInfo,3.2,,
data,PyLong_Type,3.2,, data,PyLong_Type,3.2,,
@ -737,11 +739,7 @@ func,PyUnicode_Append,3.2,,
func,PyUnicode_AppendAndDel,3.2,, func,PyUnicode_AppendAndDel,3.2,,
func,PyUnicode_AsASCIIString,3.2,, func,PyUnicode_AsASCIIString,3.2,,
func,PyUnicode_AsCharmapString,3.2,, func,PyUnicode_AsCharmapString,3.2,,
func,PyUnicode_AsDecodedObject,3.2,,
func,PyUnicode_AsDecodedUnicode,3.2,,
func,PyUnicode_AsEncodedObject,3.2,,
func,PyUnicode_AsEncodedString,3.2,, func,PyUnicode_AsEncodedString,3.2,,
func,PyUnicode_AsEncodedUnicode,3.2,,
func,PyUnicode_AsLatin1String,3.2,, func,PyUnicode_AsLatin1String,3.2,,
func,PyUnicode_AsMBCSString,3.7,on Windows, func,PyUnicode_AsMBCSString,3.7,on Windows,
func,PyUnicode_AsRawUnicodeEscapeString,3.2,, func,PyUnicode_AsRawUnicodeEscapeString,3.2,,
@ -859,13 +857,7 @@ func,Py_GetCompiler,3.2,,
func,Py_GetConstant,3.13,, func,Py_GetConstant,3.13,,
func,Py_GetConstantBorrowed,3.13,, func,Py_GetConstantBorrowed,3.13,,
func,Py_GetCopyright,3.2,, func,Py_GetCopyright,3.2,,
func,Py_GetExecPrefix,3.2,,
func,Py_GetPath,3.2,,
func,Py_GetPlatform,3.2,, func,Py_GetPlatform,3.2,,
func,Py_GetPrefix,3.2,,
func,Py_GetProgramFullPath,3.2,,
func,Py_GetProgramName,3.2,,
func,Py_GetPythonHome,3.2,,
func,Py_GetRecursionLimit,3.2,, func,Py_GetRecursionLimit,3.2,,
func,Py_GetVersion,3.2,, func,Py_GetVersion,3.2,,
data,Py_HasFileSystemDefaultEncoding,3.2,, data,Py_HasFileSystemDefaultEncoding,3.2,,

View file

@ -6,67 +6,3 @@ Pending removal in Python 3.14
* Creating :c:data:`immutable types <Py_TPFLAGS_IMMUTABLETYPE>` with mutable * Creating :c:data:`immutable types <Py_TPFLAGS_IMMUTABLETYPE>` with mutable
bases (:gh:`95388`). bases (:gh:`95388`).
* Functions to configure Python's initialization, deprecated in Python 3.11:
* :c:func:`!PySys_SetArgvEx()`:
Set :c:member:`PyConfig.argv` instead.
* :c:func:`!PySys_SetArgv()`:
Set :c:member:`PyConfig.argv` instead.
* :c:func:`!Py_SetProgramName()`:
Set :c:member:`PyConfig.program_name` instead.
* :c:func:`!Py_SetPythonHome()`:
Set :c:member:`PyConfig.home` instead.
The :c:func:`Py_InitializeFromConfig` API should be used with
:c:type:`PyConfig` instead.
* Global configuration variables:
* :c:var:`Py_DebugFlag`:
Use :c:member:`PyConfig.parser_debug` instead.
* :c:var:`Py_VerboseFlag`:
Use :c:member:`PyConfig.verbose` instead.
* :c:var:`Py_QuietFlag`:
Use :c:member:`PyConfig.quiet` instead.
* :c:var:`Py_InteractiveFlag`:
Use :c:member:`PyConfig.interactive` instead.
* :c:var:`Py_InspectFlag`:
Use :c:member:`PyConfig.inspect` instead.
* :c:var:`Py_OptimizeFlag`:
Use :c:member:`PyConfig.optimization_level` instead.
* :c:var:`Py_NoSiteFlag`:
Use :c:member:`PyConfig.site_import` instead.
* :c:var:`Py_BytesWarningFlag`:
Use :c:member:`PyConfig.bytes_warning` instead.
* :c:var:`Py_FrozenFlag`:
Use :c:member:`PyConfig.pathconfig_warnings` instead.
* :c:var:`Py_IgnoreEnvironmentFlag`:
Use :c:member:`PyConfig.use_environment` instead.
* :c:var:`Py_DontWriteBytecodeFlag`:
Use :c:member:`PyConfig.write_bytecode` instead.
* :c:var:`Py_NoUserSiteDirectory`:
Use :c:member:`PyConfig.user_site_directory` instead.
* :c:var:`Py_UnbufferedStdioFlag`:
Use :c:member:`PyConfig.buffered_stdio` instead.
* :c:var:`Py_HashRandomizationFlag`:
Use :c:member:`PyConfig.use_hash_seed`
and :c:member:`PyConfig.hash_seed` instead.
* :c:var:`Py_IsolatedFlag`:
Use :c:member:`PyConfig.isolated` instead.
* :c:var:`Py_LegacyWindowsFSEncodingFlag`:
Use :c:member:`PyPreConfig.legacy_windows_fs_encoding` instead.
* :c:var:`Py_LegacyWindowsStdioFlag`:
Use :c:member:`PyConfig.legacy_windows_stdio` instead.
* :c:var:`!Py_FileSystemDefaultEncoding`:
Use :c:member:`PyConfig.filesystem_encoding` instead.
* :c:var:`!Py_HasFileSystemDefaultEncoding`:
Use :c:member:`PyConfig.filesystem_encoding` instead.
* :c:var:`!Py_FileSystemDefaultEncodeErrors`:
Use :c:member:`PyConfig.filesystem_errors` instead.
* :c:var:`!Py_UTF8Mode`:
Use :c:member:`PyPreConfig.utf8_mode` instead.
(see :c:func:`Py_PreInitialize`)
The :c:func:`Py_InitializeFromConfig` API should be used with
:c:type:`PyConfig` instead.

View file

@ -2,26 +2,135 @@ Pending removal in Python 3.15
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* The bundled copy of ``libmpdecimal``. * The bundled copy of ``libmpdecimal``.
* The :c:func:`PyImport_ImportModuleNoBlock`: * The :c:func:`!PyImport_ImportModuleNoBlock`:
Use :c:func:`PyImport_ImportModule` instead. Use :c:func:`PyImport_ImportModule` instead.
* :c:func:`PyWeakref_GetObject` and :c:func:`PyWeakref_GET_OBJECT`: * :c:func:`PyWeakref_GetObject` and :c:func:`PyWeakref_GET_OBJECT`:
Use :c:func:`PyWeakref_GetRef` instead. Use :c:func:`PyWeakref_GetRef` instead. The `pythoncapi-compat project
<https://github.com/python/pythoncapi-compat/>`__ can be used to get
:c:func:`PyWeakref_GetRef` on Python 3.12 and older.
* :c:type:`Py_UNICODE` type and the :c:macro:`!Py_UNICODE_WIDE` macro: * :c:type:`Py_UNICODE` type and the :c:macro:`!Py_UNICODE_WIDE` macro:
Use :c:type:`wchar_t` instead. Use :c:type:`wchar_t` instead.
* Python initialization functions: * :c:func:`!PyUnicode_AsDecodedObject`:
Use :c:func:`PyCodec_Decode` instead.
* :c:func:`!PyUnicode_AsDecodedUnicode`:
Use :c:func:`PyCodec_Decode` instead; note that some codecs (for example, "base64")
may return a type other than :class:`str`, such as :class:`bytes`.
* :c:func:`!PyUnicode_AsEncodedObject`:
Use :c:func:`PyCodec_Encode` instead.
* :c:func:`!PyUnicode_AsEncodedUnicode`:
Use :c:func:`PyCodec_Encode` instead; note that some codecs (for example, "base64")
may return a type other than :class:`bytes`, such as :class:`str`.
* Python initialization functions, deprecated in Python 3.13:
* :c:func:`!Py_GetPath`:
Use :c:func:`PyConfig_Get("module_search_paths") <PyConfig_Get>`
(:data:`sys.path`) instead.
* :c:func:`!Py_GetPrefix`:
Use :c:func:`PyConfig_Get("base_prefix") <PyConfig_Get>`
(:data:`sys.base_prefix`) instead. Use :c:func:`PyConfig_Get("prefix")
<PyConfig_Get>` (:data:`sys.prefix`) if :ref:`virtual environments
<venv-def>` need to be handled.
* :c:func:`!Py_GetExecPrefix`:
Use :c:func:`PyConfig_Get("base_exec_prefix") <PyConfig_Get>`
(:data:`sys.base_exec_prefix`) instead. Use
:c:func:`PyConfig_Get("exec_prefix") <PyConfig_Get>`
(:data:`sys.exec_prefix`) if :ref:`virtual environments <venv-def>` need to
be handled.
* :c:func:`!Py_GetProgramFullPath`:
Use :c:func:`PyConfig_Get("executable") <PyConfig_Get>`
(:data:`sys.executable`) instead.
* :c:func:`!Py_GetProgramName`:
Use :c:func:`PyConfig_Get("executable") <PyConfig_Get>`
(:data:`sys.executable`) instead.
* :c:func:`!Py_GetPythonHome`:
Use :c:func:`PyConfig_Get("home") <PyConfig_Get>` or the
:envvar:`PYTHONHOME` environment variable instead.
The `pythoncapi-compat project
<https://github.com/python/pythoncapi-compat/>`__ can be used to get
:c:func:`PyConfig_Get` on Python 3.13 and older.
* Functions to configure Python's initialization, deprecated in Python 3.11:
* :c:func:`!PySys_SetArgvEx()`:
Set :c:member:`PyConfig.argv` instead.
* :c:func:`!PySys_SetArgv()`:
Set :c:member:`PyConfig.argv` instead.
* :c:func:`!Py_SetProgramName()`:
Set :c:member:`PyConfig.program_name` instead.
* :c:func:`!Py_SetPythonHome()`:
Set :c:member:`PyConfig.home` instead.
* :c:func:`PySys_ResetWarnOptions`: * :c:func:`PySys_ResetWarnOptions`:
Clear :data:`sys.warnoptions` and :data:`!warnings.filters` instead. Clear :data:`sys.warnoptions` and :data:`!warnings.filters` instead.
* :c:func:`Py_GetExecPrefix`:
Get :data:`sys.base_exec_prefix` and :data:`sys.exec_prefix` instead. The :c:func:`Py_InitializeFromConfig` API should be used with
* :c:func:`Py_GetPath`: :c:type:`PyConfig` instead.
Get :data:`sys.path` instead.
* :c:func:`Py_GetPrefix`: * Global configuration variables:
Get :data:`sys.base_prefix` and :data:`sys.prefix` instead.
* :c:func:`Py_GetProgramFullPath`: * :c:var:`Py_DebugFlag`:
Get :data:`sys.executable` instead. Use :c:member:`PyConfig.parser_debug` or
* :c:func:`Py_GetProgramName`: :c:func:`PyConfig_Get("parser_debug") <PyConfig_Get>` instead.
Get :data:`sys.executable` instead. * :c:var:`Py_VerboseFlag`:
* :c:func:`Py_GetPythonHome`: Use :c:member:`PyConfig.verbose` or
Get :c:member:`PyConfig.home` :c:func:`PyConfig_Get("verbose") <PyConfig_Get>` instead.
or the :envvar:`PYTHONHOME` environment variable instead. * :c:var:`Py_QuietFlag`:
Use :c:member:`PyConfig.quiet` or
:c:func:`PyConfig_Get("quiet") <PyConfig_Get>` instead.
* :c:var:`Py_InteractiveFlag`:
Use :c:member:`PyConfig.interactive` or
:c:func:`PyConfig_Get("interactive") <PyConfig_Get>` instead.
* :c:var:`Py_InspectFlag`:
Use :c:member:`PyConfig.inspect` or
:c:func:`PyConfig_Get("inspect") <PyConfig_Get>` instead.
* :c:var:`Py_OptimizeFlag`:
Use :c:member:`PyConfig.optimization_level` or
:c:func:`PyConfig_Get("optimization_level") <PyConfig_Get>` instead.
* :c:var:`Py_NoSiteFlag`:
Use :c:member:`PyConfig.site_import` or
:c:func:`PyConfig_Get("site_import") <PyConfig_Get>` instead.
* :c:var:`Py_BytesWarningFlag`:
Use :c:member:`PyConfig.bytes_warning` or
:c:func:`PyConfig_Get("bytes_warning") <PyConfig_Get>` instead.
* :c:var:`Py_FrozenFlag`:
Use :c:member:`PyConfig.pathconfig_warnings` or
:c:func:`PyConfig_Get("pathconfig_warnings") <PyConfig_Get>` instead.
* :c:var:`Py_IgnoreEnvironmentFlag`:
Use :c:member:`PyConfig.use_environment` or
:c:func:`PyConfig_Get("use_environment") <PyConfig_Get>` instead.
* :c:var:`Py_DontWriteBytecodeFlag`:
Use :c:member:`PyConfig.write_bytecode` or
:c:func:`PyConfig_Get("write_bytecode") <PyConfig_Get>` instead.
* :c:var:`Py_NoUserSiteDirectory`:
Use :c:member:`PyConfig.user_site_directory` or
:c:func:`PyConfig_Get("user_site_directory") <PyConfig_Get>` instead.
* :c:var:`Py_UnbufferedStdioFlag`:
Use :c:member:`PyConfig.buffered_stdio` or
:c:func:`PyConfig_Get("buffered_stdio") <PyConfig_Get>` instead.
* :c:var:`Py_HashRandomizationFlag`:
Use :c:member:`PyConfig.use_hash_seed`
and :c:member:`PyConfig.hash_seed` or
:c:func:`PyConfig_Get("hash_seed") <PyConfig_Get>` instead.
* :c:var:`Py_IsolatedFlag`:
Use :c:member:`PyConfig.isolated` or
:c:func:`PyConfig_Get("isolated") <PyConfig_Get>` instead.
* :c:var:`Py_LegacyWindowsFSEncodingFlag`:
Use :c:member:`PyPreConfig.legacy_windows_fs_encoding` or
:c:func:`PyConfig_Get("legacy_windows_fs_encoding") <PyConfig_Get>` instead.
* :c:var:`Py_LegacyWindowsStdioFlag`:
Use :c:member:`PyConfig.legacy_windows_stdio` or
:c:func:`PyConfig_Get("legacy_windows_stdio") <PyConfig_Get>` instead.
* :c:var:`!Py_FileSystemDefaultEncoding`, :c:var:`!Py_HasFileSystemDefaultEncoding`:
Use :c:member:`PyConfig.filesystem_encoding` or
:c:func:`PyConfig_Get("filesystem_encoding") <PyConfig_Get>` instead.
* :c:var:`!Py_FileSystemDefaultEncodeErrors`:
Use :c:member:`PyConfig.filesystem_errors` or
:c:func:`PyConfig_Get("filesystem_errors") <PyConfig_Get>` instead.
* :c:var:`!Py_UTF8Mode`:
Use :c:member:`PyPreConfig.utf8_mode` or
:c:func:`PyConfig_Get("utf8_mode") <PyConfig_Get>` instead.
(see :c:func:`Py_PreInitialize`)
The :c:func:`Py_InitializeFromConfig` API should be used with
:c:type:`PyConfig` to set these options. Or :c:func:`PyConfig_Get` can be
used to get these options at runtime.
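
A hedged sketch of the runtime lookup described above, assuming :c:func:`PyConfig_Get` is available (CPython 3.14+, or via the pythoncapi-compat shim)::

   /* Replacement for the deprecated Py_GetPath(): query the module search
      paths (sys.path) through the configuration API. */
   PyObject *paths = PyConfig_Get("module_search_paths");  /* new reference */
   if (paths == NULL) {
       return NULL;
   }
   /* ... use paths ... */
   Py_DECREF(paths);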

View file

@ -0,0 +1,45 @@
Pending removal in Python 3.18
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* Deprecated private functions (:gh:`128863`):
* :c:func:`!_PyBytes_Join`: use :c:func:`PyBytes_Join`.
* :c:func:`!_PyDict_GetItemStringWithError`: use :c:func:`PyDict_GetItemStringRef`.
* :c:func:`!_PyDict_Pop()`: use :c:func:`PyDict_Pop`.
* :c:func:`!_PyLong_Sign()`: use :c:func:`PyLong_GetSign`.
* :c:func:`!_PyLong_FromDigits` and :c:func:`!_PyLong_New`:
use :c:func:`PyLongWriter_Create`.
* :c:func:`!_PyThreadState_UncheckedGet`: use :c:func:`PyThreadState_GetUnchecked`.
* :c:func:`!_PyUnicode_AsString`: use :c:func:`PyUnicode_AsUTF8`.
* :c:func:`!_PyUnicodeWriter_Init`:
replace ``_PyUnicodeWriter_Init(&writer)`` with
:c:func:`writer = PyUnicodeWriter_Create(0) <PyUnicodeWriter_Create>`.
* :c:func:`!_PyUnicodeWriter_Finish`:
replace ``_PyUnicodeWriter_Finish(&writer)`` with
:c:func:`PyUnicodeWriter_Finish(writer) <PyUnicodeWriter_Finish>`.
* :c:func:`!_PyUnicodeWriter_Dealloc`:
replace ``_PyUnicodeWriter_Dealloc(&writer)`` with
:c:func:`PyUnicodeWriter_Discard(writer) <PyUnicodeWriter_Discard>`.
* :c:func:`!_PyUnicodeWriter_WriteChar`:
replace ``_PyUnicodeWriter_WriteChar(&writer, ch)`` with
:c:func:`PyUnicodeWriter_WriteChar(writer, ch) <PyUnicodeWriter_WriteChar>`.
* :c:func:`!_PyUnicodeWriter_WriteStr`:
replace ``_PyUnicodeWriter_WriteStr(&writer, str)`` with
:c:func:`PyUnicodeWriter_WriteStr(writer, str) <PyUnicodeWriter_WriteStr>`.
* :c:func:`!_PyUnicodeWriter_WriteSubstring`:
replace ``_PyUnicodeWriter_WriteSubstring(&writer, str, start, end)`` with
:c:func:`PyUnicodeWriter_WriteSubstring(writer, str, start, end) <PyUnicodeWriter_WriteSubstring>`.
* :c:func:`!_PyUnicodeWriter_WriteASCIIString`:
replace ``_PyUnicodeWriter_WriteASCIIString(&writer, str)`` with
:c:func:`PyUnicodeWriter_WriteUTF8(writer, str) <PyUnicodeWriter_WriteUTF8>`.
* :c:func:`!_PyUnicodeWriter_WriteLatin1String`:
replace ``_PyUnicodeWriter_WriteLatin1String(&writer, str)`` with
:c:func:`PyUnicodeWriter_WriteUTF8(writer, str) <PyUnicodeWriter_WriteUTF8>`.
* :c:func:`!_PyUnicodeWriter_Prepare`: (no replacement).
* :c:func:`!_PyUnicodeWriter_PrepareKind`: (no replacement).
* :c:func:`!_Py_HashPointer`: use :c:func:`Py_HashPointer`.
* :c:func:`!_Py_fopen_obj`: use :c:func:`Py_fopen`.
The `pythoncapi-compat project
<https://github.com/python/pythoncapi-compat/>`__ can be used to get these
new public functions on Python 3.13 and older.

View file

@ -18,14 +18,6 @@ although there is currently no date scheduled for their removal.
Use :c:func:`PyOS_AfterFork_Child` instead. Use :c:func:`PyOS_AfterFork_Child` instead.
* :c:func:`PySlice_GetIndicesEx`: * :c:func:`PySlice_GetIndicesEx`:
Use :c:func:`PySlice_Unpack` and :c:func:`PySlice_AdjustIndices` instead. Use :c:func:`PySlice_Unpack` and :c:func:`PySlice_AdjustIndices` instead.
* :c:func:`!PyUnicode_AsDecodedObject`:
Use :c:func:`PyCodec_Decode` instead.
* :c:func:`!PyUnicode_AsDecodedUnicode`:
Use :c:func:`PyCodec_Decode` instead.
* :c:func:`!PyUnicode_AsEncodedObject`:
Use :c:func:`PyCodec_Encode` instead.
* :c:func:`!PyUnicode_AsEncodedUnicode`:
Use :c:func:`PyCodec_Encode` instead.
* :c:func:`PyUnicode_READY`: * :c:func:`PyUnicode_READY`:
Unneeded since Python 3.12 Unneeded since Python 3.12
* :c:func:`!PyErr_Display`: * :c:func:`!PyErr_Display`:
@ -34,7 +26,6 @@ although there is currently no date scheduled for their removal.
Use :c:func:`!_PyErr_ChainExceptions1` instead. Use :c:func:`!_PyErr_ChainExceptions1` instead.
* :c:member:`!PyBytesObject.ob_shash` member: * :c:member:`!PyBytesObject.ob_shash` member:
call :c:func:`PyObject_Hash` instead. call :c:func:`PyObject_Hash` instead.
* :c:member:`!PyDictObject.ma_version_tag` member.
* Thread Local Storage (TLS) API: * Thread Local Storage (TLS) API:
* :c:func:`PyThread_create_key`: * :c:func:`PyThread_create_key`:

View file

@ -5,6 +5,10 @@ Deprecations
.. include:: pending-removal-in-3.16.rst .. include:: pending-removal-in-3.16.rst
.. include:: pending-removal-in-3.17.rst
.. include:: pending-removal-in-3.19.rst
.. include:: pending-removal-in-future.rst .. include:: pending-removal-in-future.rst
C API deprecations C API deprecations
@ -12,4 +16,6 @@ C API deprecations
.. include:: c-api-pending-removal-in-3.15.rst .. include:: c-api-pending-removal-in-3.15.rst
.. include:: c-api-pending-removal-in-3.18.rst
.. include:: c-api-pending-removal-in-future.rst .. include:: c-api-pending-removal-in-future.rst

View file

@ -78,7 +78,7 @@ Pending removal in Python 3.14
:meth:`~pathlib.PurePath.relative_to`: passing additional arguments is :meth:`~pathlib.PurePath.relative_to`: passing additional arguments is
deprecated. deprecated.
* :mod:`pkgutil`: :func:`!pkgutil.find_loader` and :func:!pkgutil.get_loader` * :mod:`pkgutil`: :func:`!pkgutil.find_loader` and :func:`!pkgutil.get_loader`
now raise :exc:`DeprecationWarning`; now raise :exc:`DeprecationWarning`;
use :func:`importlib.util.find_spec` instead. use :func:`importlib.util.find_spec` instead.
(Contributed by Nikita Sobolev in :gh:`97850`.) (Contributed by Nikita Sobolev in :gh:`97850`.)

View file

@ -29,6 +29,10 @@ Pending removal in Python 3.15
* The :option:`!--cgi` flag to the :program:`python -m http.server` * The :option:`!--cgi` flag to the :program:`python -m http.server`
command-line interface has been deprecated since Python 3.13. command-line interface has been deprecated since Python 3.13.
* :mod:`importlib`:
* ``load_module()`` method: use ``exec_module()`` instead.
* :class:`locale`: * :class:`locale`:
* The :func:`~locale.getdefaultlocale` function * The :func:`~locale.getdefaultlocale` function
@ -51,6 +55,11 @@ Pending removal in Python 3.15
This function is only useful for Jython support, has a confusing API, This function is only useful for Jython support, has a confusing API,
and is largely untested. and is largely untested.
* :mod:`sysconfig`:
* The *check_home* argument of :func:`sysconfig.is_python_build` has been
deprecated since Python 3.12.
* :mod:`threading`: * :mod:`threading`:
* :func:`~threading.RLock` will take no arguments in Python 3.15. * :func:`~threading.RLock` will take no arguments in Python 3.15.
@ -76,6 +85,13 @@ Pending removal in Python 3.15
has been deprecated since Python 3.13. has been deprecated since Python 3.13.
Use the class-based syntax or the functional syntax instead. Use the class-based syntax or the functional syntax instead.
* When using the functional syntax of :class:`~typing.TypedDict`\s, failing
to pass a value to the *fields* parameter (``TD = TypedDict("TD")``) or
passing ``None`` (``TD = TypedDict("TD", None)``) has been deprecated
since Python 3.13.
Use ``class TD(TypedDict): pass`` or ``TD = TypedDict("TD", {})``
to create a TypedDict with zero fields.
* The :func:`typing.no_type_check_decorator` decorator function * The :func:`typing.no_type_check_decorator` decorator function
has been deprecated since Python 3.13. has been deprecated since Python 3.13.
After eight years in the :mod:`typing` module, After eight years in the :mod:`typing` module,
@ -87,3 +103,9 @@ Pending removal in Python 3.15
and :meth:`~wave.Wave_read.getmarkers` methods of and :meth:`~wave.Wave_read.getmarkers` methods of
the :class:`~wave.Wave_read` and :class:`~wave.Wave_write` classes the :class:`~wave.Wave_read` and :class:`~wave.Wave_write` classes
have been deprecated since Python 3.13. have been deprecated since Python 3.13.
* :mod:`zipimport`:
* :meth:`~zipimport.zipimporter.load_module` has been deprecated since
Python 3.10. Use :meth:`~zipimport.zipimporter.exec_module` instead.
(Contributed by Jiahao Li in :gh:`125746`.)

View file

@ -32,7 +32,6 @@ Pending removal in Python 3.16
* :class:`asyncio.WindowsProactorEventLoopPolicy` * :class:`asyncio.WindowsProactorEventLoopPolicy`
* :func:`asyncio.get_event_loop_policy` * :func:`asyncio.get_event_loop_policy`
* :func:`asyncio.set_event_loop_policy` * :func:`asyncio.set_event_loop_policy`
* :func:`asyncio.set_event_loop`
Users should use :func:`asyncio.run` or :class:`asyncio.Runner` with Users should use :func:`asyncio.run` or :class:`asyncio.Runner` with
*loop_factory* to use the desired event loop implementation. *loop_factory* to use the desired event loop implementation.
@ -62,6 +61,20 @@ Pending removal in Python 3.16
* Calling the Python implementation of :func:`functools.reduce` with *function* * Calling the Python implementation of :func:`functools.reduce` with *function*
or *sequence* as keyword arguments has been deprecated since Python 3.14. or *sequence* as keyword arguments has been deprecated since Python 3.14.
* :mod:`logging`:
Support for custom logging handlers with the *strm* argument is deprecated
and scheduled for removal in Python 3.16. Define handlers with the *stream*
argument instead. (Contributed by Mariusz Felisiak in :gh:`115032`.)
* :mod:`mimetypes`:
* Valid extensions start with a '.' or are empty for
:meth:`mimetypes.MimeTypes.add_type`.
Undotted extensions are deprecated and will
raise a :exc:`ValueError` in Python 3.16.
(Contributed by Hugo van Kemenade in :gh:`75223`.)
* :mod:`shutil`: * :mod:`shutil`:
* The :class:`!ExecError` exception * The :class:`!ExecError` exception
@ -80,6 +93,12 @@ Pending removal in Python 3.16
has been deprecated since Python 3.13. has been deprecated since Python 3.13.
Use the :envvar:`PYTHONLEGACYWINDOWSFSENCODING` environment variable instead. Use the :envvar:`PYTHONLEGACYWINDOWSFSENCODING` environment variable instead.
* :mod:`sysconfig`:
* The :func:`!sysconfig.expand_makefile_vars` function
has been deprecated since Python 3.14.
Use the ``vars`` argument of :func:`sysconfig.get_paths` instead.
* :mod:`tarfile`: * :mod:`tarfile`:
* The undocumented and unused :attr:`!TarFile.tarfile` attribute * The undocumented and unused :attr:`!TarFile.tarfile` attribute

View file

@ -0,0 +1,10 @@
Pending removal in Python 3.17
------------------------------
* :mod:`typing`:
- Before Python 3.14, old-style unions were implemented using the private class
``typing._UnionGenericAlias``. This class is no longer needed for the implementation,
but it has been retained for backward compatibility, with removal scheduled for Python
3.17. Users should use documented introspection helpers like :func:`typing.get_origin`
and :func:`typing.get_args` instead of relying on private implementation details.

View file

@ -0,0 +1,8 @@
Pending removal in Python 3.19
------------------------------
* :mod:`ctypes`:
* Implicitly switching to the MSVC-compatible struct layout by setting
:attr:`~ctypes.Structure._pack_` but not :attr:`~ctypes.Structure._layout_`
on non-Windows platforms.

View file

@ -13,8 +13,6 @@ although there is currently no date scheduled for their removal.
deprecated. deprecated.
* The :class:`argparse.FileType` type converter is deprecated. * The :class:`argparse.FileType` type converter is deprecated.
* :mod:`array`'s ``'u'`` format code (:gh:`57281`)
* :mod:`builtins`: * :mod:`builtins`:
* ``bool(NotImplemented)``. * ``bool(NotImplemented)``.
@ -49,6 +47,8 @@ although there is currently no date scheduled for their removal.
:data:`calendar.FEBRUARY`. :data:`calendar.FEBRUARY`.
(Contributed by Prince Roshan in :gh:`103636`.) (Contributed by Prince Roshan in :gh:`103636`.)
* :mod:`codecs`: use :func:`open` instead of :func:`codecs.open`. (:gh:`133038`)
* :attr:`codeobject.co_lnotab`: use the :meth:`codeobject.co_lines` method * :attr:`codeobject.co_lnotab`: use the :meth:`codeobject.co_lines` method
instead. instead.
@ -63,7 +63,6 @@ although there is currently no date scheduled for their removal.
* :mod:`importlib`: * :mod:`importlib`:
* ``load_module()`` method: use ``exec_module()`` instead.
* :func:`~importlib.util.cache_from_source` *debug_override* parameter is * :func:`~importlib.util.cache_from_source` *debug_override* parameter is
deprecated: use the *optimization* parameter instead. deprecated: use the *optimization* parameter instead.
@ -112,9 +111,6 @@ although there is currently no date scheduled for their removal.
* ``ssl.TLSVersion.TLSv1`` * ``ssl.TLSVersion.TLSv1``
* ``ssl.TLSVersion.TLSv1_1`` * ``ssl.TLSVersion.TLSv1_1``
* :func:`sysconfig.is_python_build` *check_home* parameter is deprecated and
ignored.
* :mod:`threading` methods: * :mod:`threading` methods:
* :meth:`!threading.Condition.notifyAll`: use :meth:`~threading.Condition.notify_all`. * :meth:`!threading.Condition.notifyAll`: use :meth:`~threading.Condition.notify_all`.
@ -128,6 +124,11 @@ although there is currently no date scheduled for their removal.
* :class:`typing.Text` (:gh:`92332`). * :class:`typing.Text` (:gh:`92332`).
* The internal class ``typing._UnionGenericAlias`` is no longer used to implement
:class:`typing.Union`. To preserve compatibility with users using this private
class, a compatibility shim will be provided until at least Python 3.17. (Contributed by
Jelle Zijlstra in :gh:`105499`.)
* :class:`unittest.IsolatedAsyncioTestCase`: it is deprecated to return a value * :class:`unittest.IsolatedAsyncioTestCase`: it is deprecated to return a value
that is not ``None`` from a test case. that is not ``None`` from a test case.
@ -153,5 +154,5 @@ although there is currently no date scheduled for their removal.
will always return ``True``. Prefer explicit ``len(elem)`` or will always return ``True``. Prefer explicit ``len(elem)`` or
``elem is not None`` tests instead. ``elem is not None`` tests instead.
* :meth:`zipimport.zipimporter.load_module` is deprecated: * :func:`sys._clear_type_cache` is deprecated:
use :meth:`~zipimport.zipimporter.exec_module` instead. use :func:`sys._clear_internal_caches` instead.

View file

@ -196,8 +196,8 @@ interesting part with respect to embedding Python starts with ::
After initializing the interpreter, the script is loaded using After initializing the interpreter, the script is loaded using
:c:func:`PyImport_Import`. This routine needs a Python string as its argument, :c:func:`PyImport_Import`. This routine needs a Python string as its argument,
which is constructed using the :c:func:`PyUnicode_FromString` data conversion which is constructed using the :c:func:`PyUnicode_DecodeFSDefault` data
routine. :: conversion routine. ::
pFunc = PyObject_GetAttrString(pModule, argv[2]); pFunc = PyObject_GetAttrString(pModule, argv[2]);
/* pFunc is a new reference */ /* pFunc is a new reference */
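
A short sketch of that first step, assuming the module name arrives as ``argv[1]`` as in the surrounding embedding example (error handling abbreviated)::

   PyObject *pName = PyUnicode_DecodeFSDefault(argv[1]);
   if (pName == NULL) {
       PyErr_Print();
       return 1;
   }
   PyObject *pModule = PyImport_Import(pName);
   Py_DECREF(pName);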

View file

@ -70,22 +70,24 @@ object itself needs to be freed here as well. Here is an example of this
function:: function::
static void static void
newdatatype_dealloc(newdatatypeobject *obj) newdatatype_dealloc(PyObject *op)
{ {
free(obj->obj_UnderlyingDatatypePtr); newdatatypeobject *self = (newdatatypeobject *) op;
Py_TYPE(obj)->tp_free((PyObject *)obj); free(self->obj_UnderlyingDatatypePtr);
Py_TYPE(self)->tp_free(self);
} }
If your type supports garbage collection, the destructor should call If your type supports garbage collection, the destructor should call
:c:func:`PyObject_GC_UnTrack` before clearing any member fields:: :c:func:`PyObject_GC_UnTrack` before clearing any member fields::
static void static void
newdatatype_dealloc(newdatatypeobject *obj) newdatatype_dealloc(PyObject *op)
{ {
PyObject_GC_UnTrack(obj); newdatatypeobject *self = (newdatatypeobject *) op;
Py_CLEAR(obj->other_obj); PyObject_GC_UnTrack(op);
Py_CLEAR(self->other_obj);
... ...
Py_TYPE(obj)->tp_free((PyObject *)obj); Py_TYPE(self)->tp_free(self);
} }
.. index:: .. index::
@ -117,17 +119,19 @@ done. This can be done using the :c:func:`PyErr_Fetch` and
PyErr_Fetch(&err_type, &err_value, &err_traceback); PyErr_Fetch(&err_type, &err_value, &err_traceback);
cbresult = PyObject_CallNoArgs(self->my_callback); cbresult = PyObject_CallNoArgs(self->my_callback);
if (cbresult == NULL) if (cbresult == NULL) {
PyErr_WriteUnraisable(self->my_callback); PyErr_WriteUnraisable(self->my_callback);
else }
else {
Py_DECREF(cbresult); Py_DECREF(cbresult);
}
/* This restores the saved exception state */ /* This restores the saved exception state */
PyErr_Restore(err_type, err_value, err_traceback); PyErr_Restore(err_type, err_value, err_traceback);
Py_DECREF(self->my_callback); Py_DECREF(self->my_callback);
} }
Py_TYPE(obj)->tp_free((PyObject*)self); Py_TYPE(self)->tp_free(self);
} }
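
On CPython 3.12 and newer, a hedged variant of this destructor can use :c:func:`PyErr_GetRaisedException` and :c:func:`PyErr_SetRaisedException` instead of the fetch/restore pair (type and member names mirror the earlier illustrative examples)::

   static void
   newdatatype_dealloc(PyObject *op)
   {
       newdatatypeobject *self = (newdatatypeobject *) op;
       if (self->my_callback != NULL) {
           /* Save the pending exception, if any. */
           PyObject *exc = PyErr_GetRaisedException();
           PyObject *cbresult = PyObject_CallNoArgs(self->my_callback);
           if (cbresult == NULL) {
               PyErr_WriteUnraisable(self->my_callback);
           }
           else {
               Py_DECREF(cbresult);
           }
           /* Restore the saved exception state. */
           PyErr_SetRaisedException(exc);
           Py_DECREF(self->my_callback);
       }
       Py_TYPE(self)->tp_free(self);
   }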
.. note:: .. note::
@ -168,10 +172,11 @@ representation of the instance for which it is called. Here is a simple
example:: example::
static PyObject * static PyObject *
newdatatype_repr(newdatatypeobject *obj) newdatatype_repr(PyObject *op)
{ {
newdatatypeobject *self = (newdatatypeobject *) op;
return PyUnicode_FromFormat("Repr-ified_newdatatype{{size:%d}}", return PyUnicode_FromFormat("Repr-ified_newdatatype{{size:%d}}",
obj->obj_UnderlyingDatatypePtr->size); self->obj_UnderlyingDatatypePtr->size);
} }
If no :c:member:`~PyTypeObject.tp_repr` handler is specified, the interpreter will supply a If no :c:member:`~PyTypeObject.tp_repr` handler is specified, the interpreter will supply a
@ -188,10 +193,11 @@ used instead.
Here is a simple example:: Here is a simple example::
static PyObject * static PyObject *
newdatatype_str(newdatatypeobject *obj) newdatatype_str(PyObject *op)
{ {
newdatatypeobject *self = (newdatatypeobject *) op;
return PyUnicode_FromFormat("Stringified_newdatatype{{size:%d}}", return PyUnicode_FromFormat("Stringified_newdatatype{{size:%d}}",
obj->obj_UnderlyingDatatypePtr->size); self->obj_UnderlyingDatatypePtr->size);
} }
@ -329,16 +335,16 @@ method of a class would be called.
Here is an example:: Here is an example::
static PyObject * static PyObject *
newdatatype_getattr(newdatatypeobject *obj, char *name) newdatatype_getattr(PyObject *op, char *name)
{ {
if (strcmp(name, "data") == 0) newdatatypeobject *self = (newdatatypeobject *) op;
{ if (strcmp(name, "data") == 0) {
return PyLong_FromLong(obj->data); return PyLong_FromLong(self->data);
} }
PyErr_Format(PyExc_AttributeError, PyErr_Format(PyExc_AttributeError,
"'%.100s' object has no attribute '%.400s'", "'%.100s' object has no attribute '%.400s'",
Py_TYPE(obj)->tp_name, name); Py_TYPE(self)->tp_name, name);
return NULL; return NULL;
} }
@ -349,7 +355,7 @@ example that simply raises an exception; if this were really all you wanted, the
:c:member:`~PyTypeObject.tp_setattr` handler should be set to ``NULL``. :: :c:member:`~PyTypeObject.tp_setattr` handler should be set to ``NULL``. ::
static int static int
newdatatype_setattr(newdatatypeobject *obj, char *name, PyObject *v) newdatatype_setattr(PyObject *op, char *name, PyObject *v)
{ {
PyErr_Format(PyExc_RuntimeError, "Read-only attribute: %s", name); PyErr_Format(PyExc_RuntimeError, "Read-only attribute: %s", name);
return -1; return -1;
@ -379,8 +385,10 @@ Here is a sample implementation, for a datatype that is considered equal if the
size of an internal pointer is equal:: size of an internal pointer is equal::
static PyObject * static PyObject *
newdatatype_richcmp(newdatatypeobject *obj1, newdatatypeobject *obj2, int op) newdatatype_richcmp(PyObject *lhs, PyObject *rhs, int op)
{ {
newdatatypeobject *obj1 = (newdatatypeobject *) lhs;
newdatatypeobject *obj2 = (newdatatypeobject *) rhs;
PyObject *result; PyObject *result;
int c, size1, size2; int c, size1, size2;
@ -399,8 +407,7 @@ size of an internal pointer is equal::
case Py_GE: c = size1 >= size2; break; case Py_GE: c = size1 >= size2; break;
} }
result = c ? Py_True : Py_False; result = c ? Py_True : Py_False;
Py_INCREF(result); return Py_NewRef(result);
return result;
} }
@ -439,12 +446,14 @@ This function, if you choose to provide it, should return a hash number for an
instance of your data type. Here is a simple example:: instance of your data type. Here is a simple example::
static Py_hash_t static Py_hash_t
newdatatype_hash(newdatatypeobject *obj) newdatatype_hash(PyObject *op)
{ {
newdatatypeobject *self = (newdatatypeobject *) op;
Py_hash_t result; Py_hash_t result;
result = obj->some_size + 32767 * obj->some_number; result = self->some_size + 32767 * self->some_number;
if (result == -1) if (result == -1) {
result = -2; result = -2;
}
return result; return result;
} }
@ -478,8 +487,9 @@ This function takes three arguments:
Here is a toy ``tp_call`` implementation:: Here is a toy ``tp_call`` implementation::
static PyObject * static PyObject *
newdatatype_call(newdatatypeobject *obj, PyObject *args, PyObject *kwds) newdatatype_call(PyObject *op, PyObject *args, PyObject *kwds)
{ {
newdatatypeobject *self = (newdatatypeobject *) op;
PyObject *result; PyObject *result;
const char *arg1; const char *arg1;
const char *arg2; const char *arg2;
@ -490,7 +500,7 @@ Here is a toy ``tp_call`` implementation::
} }
result = PyUnicode_FromFormat( result = PyUnicode_FromFormat(
"Returning -- value: [%d] arg1: [%s] arg2: [%s] arg3: [%s]\n", "Returning -- value: [%d] arg1: [%s] arg2: [%s] arg3: [%s]\n",
obj->obj_UnderlyingDatatypePtr->size, self->obj_UnderlyingDatatypePtr->size,
arg1, arg2, arg3); arg1, arg2, arg3);
return result; return result;
} }
@ -563,12 +573,12 @@ The only further addition is that ``tp_dealloc`` needs to clear any weak
references (by calling :c:func:`PyObject_ClearWeakRefs`):: references (by calling :c:func:`PyObject_ClearWeakRefs`)::
static void static void
Trivial_dealloc(TrivialObject *self) Trivial_dealloc(PyObject *op)
{ {
/* Clear weakrefs first before calling any destructors */ /* Clear weakrefs first before calling any destructors */
PyObject_ClearWeakRefs((PyObject *) self); PyObject_ClearWeakRefs(op);
/* ... remainder of destruction code omitted for brevity ... */ /* ... remainder of destruction code omitted for brevity ... */
Py_TYPE(self)->tp_free((PyObject *) self); Py_TYPE(op)->tp_free(op);
} }

View file

@ -250,16 +250,17 @@ Because we now have data to manage, we have to be more careful about object
allocation and deallocation. At a minimum, we need a deallocation method:: allocation and deallocation. At a minimum, we need a deallocation method::
static void static void
Custom_dealloc(CustomObject *self) Custom_dealloc(PyObject *op)
{ {
CustomObject *self = (CustomObject *) op;
Py_XDECREF(self->first); Py_XDECREF(self->first);
Py_XDECREF(self->last); Py_XDECREF(self->last);
Py_TYPE(self)->tp_free((PyObject *) self); Py_TYPE(self)->tp_free(self);
} }
which is assigned to the :c:member:`~PyTypeObject.tp_dealloc` member:: which is assigned to the :c:member:`~PyTypeObject.tp_dealloc` member::
.tp_dealloc = (destructor) Custom_dealloc, .tp_dealloc = Custom_dealloc,
This method first clears the reference counts of the two Python attributes. This method first clears the reference counts of the two Python attributes.
:c:func:`Py_XDECREF` correctly handles the case where its argument is :c:func:`Py_XDECREF` correctly handles the case where its argument is
@ -270,11 +271,31 @@ the object's type might not be :class:`!CustomType`, because the object may
be an instance of a subclass. be an instance of a subclass.
.. note:: .. note::
The explicit cast to ``destructor`` above is needed because we defined
``Custom_dealloc`` to take a ``CustomObject *`` argument, but the ``tp_dealloc`` The explicit cast to ``CustomObject *`` above is needed because we defined
function pointer expects to receive a ``PyObject *`` argument. Otherwise, ``Custom_dealloc`` to take a ``PyObject *`` argument, as the ``tp_dealloc``
the compiler will emit a warning. This is object-oriented polymorphism, function pointer expects to receive a ``PyObject *`` argument.
in C! By assigning to the ``tp_dealloc`` slot of a type, we declare
that it can only be called with instances of our ``CustomObject``
class, so the cast to ``(CustomObject *)`` is safe.
This is object-oriented polymorphism, in C!
In existing code, or in previous versions of this tutorial,
you might see similar functions take a pointer to the subtype
object structure (``CustomObject*``) directly, like this::
Custom_dealloc(CustomObject *self)
{
Py_XDECREF(self->first);
Py_XDECREF(self->last);
Py_TYPE(self)->tp_free((PyObject *) self);
}
...
.tp_dealloc = (destructor) Custom_dealloc,
This does the same thing on all architectures that CPython
supports, but according to the C standard, it invokes
undefined behavior.
We want to make sure that the first and last names are initialized to empty We want to make sure that the first and last names are initialized to empty
strings, so we provide a ``tp_new`` implementation:: strings, so we provide a ``tp_new`` implementation::
@ -352,8 +373,9 @@ We also define an initialization function which accepts arguments to provide
initial values for our instance:: initial values for our instance::
static int static int
Custom_init(CustomObject *self, PyObject *args, PyObject *kwds) Custom_init(PyObject *op, PyObject *args, PyObject *kwds)
{ {
CustomObject *self = (CustomObject *) op;
static char *kwlist[] = {"first", "last", "number", NULL}; static char *kwlist[] = {"first", "last", "number", NULL};
PyObject *first = NULL, *last = NULL, *tmp; PyObject *first = NULL, *last = NULL, *tmp;
@ -379,7 +401,7 @@ initial values for our instance::
by filling the :c:member:`~PyTypeObject.tp_init` slot. :: by filling the :c:member:`~PyTypeObject.tp_init` slot. ::
.tp_init = (initproc) Custom_init, .tp_init = Custom_init,
The :c:member:`~PyTypeObject.tp_init` slot is exposed in Python as the The :c:member:`~PyTypeObject.tp_init` slot is exposed in Python as the
:meth:`~object.__init__` method. It is used to initialize an object after it's :meth:`~object.__init__` method. It is used to initialize an object after it's
@ -403,8 +425,8 @@ the new attribute values. We might be tempted, for example to assign the
But this would be risky. Our type doesn't restrict the type of the But this would be risky. Our type doesn't restrict the type of the
``first`` member, so it could be any kind of object. It could have a ``first`` member, so it could be any kind of object. It could have a
destructor that causes code to be executed that tries to access the destructor that causes code to be executed that tries to access the
``first`` member; or that destructor could release the ``first`` member; or that destructor could detach the
:term:`Global interpreter Lock <GIL>` and let arbitrary code run in other :term:`thread state <attached thread state>` and let arbitrary code run in other
threads that accesses and modifies our object. threads that accesses and modifies our object.
To be paranoid and protect ourselves against this possibility, we almost To be paranoid and protect ourselves against this possibility, we almost
@ -413,8 +435,8 @@ don't we have to do this?
* when we absolutely know that the reference count is greater than 1; * when we absolutely know that the reference count is greater than 1;
* when we know that deallocation of the object [#]_ will neither release * when we know that deallocation of the object [#]_ will neither detach
the :term:`GIL` nor cause any calls back into our type's code; the :term:`thread state <attached thread state>` nor cause any calls back into our type's code;
* when decrementing a reference count in a :c:member:`~PyTypeObject.tp_dealloc` * when decrementing a reference count in a :c:member:`~PyTypeObject.tp_dealloc`
handler on a type which doesn't support cyclic garbage collection [#]_. handler on a type which doesn't support cyclic garbage collection [#]_.
@ -451,8 +473,9 @@ We define a single method, :meth:`!Custom.name`, that outputs the objects name a
concatenation of the first and last names. :: concatenation of the first and last names. ::
static PyObject * static PyObject *
Custom_name(CustomObject *self, PyObject *Py_UNUSED(ignored)) Custom_name(PyObject *op, PyObject *Py_UNUSED(dummy))
{ {
CustomObject *self = (CustomObject *) op;
if (self->first == NULL) { if (self->first == NULL) {
PyErr_SetString(PyExc_AttributeError, "first"); PyErr_SetString(PyExc_AttributeError, "first");
return NULL; return NULL;
@ -486,7 +509,7 @@ Now that we've defined the method, we need to create an array of method
definitions:: definitions::
static PyMethodDef Custom_methods[] = { static PyMethodDef Custom_methods[] = {
{"name", (PyCFunction) Custom_name, METH_NOARGS, {"name", Custom_name, METH_NOARGS,
"Return the name, combining the first and last name" "Return the name, combining the first and last name"
}, },
{NULL} /* Sentinel */ {NULL} /* Sentinel */
@ -543,15 +566,17 @@ we'll use custom getter and setter functions. Here are the functions for
getting and setting the :attr:`!first` attribute:: getting and setting the :attr:`!first` attribute::
static PyObject * static PyObject *
Custom_getfirst(CustomObject *self, void *closure) Custom_getfirst(PyObject *op, void *closure)
{ {
CustomObject *self = (CustomObject *) op;
Py_INCREF(self->first); Py_INCREF(self->first);
return self->first; return self->first;
} }
static int static int
Custom_setfirst(CustomObject *self, PyObject *value, void *closure) Custom_setfirst(PyObject *op, PyObject *value, void *closure)
{ {
CustomObject *self = (CustomObject *) op;
PyObject *tmp; PyObject *tmp;
if (value == NULL) { if (value == NULL) {
PyErr_SetString(PyExc_TypeError, "Cannot delete the first attribute"); PyErr_SetString(PyExc_TypeError, "Cannot delete the first attribute");
@ -583,9 +608,9 @@ new value is not a string.
We create an array of :c:type:`PyGetSetDef` structures:: We create an array of :c:type:`PyGetSetDef` structures::
static PyGetSetDef Custom_getsetters[] = { static PyGetSetDef Custom_getsetters[] = {
{"first", (getter) Custom_getfirst, (setter) Custom_setfirst, {"first", Custom_getfirst, Custom_setfirst,
"first name", NULL}, "first name", NULL},
{"last", (getter) Custom_getlast, (setter) Custom_setlast, {"last", Custom_getlast, Custom_setlast,
"last name", NULL}, "last name", NULL},
{NULL} /* Sentinel */ {NULL} /* Sentinel */
}; };
@ -609,8 +634,9 @@ We also need to update the :c:member:`~PyTypeObject.tp_init` handler to only
allow strings [#]_ to be passed:: allow strings [#]_ to be passed::
static int static int
Custom_init(CustomObject *self, PyObject *args, PyObject *kwds) Custom_init(PyObject *op, PyObject *args, PyObject *kwds)
{ {
CustomObject *self = (CustomObject *) op;
static char *kwlist[] = {"first", "last", "number", NULL}; static char *kwlist[] = {"first", "last", "number", NULL};
PyObject *first = NULL, *last = NULL, *tmp; PyObject *first = NULL, *last = NULL, *tmp;
@ -689,8 +715,9 @@ First, the traversal method lets the cyclic GC know about subobjects that could
participate in cycles:: participate in cycles::
static int static int
Custom_traverse(CustomObject *self, visitproc visit, void *arg) Custom_traverse(PyObject *op, visitproc visit, void *arg)
{ {
CustomObject *self = (CustomObject *) op;
int vret; int vret;
if (self->first) { if (self->first) {
vret = visit(self->first, arg); vret = visit(self->first, arg);
@ -716,8 +743,9 @@ functions. With :c:func:`Py_VISIT`, we can minimize the amount of boilerplate
in ``Custom_traverse``:: in ``Custom_traverse``::
static int static int
Custom_traverse(CustomObject *self, visitproc visit, void *arg) Custom_traverse(PyObject *op, visitproc visit, void *arg)
{ {
CustomObject *self = (CustomObject *) op;
Py_VISIT(self->first); Py_VISIT(self->first);
Py_VISIT(self->last); Py_VISIT(self->last);
return 0; return 0;
@ -731,8 +759,9 @@ Second, we need to provide a method for clearing any subobjects that can
participate in cycles:: participate in cycles::
static int static int
Custom_clear(CustomObject *self) Custom_clear(PyObject *op)
{ {
CustomObject *self = (CustomObject *) op;
Py_CLEAR(self->first); Py_CLEAR(self->first);
Py_CLEAR(self->last); Py_CLEAR(self->last);
return 0; return 0;
@ -765,11 +794,11 @@ Here is our reimplemented deallocator using :c:func:`PyObject_GC_UnTrack`
and ``Custom_clear``:: and ``Custom_clear``::
static void static void
Custom_dealloc(CustomObject *self) Custom_dealloc(PyObject *op)
{ {
PyObject_GC_UnTrack(self); PyObject_GC_UnTrack(op);
Custom_clear(self); (void)Custom_clear(op);
Py_TYPE(self)->tp_free((PyObject *) self); Py_TYPE(op)->tp_free(op);
} }
Finally, we add the :c:macro:`Py_TPFLAGS_HAVE_GC` flag to the class flags:: Finally, we add the :c:macro:`Py_TPFLAGS_HAVE_GC` flag to the class flags::
@ -825,9 +854,10 @@ When a Python object is a :class:`!SubList` instance, its ``PyObject *`` pointer
can be safely cast to both ``PyListObject *`` and ``SubListObject *``:: can be safely cast to both ``PyListObject *`` and ``SubListObject *``::
static int static int
SubList_init(SubListObject *self, PyObject *args, PyObject *kwds) SubList_init(PyObject *op, PyObject *args, PyObject *kwds)
{ {
if (PyList_Type.tp_init((PyObject *) self, args, kwds) < 0) SubListObject *self = (SubListObject *) op;
if (PyList_Type.tp_init(op, args, kwds) < 0)
return -1; return -1;
self->state = 0; self->state = 0;
return 0; return 0;

View file

@ -96,6 +96,13 @@ gives you access to spam's names, but does not create a separate copy. On Unix,
linking with a library is more like ``from spam import *``; it does create a linking with a library is more like ``from spam import *``; it does create a
separate copy. separate copy.
.. c:macro:: Py_NO_LINK_LIB
Turn off the implicit, ``#pragma``-based linkage with the Python
library, performed inside CPython header files.
.. versionadded:: 3.14
.. _win-dlls: .. _win-dlls:
@ -108,21 +115,46 @@ Using DLLs in Practice
Windows Python is built in Microsoft Visual C++; using other compilers may or Windows Python is built in Microsoft Visual C++; using other compilers may or
may not work. The rest of this section is MSVC++ specific. may not work. The rest of this section is MSVC++ specific.
When creating DLLs in Windows, you must pass :file:`pythonXY.lib` to the linker. When creating DLLs in Windows, you can use the CPython library in two ways:
To build two DLLs, spam and ni (which uses C functions found in spam), you could
use these commands::
cl /LD /I/python/include spam.c ../libs/pythonXY.lib 1. By default, inclusion of :file:`PC/pyconfig.h` directly or via
cl /LD /I/python/include ni.c spam.lib ../libs/pythonXY.lib :file:`Python.h` triggers an implicit, configure-aware link with the
library. The header file chooses :file:`pythonXY_d.lib` for Debug,
:file:`pythonXY.lib` for Release, and :file:`pythonX.lib` for Release with
the :ref:`Limited API <stable-application-binary-interface>` enabled.
The first command created three files: :file:`spam.obj`, :file:`spam.dll` and To build two DLLs, spam and ni (which uses C functions found in spam), you
:file:`spam.lib`. :file:`Spam.dll` does not contain any Python functions (such could use these commands::
as :c:func:`PyArg_ParseTuple`), but it does know how to find the Python code
thanks to :file:`pythonXY.lib`.
The second command created :file:`ni.dll` (and :file:`.obj` and :file:`.lib`), cl /LD /I/python/include spam.c
which knows how to find the necessary functions from spam, and also from the cl /LD /I/python/include ni.c spam.lib
Python executable.
The first command created three files: :file:`spam.obj`, :file:`spam.dll`
and :file:`spam.lib`. :file:`Spam.dll` does not contain any Python
functions (such as :c:func:`PyArg_ParseTuple`), but it does know how to find
the Python code thanks to the implicitly linked :file:`pythonXY.lib`.
The second command created :file:`ni.dll` (and :file:`.obj` and
:file:`.lib`), which knows how to find the necessary functions from spam,
and also from the Python executable.
2. Manually by defining :c:macro:`Py_NO_LINK_LIB` macro before including
:file:`Python.h`. You must pass :file:`pythonXY.lib` to the linker.
To build two DLLs, spam and ni (which uses C functions found in spam), you
could use these commands::
cl /LD /DPy_NO_LINK_LIB /I/python/include spam.c ../libs/pythonXY.lib
cl /LD /DPy_NO_LINK_LIB /I/python/include ni.c spam.lib ../libs/pythonXY.lib
The first command created three files: :file:`spam.obj`, :file:`spam.dll`
and :file:`spam.lib`. :file:`Spam.dll` does not contain any Python
functions (such as :c:func:`PyArg_ParseTuple`), but it does know how to find
the Python code thanks to :file:`pythonXY.lib`.
The second command created :file:`ni.dll` (and :file:`.obj` and
:file:`.lib`), which knows how to find the necessary functions from spam,
and also from the Python executable.
Not every identifier is exported to the lookup table. If you want any other Not every identifier is exported to the lookup table. If you want any other
modules (including Python) to be able to see your identifiers, you have to say modules (including Python) to be able to see your identifiers, you have to say

View file

@ -986,8 +986,8 @@ There are various techniques.
f() f()
Is there an equivalent to Perl's chomp() for removing trailing newlines from strings? Is there an equivalent to Perl's ``chomp()`` for removing trailing newlines from strings?
------------------------------------------------------------------------------------- -----------------------------------------------------------------------------------------
You can use ``S.rstrip("\r\n")`` to remove all occurrences of any line You can use ``S.rstrip("\r\n")`` to remove all occurrences of any line
terminator from the end of the string ``S`` without removing other trailing terminator from the end of the string ``S`` without removing other trailing
@ -1005,8 +1005,8 @@ Since this is typically only desired when reading text one line at a time, using
``S.rstrip()`` this way works well. ``S.rstrip()`` this way works well.
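For example (a small illustrative sketch of this call)::

    >>> "line of text\r\n".rstrip("\r\n")
    'line of text'
    >>> "line of text\n\r\n".rstrip("\r\n")    # any trailing run of \r and \n is removed
    'line of text'
    >>> "no trailing newline  ".rstrip("\r\n")  # other trailing whitespace is kept
    'no trailing newline  '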
Is there a scanf() or sscanf() equivalent? Is there a ``scanf()`` or ``sscanf()`` equivalent?
------------------------------------------ --------------------------------------------------
Not as such. Not as such.
@ -1020,8 +1020,8 @@ For more complicated input parsing, regular expressions are more powerful
than C's ``sscanf`` and better suited for the task. than C's ``sscanf`` and better suited for the task.
What does 'UnicodeDecodeError' or 'UnicodeEncodeError' error mean? What does ``UnicodeDecodeError`` or ``UnicodeEncodeError`` error mean?
------------------------------------------------------------------- ----------------------------------------------------------------------
See the :ref:`unicode-howto`. See the :ref:`unicode-howto`.
@ -1036,7 +1036,7 @@ A raw string ending with an odd number of backslashes will escape the string's q
>>> r'C:\this\will\not\work\' >>> r'C:\this\will\not\work\'
File "<stdin>", line 1 File "<stdin>", line 1
r'C:\this\will\not\work\' r'C:\this\will\not\work\'
^ ^
SyntaxError: unterminated string literal (detected at line 1) SyntaxError: unterminated string literal (detected at line 1)
There are several workarounds for this. One is to use regular strings and double There are several workarounds for this. One is to use regular strings and double
@ -1868,15 +1868,15 @@ object identity is assured. Generally, there are three circumstances where
identity is guaranteed: identity is guaranteed:
1) Assignments create new names but do not change object identity. After the 1) Assignments create new names but do not change object identity. After the
assignment ``new = old``, it is guaranteed that ``new is old``. assignment ``new = old``, it is guaranteed that ``new is old``.
2) Putting an object in a container that stores object references does not 2) Putting an object in a container that stores object references does not
change object identity. After the list assignment ``s[0] = x``, it is change object identity. After the list assignment ``s[0] = x``, it is
guaranteed that ``s[0] is x``. guaranteed that ``s[0] is x``.
3) If an object is a singleton, it means that only one instance of that object 3) If an object is a singleton, it means that only one instance of that object
can exist. After the assignments ``a = None`` and ``b = None``, it is can exist. After the assignments ``a = None`` and ``b = None``, it is
guaranteed that ``a is b`` because ``None`` is a singleton. guaranteed that ``a is b`` because ``None`` is a singleton.
In most other circumstances, identity tests are inadvisable and equality tests In most other circumstances, identity tests are inadvisable and equality tests
are preferred. In particular, identity tests should not be used to check are preferred. In particular, identity tests should not be used to check
@ -1906,28 +1906,30 @@ In the standard library code, you will see several common patterns for
correctly using identity tests: correctly using identity tests:
1) As recommended by :pep:`8`, an identity test is the preferred way to check 1) As recommended by :pep:`8`, an identity test is the preferred way to check
for ``None``. This reads like plain English in code and avoids confusion with for ``None``. This reads like plain English in code and avoids confusion
other objects that may have boolean values that evaluate to false. with other objects that may have boolean values that evaluate to false.
2) Detecting optional arguments can be tricky when ``None`` is a valid input 2) Detecting optional arguments can be tricky when ``None`` is a valid input
value. In those situations, you can create a singleton sentinel object value. In those situations, you can create a singleton sentinel object
guaranteed to be distinct from other objects. For example, here is how guaranteed to be distinct from other objects. For example, here is how
to implement a method that behaves like :meth:`dict.pop`:: to implement a method that behaves like :meth:`dict.pop`:
_sentinel = object() .. code-block:: python
def pop(self, key, default=_sentinel): _sentinel = object()
if key in self:
value = self[key] def pop(self, key, default=_sentinel):
del self[key] if key in self:
return value value = self[key]
if default is _sentinel: del self[key]
raise KeyError(key) return value
return default if default is _sentinel:
raise KeyError(key)
return default
3) Container implementations sometimes need to augment equality tests with 3) Container implementations sometimes need to augment equality tests with
identity tests. This prevents the code from being confused by objects such as identity tests. This prevents the code from being confused by objects
``float('NaN')`` that are not equal to themselves. such as ``float('NaN')`` that are not equal to themselves.
For example, here is the implementation of For example, here is the implementation of
:meth:`!collections.abc.Sequence.__contains__`:: :meth:`!collections.abc.Sequence.__contains__`::
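    # A sketch of such a membership test (the hunk above truncates the actual
    # stdlib source): the identity check comes first, so objects that are not
    # equal to themselves, such as float('NaN'), are still found by identity.
    def __contains__(self, value):
        for v in self:
            if v is value or v == value:
                return True
        return False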

View file

@ -115,7 +115,7 @@ Glossary
:keyword:`yield` expression. :keyword:`yield` expression.
Each :keyword:`yield` temporarily suspends processing, remembering the Each :keyword:`yield` temporarily suspends processing, remembering the
location execution state (including local variables and pending execution state (including local variables and pending
try-statements). When the *asynchronous generator iterator* effectively try-statements). When the *asynchronous generator iterator* effectively
resumes with another awaitable returned by :meth:`~object.__anext__`, it resumes with another awaitable returned by :meth:`~object.__anext__`, it
picks up where it left off. See :pep:`492` and :pep:`525`. picks up where it left off. See :pep:`492` and :pep:`525`.
@ -132,6 +132,28 @@ Glossary
iterator's :meth:`~object.__anext__` method until it raises a iterator's :meth:`~object.__anext__` method until it raises a
:exc:`StopAsyncIteration` exception. Introduced by :pep:`492`. :exc:`StopAsyncIteration` exception. Introduced by :pep:`492`.
attached thread state
A :term:`thread state` that is active for the current OS thread.
When a :term:`thread state` is attached, the OS thread has
access to the full Python C API and can safely invoke the
bytecode interpreter.
Unless a function explicitly notes otherwise, attempting to call
the C API without an attached thread state will result in a fatal
error or undefined behavior. A thread state can be attached and detached
explicitly by the user through the C API, or implicitly by the runtime,
including during blocking C calls and by the bytecode interpreter in between
calls.
On most builds of Python, having an attached thread state implies that the
caller holds the :term:`GIL` for the current interpreter, so only
one OS thread can have an attached thread state at a given moment. In
:term:`free-threaded <free threading>` builds of Python, threads can concurrently
hold an attached thread state, allowing for true parallelism of the bytecode
interpreter.
attribute attribute
A value associated with an object which is usually referenced by name A value associated with an object which is usually referenced by name
using dotted expressions. using dotted expressions.
@ -564,7 +586,7 @@ Glossary
An object created by a :term:`generator` function. An object created by a :term:`generator` function.
Each :keyword:`yield` temporarily suspends processing, remembering the Each :keyword:`yield` temporarily suspends processing, remembering the
location execution state (including local variables and pending execution state (including local variables and pending
try-statements). When the *generator iterator* resumes, it picks up where try-statements). When the *generator iterator* resumes, it picks up where
it left off (in contrast to functions which start fresh on every it left off (in contrast to functions which start fresh on every
invocation). invocation).
@ -622,6 +644,10 @@ Glossary
multi-threaded applications and makes it easier to use multi-core CPUs multi-threaded applications and makes it easier to use multi-core CPUs
efficiently. For more details, see :pep:`703`. efficiently. For more details, see :pep:`703`.
In prior versions of Python's C API, a function might declare that it
requires the GIL to be held in order to use it. This refers to having an
:term:`attached thread state`.
hash-based pyc hash-based pyc
A bytecode cache file that uses the hash rather than the last-modified A bytecode cache file that uses the hash rather than the last-modified
time of the corresponding source file to determine its validity. See time of the corresponding source file to determine its validity. See
@ -658,6 +684,9 @@ Glossary
and therefore it is never deallocated while the interpreter is running. and therefore it is never deallocated while the interpreter is running.
For example, :const:`True` and :const:`None` are immortal in CPython. For example, :const:`True` and :const:`None` are immortal in CPython.
Immortal objects can be identified via :func:`sys._is_immortal`, or
via :c:func:`PyUnstable_IsImmortal` in the C API.
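A quick check from the interpreter (an illustrative sketch; note that
:func:`!sys._is_immortal` is a private helper)::

    >>> import sys
    >>> sys._is_immortal(None)
    True
    >>> sys._is_immortal(object())   # ordinary objects are not immortal
    False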
immutable immutable
An object with a fixed value. Immutable objects include numbers, strings and An object with a fixed value. Immutable objects include numbers, strings and
tuples. Such an object cannot be altered. A new object has to tuples. Such an object cannot be altered. A new object has to
@ -716,7 +745,7 @@ Glossary
iterables include all sequence types (such as :class:`list`, :class:`str`, iterables include all sequence types (such as :class:`list`, :class:`str`,
and :class:`tuple`) and some non-sequence types like :class:`dict`, and :class:`tuple`) and some non-sequence types like :class:`dict`,
:term:`file objects <file object>`, and objects of any classes you define :term:`file objects <file object>`, and objects of any classes you define
with an :meth:`~iterator.__iter__` method or with a with an :meth:`~object.__iter__` method or with a
:meth:`~object.__getitem__` method :meth:`~object.__getitem__` method
that implements :term:`sequence` semantics. that implements :term:`sequence` semantics.
@ -797,6 +826,10 @@ Glossary
thread removes *key* from *mapping* after the test, but before the lookup. thread removes *key* from *mapping* after the test, but before the lookup.
This issue can be solved with locks or by using the EAFP approach. This issue can be solved with locks or by using the EAFP approach.
lexical analyzer
Formal name for the *tokenizer*; see :term:`token`.
list list
A built-in Python :term:`sequence`. Despite its name it is more akin A built-in Python :term:`sequence`. Despite its name it is more akin
to an array in other languages than to a linked list since access to to an array in other languages than to a linked list since access to
@ -811,9 +844,11 @@ Glossary
processed. processed.
loader loader
An object that loads a module. It must define a method named An object that loads a module.
:meth:`load_module`. A loader is typically returned by a It must define the :meth:`!exec_module` and :meth:`!create_module` methods
:term:`finder`. See also: to implement the :class:`~importlib.abc.Loader` interface.
A loader is typically returned by a :term:`finder`.
See also:
* :ref:`finders-and-loaders` * :ref:`finders-and-loaders`
* :class:`importlib.abc.Loader` * :class:`importlib.abc.Loader`
@ -934,11 +969,16 @@ Glossary
modules, respectively. modules, respectively.
namespace package namespace package
A :pep:`420` :term:`package` which serves only as a container for A :term:`package` which serves only as a container for subpackages.
subpackages. Namespace packages may have no physical representation, Namespace packages may have no physical representation,
and specifically are not like a :term:`regular package` because they and specifically are not like a :term:`regular package` because they
have no ``__init__.py`` file. have no ``__init__.py`` file.
Namespace packages allow several individually installable packages to have a common parent package.
Otherwise, it is recommended to use a :term:`regular package`.
For more information, see :pep:`420` and :ref:`reference-namespace-package`.
See also :term:`module`. See also :term:`module`.
nested scope nested scope
@ -1281,6 +1321,40 @@ Glossary
See also :term:`binary file` for a file object able to read and write See also :term:`binary file` for a file object able to read and write
:term:`bytes-like objects <bytes-like object>`. :term:`bytes-like objects <bytes-like object>`.
thread state
The information used by the :term:`CPython` runtime to run in an OS thread.
For example, this includes the current exception, if any, and the
state of the bytecode interpreter.
Each thread state is bound to a single OS thread, but threads may have
many thread states available. At most, one of them may be
:term:`attached <attached thread state>` at once.
An :term:`attached thread state` is required to call most
of Python's C API, unless a function explicitly documents otherwise.
The bytecode interpreter only runs under an attached thread state.
Each thread state belongs to a single interpreter, but each interpreter
may have many thread states, including multiple for the same OS thread.
Thread states from multiple interpreters may be bound to the same
thread, but only one can be :term:`attached <attached thread state>` in
that thread at any given moment.
See :ref:`Thread State and the Global Interpreter Lock <threads>` for more
information.
token
A small unit of source code, generated by the
:ref:`lexical analyzer <lexical>` (also called the *tokenizer*).
Names, numbers, strings, operators,
newlines and similar are represented by tokens.
The :mod:`tokenize` module exposes Python's lexical analyzer.
The :mod:`token` module contains information on the various types
of tokens.
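For example, the lexical analyzer can be driven from Python code
(an illustrative sketch)::

    import io
    import tokenize

    source = "total = 1 + 2\n"
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        # tok.type is a numeric constant from the token module; tok_name
        # maps it back to a readable name such as NAME, OP or NUMBER.
        print(tokenize.tok_name[tok.type], repr(tok.string))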
triple-quoted string triple-quoted string
A string which is bound by three instances of either a quotation mark A string which is bound by three instances of either a quotation mark
(") or an apostrophe ('). While they don't provide any functionality (") or an apostrophe ('). While they don't provide any functionality

View file

@ -145,8 +145,8 @@ importing the :func:`curses.wrapper` function and using it like this::
v = i-10 v = i-10
stdscr.addstr(i, 0, '10 divided by {} is {}'.format(v, 10/v)) stdscr.addstr(i, 0, '10 divided by {} is {}'.format(v, 10/v))
stdscr.refresh() stdscr.refresh()
stdscr.getkey() stdscr.getkey()
wrapper(main) wrapper(main)

View file

@ -243,6 +243,141 @@ depend on your extension, but some common patterns include:
`thread-local storage <https://en.cppreference.com/w/c/language/storage_duration>`_. `thread-local storage <https://en.cppreference.com/w/c/language/storage_duration>`_.
.. _critical-sections:

Critical Sections
=================
In the free-threaded build, CPython provides a mechanism called "critical
sections" to protect data that would otherwise be protected by the GIL.
While extension authors may not interact with the internal critical section
implementation directly, understanding their behavior is crucial when using
certain C API functions or managing shared state in the free-threaded build.
What Are Critical Sections?
...........................
Conceptually, critical sections act as a deadlock avoidance layer built on
top of simple mutexes. Each thread maintains a stack of active critical
sections. When a thread needs to acquire a lock associated with a critical
section (e.g., implicitly when calling a thread-safe C API function like
:c:func:`PyDict_SetItem`, or explicitly using macros), it attempts to acquire
the underlying mutex.
Using Critical Sections
.......................
The primary APIs for using critical sections are:
* :c:macro:`Py_BEGIN_CRITICAL_SECTION` and :c:macro:`Py_END_CRITICAL_SECTION` -
For locking a single object
* :c:macro:`Py_BEGIN_CRITICAL_SECTION2` and :c:macro:`Py_END_CRITICAL_SECTION2`
- For locking two objects simultaneously
These macros must be used in matching pairs and must appear in the same C
scope, since they establish a new local scope. These macros are no-ops in
non-free-threaded builds, so they can be safely added to code that needs to
support both build types.
A common use of a critical section would be to lock an object while accessing
an internal attribute of it. For example, if an extension type has an internal
count field, you could use a critical section while reading or writing that
field::
// read the count, returns new reference to internal count value
PyObject *result;
Py_BEGIN_CRITICAL_SECTION(obj);
result = Py_NewRef(obj->count);
Py_END_CRITICAL_SECTION();
return result;
// write the count, consumes reference from new_count
Py_BEGIN_CRITICAL_SECTION(obj);
obj->count = new_count;
Py_END_CRITICAL_SECTION();
How Critical Sections Work
..........................
Unlike traditional locks, critical sections do not guarantee exclusive access
throughout their entire duration. If a thread would block while holding a
critical section (e.g., by acquiring another lock or performing I/O), the
critical section is temporarily suspended—all locks are released—and then
resumed when the blocking operation completes.
This behavior is similar to what happens with the GIL when a thread makes a
blocking call. The key differences are:
* Critical sections operate on a per-object basis rather than globally
* Critical sections follow a stack discipline within each thread (the "begin" and
"end" macros enforce this since they must be paired and within the same scope)
* Critical sections automatically release and reacquire locks around potential
blocking operations
Deadlock Avoidance
..................
Critical sections help avoid deadlocks in two ways:
1. If a thread tries to acquire a lock that's already held by another thread,
it first suspends all of its active critical sections, temporarily releasing
their locks
2. When the blocking operation completes, only the top-most critical section is
reacquired first
This means you cannot rely on nested critical sections to lock multiple objects
at once, as the inner critical section may suspend the outer ones. Instead, use
:c:macro:`Py_BEGIN_CRITICAL_SECTION2` to lock two objects simultaneously.
Note that the locks described above are only :c:type:`!PyMutex` based locks.
The critical section implementation does not know about or affect other locking
mechanisms that might be in use, like POSIX mutexes. Also note that while
blocking on any :c:type:`!PyMutex` causes the critical sections to be
suspended, only the mutexes that are part of the critical sections are
released. If :c:type:`!PyMutex` is used without a critical section, it will
not be released and therefore does not get the same deadlock avoidance.
Important Considerations
........................
* Critical sections may temporarily release their locks, allowing other threads
to modify the protected data. Be careful about making assumptions about the
state of the data after operations that might block.
* Because locks can be temporarily released (suspended), entering a critical
section does not guarantee exclusive access to the protected resource
throughout the section's duration. If code within a critical section calls
another function that blocks (e.g., acquires another lock, performs blocking
I/O), all locks held by the thread via critical sections will be released.
This is similar to how the GIL can be released during blocking calls.
* Only the lock(s) associated with the most recently entered (top-most)
critical section are guaranteed to be held at any given time. Locks for
outer, nested critical sections might have been suspended.
* You can lock at most two objects simultaneously with these APIs. If you need
to lock more objects, you'll need to restructure your code.
* While critical sections will not deadlock if you attempt to lock the same
object twice, they are less efficient than purpose-built reentrant locks for
this use case.
* When using :c:macro:`Py_BEGIN_CRITICAL_SECTION2`, the order of the objects
doesn't affect correctness (the implementation handles deadlock avoidance),
but it's good practice to always lock objects in a consistent order.
* Remember that the critical section macros are primarily for protecting access
to *Python objects* that might be involved in internal CPython operations
susceptible to the deadlock scenarios described above. For protecting purely
internal extension state, standard mutexes or other synchronization
primitives might be more appropriate.
Building Extensions for the Free-Threaded Build Building Extensions for the Free-Threaded Build
=============================================== ===============================================

View file

@ -43,7 +43,7 @@ Identifying free-threaded Python
================================ ================================
To check if the current interpreter supports free-threading, :option:`python -VV <-V>` To check if the current interpreter supports free-threading, :option:`python -VV <-V>`
and :attr:`sys.version` contain "experimental free-threading build". and :data:`sys.version` contain "experimental free-threading build".
The new :func:`sys._is_gil_enabled` function can be used to check whether The new :func:`sys._is_gil_enabled` function can be used to check whether
the GIL is actually disabled in the running process. the GIL is actually disabled in the running process.
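Both checks can be combined in a small helper (a minimal sketch; the
``Py_GIL_DISABLED`` config variable reports build-time support)::

    import sys
    import sysconfig

    def describe_gil_status():
        # Non-zero on free-threaded builds, 0 (or None on older versions)
        # on default builds.
        build_supports_ft = bool(sysconfig.get_config_var("Py_GIL_DISABLED"))
        # Whether the GIL is actually enabled in this running process.
        gil_enabled = sys._is_gil_enabled()
        print("free-threaded build:", build_supports_ft)
        print("GIL enabled at runtime:", gil_enabled)

    describe_gil_status()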
@ -152,3 +152,33 @@ to re-enable it in a thread-safe way in the 3.14 release. This overhead is
expected to be reduced in upcoming Python releases. We are aiming for an expected to be reduced in upcoming Python releases. We are aiming for an
overhead of 10% or less on the pyperformance suite compared to the default overhead of 10% or less on the pyperformance suite compared to the default
GIL-enabled build. GIL-enabled build.
Behavioral changes
==================
This section describes CPython behavioral changes in the free-threaded
build.
Context variables
-----------------
In the free-threaded build, the flag :data:`~sys.flags.thread_inherit_context`
is set to true by default, which causes threads created with
:class:`threading.Thread` to start with a copy of the
:class:`~contextvars.Context()` of the caller of
:meth:`~threading.Thread.start`. In the default GIL-enabled build, the flag
defaults to false so threads start with an
empty :class:`~contextvars.Context()`.
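A minimal sketch that makes the difference visible (demo code, with an
arbitrary context variable name)::

    import contextvars
    import sys
    import threading

    request_id = contextvars.ContextVar("request_id", default=None)

    def worker():
        # With thread_inherit_context true (the free-threaded default), the
        # thread starts with a copy of the caller's context and sees "abc123";
        # with the flag false it starts with an empty context and sees None.
        print("thread_inherit_context:", sys.flags.thread_inherit_context)
        print("request_id seen by thread:", request_id.get())

    request_id.set("abc123")
    t = threading.Thread(target=worker)
    t.start()
    t.join()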
Warning filters
---------------
In the free-threaded build, the flag :data:`~sys.flags.context_aware_warnings`
is set to true by default. In the default GIL-enabled build, the flag defaults
to false. If the flag is true then the :class:`warnings.catch_warnings`
context manager uses a context variable for warning filters. If the flag is
false then :class:`~warnings.catch_warnings` modifies the global filters list,
which is not thread-safe. See the :mod:`warnings` module for more details.
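A small sketch of the thread-safe pattern this enables (illustrative only)::

    import threading
    import warnings

    def worker():
        # With context_aware_warnings true (the free-threaded default), this
        # filter change is confined to the current context; with the flag
        # false it mutates the process-wide filter list and can race with
        # other threads doing the same.
        with warnings.catch_warnings():
            warnings.simplefilter("error")
            try:
                warnings.warn("demo warning")
            except UserWarning as exc:
                print("caught in worker:", exc)

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()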

View file

@ -34,6 +34,7 @@ Python Library Reference.
mro.rst mro.rst
free-threading-python.rst free-threading-python.rst
free-threading-extensions.rst free-threading-extensions.rst
remote_debugging.rst
General: General:
@ -66,3 +67,4 @@ Debugging and profiling:
* :ref:`gdb` * :ref:`gdb`
* :ref:`instrumentation` * :ref:`instrumentation`
* :ref:`perf_profiling` * :ref:`perf_profiling`
* :ref:`remote-debugging`

View file

@ -626,6 +626,19 @@ which, when run, will produce:
of each message with the handler's level, and only passes a message to a of each message with the handler's level, and only passes a message to a
handler if it's appropriate to do so. handler if it's appropriate to do so.
.. versionchanged:: 3.14
The :class:`QueueListener` can be started (and stopped) via the
:keyword:`with` statement. For example:
.. code-block:: python
with QueueListener(que, handler) as listener:
# The queue listener automatically starts
# when the 'with' block is entered.
pass
# The queue listener automatically stops once
# the 'with' block is exited.
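A slightly fuller usage sketch, pairing the listener with a
:class:`QueueHandler` (illustrative values only):

.. code-block:: python

    import logging
    import queue
    from logging.handlers import QueueHandler, QueueListener

    que = queue.Queue()
    console = logging.StreamHandler()

    logger = logging.getLogger("demo")
    logger.setLevel(logging.INFO)
    logger.addHandler(QueueHandler(que))

    with QueueListener(que, console) as listener:
        # QueueHandler puts records on the queue; the listener's background
        # thread pops them and passes them to the StreamHandler.
        logger.info("handled via the queue listener")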
.. _network-logging: .. _network-logging:
Sending and receiving logging events across a network Sending and receiving logging events across a network
@ -825,9 +838,9 @@ To test these files, do the following in a POSIX environment:
which will lead to records being written to the log. which will lead to records being written to the log.
#. Inspect the log files in the :file:`run` subdirectory. You should see the #. Inspect the log files in the :file:`run` subdirectory. You should see the
most recent log lines in files matching the pattern :file:`app.log*`. They won't be in most recent log lines in files matching the pattern :file:`app.log*`. They
any particular order, since they have been handled concurrently by different won't be in any particular order, since they have been handled concurrently
worker processes in a non-deterministic way. by different worker processes in a non-deterministic way.
#. You can shut down the listener and the web application by running #. You can shut down the listener and the web application by running
``venv/bin/supervisorctl -c supervisor.conf shutdown``. ``venv/bin/supervisorctl -c supervisor.conf shutdown``.
@ -835,6 +848,19 @@ To test these files, do the following in a POSIX environment:
You may need to tweak the configuration files in the unlikely event that the You may need to tweak the configuration files in the unlikely event that the
configured ports clash with something else in your test environment. configured ports clash with something else in your test environment.
The default configuration uses a TCP socket on port 9020. You can use a Unix
Domain socket instead of a TCP socket by doing the following:
#. In :file:`listener.json`, add a ``socket`` key with the path to the domain
socket you want to use. If this key is present, the listener listens on the
corresponding domain socket and not on a TCP socket (the ``port`` key is
ignored).
#. In :file:`webapp.json`, change the socket handler configuration dictionary
so that the ``host`` value is the path to the domain socket, and set the
``port`` value to ``null``.
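Expressed as the equivalent :func:`logging.config.dictConfig` call (a hedged
sketch with example values; the actual :file:`webapp.json` contains more
settings), the handler entry would look roughly like this::

    import logging.config

    config = {
        "version": 1,
        "handlers": {
            "sock": {
                "class": "logging.handlers.SocketHandler",
                # Path to the domain socket (example value). With ``port``
                # set to None, SocketHandler uses a Unix domain socket.
                "host": "/tmp/app-logging.sock",
                "port": None,
            },
        },
        "root": {"level": "INFO", "handlers": ["sock"]},
    }

    logging.config.dictConfig(config)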
.. currentmodule:: logging .. currentmodule:: logging
.. _context-info: .. _context-info:

View file

@ -398,7 +398,7 @@ with inheritance diagram
We see that class G inherits from F and E, with F *before* E: therefore We see that class G inherits from F and E, with F *before* E: therefore
we would expect the attribute *G.remember2buy* to be inherited by we would expect the attribute *G.remember2buy* to be inherited by
*F.rembermer2buy* and not by *E.remember2buy*: nevertheless Python 2.2 *F.remember2buy* and not by *E.remember2buy*: nevertheless Python 2.2
gives gives
>>> G.remember2buy # doctest: +SKIP >>> G.remember2buy # doctest: +SKIP

View file

@ -162,12 +162,12 @@ the :option:`!-X` option takes precedence over the environment variable.
Example, using the environment variable:: Example, using the environment variable::
$ PYTHONPERFSUPPORT=1 perf record -F 9999 -g -o perf.data python script.py $ PYTHONPERFSUPPORT=1 perf record -F 9999 -g -o perf.data python my_script.py
$ perf report -g -i perf.data $ perf report -g -i perf.data
Example, using the :option:`!-X` option:: Example, using the :option:`!-X` option::
$ perf record -F 9999 -g -o perf.data python -X perf script.py $ perf record -F 9999 -g -o perf.data python -X perf my_script.py
$ perf report -g -i perf.data $ perf report -g -i perf.data
Example, using the :mod:`sys` APIs in file :file:`example.py`: Example, using the :mod:`sys` APIs in file :file:`example.py`:
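A minimal sketch of such a file, using :func:`sys.activate_stack_trampoline`
(not necessarily the exact example shipped with the docs)::

    import sys

    def busy_work():
        return sum(i * i for i in range(1_000_000))

    # Enable the perf trampolines only around the region of interest, then
    # turn them off so the rest of the program runs without the overhead.
    sys.activate_stack_trampoline("perf")
    busy_work()
    sys.deactivate_stack_trampoline()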
@ -236,7 +236,7 @@ When using the perf JIT mode, you need an extra step before you can run ``perf
report``. You need to call the ``perf inject`` command to inject the JIT report``. You need to call the ``perf inject`` command to inject the JIT
information into the ``perf.data`` file.:: information into the ``perf.data`` file.::
$ perf record -F 9999 -g --call-graph dwarf -o perf.data python -Xperf_jit my_script.py $ perf record -F 9999 -g -k 1 --call-graph dwarf -o perf.data python -Xperf_jit my_script.py
$ perf inject -i perf.data --jit --output perf.jit.data $ perf inject -i perf.data --jit --output perf.jit.data
$ perf report -g -i perf.jit.data $ perf report -g -i perf.jit.data
@ -254,13 +254,28 @@ files in the current directory which are ELF images for all the JIT trampolines
that were created by Python. that were created by Python.
.. warning:: .. warning::
Notice that when using ``--call-graph dwarf`` the ``perf`` tool will take When using ``--call-graph dwarf``, the ``perf`` tool will take
snapshots of the stack of the process being profiled and save the snapshots of the stack of the process being profiled and save the
information in the ``perf.data`` file. By default the size of the stack dump information in the ``perf.data`` file. By default, the size of the stack dump
is 8192 bytes but the user can change the size by passing the size after is 8192 bytes, but you can change the size by passing it after
comma like ``--call-graph dwarf,4096``. The size of the stack dump is a comma like ``--call-graph dwarf,16384``.
important because if the size is too small ``perf`` will not be able to
unwind the stack and the output will be incomplete. On the other hand, if
the size is too big, then ``perf`` won't be able to sample the process as
frequently as it would like as the overhead will be higher.
The size of the stack dump is important because if the size is too small
``perf`` will not be able to unwind the stack and the output will be
incomplete. On the other hand, if the size is too big, then ``perf`` won't
be able to sample the process as frequently as it would like as the overhead
will be higher.
The stack size is particularly important when profiling Python code compiled
with low optimization levels (like ``-O0``), as these builds tend to have
larger stack frames. If you are compiling Python with ``-O0`` and not seeing
Python functions in your profiling output, try increasing the stack dump
size to 65528 bytes (the maximum)::
$ perf record -F 9999 -g -k 1 --call-graph dwarf,65528 -o perf.data python -Xperf_jit my_script.py
Different compilation flags can significantly impact stack sizes:
- Builds with ``-O0`` typically have much larger stack frames than those with ``-O1`` or higher
- Adding optimizations (``-O1``, ``-O2``, etc.) typically reduces stack size
- Frame pointers (``-fno-omit-frame-pointer``) generally provide more reliable stack unwinding

View file

@ -738,9 +738,12 @@ given location, they can obviously be matched an infinite number of times.
different: ``\A`` still matches only at the beginning of the string, but ``^`` different: ``\A`` still matches only at the beginning of the string, but ``^``
may match at any location inside the string that follows a newline character. may match at any location inside the string that follows a newline character.
``\Z`` ``\z``
Matches only at the end of the string. Matches only at the end of the string.
``\Z``
The same as ``\z``. For compatibility with old Python versions.
``\b`` ``\b``
Word boundary. This is a zero-width assertion that matches only at the Word boundary. This is a zero-width assertion that matches only at the
beginning or end of a word. A word is defined as a sequence of alphanumeric beginning or end of a word. A word is defined as a sequence of alphanumeric

View file

@ -0,0 +1,545 @@
.. _remote-debugging:
Remote debugging attachment protocol
====================================
This section describes the low-level protocol that enables external tools to
inject and execute a Python script within a running CPython process.
This mechanism forms the basis of the :func:`sys.remote_exec` function, which
instructs a remote Python process to execute a ``.py`` file. However, this
section does not document the usage of that function. Instead, it provides a
detailed explanation of the underlying protocol, which takes as input the
``pid`` of a target Python process and the path to a Python source file to be
executed. This information supports independent reimplementation of the
protocol, regardless of programming language.
.. warning::
The execution of the injected script depends on the interpreter reaching a
safe evaluation point. As a result, execution may be delayed depending on
the runtime state of the target process.
Once injected, the script is executed by the interpreter within the target
process the next time a safe evaluation point is reached. This approach enables
remote execution capabilities without modifying the behavior or structure of
the running Python application.
Subsequent sections provide a step-by-step description of the protocol,
including techniques for locating interpreter structures in memory, safely
accessing internal fields, and triggering code execution. Platform-specific
variations are noted where applicable, and example implementations are included
to clarify each operation.
Locating the PyRuntime structure
================================
CPython places the ``PyRuntime`` structure in a dedicated binary section to
help external tools find it at runtime. The name and format of this section
vary by platform. For example, ``.PyRuntime`` is used on ELF systems, and
``__DATA,__PyRuntime`` is used on macOS. Tools can find the offset of this
structure by examining the binary on disk.
The ``PyRuntime`` structure contains CPython's global interpreter state and
provides access to other internal data, including the list of interpreters,
thread states, and debugger support fields.
To work with a remote Python process, a debugger must first find the memory
address of the ``PyRuntime`` structure in the target process. This address
can't be hardcoded or calculated from a symbol name, because it depends on
where the operating system loaded the binary.
The method for finding ``PyRuntime`` depends on the platform, but the steps are
the same in general:
1. Find the base address where the Python binary or shared library was loaded
in the target process.
2. Use the on-disk binary to locate the offset of the ``.PyRuntime`` section.
3. Add the section offset to the base address to compute the address in memory.
The sections below explain how to do this on each supported platform and
include example code.
.. rubric:: Linux (ELF)
To find the ``PyRuntime`` structure on Linux:
1. Read the process's memory map (for example, ``/proc/<pid>/maps``) to find
the address where the Python executable or ``libpython`` was loaded.
2. Parse the ELF section headers in the binary to get the offset of the
``.PyRuntime`` section.
3. Add that offset to the base address from step 1 to get the memory address of
``PyRuntime``.
The following is an example implementation::
def find_py_runtime_linux(pid: int) -> int:
# Step 1: Try to find the Python executable in memory
binary_path, base_address = find_mapped_binary(
pid, name_contains="python"
)
# Step 2: Fallback to shared library if executable is not found
if binary_path is None:
binary_path, base_address = find_mapped_binary(
pid, name_contains="libpython"
)
# Step 3: Parse ELF headers to get .PyRuntime section offset
section_offset = parse_elf_section_offset(
binary_path, ".PyRuntime"
)
# Step 4: Compute PyRuntime address in memory
return base_address + section_offset
On Linux systems, there are two main approaches to read memory from another
process. The first is through the ``/proc`` filesystem, specifically by reading from
``/proc/[pid]/mem`` which provides direct access to the process's memory. This
requires appropriate permissions - either being the same user as the target
process or having root access. The second approach is using the
``process_vm_readv()`` system call which provides a more efficient way to copy
memory between processes. While ptrace's ``PTRACE_PEEKTEXT`` operation can also be
used to read memory, it is significantly slower as it only reads one word at a
time and requires multiple context switches between the tracer and tracee
processes.
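A minimal sketch of the ``/proc`` based approach (the helper name is
illustrative, and the call requires the permissions described above)::

    import os

    def read_remote_memory(pid: int, address: int, size: int) -> bytes:
        # Read ``size`` bytes at ``address`` from the target process by
        # reading /proc/<pid>/mem at the given offset.
        fd = os.open(f"/proc/{pid}/mem", os.O_RDONLY)
        try:
            return os.pread(fd, size, address)
        finally:
            os.close(fd)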
For parsing ELF sections, the process involves reading and interpreting the ELF
file format structures from the binary file on disk. The ELF header contains a
pointer to the section header table. Each section header contains metadata about
a section including its name (stored in a separate string table), offset, and
size. To find a specific section like ``.PyRuntime``, you need to walk through these
headers and match the section name. The section header then provides the offset
where that section exists in the file, which can be used to calculate its
runtime address when the binary is loaded into memory.
You can read more about the ELF file format in the `ELF specification
<https://en.wikipedia.org/wiki/Executable_and_Linkable_Format>`_.
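The following sketch shows one way the hypothetical ``parse_elf_section_offset()``
helper used above might be implemented with the ``struct`` module. It assumes a
64-bit little-endian, position-independent binary and performs no validation::

    import struct

    def parse_elf_section_offset(binary_path: str, section_name: str) -> int:
        # Minimal ELF64 little-endian parser; a real tool would also
        # handle 32-bit binaries, big-endian targets, and malformed files.
        with open(binary_path, "rb") as f:
            data = f.read()
        # e_shoff, e_shentsize, e_shnum, e_shstrndx from the ELF header
        e_shoff = struct.unpack_from("<Q", data, 0x28)[0]
        e_shentsize, e_shnum, e_shstrndx = struct.unpack_from("<HHH", data, 0x3A)

        def section_header(index):
            start = e_shoff + index * e_shentsize
            return data[start:start + e_shentsize]

        # The section name string table is itself a section
        shstrtab_off = struct.unpack_from("<Q", section_header(e_shstrndx), 24)[0]
        for i in range(e_shnum):
            hdr = section_header(i)
            name_off = struct.unpack_from("<I", hdr, 0)[0]
            raw = data[shstrtab_off + name_off:]
            name = raw[:raw.index(b"\x00")].decode()
            if name == section_name:
                # sh_addr is the section's virtual address; for position-
                # independent binaries this is its offset from the load base.
                return struct.unpack_from("<Q", hdr, 16)[0]
        raise ValueError(f"section {section_name!r} not found")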
.. rubric:: macOS (Mach-O)
To find the ``PyRuntime`` structure on macOS:
1. Call ``task_for_pid()`` to get the ``mach_port_t`` task port for the target
process. This handle is needed to read memory using APIs like
``mach_vm_read_overwrite`` and ``mach_vm_region``.
2. Scan the memory regions to find the one containing the Python executable or
``libpython``.
3. Load the binary file from disk and parse the Mach-O headers to find the
section named ``PyRuntime`` in the ``__DATA`` segment. On macOS, symbol
names are automatically prefixed with an underscore, so the ``PyRuntime``
symbol appears as ``_PyRuntime`` in the symbol table, but the section name
is not affected.
The following is an example implementation::
def find_py_runtime_macos(pid: int) -> int:
# Step 1: Get access to the process's memory
handle = get_memory_access_handle(pid)
# Step 2: Try to find the Python executable in memory
binary_path, base_address = find_mapped_binary(
handle, name_contains="python"
)
# Step 3: Fallback to libpython if the executable is not found
if binary_path is None:
binary_path, base_address = find_mapped_binary(
handle, name_contains="libpython"
)
# Step 4: Parse Mach-O headers to get __DATA,__PyRuntime section offset
section_offset = parse_macho_section_offset(
binary_path, "__DATA", "__PyRuntime"
)
# Step 5: Compute the PyRuntime address in memory
return base_address + section_offset
On macOS, accessing another process's memory requires using Mach-O specific APIs
and file formats. The first step is obtaining a ``task_port`` handle via
``task_for_pid()``, which provides access to the target process's memory space.
This handle enables memory operations through APIs like
``mach_vm_read_overwrite()``.
The process memory can be examined using ``mach_vm_region()`` to scan through the
virtual memory space, while ``proc_regionfilename()`` helps identify which binary
files are loaded at each memory region. When the Python binary or library is
found, its Mach-O headers need to be parsed to locate the ``PyRuntime`` structure.
The Mach-O format organizes code and data into segments and sections. The
``PyRuntime`` structure lives in a section named ``__PyRuntime`` within the
``__DATA`` segment. The actual runtime address calculation involves finding the
``__TEXT`` segment which serves as the binary's base address, then locating the
``__DATA`` segment containing our target section. The final address is computed by
combining the base address with the appropriate section offsets from the Mach-O
headers.
Note that accessing another process's memory on macOS typically requires
elevated privileges - either root access or special security entitlements
granted to the debugging process.
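One possible sketch of the ``parse_macho_section_offset()`` helper used above is
shown below. It parses the on-disk binary with the ``struct`` module and assumes
a thin (non-universal) 64-bit little-endian Mach-O file; a universal binary
would first need the correct architecture slice to be selected::

    import struct

    LC_SEGMENT_64 = 0x19

    def parse_macho_section_offset(
        binary_path: str, segment_name: str, section_name: str
    ) -> int:
        with open(binary_path, "rb") as f:
            data = f.read()
        ncmds, sizeofcmds = struct.unpack_from("<II", data, 16)
        offset = 32          # sizeof(mach_header_64)
        text_vmaddr = None
        target_addr = None
        for _ in range(ncmds):
            cmd, cmdsize = struct.unpack_from("<II", data, offset)
            if cmd == LC_SEGMENT_64:
                segname = data[offset + 8:offset + 24].rstrip(b"\x00").decode()
                vmaddr, = struct.unpack_from("<Q", data, offset + 24)
                nsects, = struct.unpack_from("<I", data, offset + 64)
                if segname == "__TEXT":
                    text_vmaddr = vmaddr
                sect = offset + 72   # sizeof(segment_command_64)
                for _ in range(nsects):
                    sectname = data[sect:sect + 16].rstrip(b"\x00").decode()
                    if segname == segment_name and sectname == section_name:
                        target_addr, = struct.unpack_from("<Q", data, sect + 32)
                    sect += 80       # sizeof(section_64)
            offset += cmdsize
        if text_vmaddr is None or target_addr is None:
            raise ValueError(f"{segment_name},{section_name} not found")
        # Return the section's distance from the __TEXT segment's preferred
        # address; adding this to the load base gives the runtime address.
        return target_addr - text_vmaddr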
.. rubric:: Windows (PE)
To find the ``PyRuntime`` structure on Windows:
1. Use the ToolHelp API to enumerate all modules loaded in the target process.
This is done using functions such as `CreateToolhelp32Snapshot
<https://learn.microsoft.com/en-us/windows/win32/api/tlhelp32/nf-tlhelp32-createtoolhelp32snapshot>`_,
`Module32First
<https://learn.microsoft.com/en-us/windows/win32/api/tlhelp32/nf-tlhelp32-module32first>`_,
and `Module32Next
<https://learn.microsoft.com/en-us/windows/win32/api/tlhelp32/nf-tlhelp32-module32next>`_.
2. Identify the module corresponding to :file:`python.exe` or
:file:`python{XY}.dll`, where ``X`` and ``Y`` are the major and minor
version numbers of the Python version, and record its base address.
3. Locate the ``PyRuntim`` section. Due to the PE format's 8-character limit
on section names (defined as ``IMAGE_SIZEOF_SHORT_NAME``), the original
name ``PyRuntime`` is truncated. This section contains the ``PyRuntime``
structure.
4. Retrieve the section's relative virtual address (RVA) and add it to the base
address of the module.
The following is an example implementation::
def find_py_runtime_windows(pid: int) -> int:
# Step 1: Try to find the Python executable in memory
binary_path, base_address = find_loaded_module(
pid, name_contains="python"
)
# Step 2: Fallback to shared pythonXY.dll if the executable is not
# found
if binary_path is None:
binary_path, base_address = find_loaded_module(
pid, name_contains="python3"
)
# Step 3: Parse PE section headers to get the RVA of the PyRuntime
# section. The section name appears as "PyRuntim" due to the
# 8-character limit defined by the PE format (IMAGE_SIZEOF_SHORT_NAME).
section_rva = parse_pe_section_offset(binary_path, "PyRuntim")
# Step 4: Compute PyRuntime address in memory
return base_address + section_rva
On Windows, accessing another process's memory requires using the Windows API
functions like ``CreateToolhelp32Snapshot()`` and ``Module32First()/Module32Next()``
to enumerate loaded modules. The ``OpenProcess()`` function provides a handle to
access the target process's memory space, enabling memory operations through
``ReadProcessMemory()``.
The process memory can be examined by enumerating loaded modules to find the
Python binary or DLL. When found, its PE headers need to be parsed to locate the
``PyRuntime`` structure.
The PE format organizes code and data into sections. The ``PyRuntime`` structure
lives in a section named "PyRuntim" (truncated from "PyRuntime" due to PE's
8-character name limit). The actual runtime address calculation involves finding
the module's base address from the module entry, then locating our target
section in the PE headers. The final address is computed by combining the base
address with the section's virtual address from the PE section headers.
Note that accessing another process's memory on Windows typically requires
appropriate privileges - either administrative access or the ``SeDebugPrivilege``
privilege granted to the debugging process.
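A sketch of the ``parse_pe_section_offset()`` helper used above, based on
reading the PE headers with the ``struct`` module, might look like this (no
validation of the DOS or PE signatures is performed)::

    import struct

    def parse_pe_section_offset(binary_path: str, section_name: str) -> int:
        # Minimal PE parser; returns the RVA of the named section.
        with open(binary_path, "rb") as f:
            data = f.read()
        # e_lfanew at offset 0x3C points to the "PE\0\0" signature.
        pe_offset = struct.unpack_from("<I", data, 0x3C)[0]
        # The COFF file header follows the 4-byte signature.
        num_sections, = struct.unpack_from("<H", data, pe_offset + 6)
        opt_header_size, = struct.unpack_from("<H", data, pe_offset + 20)
        # The section table starts after the optional header.
        section_table = pe_offset + 24 + opt_header_size
        for i in range(num_sections):
            entry = section_table + i * 40    # each entry is 40 bytes
            name = data[entry:entry + 8].rstrip(b"\x00").decode()
            if name == section_name:
                # VirtualAddress (the section's RVA) is at offset 12.
                return struct.unpack_from("<I", data, entry + 12)[0]
        raise ValueError(f"section {section_name!r} not found")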
Reading _Py_DebugOffsets
========================
Once the address of the ``PyRuntime`` structure has been determined, the next
step is to read the ``_Py_DebugOffsets`` structure located at the beginning of
the ``PyRuntime`` block.
This structure provides version-specific field offsets that are needed to
safely read interpreter and thread state memory. These offsets vary between
CPython versions and must be checked before use to ensure they are compatible.
To read and check the debug offsets, follow these steps:
1. Read memory from the target process starting at the ``PyRuntime`` address,
covering the same number of bytes as the ``_Py_DebugOffsets`` structure.
   This structure is located at the very start of the ``PyRuntime`` memory
   block. Its layout is defined in CPython's internal headers and stays the
   same within a given minor version, but may change between minor versions.
2. Check that the structure contains valid data:
- The ``cookie`` field must match the expected debug marker.
- The ``version`` field must match the version of the Python interpreter
used by the debugger.
- If either the debugger or the target process is using a pre-release
version (for example, an alpha, beta, or release candidate), the versions
must match exactly.
- The ``free_threaded`` field must have the same value in both the debugger
and the target process.
3. If the structure is valid, the offsets it contains can be used to locate
fields in memory. If any check fails, the debugger should stop the operation
to avoid reading memory in the wrong format.
The following is an example implementation that reads and checks
``_Py_DebugOffsets``::
def read_debug_offsets(pid: int, py_runtime_addr: int) -> DebugOffsets:
# Step 1: Read memory from the target process at the PyRuntime address
data = read_process_memory(
pid, address=py_runtime_addr, size=DEBUG_OFFSETS_SIZE
)
# Step 2: Deserialize the raw bytes into a _Py_DebugOffsets structure
debug_offsets = parse_debug_offsets(data)
# Step 3: Validate the contents of the structure
if debug_offsets.cookie != EXPECTED_COOKIE:
raise RuntimeError("Invalid or missing debug cookie")
if debug_offsets.version != LOCAL_PYTHON_VERSION:
raise RuntimeError(
"Mismatch between caller and target Python versions"
)
if debug_offsets.free_threaded != LOCAL_FREE_THREADED:
raise RuntimeError("Mismatch in free-threaded configuration")
return debug_offsets
.. warning::
**Process suspension recommended**
To avoid race conditions and ensure memory consistency, it is strongly
recommended that the target process be suspended before performing any
operations that read or write internal interpreter state. The Python runtime
   may concurrently mutate interpreter data structures, for example when
   creating or destroying threads, during normal execution. This can result
   in invalid memory reads or writes.
A debugger may suspend execution by attaching to the process with ``ptrace``
or by sending a ``SIGSTOP`` signal. Execution should only be resumed after
debugger-side memory operations are complete.
.. note::
Some tools, such as profilers or sampling-based debuggers, may operate on
a running process without suspension. In such cases, tools must be
explicitly designed to handle partially updated or inconsistent memory.
For most debugger implementations, suspending the process remains the
safest and most robust approach.
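On POSIX systems, one simple way to follow this recommendation is to wrap the
debugger-side operations in a stop/continue pair, as in the following sketch
(a production debugger would more likely attach with ``ptrace`` and wait for
the stop to be reported)::

    import contextlib
    import os
    import signal

    @contextlib.contextmanager
    def suspended(pid: int):
        # Stop the target while its memory is inspected or modified, and
        # let it continue afterwards so it can run the injected script.
        os.kill(pid, signal.SIGSTOP)
        try:
            yield
        finally:
            os.kill(pid, signal.SIGCONT)

The reads and writes described in the following sections would then be
performed inside the ``with suspended(pid):`` block.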
Locating the interpreter and thread state
=========================================
Before code can be injected and executed in a remote Python process, the
debugger must choose a thread in which to schedule execution. This is necessary
because the control fields used to perform remote code injection are located in
the ``_PyRemoteDebuggerSupport`` structure, which is embedded in a
``PyThreadState`` object. These fields are modified by the debugger to request
execution of injected scripts.
The ``PyThreadState`` structure represents a thread running inside a Python
interpreter. It maintains the thread's evaluation context and contains the
fields required for debugger coordination. Locating a valid ``PyThreadState``
is therefore a key prerequisite for triggering execution remotely.
A thread is typically selected based on its role or ID. In most cases, the main
thread is used, but some tools may target a specific thread by its native
thread ID. Once the target thread is chosen, the debugger must locate both the
interpreter and the associated thread state structures in memory.
The relevant internal structures are defined as follows:
- ``PyInterpreterState`` represents an isolated Python interpreter instance.
Each interpreter maintains its own set of imported modules, built-in state,
and thread state list. Although most Python applications use a single
interpreter, CPython supports multiple interpreters in the same process.
- ``PyThreadState`` represents a thread running within an interpreter. It
contains execution state and the control fields used by the debugger.
To locate a thread:
1. Use the offset ``runtime_state.interpreters_head`` to obtain the address of
the first interpreter in the ``PyRuntime`` structure. This is the entry point
to the linked list of active interpreters.
2. Use the offset ``interpreter_state.threads_main`` to access the main thread
state associated with the selected interpreter. This is typically the most
reliable thread to target.
3. Optionally, use the offset ``interpreter_state.threads_head`` to iterate
through the linked list of all thread states. Each ``PyThreadState`` structure
contains a ``native_thread_id`` field, which may be compared to a target thread
ID to find a specific thread.
4. Once a valid ``PyThreadState`` has been found, its address can be used in
later steps of the protocol, such as writing debugger control fields and
scheduling execution.
The following is an example implementation that locates the main thread state::
def find_main_thread_state(
pid: int, py_runtime_addr: int, debug_offsets: DebugOffsets,
) -> int:
# Step 1: Read interpreters_head from PyRuntime
interp_head_ptr = (
py_runtime_addr + debug_offsets.runtime_state.interpreters_head
)
interp_addr = read_pointer(pid, interp_head_ptr)
if interp_addr == 0:
raise RuntimeError("No interpreter found in the target process")
# Step 2: Read the threads_main pointer from the interpreter
threads_main_ptr = (
interp_addr + debug_offsets.interpreter_state.threads_main
)
thread_state_addr = read_pointer(pid, threads_main_ptr)
if thread_state_addr == 0:
raise RuntimeError("Main thread state is not available")
return thread_state_addr
The following example demonstrates how to locate a thread by its native thread
ID::
def find_thread_by_id(
pid: int,
interp_addr: int,
debug_offsets: DebugOffsets,
target_tid: int,
) -> int:
# Start at threads_head and walk the linked list
thread_ptr = read_pointer(
pid,
interp_addr + debug_offsets.interpreter_state.threads_head
)
while thread_ptr:
native_tid_ptr = (
thread_ptr + debug_offsets.thread_state.native_thread_id
)
native_tid = read_int(pid, native_tid_ptr)
if native_tid == target_tid:
return thread_ptr
thread_ptr = read_pointer(
pid,
thread_ptr + debug_offsets.thread_state.next
)
raise RuntimeError("Thread with the given ID was not found")
Once a valid thread state has been located, the debugger can proceed with
modifying its control fields and scheduling execution, as described in the next
section.
Writing control information
===========================
Once a valid ``PyThreadState`` structure has been identified, the debugger may
modify control fields within it to schedule the execution of a specified Python
script. These control fields are checked periodically by the interpreter, and
when set correctly, they trigger the execution of remote code at a safe point
in the evaluation loop.
Each ``PyThreadState`` contains a ``_PyRemoteDebuggerSupport`` structure used
for communication between the debugger and the interpreter. The locations of
its fields are defined by the ``_Py_DebugOffsets`` structure and include the
following:
- ``debugger_script_path``: A fixed-size buffer that holds the full path to a
Python source file (``.py``). This file must be accessible and readable by
the target process when execution is triggered.
- ``debugger_pending_call``: An integer flag. Setting this to ``1`` tells the
interpreter that a script is ready to be executed.
- ``eval_breaker``: A field checked by the interpreter during execution.
Setting bit 5 (``_PY_EVAL_PLEASE_STOP_BIT``, value ``1U << 5``) in this
field causes the interpreter to pause and check for debugger activity.
To complete the injection, the debugger must perform the following steps:
1. Write the full script path into the ``debugger_script_path`` buffer.
2. Set ``debugger_pending_call`` to ``1``.
3. Read the current value of ``eval_breaker``, set bit 5
(``_PY_EVAL_PLEASE_STOP_BIT``), and write the updated value back. This
signals the interpreter to check for debugger activity.
The following is an example implementation::
def inject_script(
pid: int,
thread_state_addr: int,
debug_offsets: DebugOffsets,
script_path: str
) -> None:
# Compute the base offset of _PyRemoteDebuggerSupport
support_base = (
thread_state_addr +
debug_offsets.debugger_support.remote_debugger_support
)
# Step 1: Write the script path into debugger_script_path
script_path_ptr = (
support_base +
debug_offsets.debugger_support.debugger_script_path
)
write_string(pid, script_path_ptr, script_path)
# Step 2: Set debugger_pending_call to 1
pending_ptr = (
support_base +
debug_offsets.debugger_support.debugger_pending_call
)
write_int(pid, pending_ptr, 1)
# Step 3: Set _PY_EVAL_PLEASE_STOP_BIT (bit 5, value 1 << 5) in
# eval_breaker
eval_breaker_ptr = (
thread_state_addr +
debug_offsets.debugger_support.eval_breaker
)
breaker = read_int(pid, eval_breaker_ptr)
breaker |= (1 << 5)
write_int(pid, eval_breaker_ptr, breaker)
Once these fields are set, the debugger may resume the process (if it was
suspended). The interpreter will process the request at the next safe
evaluation point, load the script from disk, and execute it.
It is the responsibility of the debugger to ensure that the script file remains
present and accessible to the target process during execution.
.. note::
Script execution is asynchronous. The script file cannot be deleted
immediately after injection. The debugger should wait until the injected
script has produced an observable effect before removing the file.
This effect depends on what the script is designed to do. For example,
a debugger might wait until the remote process connects back to a socket
before removing the script. Once such an effect is observed, it is safe to
assume the file is no longer needed.
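The connect-back pattern mentioned above could be sketched as follows; the
``run_and_cleanup()`` name and the ``inject`` callable are placeholders for
whatever wrapper a tool builds around the ``inject_script()`` example above::

    import os
    import socket
    import tempfile

    def run_and_cleanup(inject, timeout: float = 10.0) -> None:
        # Listen on an ephemeral local port, inject a script that connects
        # back to it, and delete the script only after the connection
        # arrives (proof that the target has executed it).
        with socket.create_server(("127.0.0.1", 0)) as server:
            server.settimeout(timeout)
            port = server.getsockname()[1]
            fd, path = tempfile.mkstemp(suffix=".py")
            try:
                with os.fdopen(fd, "w") as f:
                    f.write(
                        "import socket\n"
                        f"socket.create_connection(('127.0.0.1', {port})).close()\n"
                    )
                inject(path)     # for example, a wrapper around inject_script()
                server.accept()  # blocks until the injected script runs
            finally:
                os.remove(path)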
Summary
=======
To inject and execute a Python script in a remote process:
1. Locate the ``PyRuntime`` structure in the target process's memory.
2. Read and validate the ``_Py_DebugOffsets`` structure at the beginning of
``PyRuntime``.
3. Use the offsets to locate a valid ``PyThreadState``.
4. Write the path to a Python script into ``debugger_script_path``.
5. Set the ``debugger_pending_call`` flag to ``1``.
6. Set ``_PY_EVAL_PLEASE_STOP_BIT`` in the ``eval_breaker`` field.
7. Resume the process (if suspended). The script will execute at the next safe
evaluation point.
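Putting the pieces together, a hypothetical end-to-end driver assembled from
the example helpers sketched throughout this document (including the
``suspended()`` helper, which is itself only a sketch) might look like this::

    def execute_remote_script(pid: int, script_path: str) -> None:
        # End-to-end driver built from the example helpers shown earlier
        # (Linux variant shown; substitute the macOS or Windows helpers
        # as appropriate).
        py_runtime_addr = find_py_runtime_linux(pid)
        debug_offsets = read_debug_offsets(pid, py_runtime_addr)
        with suspended(pid):
            thread_state_addr = find_main_thread_state(
                pid, py_runtime_addr, debug_offsets
            )
            inject_script(
                pid, thread_state_addr, debug_offsets, script_path
            )
        # The script runs at the next safe evaluation point once the
        # process has resumed.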