- Factor out _translate_newlines() as a module-level function and have
Popen's method delegate to it for code sharing (sketched below)
- Remove rejection of universal_newlines kwarg in run_pipeline(), treat
it the same as text=True (consistent with Popen behavior)
- Use _translate_newlines() for text mode decoding in run_pipeline()
to properly handle \r\n and \r newline sequences
- Update documentation to remove mention of universal_newlines rejection
- Update test to verify universal_newlines=True works like text=True
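A minimal sketch of the shape of the shared helper (the actual signature
may differ):

```python
# Rough sketch only; the real module-level helper may take different
# parameters.  It decodes raw bytes and normalizes \r\n and \r to \n,
# and Popen._translate_newlines() simply delegates to it.
def _translate_newlines(data, encoding, errors):
    data = data.decode(encoding, errors)
    return data.replace("\r\n", "\n").replace("\r", "\n")
```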
Co-authored-by: Claude <noreply@anthropic.com>
Extract common stdin preparation logic into shared helper functions
used by both _communicate_streams_posix() and Popen._communicate():
- _flush_stdin(stdin): Flush stdin, ignoring BrokenPipeError and
ValueError (for closed files)
- _make_input_view(input_data): Convert input data to a byte memoryview,
handling non-byte memoryviews by casting to "b" view
This ensures consistent behavior and makes the fixes for gh-134453
(memoryview) and gh-74389 (closed stdin) shared in one place.
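A rough sketch of what the two helpers amount to (bodies assumed from the
descriptions above, not copied from the change):

```python
def _flush_stdin(stdin):
    # Flush pending buffered data; the pipe may already be broken or the
    # file object closed (gh-74389), in which case flush() raises and the
    # error is ignored.
    try:
        stdin.flush()
    except (BrokenPipeError, ValueError):
        pass

def _make_input_view(input_data):
    # Work on a byte-oriented view so write progress is counted in bytes,
    # not in multi-byte elements (gh-134453).
    view = memoryview(input_data)
    if view.itemsize != 1:
        view = view.cast("b")
    return view
```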
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Apply the same fixes from Popen._communicate() to _communicate_streams_posix
for run_pipeline():
1. Handle non-byte memoryview input by casting to byte view (gh-134453):
Non-byte memoryviews (e.g., int32 arrays) had incorrect length tracking
because len() returns element count, not byte count. Now cast to "b"
view for correct progress tracking.
2. Handle ValueError on stdin.flush() when stdin is closed (gh-74389):
Ignore ValueError from flush() if stdin is already closed, matching
the BrokenPipeError handling.
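For illustration, the length mismatch that the cast in item 1 fixes
(assuming 4-byte "i" elements):

```python
import array
buf = memoryview(array.array("i", [1, 2, 3, 4]))
len(buf)            # 4  -- element count, what the old code compared against
buf.nbytes          # 16 -- bytes that actually need to be written
len(buf.cast("b"))  # 16 -- the byte view makes len() match the byte count
```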
Add tests for memoryview input to run_pipeline:
- test_pipeline_memoryview_input: basic byte memoryview
- test_pipeline_memoryview_input_nonbyte: int32 array memoryview
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Move stdin writing to a background thread in _communicate_streams_windows
to avoid blocking indefinitely when writing large input to a pipeline
where the subprocess doesn't consume stdin quickly.
This mirrors the fix made to Popen._communicate() for Windows in
commit 5b1862b (gh-87512).
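Roughly, the shape of the change (a sketch, not the actual code; the
helper name is made up):

```python
import threading

def _start_stdin_writer(stdin, data):
    # Hypothetical helper: feed stdin from a daemon thread so the main
    # thread can enforce the timeout with a bounded join() instead of
    # blocking inside write().
    def writer():
        try:
            stdin.write(data)
        except BrokenPipeError:
            pass  # the child exited without reading everything
        finally:
            try:
                stdin.close()
            except OSError:
                pass
    thread = threading.Thread(target=writer, daemon=True)
    thread.start()
    return thread
```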
Add test_pipeline_timeout_large_input to verify that TimeoutExpired
is raised promptly when run_pipeline() is called with large input
and a timeout, even when the first process is slow to consume stdin.
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Extract the core selector-based I/O loop into a new _communicate_io_posix()
function that is shared by both _communicate_streams_posix() (used by
run_pipeline) and Popen._communicate() (used by Popen.communicate).
The new function:
- Takes a pre-configured selector and output buffers
- Supports resume via input_offset parameter (for Popen timeout retry)
- Returns (new_offset, completed) instead of raising TimeoutExpired
- Does not close streams (caller decides based on use case)
This reduces code duplication and ensures both APIs use the same
well-tested I/O multiplexing logic.
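A self-contained sketch of the loop's shape and return contract (parameter
handling is simplified and the names are assumed, not taken from the
change):

```python
import os, selectors, time

def _communicate_io_posix_sketch(selector, input_view, input_offset,
                                 buffers, deadline=None):
    # stdin is assumed registered with data="stdin"; each output stream is
    # registered with data set to its key in `buffers`.
    while selector.get_map():
        timeout = None if deadline is None else deadline - time.monotonic()
        if timeout is not None and timeout <= 0:
            return input_offset, False   # caller decides to raise TimeoutExpired
        for key, _events in selector.select(timeout):
            if key.data == "stdin":
                chunk = input_view[input_offset:input_offset + 32768]
                input_offset += os.write(key.fd, chunk)
                if input_offset >= len(input_view):
                    selector.unregister(key.fileobj)
                    key.fileobj.close()
            else:
                data = os.read(key.fd, 32768)
                if data:
                    buffers[key.data].append(data)
                else:
                    selector.unregister(key.fileobj)
    return input_offset, True            # everything drained; completed
```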
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
The comment suggested rewriting Popen._communicate() to use
non-blocking I/O on file objects now that Python 3's io module
is used instead of C stdio.
This is unnecessary: the current approach using select() to
detect ready fds followed by os.read()/os.write() is correct
and efficient. The selector already answers "when is data ready?",
so non-blocking mode would add complexity with no benefit.
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Remove support for raw file descriptors in _communicate_streams(),
requiring all streams to be file objects. This simplifies both the
Windows and POSIX implementations by removing isinstance() checks
and fd-wrapping logic.
The run_pipeline() function now wraps the stderr pipe's read end
with os.fdopen() immediately after creation.
This change makes _communicate_streams() more compatible with
Popen.communicate() which already uses file objects, enabling
potential future refactoring to share the multiplexed I/O logic.
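Illustrative sketch of the wrapping described above (variable names
assumed):

```python
import os

# The stderr pipe's read end is wrapped right after creation, so
# _communicate_streams() only ever sees file objects, never raw fds.
errread, errwrite = os.pipe()
stderr_reader = os.fdopen(errread, "rb")
```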
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Add _communicate_streams() helper function that properly multiplexes
read/write operations to prevent pipe buffer deadlocks. The helper
uses selectors on POSIX and threads on Windows, similar to
Popen.communicate().
This fixes potential deadlocks when large amounts of data flow through
the pipeline and significantly improves performance.
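The kind of situation the multiplexing avoids (illustrative): writing a
large input while also capturing a large output, where doing either one
to completion first can deadlock once both pipe buffers fill.

```python
import subprocess

big = b"x" * (1 << 22)  # a few MiB, well past typical pipe buffer sizes
result = subprocess.run_pipeline(
    ["cat"], ["cat"],
    input=big, capture_output=True,
)
assert result.stdout == big
```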
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Add a new run_pipeline() function to the subprocess module that enables
running multiple commands connected via pipes, similar to shell pipelines.
New API:
- run_pipeline(*commands, ...) - Run a pipeline of commands
- PipelineResult - Return type with commands, returncodes, stdout, stderr
- PipelineError - Raised when check=True and any command fails
Features:
- Supports arbitrary number of commands (minimum 2)
- capture_output, input, timeout, and check parameters like run()
- stdin= connects to first process, stdout= connects to last process
- Text mode support via text=True, encoding, errors
- All processes share a single stderr pipe for simplicity
- "pipefail" semantics: check=True fails if any command fails
Unlike run(), this function does not accept universal_newlines.
Use text=True instead.
Example:

    result = subprocess.run_pipeline(
        ['cat', 'file.txt'],
        ['grep', 'pattern'],
        ['wc', '-l'],
        capture_output=True, text=True
    )
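And a hedged sketch of the "pipefail" behavior with check=True (the
attributes available on PipelineError are not spelled out above, so none
are assumed):

```python
import subprocess

try:
    subprocess.run_pipeline(
        ['grep', 'pattern', 'missing.txt'],
        ['wc', '-l'],
        capture_output=True, text=True, check=True,
    )
except subprocess.PipelineError as exc:
    # Raised because grep failed, even though wc (the last command) succeeded.
    print("pipeline failed:", exc)
```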
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
On Windows, Popen._communicate() previously wrote to stdin synchronously,
which could block indefinitely if the subprocess didn't consume input=
quickly and the pipe buffer filled up. The timeout= parameter was only
checked when joining the reader threads, not during the stdin write.
This change moves the Windows stdin writing to a background thread
(similar to how stdout/stderr are read in threads), allowing the timeout
to be properly enforced. If the timeout expires, TimeoutExpired is raised
promptly and the writer thread continues in the background. Subsequent
calls to communicate() will join the existing writer thread.
Adds test_communicate_timeout_large_input to verify that TimeoutExpired
is raised promptly when communicate() is called with large input and a
timeout, even when the subprocess doesn't consume stdin quickly.
This test already passed on POSIX (where select() is used) but failed on
Windows, where the stdin write blocks without checking the timeout.
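Roughly the scenario the new test exercises (a sketch, not the test
itself):

```python
import subprocess, sys

p = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(30)"],
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE)
try:
    # Large input + a child that never reads stdin: TimeoutExpired must be
    # raised after ~2 seconds instead of blocking on the stdin write.
    p.communicate(b"x" * (10 * 1024 * 1024), timeout=2)
except subprocess.TimeoutExpired:
    p.kill()
    p.communicate()
```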
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Fix inconsistent subprocess.Popen.communicate() behavior between Windows
and POSIX when using memoryview objects with non-byte elements as input.
On POSIX systems, the code was incorrectly comparing bytes written against
element count instead of byte count, causing data truncation for large
inputs with non-byte element types.
Changes:
- Cast memoryview inputs to byte view when input is already a memoryview
- Fix progress tracking to use len(input_view) instead of len(self._input)
- Add comprehensive test coverage for memoryview inputs
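An end-to-end illustration of the fixed behavior (the element size of "i"
is assumed to be 4 bytes):

```python
import array, subprocess, sys

buf = memoryview(array.array("i", range(10000)))  # 40000 bytes, len(buf) == 10000
p = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; sys.stdout.write(str(len(sys.stdin.buffer.read())))"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE)
out, _ = p.communicate(buf)
assert int(out) == buf.nbytes  # all bytes written, not just the first 10000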
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* old-man-yells-at-ReST
* Update 2025-05-30-18-37-44.gh-issue-134453.kxkA-o.rst
* assertIsNone review feedback
* fix memoryview_nonbytes test to fail without our fix on main, and have a nicer error.
Thanks to Peter Bierma @ZeroIntensity for the code review.
* gh-141473: Fix subprocess.Popen.communicate to send input to stdin
* Docs: Clarify that `input` is one time only on `communicate()`
* NEWS entry
* Add a regression test.
---------
Co-authored-by: Gregory P. Smith <greg@krypto.org>
* Add _PYTHON_SUBPROCESS_USE_POSIX_SPAWN environment knob
Add support for disabling the use of `posix_spawn` via a variable in
the process environment.
While it was previously possible to toggle this by modifying the value
of `subprocess._USE_POSIX_SPAWN`, this required either patching CPython
or modifying it within the interpreter instance which is not always
possible, such as when running applications or scripts not under a
user's control.
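A hedged usage sketch (the exact set of accepted values isn't stated
above; "0" is assumed here to mean "disable"):

```python
import os, subprocess, sys

child_env = dict(os.environ, _PYTHON_SUBPROCESS_USE_POSIX_SPAWN="0")
# The child interpreter reads the knob from its environment when the
# subprocess module is imported, so subprocess._USE_POSIX_SPAWN reflects
# it without patching anything.
subprocess.run(
    [sys.executable, "-c", "import subprocess; print(subprocess._USE_POSIX_SPAWN)"],
    env=child_env,
)
```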
Signed-off-by: Vincent Fazio <vfazio@gmail.com>
* fixup NEWS entry
---------
Signed-off-by: Vincent Fazio <vfazio@gmail.com>
This flag was added as an escape hatch in gh-91401 and backported to
Python 3.10. The flag broke at some point between its addition and now.
As there is currently no publicly known environments that require this,
remove it rather than work on fixing it.
This leaves the flag in the subprocess module to not break code which
may have used / checked the flag itself.
discussion: https://discuss.python.org/t/subprocess-use-vfork-escape-hatch-broken-fix-or-remove/56915/2
Add __all__ to the following modules:
importlib.machinery, importlib.util and xml.sax.
Also add "# noqa: F401" in collections.abc,
subprocess and xml.sax.
* Sort __all__; remove collections.abc.__all__; remove private names
* Add tests
Explicitly handle the case where stdout=STDOUT,
as otherwise the existing error handling gets
confused and reports hard-to-understand errors.
Signed-off-by: Paulo Neves <ptsneves@gmail.com>
In free-threaded builds, running with `PYTHON_GIL=0` will now disable the
GIL. Follow-up issues track work to re-enable the GIL when loading an
incompatible extension, and to disable the GIL by default.
In order to support re-enabling the GIL at runtime, all GIL-related data
structures are initialized as usual, and disabling the GIL simply sets a flag
that causes `take_gil()` and `drop_gil()` to return early.
Only set filename to cwd if the error was caused by a failed chdir(cwd).
_fork_exec() now returns "noexec:chdir" for a failed chdir(cwd).
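Illustrative effect on a typical POSIX system (paths chosen for the
example):

```python
import subprocess

try:
    subprocess.run(["/bin/true"], cwd="/no/such/dir")
except FileNotFoundError as exc:
    print(exc.filename)  # '/no/such/dir' -- the cwd, since chdir(cwd) failed
```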
Co-authored-by: Robert O'Shea <PurityLake@users.noreply.github.com>
Add support for `os.POSIX_SPAWN_CLOSEFROM` and
`posix_spawn_file_actions_addclosefrom_np` and have the `subprocess` module use
them when available. This means `posix_spawn` can now be used in the default
`close_fds=True` situation on many platforms.
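A hedged sketch of using the new file action directly via os.posix_spawn
(tuple layout assumed from the os module documentation; only available
where the platform supports it):

```python
import os, sys

if hasattr(os, "POSIX_SPAWN_CLOSEFROM"):
    pid = os.posix_spawn(
        sys.executable,
        [sys.executable, "-c", "print('child')"],
        os.environ,
        # Close every inherited fd >= 3 in the child, the same effect the
        # subprocess module wants for close_fds=True.
        file_actions=[(os.POSIX_SPAWN_CLOSEFROM, 3)],
    )
    os.waitpid(pid, 0)
```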
Co-authored-by: Gregory P. Smith [Google LLC] <greg@krypto.org>
* Allow posix_spawn to inherit the environment from the parent's environ variable.
With this change, the posix_spawn call can behave similarly to execv with regard to the environment when used in subprocess functions.
Refactor delete-safe symbol handling in subprocess.
Only module globals are force-cleared during interpreter finalization; using a class reference instead of individually listing the constants everywhere is simpler.
This fixes several ways file descriptors could be leaked from `subprocess.Popen` constructor during error conditions by opening them later and using a context manager "fds to close" registration scheme to ensure they get closed before returning.
---------
Co-authored-by: Gregory P. Smith [Google] <greg@krypto.org>
Add the -P command line option and the PYTHONSAFEPATH environment
variable to not prepend a potentially unsafe path to sys.path.
* Add sys.flags.safe_path flag.
* Add PyConfig.safe_path member.
* Programs/_bootstrap_python.c uses config.safe_path=0.
* Update subprocess._optim_args_from_interpreter_flags() to handle
the -P command line option.
* Modules/getpath.py sets safe_path to 1 if a "._pth" file is
present.
One more thing that can help prevent people from using `preexec_fn`.
Also adds conditional skips to two tests exposing ASAN flakiness on the Ubuntu 20.04 Address Sanitizer GitHub CI system. When that build is run on more modern systems, the "problem" does not show up. It appears to be related to the ASAN implementation.
Co-authored-by: Zackery Spytz <zspytz@gmail.com>
Co-authored-by: Serhiy Storchaka <storchaka@gmail.com>
Just in case there is ever an issue with _posixsubprocess's use of
vfork() due to the complexity of using it properly and the potential
directions that Linux platforms (where it defaults to on) could take,
this adds a failsafe so that users can disable its use entirely by
setting a global flag.
No known reason to disable it exists, but it'd be a shame to encounter
one and not be able to use CPython without patching and rebuilding it.
See the linked issue for some discussion on reasoning.
Also documents the existing way to disable posix_spawn.
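For reference, the two escape hatches as module-level flags (private and
illustrative; intended only as a failsafe, not a supported API):

```python
import subprocess

subprocess._USE_VFORK = False        # the new failsafe described above
subprocess._USE_POSIX_SPAWN = False  # the pre-existing posix_spawn knob
```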
Removes the `list` call in the Popen `repr`.
Current implementation:
For cmd = `python --version` with `shell=True`:
```bash
<Popen: returncode: None args: ['p', 'y', 't', 'h', 'o', 'n', ' ', '-', '-',...>
```
For `shell=False` and args=`['python', '--version']`, the output is correct:
```bash
<Popen: returncode: None args: ['python', '--version']>
```
With the new changes the `repr` yields:
For cmd = `python --version`, with `shell=True`:
```bash
<Popen: returncode: None args: 'python --version'>
```
For `shell=False` and args=`['python', '--version']`, the output is unchanged:
```bash
<Popen: returncode: None args: ['python', '--version']>
```
Automerge-Triggered-By: GH:gpshead