Read fixed chunk size in run_duplicate_streams
This fixes a potential subversion of the timeout parameter
in `process.run_duplicate_streams`.

That is, if the child process writes too much and too fast to its
standard streams, the parent process may take too long (or even block
indefinitely) reading and then writing those contents to its own
standard streams before it, e.g., checks whether the process should
time out.

This change passes a size argument to the relevant read calls
to prevent the above.
lukpueh committed Oct 1, 2019
1 parent 5d51440 commit 3cec858
Showing 1 changed file with 5 additions and 2 deletions.
securesystemslib/process.py: 5 additions & 2 deletions
@@ -190,8 +190,11 @@ def _duplicate_streams():
     contents to parent process standard streams, and build up return values
     for outer function.
     """
-    stdout_part = stdout_reader.read()
-    stderr_part = stderr_reader.read()
+    # Read until EOF but at most `io.DEFAULT_BUFFER_SIZE` bytes per call.
+    # Reading and writing in reasonably sized chunks prevents us from
+    # subverting a timeout, due to being busy for too long or indefinitely.
+    stdout_part = stdout_reader.read(io.DEFAULT_BUFFER_SIZE)
+    stderr_part = stderr_reader.read(io.DEFAULT_BUFFER_SIZE)
     sys.stdout.write(stdout_part)
     sys.stderr.write(stderr_part)
     sys.stdout.flush()
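
For additional context, here is a minimal, self-contained sketch of the kind of polling loop this bounded read protects. It assumes (as an illustration, not the actual `securesystemslib.process` implementation) that the child's standard streams are redirected to temporary files which the parent polls; the function name `run_sketch`, the `deadline` variable, and the error handling are all hypothetical.

# Illustrative sketch only: assumed names and structure, not the actual
# securesystemslib.process code.
import io
import os
import shlex
import subprocess
import sys
import tempfile
import time

def run_sketch(cmd, timeout=10):
  """Run `cmd`, mirroring its stdout/stderr to the parent's streams while
  enforcing `timeout`. Bounded reads keep each loop iteration short, so a
  very chatty child cannot starve the timeout check."""
  stdout_fd, stdout_name = tempfile.mkstemp()
  stderr_fd, stderr_name = tempfile.mkstemp()
  try:
    with os.fdopen(stdout_fd, "w") as stdout_writer, \
        os.fdopen(stderr_fd, "w") as stderr_writer, \
        open(stdout_name, "r") as stdout_reader, \
        open(stderr_name, "r") as stderr_reader:

      # Redirect the child's standard streams to the temporary files.
      proc = subprocess.Popen(shlex.split(cmd), stdout=stdout_writer,
          stderr=stderr_writer)
      deadline = time.time() + timeout
      stdout_str = stderr_str = ""
      stdout_part = stderr_part = ""

      # Loop while the child runs or unread data remains in the files.
      while proc.poll() is None or stdout_part or stderr_part:
        if time.time() > deadline:
          proc.kill()
          proc.wait()
          raise subprocess.TimeoutExpired(cmd, timeout)

        # Bounded reads: at most `io.DEFAULT_BUFFER_SIZE` bytes per call, so
        # the loop regains control quickly and re-checks the deadline above.
        stdout_part = stdout_reader.read(io.DEFAULT_BUFFER_SIZE)
        stderr_part = stderr_reader.read(io.DEFAULT_BUFFER_SIZE)
        sys.stdout.write(stdout_part)
        sys.stderr.write(stderr_part)
        sys.stdout.flush()
        sys.stderr.flush()
        stdout_str += stdout_part
        stderr_str += stderr_part

    return proc.returncode, stdout_str, stderr_str

  finally:
    os.remove(stdout_name)
    os.remove(stderr_name)

The key point is that each `read(io.DEFAULT_BUFFER_SIZE)` call returns after at most one buffer's worth of data, so even a child that floods its streams cannot keep the parent busy past the next deadline check.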
