
When the uperf command fails for some reason, pbench-uperf does not report a failure #2842

Open
portante opened this issue May 14, 2022 · 0 comments
Labels: Agent Backlog, bug, uperf, pbench-uperf benchmark related

Comments

@portante (Member)

When the uperf command fails for some reason, pbench-uperf does not report a failure.

E.g., in the following cases, if root does not have SSH keys set up on the client ...

pbench-uperf -r 10 -t stream -m 1 -p tcp -i 1 --samples 1 --clients 192.168.122.1
# reports no failure ($? == 0); the “results.json” file is generated and contains only metadata, no results.
# In “client_output.txt” I can see “** TCP: Cannot connect to 127.0.0.1:20010 Connection refused”

pbench-uperf -r 10 -t stream -m 1 -p tcp -i 1 --samples 1 --clients 192.168.122.1 --servers 0.0.0.0
# no error, $? == 0, but “results.json” contains only metadata.
# In the sample output I can see “** TCP: Cannot connect to 0.0.0.0:20010 Connection refused”
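
Until this is fixed, the exit code alone cannot be trusted; a hypothetical caller-side check (the `result_dir` variable and file layout are illustrative, not pbench's actual layout) would have to scan the captured client output instead:

```bash
# Hypothetical workaround sketch (not part of pbench): detect the silent
# uperf failure by scanning the captured client output for the connection
# error, since pbench-uperf itself exits 0.
pbench-uperf -r 10 -t stream -m 1 -p tcp -i 1 --samples 1 --clients 192.168.122.1
if grep -q 'Cannot connect' "${result_dir}/client_output.txt"; then
    echo "uperf failed even though pbench-uperf exited 0" >&2
fi
```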
@portante portante added bug Agent Backlog uperf pbench-uperf benchmark related labels May 14, 2022
portante added a commit to portante/pbench that referenced this issue May 24, 2022
Both `pbench-fio` and `pbench-uperf`:

 * No longer allow the benchmark binary to be overridden by an
   environment variable -- this was an outdated way for the unit tests
   to mock the respective benchmark behavior

 * No longer resolve and check that the benchmark binary is executable

There was an ordering problem between where the old `benchmark_bin`
variable was initialized and where it was used in the rest of the script
(both for `pbench-fio` and `pbench-uperf`). The existence check was not
always performed locally (e.g., when no local clients or servers were
specified), but the commands run remotely required `benchmark_bin` to be
set when the remote command lines were constructed. By checking for the
existence of the benchmark binary only when performing the version
check, and allowing the shell to resolve its location at run time, we
avoid the interdependency altogether (see the sketch below).
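
A minimal sketch of that approach (the error handling shown is illustrative, not the actual pbench code; `uperf -V` is the version flag referenced by the tests below):

```bash
# Let the shell resolve `uperf` from the PATH at run time, and treat the
# version check itself as the existence check -- no pre-resolved path.
if ! uperf -V > /dev/null 2>&1; then
    echo "uperf is not installed or not on the PATH" >&2
    exit 1
fi
```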

Mock commands for `fio` and `uperf` are provided on the `PATH` for
tests.  For the CLI tests, those mocks are removed so that we can verify
that help and usage text is emitted before the checks for the particular
command existence (which is shown by the failing `test-CL`).

We also add failing tests for `uperf` and `fio` behaviors:

 * `pbench-fio`
   * `test-22` -- missing `fio` command
   * `test-50` -- `fio -V` reports a bad version
 * `pbench-uperf`
   * `test-02` -- missing `uperf` command
   * `test-51` -- `uperf -V` reports a bad version

The existence check of the `fio` and `uperf` benchmark commands now
occurs after any help requests or command usage errors. This fixes
issue distributed-system-analysis#2841 [1].

Finally, we correct the way `pbench-uperf` checked the exit status of
the `uperf` command by making sure the `local` declaration is performed
before the assignment, so that the return-code check is not overridden
by the (always zero) "status" of the `local` declaration itself; see the
sketch below. This fixes issue distributed-system-analysis#2842 [2].

[1] distributed-system-analysis#2841
[2] distributed-system-analysis#2842
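
The bash behavior behind that last fix is easy to reproduce in isolation: `local var=$(cmd)` always leaves `$?` as the (zero) exit status of `local` itself, hiding any failure of `cmd`. A minimal stand-alone sketch (names are illustrative, not taken from the pbench scripts):

```bash
#!/bin/bash

demo() {
    # BROKEN: `local` is itself a command that returns 0, so the non-zero
    # exit status of the command substitution is lost.
    local out=$(false)
    echo "combined declaration: \$? = $?"    # prints 0

    # FIXED: declare first, assign separately; $? now reflects the command.
    local out2
    out2=$(false)
    echo "separate assignment:  \$? = $?"    # prints 1
}

demo
```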
portante added a commit that referenced this issue May 24, 2022
(same commit message as above)
portante added a commit to portante/pbench that referenced this issue May 25, 2022
This is a back-port of commit ba6a2b7 (PR distributed-system-analysis#2860) from `main`.

(same commit message as above)
portante added a commit to portante/pbench that referenced this issue May 25, 2022
This is a back-port of commit ba6a2b7 (PR distributed-system-analysis#2860) from `main`.

(same commit message as above)
portante added a commit to portante/pbench that referenced this issue May 25, 2022
This is a back-port of commit ba6a2b7 (PR distributed-system-analysis#2860) from `main`.

(same commit message as above)
portante added a commit that referenced this issue May 25, 2022
This is a back-port of commit ba6a2b7 (PR #2860) from `main`.

(same commit message as above)