From 5f660d0f8e340c144d03eafe8279b8eb78f95e85 Mon Sep 17 00:00:00 2001
From: Webb Scales
Date: Fri, 21 Apr 2023 12:23:38 -0400
Subject: [PATCH 1/6] Update the top-level README.md

---
 README.md | 48 +++++++++++++++++++++---------------------------
 1 file changed, 21 insertions(+), 27 deletions(-)

diff --git a/README.md b/README.md
index 5d40db8411..f593991806 100644
--- a/README.md
+++ b/README.md
@@ -9,14 +9,14 @@ for those systems, and specified telemetry or data from various tools (`sar`,
 `vmstat`, `perf`, etc.).
 
 The second sub-system is the `pbench-server`, which is responsible for
 archiving result tar balls, indexing them, and unpacking them for display.
+It provides a RESTful API which can be used by client applications, such as
+the Pbench Dashboard, which provides a web-based interface to the Pbench
+Server.
 
 The third sub-system is the `web-server` JS and CSS files, used to display
 various graphs and results, and any other content generated by the
 `pbench-agent` during benchmark and tool post-processing steps.
 
-The pbench Dashboard code lives in its own [repository](
-https://github.com/distributed-system-analysis/pbench-dashboard).
-
 ## How is it installed?
 
 Instructions on installing `pbench-agent`, can be found in the Pbench Agent
 [Getting Started Guide](
 https://distributed-system-analysis.github.io/pbench/gh-pages/start.html).
@@ -24,8 +24,7 @@ https://distributed-system-analysis.github.io/pbench/gh-pages/start.html).
 
 For Fedora, CentOS, and RHEL users, we have made available
 [COPR builds](https://copr.fedorainfracloud.org/coprs/ndokos/pbench/) for the
-`pbench-agent`, `pbench-server`, `pbench-web-server`, and some benchmark and
-tool packages.
+`pbench-agent`, `pbench-web-server`, and some benchmark and tool packages.
 
 Install the `pbench-web-server` package on the machine from where you want to
 run the `pbench-agent` workloads, allowing you to view the graphs before
 sending the results to a server, or even if there is no server configured to
 send results.
@@ -47,14 +46,11 @@ main documentation for a super quick set of introductory steps.
 
 The latest source code is at
 https://github.com/distributed-system-analysis/pbench.
-The pbench dashboard code is maintained separately at
-https://github.com/distributed-system-analysis/pbench-dashboard.
-
 ## Is there a mailing list for discussions?
 
 Yes, we use [Google Groups](https://groups.google.com/forum/#!forum/pbench)
 
-## How do I report and issue?
+## How do I report an issue?
 
 Please use GitHub's [issues](
 https://github.com/distributed-system-analysis/pbench/issues/new/choose).
@@ -66,7 +62,7 @@ https://github.com/distributed-system-analysis/pbench/projects).
 Please find projects covering the [Agent](
 https://github.com/distributed-system-analysis/pbench/projects/2),
 [Server](https://github.com/distributed-system-analysis/pbench/projects/3),
-[Dashboard]()https://github.com/distributed-system-analysis/pbench/projects/1,
+[Dashboard](https://github.com/distributed-system-analysis/pbench/projects/1),
 and a project that is named the same as the current [milestone](
 https://github.com/distributed-system-analysis/pbench/milestones).
@@ -103,8 +99,6 @@ python using the python environment short-hands:
 
 See
 https://tox.wiki/en/latest/example/basic.html#a-simple-tox-ini-default-environments.
 
-Each time tests are run, the linting steps (`black` and `flake8`) are run first.
-
 You can provide arguments to the `tox` invocation to request sub-sets of the
 available tests be run.
@@ -158,18 +152,23 @@ a sub-directory name found in `agent/bench-scripts/tests`. For example:
 
 The first runs all the `pbench-fio` tests, while the second runs all the
 `pbench-uperf` and `pbench-linpack` tests.
 
+You can run the `build.sh` script to execute the linters, to run the unit tests
+for the Agent, Server, and Dashboard code, and to build installations for the
+Agent, Server, and Dashboard.
+
 Finally, see the `jenkins/Pipeline.gy` file for how the unit tests are run in
 our CI jobs.
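[Editorial aside on the tox sub-setting described in the hunk above: tox forwards everything after `--` on the command line to the test environment through its `{posargs}` substitution. The snippet below is an illustrative sketch of that mechanism only, not pbench's actual `tox.ini`; the environment name and driver command are assumptions.]

```ini
; Illustrative sketch -- not pbench's actual tox.ini.
; tox substitutes the arguments given after "--" (e.g. "agent python")
; for {posargs}, which is how sub-set selectors reach the test driver.
[tox]
envlist = py39

[testenv]
deps = pytest
commands = python -m pytest {posargs}
```

With a configuration like this, `tox -- agent python` invokes `python -m pytest agent python`, leaving it to the driver to interpret the selectors.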
 ### Python formatting
 
-This project uses the [flake8](http://flake8.pycqa.org/en/latest) method of code
+This project uses the [`flake8`](http://flake8.pycqa.org/en/latest) method of code
 style enforcement, linting, and checking.
 
 All python code contributed to pbench must match the style requirements. These
-requirements are enforced by the [pre-commit](https://pre-commit.com) hook
-using the [black](https://github.com/psf/black) Python code formatter and the
-[isort](https://github.com/pycqa/isort) Python import sorter.
+requirements are enforced by the [pre-commit](https://pre-commit.com) hook. In
+addition to `flake8`, pbench uses the [`black`](https://github.com/psf/black)
+Python code formatter and the [`isort`](https://github.com/pycqa/isort) Python
+import sorter.
 
 ### Use pre-commit to set automatic commit requirements
@@ -194,14 +193,13 @@ starting with the `v0.70.0` release (`v<Major>.<Minor>.<Release>[-<alpha|beta>]`).
 Prior to the v0.70.0 release, the scheme used was mostly `v<Major>.<Minor>`,
 where we only had minor releases (`Major = 0`).
 
-The practice of using `-agent` or `-server` is also ending with the `v0.70.0`
-release.
+The practice of using `-agent` or `-server` ended with the `v0.70.0` release.
 
 ### Container Image Tags
 This same GitHub "tag" scheme is used with tags applied to container images
 we build, with the following exceptions for tag names:
 
- * `latest` - always points to the "latest" container image pushed to a
+ * `latest` - always points to the latest released container image pushed to a
    repository
 
  * `v<Major>-latest` - always points to the "latest" `Major` released
@@ -215,13 +213,9 @@
 ### References to Container Image Repositories
 The operation of our functional tests, the Pbench Server "in-a-can" used in the
 functional tests, and other verification and testing environments use
-container images from public repositories and non-public ones. The CI jobs
+container images from remote image registries. The CI jobs
 obtain references to those repositories using Jenkins credentials. When a
-developer runs those same jobs locally, you can create two files with the
-appropriate contents locally:
-
- * `${HOME}/.config/pbench/ci_registry.name`
- * `${HOME}/.config/pbench/public_registry.name`
+running those same jobs locally, you can provide the registry via
+`${HOME}/.config/pbench/ci_registry.name`.
 
-If those files are not provided local execution will report an error when those
-values are missing.
+If this file is not provided, local execution will report an error.

From ea2e8f6f4338c02411a88ec26cc4779faaf8f9a7 Mon Sep 17 00:00:00 2001
From: Webb Scales
Date: Fri, 26 May 2023 17:48:27 -0400
Subject: [PATCH 2/6] Correct the TDS main() docstring

---
 lib/pbench/agent/tool_data_sink.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/pbench/agent/tool_data_sink.py b/lib/pbench/agent/tool_data_sink.py
index 29be20642e..53d78d286b 100644
--- a/lib/pbench/agent/tool_data_sink.py
+++ b/lib/pbench/agent/tool_data_sink.py
@@ -2125,7 +2125,7 @@ def start(prog: Path, parsed: Arguments):
 
 
 def main(argv: List[str]):
-    """Main program for the Tool Meister.
+    """Main program for the Tool Data Sink.
 
     Arguments:
         argv - a list of parameters

From 9edab21db6b3dcdd18e5a7417698f7a352920986 Mon Sep 17 00:00:00 2001
From: Webb Scales
Date: Tue, 30 May 2023 12:11:18 -0400
Subject: [PATCH 3/6] Add missing trailing newline to docs/.gitignore

---
 docs/.gitignore | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/.gitignore b/docs/.gitignore
index c6a151b325..69fa449dd9 100644
--- a/docs/.gitignore
+++ b/docs/.gitignore
@@ -1 +1 @@
-_build/
\ No newline at end of file
+_build/

From 49b42684cf62a44b7a2ad79f66374e9815e30829 Mon Sep 17 00:00:00 2001
From: Webb Scales
Date: Mon, 5 Jun 2023 17:56:37 -0400
Subject: [PATCH 4/6] Correct pbench-register-tool-trigger help text

---
 lib/pbench/cli/agent/commands/triggers/register.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/pbench/cli/agent/commands/triggers/register.py b/lib/pbench/cli/agent/commands/triggers/register.py
index b726357c7e..6e3353c930 100644
--- a/lib/pbench/cli/agent/commands/triggers/register.py
+++ b/lib/pbench/cli/agent/commands/triggers/register.py
@@ -107,7 +107,7 @@ def callback(ctxt, param, value):
     )(f)
 
 
-@click.command(help="list registered triggers")
+@click.command(help="register tool triggers")
 @common_options
 @_group_option
 @_start_option

From 8b9f4a0a09aac4bf5325678322ac605f747ea1b3 Mon Sep 17 00:00:00 2001
From: Webb Scales
Date: Wed, 7 Jun 2023 17:24:51 -0400
Subject: [PATCH 5/6] Review feedback

---
 README.md | 10 +++-------
 1 file changed, 3 insertions(+), 7 deletions(-)

diff --git a/README.md b/README.md
index f593991806..a5c98937ea 100644
--- a/README.md
+++ b/README.md
@@ -8,7 +8,8 @@ for those systems, and specified telemetry or data from various tools (`sar`,
 `vmstat`, `perf`, etc.).
 
 The second sub-system is the `pbench-server`, which is responsible for
-archiving result tar balls, indexing them, and unpacking them for display.
+archiving result tar balls, indexing them, and managing access to their
+contents.
 
 It provides a RESTful API which can be used by client applications, such as
 the Pbench Dashboard, which provides a web-based interface to the Pbench
 Server.
@@ -71,7 +72,7 @@ https://github.com/distributed-system-analysis/pbench/milestones).
 
 Below are some simple steps for setting up a development environment for
 working with the Pbench code base. For more detailed instructions on the
 workflow and process of contributing code to Pbench, refer to the [Guidelines
-for Contributing](docs/CONTRIBUTING.md).
+for Contributing](docs/Developers/contributing.md).
 
 ### Getting the Code
@@ -126,12 +127,10 @@ Each of the "agent" and "server" tests can be further subsetted as follows:
 
 * server
   * python -- runs the python tests (via python)
-  * legacy -- runs the legacy tests
 
 For example:
 
 * `tox -- agent legacy` -- run agent legacy tests
-* `tox -- server legacy` -- run server legacy tests
 * `tox -- server python` -- run server python tests (via `pytest`)
 
 For any of the test sub-sets on either the agent or server sides of the tree,
 allows one to request a specific test, or set of tests, or command line
 parameters to modify the test behavior:
 
 * `tox -- agent bench-scripts test-CL` -- run bench-scripts' test-CL
-* `tox -- server legacy test-28 test-32` -- run server legacy tests 28 & 32
 * `tox -- server python -v` -- run server python tests verbosely
 
 For the `agent/bench-scripts` tests, one can run entire sub-sets of tests using
@@ -193,8 +191,6 @@ starting with the `v0.70.0` release (`v<Major>.<Minor>.<Release>[-<alpha|beta>]`).
 Prior to the v0.70.0 release, the scheme used was mostly `v<Major>.<Minor>`,
 where we only had minor releases (`Major = 0`).
 
-The practice of using `-agent` or `-server` ended with the `v0.70.0` release.
-
 ### Container Image Tags
 This same GitHub "tag" scheme is used with tags applied to container images
 we build, with the following exceptions for tag names:

From 29abe0531d4942f1a1ba05a3c2dfe1f93705a7c8 Mon Sep 17 00:00:00 2001
From: Webb Scales
Date: Thu, 8 Jun 2023 15:42:57 -0400
Subject: [PATCH 6/6] Review feedback

---
 README.md | 29 ++++++++++++++---------------
 1 file changed, 14 insertions(+), 15 deletions(-)

diff --git a/README.md b/README.md
index a5c98937ea..5f97a670f5 100644
--- a/README.md
+++ b/README.md
@@ -3,34 +3,33 @@ A Benchmarking and Performance Analysis Framework
 
 The code base includes three sub-systems. The first is the collection agent,
 `pbench-agent`, responsible for providing commands for running benchmarks
-across one or more systems, while properly collecting the configuration data
-for those systems, and specified telemetry or data from various tools (`sar`,
+across one or more systems while properly collecting the configuration data for
+those systems as well as specified telemetry or data from various tools (`sar`,
 `vmstat`, `perf`, etc.).
 
 The second sub-system is the `pbench-server`, which is responsible for
 archiving result tar balls, indexing them, and managing access to their
 contents.
 
 It provides a RESTful API which can be used by client applications, such as
-the Pbench Dashboard, which provides a web-based interface to the Pbench
-Server.
+the Pbench Dashboard, to curate the results as well as to explore their
+contents.
 
-The third sub-system is the `web-server` JS and CSS files, used to display
-various graphs and results, and any other content generated by the
-`pbench-agent` during benchmark and tool post-processing steps.
+The third sub-system is the Pbench Dashboard, which provides a web-based GUI
+for the Pbench Server allowing users to list and view public results. After
+logging in, users can view their own results, make them available for others
+to view, or delete them. On the _User Profile_ page, a logged-in user can
+generate API keys for use with the Pbench Server API or with the Agent
+`pbench-results-push` command. The Pbench Dashboard also serves as a platform
+for exploring and visualizing result data.
 
 ## How is it installed?
-Instructions on installing `pbench-agent`, can be found
+Instructions for installing `pbench-agent` can be found
 in the Pbench Agent [Getting Started Guide](
 https://distributed-system-analysis.github.io/pbench/gh-pages/start.html).
 
 For Fedora, CentOS, and RHEL users, we have made available
 [COPR builds](https://copr.fedorainfracloud.org/coprs/ndokos/pbench/) for the
-`pbench-agent`, `pbench-web-server`, and some benchmark and tool packages.
-
-Install the `pbench-web-server` package on the machine from where you want to
-run the `pbench-agent` workloads, allowing you to view the graphs before
-sending the results to a server, or even if there is no server configured to
-send results.
+`pbench-agent` and some benchmark and tool packages.
 
 You might want to consider browsing through the [rest of the documentation](
 https://distributed-system-analysis.github.io/pbench/gh-pages/doc.html).
@@ -210,7 +209,7 @@ we build, with the following exceptions for tag names:
 The operation of our functional tests, the Pbench Server "in-a-can" used in the
 functional tests, and other verification and testing environments use
 container images from remote image registries. The CI jobs
-obtain references to those repositories using Jenkins credentials. When a
+obtain references to those repositories using Jenkins credentials. When
 running those same jobs locally, you can provide the registry via
 `${HOME}/.config/pbench/ci_registry.name`.
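[Editorial aside on the final hunk: a quick sketch of supplying the registry reference it describes. The registry host name below is a placeholder, not a real pbench registry.]

```shell
# Create the config file the local CI jobs read for the image registry.
# "images.example.com" is a placeholder value, not a real registry.
mkdir -p "${HOME}/.config/pbench"
printf 'images.example.com\n' > "${HOME}/.config/pbench/ci_registry.name"
cat "${HOME}/.config/pbench/ci_registry.name"
```

With this file in place, the locally-run jobs can resolve the registry instead of failing with the missing-value error the patch describes.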