cluster-monitoring-operator projects are Apache 2.0 licensed and accept contributions via GitHub pull requests. This document outlines some of the conventions on development workflow, commit message formatting, contact points and other resources to make it easier to get your contribution accepted.
By contributing to this project you agree to the Developer Certificate of Origin (DCO). This document was created by the Linux Kernel community and is a simple statement that you, as a contributor, have the legal right to make the contribution. See the DCO file for details.
Cluster Monitoring Operator is part of OpenShift and therefore follows the OpenShift Life Cycle.
You should keep this in mind when deciding in which release you want your feature or fix to land.
Before you get started, you have to perform the following mandatory steps:
- Open a bug in Bugzilla
- Fork this repository
If you want to make changes to the actual code, please follow the Coding Style for code changes.
These steps outline the general contribution workflow:
- Create a topic branch from where you want to base your work (usually master).
- Make commits of logical units.
- Make sure your commit messages are in the proper format (see Format of the Commit Message).
- Push your changes to a topic branch in your fork of the repository.
- Make sure the tests pass, and add any new tests as appropriate.
- Submit a pull request to the original repository (see Format of Pull Requests).
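The workflow above can be sketched as a short sequence of git commands. This is only an illustration: a local throwaway repository stands in for your fork, and the branch name `my-fix` is a placeholder, not a project convention.

```shell
# Sketch of the topic-branch workflow; a local throwaway repository stands
# in for your fork, and "my-fix" is a placeholder branch name.
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q .
git config user.email you@example.com
git config user.name "Your Name"
git commit -q --allow-empty -m 'initial commit'    # stand-in for master history
git checkout -q -b my-fix                          # topic branch based on master
echo 'fix' > change.txt                            # one logical unit of work
git add change.txt
git commit -q -m 'scripts: describe what changed'  # properly formatted subject
git log --oneline                                  # review before opening a PR
```

In a real contribution you would push `my-fix` to your fork on GitHub and open the pull request from there.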
To allow scripts and `make` targets to work correctly, ensure you have the following tools installed on your system:
- golang (see the `go.mod` file for the minimum required Go version)
- awk (GNU variant)
- sed (GNU variant)
- make
- curl
- python2 and the pyyaml package

All other tools are downloaded automatically by `make` and put into the `tmp/bin` directory.
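A quick sanity check for these prerequisites might look like the following. This is a sketch, not a script shipped by the project, and the `--version` checks only apply on platforms where the flag is supported (BSD awk/sed may reject it, which is itself a hint to install the GNU variants).

```shell
# Verify the basic tools are on PATH; report anything missing.
missing=""
for tool in go awk sed make curl; do
  command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
if [ -n "$missing" ]; then
  echo "missing tools:$missing"
else
  echo "all base tools found"
fi
# GNU awk/sed identify themselves in --version output.
awk --version 2>/dev/null | head -n 1
sed --version 2>/dev/null | head -n 1
```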
This project makes use of many upstream projects and imports them.
Starting from release-4.6, all required tools are installed on demand as part of the `make` command.
Prior to release-4.6, to work with jsonnet you should have jsonnet bundler (`jb`) installed and up to date.
Assuming you have made your changes upstream (see an example change), you can now go ahead and update the dependency.
Since release-4.6:

```
make jsonnet/vendor --always-make
```

Earlier release branches:

```
cd jsonnet
jb update
```
Now make sure that you only update or adjust the dependency you need and commit that update. Please refer to the Development Doc for more information.
```
git add -p jsonnet/jsonnetfile.lock.json
git commit -m 'jsonnet: <meaningful message about what you did>'
git push
git checkout jsonnet/jsonnetfile.lock.json
```
See Format of the Commit Message for help on how to format your commit message.
The last step is to regenerate all assets.
Since release-4.6, this just requires the following command:

```
make generate
```

For all older branches, this is easiest done in a container using the following command:

```
make generate-in-docker
```

or, if you are on a Mac:

```
MAKE=make make generate-in-docker
```
At this point, you should follow a standard git workflow:
- review your changes using `git diff`
- add all generated files in one commit
- push to your branch
- open a Pull Request (see Format of Pull Requests)
- In case the generation step or the CI `ci/prow/generate` check fails, try running `make clean` to remove the stale jsonnet vendor directory.
- In case you have problems with `make generate` due to problems with system-wide tooling, you can use the slower `make generate-in-docker` target, which will install the necessary tools in a containerized environment and generate the assets. This target needs `docker` to be installed on the host and was not tested with other container runtime environments.
Assuming `$KUBECONFIG` is set to the config file of a Kubernetes cluster to test against, you can run all tests with:

```
make test
```

The testing consists of 3 aspects:
- unit tests, which can be run separately with `make test-unit`
- Prometheus rule tests, which can be run separately with `make test-rules`
- end-to-end tests, which can be run separately with `make test-e2e`
If you need to run a specific test case of the E2E tests, you can use the following command:

```
go test -v -timeout=120m ./test/e2e/ --kubeconfig $KUBECONFIG -run TestBodySizeLimit
```

Note that you have to pass a valid `$KUBECONFIG` explicitly (`--kubeconfig $KUBECONFIG`) even if `~/.kube/config` exists.
To run a specific test case of the unit tests, you can use the command:

```
go test -v $PACKAGE_DIR -run $TEST_FUNC_NAME
```

`$PACKAGE_DIR` is the directory where the source files of a Go package live. `$TEST_FUNC_NAME` is the test function, whose name always starts with "Test" (regex pattern `Test\w+`).

For example, say we have a test function `TestImageParsing` in package `manifests`. The source file `./pkg/manifests/image_test.go` and the other source files of this package live in `./pkg/manifests`, so we can use the following command to run `TestImageParsing`:

```
go test -v ./pkg/manifests -run TestImageParsing
```
cluster-monitoring-operator projects written in Go follow a set of style guidelines that we've documented here. Please follow them when working on your contributions.
We are making heavy use of bots and integrations. In order for those to work properly, your Pull Request should match the following structure:
```
Bug 123456: this is the exact problem or fix
```
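As a rough sketch, that title shape can be checked with a regular expression. The `check_title` helper and the exact pattern below are illustrative assumptions, not tooling shipped by the project:

```shell
# Hypothetical check: PR titles are expected to look like
# "Bug <bugzilla-id>: <summary>" so the bots can link the bug.
check_title() {
  if printf '%s\n' "$1" | grep -Eq '^Bug [0-9]+: .+'; then
    echo ok
  else
    echo bad
  fi
}

check_title 'Bug 123456: this is the exact problem or fix'   # → ok
check_title 'fix a thing'                                    # → bad
```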
We follow a rough convention for commit messages that is designed to answer two questions: what changed and why. The subject line should feature the what and the body of the commit should describe the why.
```
scripts: add the test-cluster command

this uses tmux to setup a test cluster that you can easily kill and
start for debugging.

Fixes #38
```
The format can be described more formally as follows:
```
<subsystem>: <what changed>
<BLANK LINE>
<why this change was made>
<BLANK LINE>
<footer>
```
The first line is the subject and should be no longer than 70 characters, the second line is always blank, and other lines should be wrapped at 80 characters. This allows the message to be easier to read on GitHub as well as in various git tools.
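These rules can be enforced locally with a small check, for example from a git `commit-msg` hook. The `check_msg` helper below is a sketch and an assumption on my part; the project does not prescribe it:

```shell
# Hypothetical commit-msg check: subject line <= 70 characters, second line
# blank, and body lines wrapped at 80 characters.
check_msg() {
  subject=$(printf '%s\n' "$1" | sed -n '1p')
  second=$(printf '%s\n' "$1" | sed -n '2p')
  longest_body=$(printf '%s\n' "$1" | sed '1,2d' |
    awk '{ if (length($0) > m) m = length($0) } END { print m + 0 }')
  if [ "${#subject}" -le 70 ] && [ -z "$second" ] && [ "$longest_body" -le 80 ]; then
    echo ok
  else
    echo bad
  fi
}

msg='scripts: add the test-cluster command

this uses tmux to setup a test cluster that you can easily kill and
start for debugging.

Fixes #38'
check_msg "$msg"   # → ok
```

In a real hook, the message would come from the file git passes as `$1` rather than a shell variable.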
Thank you for contributing!