Please note that this repository has been DEPRECATED and further development is being done in our new repository cnf-certification-test.
This repository contains a set of Cloud-Native Network Functions (CNFs) test cases and the framework to build more. The tests and framework are intended to test the interaction of CNFs with OpenShift Container Platform. It also generates a report (claim.json) after running tests.
Please consult CATALOG.md for a catalog of the included test cases and test case building blocks.
The suite is provided here in part so that CNF developers can use it to test their CNFs' readiness for certification. Please see "CNF Developers" below for more information.
In the diagram above:

- The CNF is the CNF to be certified. The certification suite identifies the resources (containers/pods/operators, etc.) belonging to the CNF via labels or static data entries in the config file.
- The Certification container/exec is the certification test suite running on the platform or in a container. The executable verifies the CNF-under-test configuration and its interactions with OpenShift.
- The Debug pods are part of a DaemonSet responsible for running various privileged commands on Kubernetes nodes. Debug pods are useful for running platform tests and test commands (e.g. `ping`) in container namespaces without changing the container image content. The debug DaemonSet is instantiated via the cnf-certification-test-partner repository.
The Test Network Function (TNF) supports autodiscovery using labels and annotations. The following sections describe how to configure the TNF via labels/annotations and the corresponding settings in the config file. A sample config file can be found here.
Multiple namespaces can be specified in the configuration file. Namespaces will be used by autodiscovery to find the Pods under test.

```yaml
targetNameSpaces:
  - name: firstnamespace
  - name: secondnamespace
```
The goal of this section is to specify the labels used to identify the CNF resources under test. It is highly recommended to define the labels in the pod definition rather than to add them after the pod is created, as labels added later will be lost if the pod gets rescheduled. For pods defined as part of a deployment, it is best to use the same label as the one defined in the `spec.selector.matchLabels` section of the deployment YAML. The `prefix` field can be used to avoid naming collisions with other labels.
```yaml
targetPodLabels:
  - prefix: test-network-function.com
    name: generic
    value: target
```
The corresponding label used to match pods is:

```yaml
test-network-function.com/generic: target
```
Once the pods are found, all of their containers are also added to the target container list. A target deployment list will also be created with all the deployments to which the test pods belong.
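For illustration, a deployment could carry the matching label in both `spec.selector.matchLabels` and the pod template, so the label survives rescheduling (the names and image below are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cnf              # hypothetical deployment name
  namespace: firstnamespace   # one of the targetNameSpaces
spec:
  replicas: 2
  selector:
    matchLabels:
      test-network-function.com/generic: target
  template:
    metadata:
      labels:
        # same label as in matchLabels, so autodiscovery still finds
        # the pods after a reschedule
        test-network-function.com/generic: target
    spec:
      containers:
        - name: cnf-container
          image: registry.example.com/my-cnf:latest   # hypothetical image
```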
In order to autodiscover the CRDs to be tested, an array of search filters can be set under the `targetCrdFilters` key. The autodiscovery mechanism will iterate through all the filters to look for all the CRDs that match them. Currently, filters only work by name suffix.
```yaml
targetCrdFilters:
  - nameSuffix: "group1.tnf.com"
  - nameSuffix: "anydomain.com"
```
The autodiscovery mechanism will create a list of all CRD names in the cluster whose names have the suffix "group1.tnf.com" or "anydomain.com", e.g. "crd1.group1.tnf.com" or "mycrd.mygroup.anydomain.com".
The autodiscovery mechanism will attempt to identify the default network device and all the IP addresses of the pods it needs for network connectivity tests, though that information can be explicitly set using annotations if needed. For Pod IPs:

- The annotation `test-network-function.com/multusips` is the highest priority, and must contain a JSON-encoded list of IP addresses to be tested for the pod. This must be explicitly set.
- If the above is not present, the `k8s.v1.cni.cncf.io/networks-status` annotation is checked and all IPs from it are used. This annotation is automatically managed in OpenShift but may not be present in K8s.
- If neither of the above is present, then only the known IPs associated with the pod are used (the pod `.status.ips` field).
For Network Interfaces:

- The annotation `test-network-function.com/defaultnetworkinterface` is the highest priority, and must contain a JSON-encoded string of the primary network interface for the pod. This must be explicitly set if needed. Examples can be seen in the cnf-certification-test-partner repository.
- If the above is not present, the `k8s.v1.cni.cncf.io/networks-status` annotation is checked and the "interface" from the first entry found with "default"=true is used. This annotation is automatically managed in OpenShift but may not be present in K8s.
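For illustration, both annotations could be set on a pod as follows (the addresses and interface name are hypothetical; note that both values are JSON-encoded):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod   # hypothetical pod name
  annotations:
    # highest-priority source of pod IPs: a JSON-encoded list
    test-network-function.com/multusips: '["10.128.2.15", "192.168.1.7"]'
    # highest-priority source of the default interface: a JSON-encoded string
    test-network-function.com/defaultnetworkinterface: '"eth0"'
spec:
  containers:
    - name: cnf-container
      image: registry.example.com/my-cnf:latest   # hypothetical image
```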
The label `test-network-function.com/skip_connectivity_tests` excludes pods from all connectivity tests, and `test-network-function.com/skip_multus_connectivity_tests` excludes pods from Multus connectivity tests only. In both cases, only the presence of the label matters, not its value. Note: if both labels are present, `test-network-function.com/skip_connectivity_tests` takes precedence.
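For example, to exclude a pod from Multus connectivity tests only (the value below is arbitrary; only the label's presence is checked):

```yaml
metadata:
  labels:
    test-network-function.com/skip_multus_connectivity_tests: "true"
```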
The section can be configured manually as well as autodiscovered. For manual configuration, see the commented part of the sample config. For autodiscovery:

- CSVs to be tested by the `operator` spec are identified with the `test-network-function.com/operator=target` label. Any value is permitted, but `target` is used here for consistency with the other specs.
- `test-network-function.com/subscription_name` is optional and should contain a JSON-encoded string that is the name of the subscription for this CSV. If unset, the CSV name will be used.
The `certifiedcontainerinfo` and `certifiedoperatorinfo` sections contain information about Containers and Operators that are to be checked for certification status on Red Hat catalogs.
This boolean flag can be turned on when you intend to have the test suite check the certification status of the container images used by the autodiscovered test target pods, in addition to the configured image list.
If you would like to skip intrusive tests, which may disrupt cluster operations, issue the following:

```shell
export TNF_NON_INTRUSIVE_ONLY=true
```

Likewise, to enable intrusive tests, set the following:

```shell
export TNF_NON_INTRUSIVE_ONLY=false
```
This environment variable is optional, but highly recommended if running the test suite from a clone of this GitHub repository. It is neither needed nor used when running the TNF image. To set it, clone the partner repo and point `TNF_PARTNER_SRC_DIR` at it:

```shell
export TNF_PARTNER_SRC_DIR=/home/userid/code/cnf-certification-test-partner
```

When this variable is set, the `run-cnf-suites.sh` script will deploy/refresh the partner deployments/pods in the cluster before starting the test run.
In a disconnected environment, only specific versions of images are mirrored to the local repository. For those environments, the debug pod image `quay.io/testnetworkfunction/debug-partner` should be mirrored and `TNF_PARTNER_REPO` should be set to the local repository, e.g.:

```shell
export TNF_PARTNER_REPO="registry.dfwt5g.lab:5000/testnetworkfunction"
```
The test suites from openshift-kni/cnf-feature-deploy can be run prior to the actual CNF certification test execution, and their results are incorporated into the same claim file, if the following environment variable is set:

```shell
export TNF_RUN_CFD_TEST=true
```

By default, the image with release tag `4.6` is used and the ginkgo skip argument is set to `performance|sriov|ptp|sctp|xt_u32|dpdk|ovn`. To override the default behavior, set the `TNF_CFD_IMAGE_TAG` and `TNF_CFD_SKIP` environment variables. For more information on the test suites, refer to the cnf-features-deploy repository.
An image is built and made available on quay.io. It can be pulled using:

```shell
docker pull quay.io/testnetworkfunction/test-network-function
```
- OCP cluster should allow interactive shell sessions to pods/containers to stay alive when idle for more than a few minutes. If that is not the case, consult the maintainer of the cluster infrastructure on how it can be enabled. Also, make sure the firewalls/load balancers on the path do not time out idle connections too quickly.
- OCP cluster should provide enough resources to drain nodes and reschedule pods. If that is not the case, the `lifecycle-pod-recreation` test should be skipped.
Some test suites, such as `platform-alteration`, require node access to get node configuration such as hugepages. To get the required information, the test suite does not SSH into nodes, but instead relies on the `oc debug` tool. This tool makes it easier to fetch information from nodes and also to debug running pods. In short, `oc debug` launches a new container with a name ending in the "-debug" suffix; the container is destroyed once the debug session is done. To be able to create the debug pod, the cluster must have enough resources, otherwise those tests will fail.

Note: It's recommended to clean up disk space and make sure there are enough resources to deploy another container image on every node before starting the tests.
The `./run-tnf-container.sh` script is used to launch the tests.

There are several required arguments:

- `-t` gives the local directory that contains the TNF config files set up for the test.
- `-o` gives the local directory that the test results will be available in once the container exits. This directory must exist in order for the claim file to be written.

Optional arguments are:

- `-f` gives the list of suites to be run, space separated.
- `-s` gives the name of tests that should be skipped. This flag is discarded if no `-f` was set.
- `-i` gives a name to a custom TNF container image. Supports local images, as well as images from external registries.
- `-k` gives a path to one or more kubeconfig files to be used by the container to authenticate with the cluster. Paths must be separated by a colon.
- `-n` gives the network mode of the container. Defaults to `host`, which requires SELinux to be disabled. Alternatively, `bridge` mode can be used with SELinux if `TNF_CONTAINER_CLIENT` is set to `docker` or when running the test as root. See the `docker run --network` parameter reference for more information on how to configure network settings.
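As a minimal sketch (hypothetical, not the actual script logic), the `-n` mode could map onto the container client's network flag like this:

```shell
#!/bin/sh
# Hypothetical helper: translate the -n argument into a --network flag.
# Default is host mode, matching the behavior described above.
network_flag() {
  mode="${1:-host}"
  echo "--network=$mode"
}

network_flag          # prints --network=host
network_flag bridge   # prints --network=bridge
```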
If `-f` is not specified, the TNF will run in 'diagnostic' mode. In this mode, no test case will run: it will only gather information from the cluster (PUTs, CRDs, node info, etc.) and save it in the claim file. This can be used to make sure the configuration was properly set and that autodiscovery found the right pods/CRDs.
If `-k` is not specified, autodiscovery is performed. The autodiscovery first looks for paths in the `$KUBECONFIG` environment variable on the host system; if the variable is not set or is empty, the default configuration stored in `$HOME/.kube/config` is checked.
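The lookup order can be sketched with a small hypothetical helper (not part of the actual suite):

```shell
#!/bin/sh
# Hypothetical sketch of the kubeconfig lookup order described above:
# use $KUBECONFIG when set and non-empty, else fall back to
# $HOME/.kube/config.
resolve_kubeconfig() {
  if [ -n "${KUBECONFIG:-}" ]; then
    echo "$KUBECONFIG"
  else
    echo "$HOME/.kube/config"
  fi
}

KUBECONFIG=""        # simulate an unset/empty variable
resolve_kubeconfig   # falls back to $HOME/.kube/config
```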
```shell
./run-tnf-container.sh -k ~/.kube/config -t ~/tnf/config -o ~/tnf/output -f networking access-control -s access-control-host-resource-PRIVILEGED_POD
```
See General tests for a list of available keywords.
By default, `run-tnf-container.sh` utilizes `podman`. However, you can configure an alternate container virtualization client using `TNF_CONTAINER_CLIENT`. This is particularly useful for operating systems that do not readily support `podman`, such as macOS. In order to configure the test harness to use `docker`, issue the following prior to `run-tnf-container.sh`:

```shell
export TNF_CONTAINER_CLIENT="docker"
```
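The default-with-override behavior amounts to the following (a hypothetical one-liner; the real script may differ):

```shell
#!/bin/sh
# Hypothetical sketch: prefer $TNF_CONTAINER_CLIENT when set, else podman.
container_client() {
  echo "${TNF_CONTAINER_CLIENT:-podman}"
}

TNF_CONTAINER_CLIENT=docker
container_client   # prints docker
```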
You can build an image locally by using the command below. Use the value of `TNF_VERSION` to set a branch, a tag, or a hash of a commit that will be installed into the image.

```shell
docker build -t test-network-function:v1.0.5 --build-arg TNF_VERSION=v1.0.5 .
```
To build an image that installs TNF from an unofficial source (e.g. a fork of the TNF repository), use the `TNF_SRC_URL` build argument to override the URL to a source repository.

```shell
docker build -t test-network-function:v1.0.5 \
  --build-arg TNF_VERSION=v1.0.5 \
  --build-arg TNF_SRC_URL=https://github.com/test-network-function/test-network-function .
```
To make `run-tnf-container.sh` use the newly built image, specify the custom TNF image using the `-i` parameter.

```shell
./run-tnf-container.sh -i test-network-function:v1.0.5 -t ~/tnf/config -o ~/tnf/output -f networking access-control
```
Note: see General tests for a list of available keywords.
Currently, all available tests are part of the "CNF Certification Test Suite" test suite, which serves as the entrypoint to run all test specs.
By default, `test-network-function` emits results to `test-network-function/cnf-certification-tests_junit.xml`.
At a minimum, the following dependencies must be installed prior to running `make install-tools`.
Dependency | Minimum Version |
---|---|
GoLang | 1.17 |
golangci-lint | 1.45.2 |
jq | 1.6 |
OpenShift Client | 4.7 |
Other binary dependencies required to run tests can be installed using the following command:

```shell
make install-tools
```

Finally, the source dependencies can be installed with:

```shell
make update-deps
```
Note: You must also make sure that `$GOBIN` (default `$GOPATH/bin`) is on your `$PATH`.
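A quick hypothetical check in a POSIX shell (the defaults mirror Go's own: `GOBIN` falls back to `$GOPATH/bin`, and `GOPATH` to `$HOME/go`):

```shell
#!/bin/sh
# Compute the effective GOBIN and report whether it is on $PATH.
check_gobin_on_path() {
  gobin="${GOBIN:-${GOPATH:-$HOME/go}/bin}"
  case ":$PATH:" in
    *":$gobin:"*) echo "on PATH" ;;
    *)            echo "missing from PATH" ;;
  esac
}

check_gobin_on_path
```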
Note: Efforts to containerize this offering are considered a work in progress.
In order to pull the code, issue the following commands:

```shell
mkdir ~/workspace
cd ~/workspace
git clone git@github.com:test-network-function/test-network-function.git
cd test-network-function
```
In order to build the test executable, first make sure you have satisfied the dependencies.

```shell
make build-cnf-tests
```

Gotcha: The `make build*` commands run unit tests where appropriate. They do NOT test the CNF.
Once the executable is built, a CNF can be tested by specifying which suites to run using the `run-cnf-suites.sh` helper script.

Run any combination of the suite keywords listed in the General tests section, e.g.:

```shell
./run-cnf-suites.sh -f lifecycle
./run-cnf-suites.sh -f networking lifecycle
./run-cnf-suites.sh -f operator networking
./run-cnf-suites.sh -f networking platform-alteration
./run-cnf-suites.sh -f networking lifecycle affiliated-certification operator
```
As with `run-tnf-container.sh`, if `-f` is not specified here, the TNF will run in 'diagnostic' mode. See the Run the tests section for more info.

By default, the claim file will be output into the same location as the test executable. The `-o` argument for `run-cnf-suites.sh` can be used to provide a new location that the output files will be saved to. For more detailed control over the outputs, see the output of `test-network-function.test --help`.

```shell
cd test-network-function && ./test-network-function.test --help
```
Gotcha: check that the OCP cluster has enough resources to deploy the debug image.
All tests have unique labels, which can be used to filter which tests are to be run. This is useful when debugging a single test. You can select the tests to be executed when running `run-cnf-suites.sh` with the following command line:

```shell
./run-cnf-suites.sh -f operator -l operator-install-source
```

Note that the `-l` parameter is treated as a regular expression, so you can select more than one test by their labels.

You can find all test labels by running the following commands:

```shell
cd test-network-function
./test-network-function.test --ginkgo.dry-run --ginkgo.v
```

You can also check CATALOG.md to find all test labels.
There are two categories of CNF tests: 'General' and 'CNF-specific' (TODO).

The 'General' tests are designed to test any commodity CNF running on OpenShift, and include specifications such as 'Default' network connectivity.

'CNF-specific' tests are designed to verify that unique aspects of the CNF under test behave correctly. This could include specifications such as issuing a `GET` request to a web server, or passing traffic through an IPsec tunnel. 'CNF-specific' tests are yet to be defined.

Tests in the 'General' category belong to multiple suites that can be run in any combination, as appropriate for the CNF(s) under test. Test suites group tests by topic area:
Suite | Test Spec Description | Minimum OpenShift Version
---|---|---
`access-control` | The access-control test suite is used to test service account, namespace and cluster/pod role binding for the pods under test. It also tests the pods/containers configuration. | 4.6.0
`affiliated-certification` | The affiliated-certification test suite verifies that the containers and operators listed in the configuration file or used by the CNF are certified by Red Hat. | 4.6.0
`lifecycle` | The lifecycle test suite verifies the pods deployment, creation, shutdown and survivability. | 4.6.0
`networking` | The networking test suite contains tests that check connectivity and networking configuration best practices. | 4.6.0
`operator` | The operator test suite is designed to test basic Kubernetes Operator functionality. | 4.6.0
`platform-alteration` | The platform-alteration test suite verifies that key platform configuration is not modified by the CNF under test. | 4.6.0
`observability` | The observability test suite contains tests that check that CNF logging follows best practices and that CRDs have status fields. | 4.6.0

Please consult CATALOG.md for a detailed description of the tests in each suite.
TODO
The test suite generates a "claim" file, which describes the system(s) under test, the tests that were run, and the outcome of all of the tests. This claim file is the proof of the test run that is evaluated by Red Hat when "certified" status is being considered. For more information about the contents of the claim file please see the schema. You can read more about the purpose of the claim file and CNF Certification in the Guide.
Example: adding CNF platform test results to your existing claim file.

You can use the claim CLI tool to append other related test suite results to your existing claim.json file. The output of the tool will be an updated claim file.
```shell
go run cmd/tools/cmd/main.go claim-add --claimfile=claim.json --reportdir=/home/$USER/reports
```

Args:

- `--claimfile` is an existing claim.json file.
- `--reportdir` is the path to the test results that you want to include.

The test result files from the given report directory will be appended under the `results` section of the claim file, using the file name as the key/value pair. The tool will ignore a test result if its key name is already present under the `results` section of the claim file.
```json
"results": {
    "cnf-certification-tests_junit": {
        "testsuite": {
            "-errors": "0",
            "-failures": "2",
            "-name": "CNF Certification Test Suite",
            "-tests": "14",
```
When run, the CNF test suite will output a report to the terminal that is primarily useful for developers to evaluate and address problems. This output is similar to that of many testing tools.

Here's an example of a test pass. It verifies that the CNF is using a ReplicaSet:
```
------------------------------
lifecycle when Testing owners of CNF pod
  Should be only ReplicaSet
  /Users/$USER/cnf-cert/test-network-function/test-network-function/lifecycle/suite.go:339
2021/07/27 11:41:25 Sent: "oc -n tnf get pods test-697ff58f87-d55zx -o custom-columns=OWNERKIND:.metadata.ownerReferences\\[\\*\\].kind && echo END_OF_TEST_SENTINEL\n"
2021/07/27 11:41:26 Match for RE: "(?s)OWNERKIND\n.+((.|\n)*END_OF_TEST_SENTINEL\n)" found: ["OWNERKIND\nReplicaSet\nEND_OF_TEST_SENTINEL\n" "END_OF_TEST_SENTINEL\n" ""] Buffer: "OWNERKIND\nReplicaSet\nEND_OF_TEST_SENTINEL\n"
•
```
The following is the output from a Test failure. In this case, the test is checking that a CSV (ClusterServiceVersion) is installed correctly, but does not find it (the operator was not present on the cluster under test):
```
------------------------------
operator Runs test on operators when under test is: my-etcd/etcdoperator.v0.9.4
  tests for: CSV_INSTALLED
  /Users/$USER/cnf-cert/test-network-function/test-network-function/operator/suite.go:122
2020/12/15 15:28:19 Sent: "oc get csv etcdoperator.v0.9.4 -n my-etcd -o json | jq -r '.status.phase'\n"
• Failure [10.002 seconds]
operator
/Users/$USER/cnf-cert/test-network-function/test-network-function/operator/suite.go:58
  Runs test on operators
  /Users/$USER/cnf-cert/test-network-function/test-network-function/operator/suite.go:71
    when under test is: my-etcd/etcdoperator.v0.9.4
    /Users/$USER/cnf-cert/test-network-function/test-network-function/operator/suite.go:121
      tests for: CSV_INSTALLED [It]
      /Users/$USER/cnf-cert/test-network-function/test-network-function/operator/suite.go:122

      Expected
          <int>: 0
      to equal
          <int>: 1
```
The following is the output from a Test failure. In this case, the test is checking that a Subscription is installed correctly, but does not find it (the operator was not present on the cluster under test):
```
------------------------------
operator Runs test on operators when under test is: my-etcd/etcd
  tests for: SUBSCRIPTION_INSTALLED
  /Users/$USER/cnf-cert/test-network-function/test-network-function/operator/suite.go:129
2021/04/09 12:37:10 Sent: "oc get subscription etcd -n my-etcd -ojson | jq -r '.spec.name'\n"
• Failure [10.000 seconds]
operator
/Users/$USER/cnf-cert/test-network-function/test-network-function/operator/suite.go:55
  Runs test on operators
  /Users/$USER/cnf-cert/test-network-function/test-network-function/operator/suite.go:68
    when under test is: default/etcdoperator.v0.9.4
    /Users/$USER/cnf-cert/test-network-function/test-network-function/operator/suite.go:128
      tests for: SUBSCRIPTION_INSTALLED [It]
      /Users/$USER/cnf-cert/test-network-function/test-network-function/operator/suite.go:129

      Expected
          <int>: 0
      to equal
          <int>: 1
```
The following is the output from a test failure. In this case, the test is checking clusterPermissions for a specific CSV, but does not find it (the operator was not present on the cluster under test):
```
------------------------------
operator Runs test on operators
  should eventually be verified as certified (operator redhat-marketplace/etcd-operator)
  /Users/$USER/cnf-cert/test-network-function/test-network-function/operator/suite.go:146
• Failure [30.002 seconds]
operator
/Users/$USER/cnf-cert/test-network-function/test-network-function/operator/suite.go:76
  Runs test on operators
  /Users/$USER/cnf-cert/test-network-function/test-network-function/operator/suite.go:89
    should eventually be verified as certified (operator redhat-marketplace/etcd-operator) [It]
    /Users/$USER/cnf-cert/test-network-function/test-network-function/operator/suite.go:146

    Timed out after 30.001s.
    Expected
        <bool>: false
    to be true

    /Users/$USER/cnf-cert/test-network-function/test-network-function/operator/suite.go:152
```
The optional `LOG_LEVEL` environment variable sets the log level. It defaults to `info` if not set. Valid values are: `trace`, `debug`, `info`, `warn`, `error`, `fatal`, `panic`.
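For example, to get more verbose output for a run:

```shell
export LOG_LEVEL=debug
```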
A tool for processing the claim file and producing a quality grade for the CNF. The user supplies a policy conforming to the policy schema. A grade is considered `passed` if all its direct tests passed and its base grade passed. In the output, the field `propose` indicates whether a grade passed or failed. See the policy example for understanding the output of the grading tool.
```shell
make build
```

or

```shell
make build-gradetool
```

The executable name is `gradetool`.
Developers of CNFs, particularly those targeting CNF Certification with Red Hat on OpenShift, can use this suite to test the interaction of their CNF with OpenShift. If you are interested in CNF Certification please contact Red Hat.
Refer to the rest of the documentation in this file to see how to install and run the tests as well as how to interpret the results.
You will need an OpenShift 4.6 (or newer) installation running your CNF, and at least one other machine available to host the test suite. The cnf-certification-test-partner repository has a very simple example of this you can model your setup on.
Issue #146: Shell Output larger than 16KB requires specification of the TNF_DEFAULT_BUFFER_SIZE environment variable
When dealing with large output, you may occasionally overrun the default buffer size. The manifestation of this issue is a `json.SyntaxError`, and may look similar to the following:
```
Expected
    <*json.SyntaxError | 0xc0002bc020>: {
        msg: "unexpected end of JSON input",
        Offset: 660,
    }
to be nil
```
In such cases, you will need to set `TNF_DEFAULT_BUFFER_SIZE` to a size (in bytes) sufficient to handle the expected output. For example:

```shell
TNF_DEFAULT_BUFFER_SIZE=32768 ./run-cnf-suites.sh -f networking
```
To test whether an operator is certified, label it with this command:

```shell
oc label csv <name> -n <namespace> "test-network-function.com/operator=target"
```

To test whether a Helm chart is certified, it needs to be deployed in one of the namespaces under test. If a specific Helm chart needs to be skipped, add its name to the tnf_config file.