Merge pull request #370 from fabriziosestito/docs/update-audit-scanner-explanation

docs: update audit-scanner explanation
jhkrug authored Mar 8, 2024
2 parents cbf857f + 9bfcd59 commit c103672
Showing 2 changed files with 148 additions and 116 deletions.
20 changes: 9 additions & 11 deletions docs/explanations/audit-scanner/audit-scanner.md
@@ -4,7 +4,8 @@ sidebar_position: 50
title: What is the Audit Scanner?
description: An overview of the Kubewarden Audit Scanner.
keywords: [kubewarden, audit scanner, kubernetes]
doc-persona:
  [kubewarden-user, kubewarden-operator, kubewarden-policy-developer, kubewarden-integrator]
doc-type: [explanation]
doc-topic: [explanations, audit-scanner]
---
@@ -20,7 +21,7 @@ The Audit Scanner feature is available starting from Kubewarden 1.7.0 release
:::

The `audit-scanner` component constantly checks resources in the cluster.
It flags the ones that don't adhere to the Kubewarden policies deployed in the cluster.

Policies evolve over time.
New ones are deployed, existing ones are updated.
@@ -32,8 +33,7 @@ To explain the use of the audit scanner in Kubewarden, consider the following sc

Assume Bob is deploying a WordPress Pod in the cluster.
Bob is new to Kubernetes, makes a mistake and deploys the Pod running as a privileged container.
At this point, there's no policy preventing that, so the Pod is successfully created in the cluster.

Some days later, Alice, the Kubernetes administrator, enforces a Kubewarden policy that prohibits the creation of privileged containers.
The Pod deployed by Bob keeps running in the cluster as it already exists.
@@ -44,14 +44,12 @@ This includes the WordPress Pod created by Bob.
The audit scanner operates by:

- identifying all the resources to audit
- for each resource, it builds a synthetic admission request with the resource's data
- it sends each admission request to a policy server endpoint dedicated to audit requests
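In other words, the scanner replays each existing resource through the admission pipeline. The synthetic request follows the standard Kubernetes `AdmissionReview` shape; the sketch below is illustrative only, with every concrete value made up rather than taken from a real audit run:

```yaml
# Illustrative sketch: a synthetic AdmissionReview the scanner could build
# for an existing Pod. All concrete values here are hypothetical.
apiVersion: admission.k8s.io/v1
kind: AdmissionReview
request:
  uid: "00000000-0000-0000-0000-000000000000" # hypothetical request ID
  kind:
    group: ""
    version: v1
    kind: Pod
  resource:
    group: ""
    version: v1
    resource: pods
  operation: CREATE
  namespace: default
  object:
    # the audited resource is embedded as-is, so the policy receives the
    # same data it would receive for a live admission request
    apiVersion: v1
    kind: Pod
    metadata:
      name: wordpress
      namespace: default
```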

For the policy evaluating the request, there are no differences between real and audit requests.
This auditing policy server endpoint has instrumentation to collect data about the evaluation.
So, users can use their monitoring tools to analyze audit scanner data.

## Enable audit scanner

@@ -76,7 +74,7 @@ See the policy authors [documentation](../../tutorials/writing-policies/index.md

The audit scanner in Kubewarden requires specific Role Based Access Control (RBAC) configurations to be able to scan Kubernetes resources and save the results.
A correct default Service Account with those permissions is created during the installation.
The user can create their own ServiceAccount to configure access to resources.

The default audit scanner `ServiceAccount` is bound to the `view` `ClusterRole` provided by Kubernetes.
This `ClusterRole` allows read-only access to a wide range of Kubernetes resources within a namespace.
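As a sketch, a custom `ServiceAccount` with the same read-only permissions could be bound to the built-in `view` `ClusterRole` as shown below; the account name and namespace are illustrative assumptions, not values mandated by Kubewarden:

```yaml
# Hypothetical example: a custom ServiceAccount for the audit scanner,
# bound to the built-in "view" ClusterRole. Names are illustrative.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-audit-scanner
  namespace: kubewarden
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-audit-scanner-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
  - kind: ServiceAccount
    name: my-audit-scanner
    namespace: kubewarden
```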
244 changes: 139 additions & 105 deletions docs/explanations/audit-scanner/policy-reports.md
@@ -3,7 +3,8 @@ sidebar_label: Policy Reports
title: Audit Scanner - Policy Reports
description: The policy reports that the Audit Scanner produces.
keywords: [kubewarden, kubernetes, audit scanner]
doc-persona:
  [kubewarden-user, kubewarden-operator, kubewarden-policy-developer, kubewarden-integrator]
doc-type: [explanation]
doc-topic: [explanations, audit-scanner, policy-reports]
---
@@ -29,10 +30,9 @@ for more information about the CRDs.

These CRDs offer a structured way to store and manage the audit results.

`PolicyReport` and `ClusterPolicyReport` are used to store the results of the policy scans performed by the audit scanner.
The audit scanner creates a `PolicyReport` or a `ClusterPolicyReport` for each resource scanned, depending on the scope of the resource.
`PolicyReport` objects are available in the namespace where the resource is located, while `ClusterPolicyReport` objects are available at the cluster scope.

The audit results generated by the scanner include:

@@ -43,140 +43,174 @@ The audit results generated by the scanner include:

You can also define severity and category annotations for your policies.
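For instance, a policy could carry that metadata as shown in the sketch below. The annotation keys follow the convention Kubewarden uses for report metadata, but verify them against the policy authors' documentation; the policy name and module reference are made-up placeholders:

```yaml
# Sketch only: severity and category annotations on a policy. The policy
# name and module reference are hypothetical placeholders.
apiVersion: policies.kubewarden.io/v1
kind: ClusterAdmissionPolicy
metadata:
  name: safe-annotations
  annotations:
    io.kubewarden.policy.severity: low
    io.kubewarden.policy.category: Resource validation
spec:
  module: registry://example.registry/policies/safe-annotations:v0.1.0
  rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      resources: ["namespaces"]
      operations: ["CREATE", "UPDATE"]
  mutating: false
```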

Operators can query the reports by using `kubectl`.
They can also use the optional UI provided by the
[policy-reporter](https://kyverno.github.io/policy-reporter)
open-source project for monitoring and observability of the PolicyReport CRDs.

## Querying the reports

Using the `kubectl` command-line tool, it is possible to query the results of the scan:

List the reports in the default namespace:

```console
$ kubectl get polr -o wide

NAME KIND NAME PASS FAIL WARN ERROR SKIP AGE
009805e4-6e16-4b70-80c9-cb33b6734c82 Deployment deployment1 5 1 0 0 0 1h
011e8ca7-40d5-4e76-8c89-6f820e24f895 Deployment deployment2 2 4 0 0 0 1h
02c28ab7-e332-47a2-9cc2-fe0fad5cd9ad Pod pod1 10 0 0 0 0 1h
04937b2b-e68b-47d5-909d-d0ae75527f07 Pod pod2 9 1 0 0 0 1h
...
```

List the cluster-wide reports:

```console
$ kubectl get cpolr -o wide

NAME KIND NAME PASS FAIL WARN ERROR SKIP AGE
261c9492-deec-4a09-8aa9-cd464bb4b8d1 Namespace namespace1 3 1 0 0 0 1h
35ca342f-685b-4162-a342-8d7a52a61749 Namespace namespace2 0 4 0 0 0 1h
3a8f8a88-338b-4905-b9e4-f13397a0d7b5 Namespace namespace3 4 0 0 0 0 15h
```

Get the details of a specific PolicyReport:

```console
$ kubectl get polr 009805e4-6e16-4b70-80c9-cb33b6734c82 -o yaml
```

Get the details of a specific ClusterPolicyReport:

```console
$ kubectl get cpolr 261c9492-deec-4a09-8aa9-cd464bb4b8d1 -o yaml
```
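For scripting, individual fields of a report can also be extracted with JSONPath. As an illustrative example (using a report name from the listing above), this prints only the failure count:

```console
$ kubectl get polr 009805e4-6e16-4b70-80c9-cb33b6734c82 -o jsonpath='{.summary.fail}'
```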

## PolicyReport example

The following example shows a `PolicyReport` for the `Deployment` resource `deployment1` in the `default` namespace.
The report indicates that the `Deployment` failed the `safe-labels` AdmissionPolicy.

```yaml
apiVersion: wgpolicyk8s.io/v1beta1
kind: PolicyReport
metadata:
  creationTimestamp: "2024-02-29T06:55:37Z"
  generation: 6
  labels:
    app.kubernetes.io/managed-by: kubewarden
    ...
  name: 009805e4-6e16-4b70-80c9-cb33b6734c82
  namespace: default
  ownerReferences:
    - apiVersion: apps/v1
      kind: Deployment
      name: deployment1
      uid: 009805e4-6e16-4b70-80c9-cb33b6734c82
  resourceVersion: "2685996"
  uid: c5a88847-d678-4733-8120-1b83fd6330cb
results:
  - category: Resource validation
    message: "The following mandatory labels are missing: cost-center"
    policy: namespaced-default-safe-labels
    properties:
      policy-resource-version: "2684810"
      policy-uid: 826dd4ef-9db5-408e-9482-455f278bf9bf
      validating: "true"
    resourceSelector: {}
    result: fail
    scored: true
    severity: low
    source: kubewarden
    timestamp:
      nanos: 0
      seconds: 1709294251
scope:
  apiVersion: apps/v1
  kind: Deployment
  name: deployment1
  namespace: default
  resourceVersion: "3"
  uid: 009805e4-6e16-4b70-80c9-cb33b6734c82
summary:
  error: 0
  fail: 1
  pass: 0
  skip: 0
  warn: 0
```

## ClusterPolicyReport example

The following example shows a `ClusterPolicyReport` for the `Namespace` resource `default`.
The report indicates that the resource has failed the `safe-annotations` ClusterAdmissionPolicy validation.

```yaml
apiVersion: wgpolicyk8s.io/v1beta1
kind: ClusterPolicyReport
metadata:
  creationTimestamp: "2024-02-28T14:44:37Z"
  generation: 3
  labels:
    app.kubernetes.io/managed-by: kubewarden
    ...
  name: 261c9492-deec-4a09-8aa9-cd464bb4b8d1
  ownerReferences:
    - apiVersion: v1
      kind: Namespace
      name: default
      uid: 261c9492-deec-4a09-8aa9-cd464bb4b8d1
  resourceVersion: "2403034"
  uid: 20a3d00e-e955-4f21-a887-317d40f3f052
results:
  - category: Resource validation
    message: "The following mandatory annotations are not allowed: owner"
    policy: clusterwide-safe-annotations
    properties:
      policy-resource-version: "2396437"
      policy-uid: 46780d6e-e51a-4d65-8572-a6af01380aa7
      validating: "true"
    resourceSelector: {}
    result: fail
    scored: true
    severity: low
    source: kubewarden
    timestamp:
      nanos: 0
      seconds: 1709294251
scope:
  apiVersion: v1
  kind: Namespace
  name: default
  resourceVersion: "37"
  uid: 261c9492-deec-4a09-8aa9-cd464bb4b8d1
summary:
  error: 0
  fail: 1
  pass: 0
  skip: 0
  warn: 0
```

## Policy Reporter UI

The Policy Reporter is shipped as a subchart of `kubewarden-controller`.
Refer to the [Audit Scanner Installation](../../howtos/audit-scanner)
page for more information.

The Policy Reporter UI provides a dashboard showing all violations
from `PolicyReports` and the `ClusterPolicyReport`.
This is shown below.

![Policy Reporter dashboard example](/img/policy-reporter_dashboard.png)

As shown below, it also provides tabs for PolicyReports and ClusterPolicyReports, with expanded information.

![Policy Reporter PolicyReports example](/img/policy-reporter_policyreports.png)

Other features of Policy Reporter include forwarding of results to different clients
(like Grafana Loki, Elasticsearch, chat applications),
metrics endpoints, and so on.
See the [policy-reporter's community docs](https://kyverno.github.io/policy-reporter)
for more information.
