Federation: Add task for setting up placement policies (#4075)
* Add task for setting up placement policies

* Update version of management sidecar in policy engine deployment

* Address @nikhiljindal's comments

- Lower case filenames
- Comments in policy
- Typo fixes
- Removed type LoadBalancer from OPA Service

* Add example that sets cluster selector

Per @nikhiljindal's suggestion

* Fix wording and templating per @chenopis
tsandall authored and chenopis committed Jun 26, 2017
1 parent 3d41757 commit 5825ea1
Showing 7 changed files with 320 additions and 0 deletions.
1 change: 1 addition & 0 deletions _data/tasks.yml
@@ -141,6 +141,7 @@ toc:
- docs/tasks/federation/federation-service-discovery.md
- docs/tasks/federation/set-up-cluster-federation-kubefed.md
- docs/tasks/federation/set-up-coredns-provider-federation.md
- docs/tasks/federation/set-up-placement-policies-federation.md
- docs/tasks/administer-federation/cluster.md
- docs/tasks/administer-federation/configmap.md
- docs/tasks/administer-federation/daemonset.md
34 changes: 34 additions & 0 deletions docs/tasks/federation/policy-engine-deployment.yaml
@@ -0,0 +1,34 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: opa
  name: opa
  namespace: federation-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: opa
      name: opa
    spec:
      containers:
      - name: opa
        image: openpolicyagent/opa:0.4.10
        args:
        - "run"
        - "--server"
      - name: kube-mgmt
        image: openpolicyagent/kube-mgmt:0.2
        args:
        - "-kubeconfig=/srv/kubernetes/kubeconfig"
        - "-cluster=federation/v1beta1/clusters"
        volumeMounts:
        - name: federation-kubeconfig
          mountPath: /srv/kubernetes
          readOnly: true
      volumes:
      - name: federation-kubeconfig
        secret:
          secretName: federation-controller-manager-kubeconfig
13 changes: 13 additions & 0 deletions docs/tasks/federation/policy-engine-service.yaml
@@ -0,0 +1,13 @@
kind: Service
apiVersion: v1
metadata:
  name: opa
  namespace: federation-system
spec:
  selector:
    app: opa
  ports:
  - name: http
    protocol: TCP
    port: 8181
    targetPort: 8181
74 changes: 74 additions & 0 deletions docs/tasks/federation/policy.rego
@@ -0,0 +1,74 @@
# OPA supports a high-level declarative language named Rego for authoring and
# enforcing policies. For more information on Rego, visit
# http://openpolicyagent.org.

# Rego policies are namespaced by the "package" directive.
package kubernetes.placement

# Imports provide aliases for data inside the policy engine. In this case, the
# policy simply refers to "clusters" below.
import data.kubernetes.clusters

# The "annotations" rule generates a JSON object containing the key
# "federation.kubernetes.io/replica-set-preferences" mapped to <preferences>.
# The preferences values is generated dynamically by OPA when it evaluates the
# rule.
#
# The SchedulingPolicy Admission Controller running inside the Federation API
# server will merge these annotatiosn into incoming Federated resources. By
# setting replica-set-preferences, we can control the placement of Federated
# ReplicaSets.
#
# Rules are defined to generate JSON values (booleans, strings, objects, etc.)
# When OPA evaluates a rule, it generates a value IF all of the expressions in
# the body evaluate successfully. All rules can be understood intuitively as
# <head> if <body> where <body> is true if <expr-1> AND <expr-2> AND ...
# <expr-N> is true (for some set of data.)
annotations["federation.kubernetes.io/replica-set-preferences"] = preferences {
input.kind = "ReplicaSet"
value = {"clusters": cluster_map, "rebalance": true}
json.marshal(value, preferences)
}

# This "annotations" rule generates a value for the "federation.alpha.kubernetes.io/cluster-selector"
# annotation.
#
# In English, the policy asserts that resources in the "production" namespace
# that are not annotated with "criticality=low" MUST be placed on clusters
# labelled with "on-premise=true".
annotations["federation.alpha.kubernetes.io/cluster-selector"] = selector {
input.metadata.namespace = "production"
not input.metadata.annotations.criticality = "low"
json.marshal([{
"operator": "=",
"key": "on-premise",
"values": "[true]",
}], selector)
}

# Generates a set of cluster names that satisfy the incoming Federated
# ReplicaSet's requirements. In this case, just PCI compliance.
replica_set_clusters[cluster_name] {
    clusters[cluster_name]
    not insufficient_pci[cluster_name]
}

# Generates a set of clusters that must not be used for Federated ReplicaSets
# that request PCI compliance.
insufficient_pci[cluster_name] {
    clusters[cluster_name]
    input.metadata.annotations["requires-pci"] = "true"
    not pci_clusters[cluster_name]
}

# Generates a set of clusters that are PCI certified. In this case, we assume
# clusters are annotated to indicate if they have passed PCI compliance audits.
pci_clusters[cluster_name] {
    clusters[cluster_name].metadata.annotations["pci-certified"] = "true"
}

# Helper rule to generate a mapping of desired clusters to weights. In this
# case, weights are static.
cluster_map[cluster_name] = {"weight": 1} {
    replica_set_clusters[cluster_name]
}
21 changes: 21 additions & 0 deletions docs/tasks/federation/replicaset-example-policy.yaml
@@ -0,0 +1,21 @@
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  labels:
    app: nginx-pci
  name: nginx-pci
  annotations:
    requires-pci: "true"
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-pci
  template:
    metadata:
      labels:
        app: nginx-pci
    spec:
      containers:
      - image: nginx
        name: nginx-pci
29 changes: 29 additions & 0 deletions docs/tasks/federation/scheduling-policy-admission.yaml
@@ -0,0 +1,29 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: admission
  namespace: federation-system
data:
  config.yml: |
    apiVersion: apiserver.k8s.io/v1alpha1
    kind: AdmissionConfiguration
    plugins:
    - name: SchedulingPolicy
      path: /etc/kubernetes/admission/scheduling-policy-config.yml
  scheduling-policy-config.yml: |
    kubeconfig: /etc/kubernetes/admission/opa-kubeconfig
  opa-kubeconfig: |
    clusters:
    - name: opa-api
      cluster:
        server: http://opa.federation-system.svc.cluster.local:8181/v0/data/kubernetes/placement
    users:
    - name: scheduling-policy
      user:
        token: deadbeefsecret
    contexts:
    - name: default
      context:
        cluster: opa-api
        user: scheduling-policy
    current-context: default
148 changes: 148 additions & 0 deletions docs/tasks/federation/set-up-placement-policies-federation.md
@@ -0,0 +1,148 @@
---
title: Set up placement policies in Federation
redirect_from:
- "/docs/tutorials/federation/set-up-placement-policies-federation/"
- "/docs/tutorials/federation/set-up-placement-policies-federation.html"
---

{% capture overview %}

This page shows how to enforce policy-based placement decisions over Federated
resources using an external policy engine.

{% endcapture %}

{% capture prerequisites %}

You need a running Kubernetes cluster (referred to in this task as the host
cluster). See one of the [getting started](/docs/getting-started-guides/)
guides for installation instructions for your platform.

{% endcapture %}

{% capture steps %}

## Deploying Federation and configuring an external policy engine

The Federation control plane can be deployed using `kubefed init`.
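
For example, a minimal `kubefed init` invocation might look like the following
sketch; the federation name, host cluster context, and DNS provider and zone
are placeholders for your environment (see
[Set up Cluster Federation with Kubefed](/docs/tasks/federation/set-up-cluster-federation-kubefed/)
for details):

kubefed init federation --host-cluster-context=host-cluster --dns-provider=google-clouddns --dns-zone-name=example.com.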

After deploying the Federation control plane, you must configure an Admission
Controller in the Federation API server that enforces placement decisions
received from the external policy engine.

kubectl create -f scheduling-policy-admission.yaml

Shown below is an example ConfigMap for the Admission Controller:

{% include code.html language="yaml" file="scheduling-policy-admission.yaml"
ghlink="/docs/tutorials/federation/scheduling-policy-admission.yaml" %}

The ConfigMap contains three files:

* `config.yml` specifies the location of the `SchedulingPolicy` Admission
  Controller config file.
* `scheduling-policy-config.yml` specifies the location of the kubeconfig file
  required to contact the external policy engine. This file can also include a
  `retryBackoff` value that controls the initial retry backoff delay in
  milliseconds (see the sketch after this list).
* `opa-kubeconfig` is a standard kubeconfig containing the URL and credentials
  needed to contact the external policy engine.
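
For example, a `scheduling-policy-config.yml` that also sets the optional retry
backoff might look like the following sketch (the 100 millisecond value is
illustrative):

kubeconfig: /etc/kubernetes/admission/opa-kubeconfig
retryBackoff: 100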

Edit the Federation API server deployment to enable the `SchedulingPolicy`
Admission Controller.

kubectl -n federation-system edit deployment federation-apiserver

Update the Federation API server command line arguments to enable the Admission
Controller and mount the ConfigMap into the container. If there's an existing
`--admission-control` flag, append `,SchedulingPolicy` instead of adding
another line.

--admission-control=SchedulingPolicy
--admission-control-config-file=/etc/kubernetes/admission/config.yml
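
For example, if the Deployment already enables other admission plugins, the
merged flag might look like the following; the other plugin names shown here
are illustrative:

--admission-control=NamespaceLifecycle,SchedulingPolicy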

Add the following volume to the Federation API server pod:

- name: admission-config
  configMap:
    name: admission

Add the following volume mount to the `apiserver` container of the Federation API server:

volumeMounts:
- name: admission-config
  mountPath: /etc/kubernetes/admission

## Deploying an external policy engine

The [Open Policy Agent (OPA)](http://openpolicyagent.org) is an open source,
general-purpose policy engine that you can use to enforce policy-based placement
decisions in the Federation control plane.

Create a Service in the host cluster to contact the external policy engine:

kubectl create -f policy-engine-service.yaml

Shown below is an example Service for OPA.

{% include code.html language="yaml" file="policy-engine-service.yaml"
ghlink="/docs/tutorials/federation/policy-engine-service.yaml" %}

Create a Deployment in the host cluster alongside the Federation control plane:

kubectl create -f policy-engine-deployment.yaml

Shown below is an example Deployment for OPA.

{% include code.html language="yaml" file="policy-engine-deployment.yaml"
ghlink="/docs/tutorials/federation/policy-engine-deployment.yaml" %}

## Configuring placement policies via ConfigMaps

The external policy engine will discover placement policies created in the
`kube-federation-scheduling-policy` namespace in the Federation API server.

Create the namespace if it does not already exist:

kubectl --context=federation create namespace kube-federation-scheduling-policy

Configure a sample policy to test the external policy engine:

{% include code.html language="yaml" file="policy.rego"
ghlink="/docs/tutorials/federation/policy.rego" %}

Shown below is the command to create the sample policy:

kubectl --context=federation -n kube-federation-scheduling-policy create configmap scheduling-policy --from-file=policy.rego
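
To confirm that the policy ConfigMap was created, you can list it:

kubectl --context=federation -n kube-federation-scheduling-policy get configmap scheduling-policy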

This sample policy illustrates a few key ideas:

* Placement policies can refer to any field in Federated resources.
* Placement policies can leverage external context (for example, Cluster
metadata) to make decisions.
* Administrative policy can be managed centrally.
* Policies can define simple interfaces (such as the `requires-pci` annotation) to
avoid duplicating logic in manifests.

## Testing placement policies

Annotate one of the clusters to indicate that it is PCI certified.

kubectl --context=federation annotate clusters cluster-name-1 pci-certified=true
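
Replace `cluster-name-1` with the name of one of your federated clusters. You
can verify that the annotation was applied:

kubectl --context=federation get clusters cluster-name-1 -o jsonpath='{.metadata.annotations}'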

Deploy a Federated ReplicaSet to test the placement policy.

{% include code.html language="yaml" file="replicaset-example-policy.yaml"
ghlink="/docs/tutorials/federation/replicaset-example-policy.yaml" %}

Shown below is the command to deploy a ReplicaSet that *does* match the policy.

kubectl --context=federation create -f replicaset-example-policy.yaml

Inspect the ReplicaSet to confirm the appropriate annotations have been applied:

kubectl --context=federation get rs nginx-pci -o jsonpath='{.metadata.annotations}'
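
With the sample policy above, a ReplicaSet annotated with `requires-pci: "true"`
should receive a `federation.kubernetes.io/replica-set-preferences` annotation
whose value looks roughly like the following sketch; the cluster names depend
on which of your clusters are PCI certified:

{"clusters": {"cluster-name-1": {"weight": 1}}, "rebalance": true}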

{% endcapture %}

{% include templates/task.md %}
