This component has been moved to the ocm repository as part of the ongoing code consolidation; please follow the CONTRIBUTING guidance in ocm to contribute.
With Placement, you can select a set of ManagedClusters from the ManagedClusterSets bound to the placement namespace.
Check the CONTRIBUTING Doc for how to contribute to the repo.
You have at least one running Kubernetes cluster.
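If you do not have one handy, a local cluster created with kind works for this walkthrough (a minimal sketch, assuming kind is installed; the cluster name hub is arbitrary).
kind create cluster --name hub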
git clone https://github.com/open-cluster-management-io/placement.git
cd placement
Set environment variables.
export KUBECONFIG=</path/to/kubeconfig>
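As a quick sanity check, confirm that kubectl can reach the cluster through this kubeconfig (assuming kubectl is installed).
kubectl cluster-info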
Build the docker image to run the placement controller.
go install github.com/openshift/imagebuilder/cmd/imagebuilder@<version>
make images
export IMAGE_NAME=<placement_image_name> # export IMAGE_NAME=quay.io/open-cluster-management/placement:latest
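Optionally, confirm that the image was built locally (a quick check, assuming you built it with Docker).
docker images | grep placement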
If you are using kind, load the image into the kind cluster.
kind load docker-image <placement_image_name> # kind load docker-image quay.io/open-cluster-management/placement:latest
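If your kind cluster was created with a non-default name, pass it explicitly; the name hub below is only an example.
kind load docker-image quay.io/open-cluster-management/placement:latest --name hub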
Then deploy the placement manager on the cluster.
make deploy-hub
After a successful deployment, check the cluster to verify that the placement controller is running.
kubectl -n open-cluster-management-hub get pods
NAME READY STATUS RESTARTS AGE
cluster-manager-placement-controller-cf9bbd6c-x9dnd 1/1 Running 0 2m16s
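Optionally, verify that the Placement-related CRDs were installed along with the controller (a quick check; the group name matches the resources used below).
kubectl get crds | grep cluster.open-cluster-management.io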
Here is an example.
Create a ManagedClusterSet.
cat <<EOF | kubectl apply -f -
apiVersion: cluster.open-cluster-management.io/v1beta2
kind: ManagedClusterSet
metadata:
  name: clusterset1
EOF
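Optionally, verify that the clusterset exists.
kubectl get managedclustersets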
Create a ManagedCluster and assign it to clusterset clusterset1.
cat <<EOF | kubectl apply -f -
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: cluster1
  labels:
    cluster.open-cluster-management.io/clusterset: clusterset1
    vendor: OpenShift
spec:
  hubAcceptsClient: true
EOF
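Optionally, confirm that the cluster carries the expected clusterset label by selecting on it.
kubectl get managedclusters -l cluster.open-cluster-management.io/clusterset=clusterset1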
Create a ManagedClusterSetBinding to bind the ManagedClusterSet to the default namespace.
cat <<EOF | kubectl apply -f -
apiVersion: cluster.open-cluster-management.io/v1beta2
kind: ManagedClusterSetBinding
metadata:
  name: clusterset1
  namespace: default
spec:
  clusterSet: clusterset1
EOF
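Optionally, verify the binding in the default namespace.
kubectl -n default get managedclustersetbindings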
Now create a Placement:
cat <<EOF | kubectl apply -f -
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: placement1
  namespace: default
spec:
  predicates:
    - requiredClusterSelector:
        labelSelector:
          matchLabels:
            vendor: OpenShift
EOF
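Optionally, inspect the placement status first; its numberOfSelectedClusters field and PlacementSatisfied condition indicate whether any clusters were selected (field and condition names here follow the v1beta1 API and are worth double-checking against your installed CRD).
kubectl -n default describe placement placement1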
Check the PlacementDecision created for this placement. Its status lists all of the selected clusters.
kubectl get placementdecisions
NAME AGE
placement1-decision-1 2m27s
kubectl describe placementdecisions placement1-decision-1
......
Status:
  Decisions:
    Cluster Name:  cluster1
    Reason:
Events:  <none>
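If you need to find the decisions for a given placement programmatically, the controller labels each PlacementDecision with its owning placement; assuming the conventional cluster.open-cluster-management.io/placement label, you can select them directly.
kubectl -n default get placementdecisions -l cluster.open-cluster-management.io/placement=placement1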
Undeploy the placement controller from the cluster.
make undeploy-hub
To verify the Placement API on an existing environment with the placement controller installed and properly configured, you can run the e2e test cases as a sanity check by following the steps below.
Build the e2e test binary.
make build-e2e
And then run the e2e test cases against an existing environment.
./e2e.test --ginkgo.v --ginkgo.label-filter=sanity-check -hub-kubeconfig=/path/to/file
In an environment where the global clusterset has already been created, you can skip creating it during testing.
./e2e.test --ginkgo.v --ginkgo.label-filter=sanity-check -hub-kubeconfig=/path/to/file -create-global-clusterset=false
Since the e2e test cases create fake ManagedClusters (with no agent installed) during testing, in a full-featured OCM environment (with the registration controller running on the hub) the taint cluster.open-cluster-management.io/unreachable will be added to those fake ManagedClusters automatically. You have to tolerate this taint when running the e2e test cases in such an environment.
./e2e.test --ginkgo.v --ginkgo.label-filter=sanity-check -hub-kubeconfig=/path/to/file -tolerate-unreachable-taint