This example shows that NSEs (Network Service Endpoints) can be created on the fly in response to NSC (Network Service Client) requests, which allows endpoints to scale effectively. The requested endpoint is automatically spawned on the same node as the NSC (see the same-node check below), giving the best performance for connectivity.
Here we use an endpoint that automatically shuts down after it has had no active connections for a specified time. For the purposes of this test the timeout is very short: 15 seconds.
Only one client is used in this test, so removing it (see the "Remove NSC" step) causes the NSE to shut down.
The supplier watches the endpoints it has created and clears those that have finished their work, saving cluster resources (see the final wait step below).
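Once the test namespace exists (next step), you can keep a pod watch running in a second terminal to observe the NSE appear on the first client request and disappear after the idle timeout (an optional sketch using only plain kubectl):
kubectl get pods -n ns-scale-from-zero -l app=nse-icmp-responder --watch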
Create test namespace:
kubectl create ns ns-scale-from-zero
Select nodes to deploy NSC and supplier:
NODES=($(kubectl get nodes -o go-template='{{range .items}}{{ if not .spec.taints }}{{ .metadata.name }} {{end}}{{end}}'))
NSC_NODE=${NODES[0]}
SUPPLIER_NODE=${NODES[1]}
if [ "$SUPPLIER_NODE" == "" ]; then SUPPLIER_NODE=$NSC_NODE; echo "Only 1 node found, testing that pod is created on the same node is useless"; fi
Deploy NSC and supplier:
kubectl apply -k https://github.com/networkservicemesh/deployments-k8s/examples/features/scale-from-zero?ref=58a90eb58a3e06f02cbd99c221b35327488025cc
Wait for the applications to become ready:
kubectl wait -n ns-scale-from-zero --for=condition=ready --timeout=1m pod -l app=nse-supplier-k8s
kubectl wait -n ns-scale-from-zero --for=condition=ready --timeout=1m pod -l app=nsc-kernel
kubectl wait -n ns-scale-from-zero --for=condition=ready --timeout=1m pod -l app=nse-icmp-responder
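Optionally, list the pods together with the nodes they landed on; this previews the placement verified later (plain kubectl, no assumptions):
kubectl get pods -n ns-scale-from-zero -o wide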
Find the NSC and NSE pods by their labels:
NSC=$(kubectl get pod -n ns-scale-from-zero --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' -l app=nsc-kernel)
NSE=$(kubectl get pod -n ns-scale-from-zero --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' -l app=nse-icmp-responder)
Check connectivity:
kubectl exec $NSC -n ns-scale-from-zero -- ping -c 4 169.254.0.0
kubectl exec $NSE -n ns-scale-from-zero -- ping -c 4 169.254.0.1
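You can also inspect the NSM-injected interface inside the client (a sketch; it assumes the client image ships an ip utility, and the interface name and addresses depend on the example's configuration):
kubectl exec $NSC -n ns-scale-from-zero -- ip addr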
Check that the NSE was spawned on the same node as the NSC:
NSE_NODE=$(kubectl get pod -n ns-scale-from-zero --template '{{range .items}}{{.spec.nodeName}}{{"\n"}}{{end}}' -l app=nse-icmp-responder)
NSC_NODE=$(kubectl get pod -n ns-scale-from-zero --template '{{range .items}}{{.spec.nodeName}}{{"\n"}}{{end}}' -l app=nsc-kernel)
if [ "$NSC_NODE" == "$NSE_NODE" ]; then echo "OK"; else echo "different nodes"; false; fi
Remove NSC:
kubectl scale -n ns-scale-from-zero deployment nsc-kernel --replicas=0
Wait for the NSE pod to be deleted:
kubectl wait -n ns-scale-from-zero --for=delete --timeout=1m pod -l app=nse-icmp-responder
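On a repeat run you can time this wait to observe the idle timeout in action (a rough sketch; the elapsed time includes pod termination on top of the 15-second idle timeout):
time kubectl wait -n ns-scale-from-zero --for=delete --timeout=1m pod -l app=nse-icmp-responder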
Delete namespace:
kubectl delete ns ns-scale-from-zero
Delete network service:
kubectl delete -n nsm-system networkservices.networkservicemesh.io scale-from-zero