
operator‐949


Steps

SSH into control plane

ssh -p 20029 [email protected] -o "ServerAliveInterval=5" -o "ServerAliveCountMax=100000" -o "StrictHostKeyChecking=off"

Persist sessions after logout

loginctl enable-linger ubuntu
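
Optionally verify that lingering is enabled for the ubuntu user (assumes systemd-logind manages sessions on this host):

loginctl show-user ubuntu --property=Linger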

Clone operator

cd ~
git clone https://github.com/allanrogerr/operator.git
cd operator && git checkout master

Install k3s

cd ~
sudo touch /dev/kmsg
curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" sh -s - --snapshotter=fuse-overlayfs
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
k3s --version
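
Optionally confirm the k3s node has registered and is Ready before continuing:

kubectl get nodes -o wide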

Install essential packages

sudo apt-get update && sudo apt-get upgrade -y
sudo apt-get install vim -y
sudo apt-get install zip -y
sudo apt-get install make -y
sudo apt install build-essential -y

Install go

cd ~
wget https://go.dev/dl/go1.21.3.linux-amd64.tar.gz
sudo rm -rf /usr/local/go && sudo tar -C /usr/local -xzf go1.21.3.linux-amd64.tar.gz
# Quote the heredoc delimiter so $PATH is written literally and expanded at login, not when this is appended
cat <<'EOF' >> $HOME/.profile
export PATH=$PATH:/usr/local/go/bin:~/go/bin
EOF
cat $HOME/.profile 
source $HOME/.profile
go version

Install nvm (optional - only to ensure nothing else is broken in operator)

sudo apt-get install nodejs -y
sudo apt-get install npm -y
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
source ~/.bashrc
nvm install && nvm use && npm install -g yarn

Install podman. Build the newly compiled operator image and import it into k3s (optional - only to ensure nothing else is broken in operator)

sudo apt-get install podman -y
sudo apt-get install podman-docker -y
TAG=minio/operator:noop
mkdir ~/operator/minio-operator
cd ~/operator && make assets && make binary && docker build -t $TAG .
podman save minio/operator:noop | sudo k3s ctr images import -
sudo k3s ctr images ls | grep minio/operator:noop

Build & install kubectl-minio plugin

cd ~/operator/kubectl-minio
CGO_ENABLED=1 go build --ldflags "-s -w" -trimpath -o ../kubectl-minio
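
go build writes the binary into the existing ~/operator/kubectl-minio directory, so it ends up at ~/operator/kubectl-minio/kubectl-minio. A quick sanity check that the build produced a working binary:

~/operator/kubectl-minio/kubectl-minio --help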

Set default storage class

kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
kubectl get storageclass

Test cases

FAIL EXPECTED

Test 0

kubectl delete ns myminio
kubectl create ns myminio
# Create PVs for pool-0 and pool-1 - See yaml below
kubectl -n myminio delete pv/pv-k8s-0-test-pool-{0..1}-{0..3}
kubectl apply -f ~/pv.yml
~/operator/kubectl-minio/kubectl-minio tenant create test --namespace myminio --servers 4 --volumes 4 --capacity 4Gi --disable-tls --pool pool-0
~/operator/kubectl-minio/kubectl-minio tenant expand test --pool pool-1 --servers 4 --volumes 4 --capacity 4Gi --namespace myminio
~/operator/kubectl-minio/kubectl-minio tenant info test

Output

Tenant 'test', Namespace 'myminio', Total capacity 8.0 GiB

Current status: Provisioning initial users
MinIO version: minio/minio:RELEASE.2023-11-15T20-43-25Z
MinIO service: minio/ClusterIP (port 80)
Console service: test-console/ClusterIP (port 9090)

POOL	SERVERS	VOLUMES PER SERVER	VOLUMES	CAPACITY PER VOLUME	CAPACITY 
0   	4      	1                 	4      	1.0 GiB            	4.0 GiB 	
1   	4      	1                 	4      	1.0 GiB            	4.0 GiB 	

MinIO Root User Credentials:
MINIO_ROOT_USER="A9QQ9OKY9OXFKMEKHHIU"
MINIO_ROOT_PASSWORD="00TxMSqLWCsSAFbX2Xp7XHt9G1dyZyXak6g8xAAA"

Observe that the pods require the antiAffinity fix

kubectl -n myminio describe pod/test-pool-1-0  # scheduling failed

Output

Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  4s    default-scheduler  0/5 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 5 No preemption victims found for incoming pod..
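
To inspect the anti-affinity rule that keeps the pool-1 pods from being scheduled alongside the existing tenant pods (the exact rule depends on the operator build under test), check the pending pod's spec:

kubectl -n myminio get pod test-pool-1-0 -o jsonpath='{.spec.affinity.podAntiAffinity}'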

Observe that the pods require the antiAffinity fix - pool-1 cannot initialize

kubectl -n myminio get pods

Output

NAME            READY   STATUS    RESTARTS   AGE
test-pool-0-0   2/2     Running   0          5m42s
test-pool-0-1   2/2     Running   0          5m42s
test-pool-0-2   2/2     Running   0          5m42s
test-pool-0-3   2/2     Running   0          5m41s
test-pool-1-0   0/2     Pending   0          32s
test-pool-1-1   0/2     Pending   0          32s
test-pool-1-2   0/2     Pending   0          32s
test-pool-1-3   0/2     Pending   0          32s

kubectl -n myminio describe pod/test-pool-0-0

Output

Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  90s   default-scheduler  0/5 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling..

Console screenshot (image omitted)

Cleanup

kubectl delete ns myminio

PASS EXPECTED

Implement fix

cd ~/operator && git checkout enable-pool-antiaffinity
cd ~/operator/kubectl-minio
CGO_ENABLED=1 go build --ldflags "-s -w" -trimpath -o ../kubectl-minio

Test 1

kubectl delete ns myminio
kubectl create ns myminio
# Create PVs for pool-0 and pool-1 - See yaml below
kubectl -n myminio delete pv/pv-k8s-0-test-pool-{0..1}-{0..3}
kubectl apply -f ~/pv.yml

Create the tenant manually (in the Console) with parameters: --namespace myminio --servers 4 --volumes 4 --capacity 4Gi --disable-tls --pool pool-0

Continue with kubectl-minio plugin

~/operator/kubectl-minio/kubectl-minio tenant expand test --pool pool-1 --servers 4 --volumes 4 --capacity 4Gi --namespace myminio
~/operator/kubectl-minio/kubectl-minio tenant info test

Output

Tenant 'test', Namespace 'myminio', Total capacity 8.0 GiB

Current status: Initialized
MinIO version: minio/minio:RELEASE.2023-12-23T07-19-11Z
MinIO service: minio/ClusterIP (port 80)
Console service: test-console/ClusterIP (port 9090)

POOL	SERVERS	VOLUMES PER SERVER	VOLUMES	CAPACITY PER VOLUME	CAPACITY 
0   	4      	1                 	4      	1.0 GiB            	4.0 GiB 	
1   	4      	1                 	4      	1.0 GiB            	4.0 GiB 	

MinIO Root User Credentials:
MINIO_ROOT_USER="5YIHMNSFSM1EAHQ3"
MINIO_ROOT_PASSWORD="HLHZXQNTXNCW5F2L3RHCZBN3AS3OORA4"

Observe pods are all scheduled

kubectl -n myminio get pods

Output

NAME            READY   STATUS    RESTARTS   AGE
test-pool-0-0   2/2     Running   0          75s
test-pool-0-1   2/2     Running   0          75s
test-pool-0-2   2/2     Running   0          75s
test-pool-0-3   2/2     Running   0          75s
test-pool-1-0   2/2     Running   0          21s
test-pool-1-1   2/2     Running   0          21s
test-pool-1-2   2/2     Running   0          21s
test-pool-1-3   2/2     Running   0          21s

Console screenshot (image omitted)

Cleanup

kubectl delete ns myminio

Test 2

kubectl delete ns myminio
kubectl create ns myminio
# Create PVs for pool-0 and pool-1 - See yaml below
kubectl -n myminio delete pv/pv-k8s-0-test-pool-{0..1}-{0..3}
kubectl apply -f ~/pv.yml
~/operator/kubectl-minio/kubectl-minio tenant create test --namespace myminio --servers 4 --volumes 4 --capacity 4Gi --disable-tls --pool pool-0

Since volumes are mounted as root by default, on each k8s worker node run

sudo chgrp -R ubuntu /opt/local-path-provisioner/pvc-myminio_0-test-pool-*
sudo chown -R ubuntu /opt/local-path-provisioner/pvc-myminio_0-test-pool-*
sudo chmod -R 777 /opt/local-path-provisioner/pvc-myminio_0-test-pool-*
ls -la /opt/local-path-provisioner/

On the k8s control plane node run

kubectl -n myminio delete pod/test-pool-0-{0..3}
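
Optionally wait for the recreated pool-0 pods to come back up before expanding (brace expansion passes each pod name to kubectl wait):

kubectl -n myminio wait --for=condition=Ready pod/test-pool-0-{0..3} --timeout=300s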

Expand the tenant manually (in the Console) with parameters: --pool pool-1 --servers 4 --volumes 4 --capacity 4Gi --namespace myminio

Continue with kubectl-minio plugin

~/operator/kubectl-minio/kubectl-minio tenant info test

Output

Tenant 'test', Namespace 'myminio', Total capacity 8.0 GiB

Current status: Initialized
MinIO version: minio/minio:RELEASE.2023-11-15T20-43-25Z
MinIO service: minio/ClusterIP (port 80)
Console service: test-console/ClusterIP (port 9090)

POOL	SERVERS	VOLUMES PER SERVER	VOLUMES	CAPACITY PER VOLUME	CAPACITY 
0   	4      	1                 	4      	1.0 GiB            	4.0 GiB 	
1   	4      	1                 	4      	1.0 GiB            	4.0 GiB 	

MinIO Root User Credentials:
MINIO_ROOT_USER="9IZ9SCUYMHVPXAICFM8C"
MINIO_ROOT_PASSWORD="K1k2raC3dGVkhVer5kQPbcSqAgqLTuni9ze4yGxC"

Observe pods are all scheduled

kubectl -n myminio get pods

Output

NAME            READY   STATUS    RESTARTS   AGE
test-pool-0-0   2/2     Running   0          2m41s
test-pool-0-1   2/2     Running   0          2m47s
test-pool-0-2   2/2     Running   0          2m46s
test-pool-0-3   2/2     Running   0          2m41s
test-pool-1-0   2/2     Running   0          19s
test-pool-1-1   2/2     Running   0          19s
test-pool-1-2   2/2     Running   0          19s
test-pool-1-3   2/2     Running   0          19s

Console screenshot (image omitted)

Cleanup

kubectl delete ns myminio

Test 3

kubectl delete ns myminio
kubectl create ns myminio
# Create PVs for pool-0 and pool-1 - See yaml below
kubectl -n myminio delete pv/pv-k8s-0-test-pool-{0..1}-{0..3}
kubectl apply -f ~/pv.yml

Create the tenant manually (in the Console) with parameters: --namespace myminio --servers 4 --volumes 4 --capacity 4Gi --disable-tls --pool pool-0
Expand the tenant manually (in the Console) with parameters: --pool pool-1 --servers 4 --volumes 4 --capacity 4Gi --namespace myminio

Continue with kubectl-minio plugin

~/operator/kubectl-minio/kubectl-minio tenant info test

Output


Observe pods are all scheduled

kubectl -n myminio get pods

Output

NAME            READY   STATUS    RESTARTS   AGE
test-pool-0-0   2/2     Running   0          2m10s
test-pool-0-1   2/2     Running   0          2m10s
test-pool-0-2   2/2     Running   0          2m10s
test-pool-0-3   2/2     Running   0          2m10s
test-pool-1-0   2/2     Running   0          59s
test-pool-1-1   2/2     Running   0          59s
test-pool-1-2   2/2     Running   0          59s
test-pool-1-3   2/2     Running   0          59s

Console screenshot (image omitted)

Cleanup

kubectl delete ns myminio

Test 4

kubectl delete ns myminio
kubectl create ns myminio
# Create PVs for pool-0 and pool-1 - See yaml below
kubectl -n myminio delete pv/pv-k8s-0-test-pool-{0..1}-{0..3}
kubectl apply -f ~/pv.yml
~/operator/kubectl-minio/kubectl-minio tenant create test --namespace myminio --servers 4 --volumes 4 --capacity 4Gi --disable-tls --pool pool-0

Since volumes are mounted as root by default, on each k8s worker node run

sudo chgrp -R ubuntu /opt/local-path-provisioner/pvc-myminio_0-test-pool-*
sudo chown -R ubuntu /opt/local-path-provisioner/pvc-myminio_0-test-pool-*
sudo chmod -R 777 /opt/local-path-provisioner/pvc-myminio_0-test-pool-*
ls -la /opt/local-path-provisioner/

On the k8s control plane node run

kubectl -n myminio delete pod/test-pool-0-{0..3}

Continue

~/operator/kubectl-minio/kubectl-minio tenant expand test --pool pool-1 --servers 4 --volumes 4 --capacity 4Gi --namespace myminio
~/operator/kubectl-minio/kubectl-minio tenant info test

Output

Tenant 'test', Namespace 'myminio', Total capacity 8.0 GiB

Current status: Restarting MinIO
MinIO version: minio/minio:RELEASE.2023-11-15T20-43-25Z
MinIO service: minio/ClusterIP (port 80)
Console service: test-console/ClusterIP (port 9090)

POOL	SERVERS	VOLUMES PER SERVER	VOLUMES	CAPACITY PER VOLUME	CAPACITY 
0   	4      	1                 	4      	1.0 GiB            	4.0 GiB 	
1   	4      	1                 	4      	1.0 GiB            	4.0 GiB 	

MinIO Root User Credentials:
MINIO_ROOT_USER="0NWWXMBKG0K3WF445F53"
MINIO_ROOT_PASSWORD="FDBDeYl1G19DuV1e01CDS4mTkjmBmDHHGKTnSCMV"

Since volumes are mounted as root by default, on each k8s worker node run

sudo chgrp -R ubuntu /opt/local-path-provisioner/pvc-myminio_0-test-pool-*
sudo chown -R ubuntu /opt/local-path-provisioner/pvc-myminio_0-test-pool-*
sudo chmod -R 777 /opt/local-path-provisioner/pvc-myminio_0-test-pool-*
ls -la /opt/local-path-provisioner/

On the k8s control plane node run

kubectl -n myminio delete pod/test-pool-1-{0..3}

Observe pods are all scheduled

kubectl -n myminio get pods

Output

NAME            READY   STATUS    RESTARTS   AGE
test-pool-0-0   2/2     Running   0          11m
test-pool-0-1   2/2     Running   0          11m
test-pool-0-2   2/2     Running   0          11m
test-pool-0-3   2/2     Running   0          11m
test-pool-1-0   2/2     Running   0          24s
test-pool-1-1   2/2     Running   0          24s
test-pool-1-2   2/2     Running   0          24s
test-pool-1-3   2/2     Running   0          24s

Console screenshot (image omitted)

Cleanup

kubectl delete ns myminio

Supporting scripts

https://github.com/allanrogerr/public/blob/main/minio/supporting-scripts/pv.yml
https://github.com/allanrogerr/public/blob/main/minio/supporting-scripts/pvc.yml
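
The linked pv.yml is not reproduced here. As a rough sketch only, each of the eight PVs referenced in the tests (pv-k8s-0-test-pool-{0..1}-{0..3}) is a small volume along the lines of the following; the name, path, capacity and storage class below are illustrative assumptions, so refer to the linked file for the exact manifest:

# Hypothetical single-PV example; repeat/adjust per pool and server as in pv.yml
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-k8s-0-test-pool-0-0
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-path
  hostPath:
    path: /opt/local-path-provisioner/pv-k8s-0-test-pool-0-0
EOF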
