What steps did you take and what happened:
I followed the steps in https://github.com/vmware-tanzu/cluster-api-provider-bringyourownhost/blob/main/docs/getting_started.md on a vanilla Ubuntu 20.04 host, with kind and Docker running on the same machine.
root@instance-11:~/cluster-api-provider-bringyourownhost# k delete cluster byoh-cluster
cluster.cluster.x-k8s.io "byoh-cluster" deleted
root@instance-11:~/cluster-api-provider-bringyourownhost# k get k8sinstallerconfigtemplates
NAME                         AGE
byoh-cluster-control-plane   7m30s
byoh-cluster-md-0            7m30s
root@instance-11:~/cluster-api-provider-bringyourownhost# kubectl apply -f cluster.yaml
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/byoh-cluster-md-0 created
cluster.cluster.x-k8s.io/byoh-cluster created
machinedeployment.cluster.x-k8s.io/byoh-cluster-md-0 created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/byoh-cluster-control-plane created
byocluster.infrastructure.cluster.x-k8s.io/byoh-cluster created
byomachinetemplate.infrastructure.cluster.x-k8s.io/byoh-cluster-control-plane created
byomachinetemplate.infrastructure.cluster.x-k8s.io/byoh-cluster-md-0 created
k8sinstallerconfigtemplate.infrastructure.cluster.x-k8s.io/byoh-cluster-control-plane unchanged
k8sinstallerconfigtemplate.infrastructure.cluster.x-k8s.io/byoh-cluster-md-0 unchanged
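One way to see why the templates survive the cluster deletion is to check whether they carry any ownerReferences back to the cluster (the object name below is taken from the output above; the empty-result interpretation is my understanding of how the garbage collector behaves, not something this repo documents):

```shell
# Print the ownerReferences of a surviving template. An empty result
# means nothing ties the template to the deleted cluster, so the
# Kubernetes garbage collector has no reason to remove it.
kubectl get k8sinstallerconfigtemplate byoh-cluster-md-0 \
  -o jsonpath='{.metadata.ownerReferences}'
```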
What did you expect to happen:
I expected the following two k8sinstallerconfigtemplates to be deleted along with their cluster:
root@instance-11:~/cluster-api-provider-bringyourownhost# k get k8sinstallerconfigtemplates
NAME                         AGE
byoh-cluster-control-plane   7m30s
byoh-cluster-md-0            7m30s
What is the point of keeping them after their cluster has already been deleted?
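For context, and as my understanding rather than code from this repo: the Kubernetes garbage collector only cascades a delete to objects whose metadata.ownerReferences point at the deleted owner. A minimal sketch of that rule (object shapes simplified; names are hypothetical):

```python
# Illustrative model of Kubernetes cascading deletion: an object is
# garbage-collected only if one of its ownerReferences matches the
# deleted owner's UID. This is a simplification, not the real API
# machinery.
def surviving_objects(objects, deleted_owner_uid):
    """Return the objects that survive deletion of the given owner:
    anything without an ownerReference to that UID is kept."""
    return [
        obj for obj in objects
        if deleted_owner_uid not in obj.get("ownerReferences", [])
    ]

# The two templates carry no ownerReference to the cluster, so they
# survive; a hypothetical object owned by the cluster is collected.
objs = [
    {"name": "byoh-cluster-control-plane", "ownerReferences": []},
    {"name": "byoh-cluster-md-0", "ownerReferences": []},
    {"name": "owned-object", "ownerReferences": ["cluster-uid-123"]},
]
survivors = surviving_objects(objs, "cluster-uid-123")
print([o["name"] for o in survivors])
```

Under this model, fixing the report would amount to the controller setting an ownerReference (to the cluster or another owned object) on each K8sInstallerConfigTemplate it creates.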
Anything else you would like to add:
None
Environment:
Cluster-api-provider-bringyourownhost version:
root@instance-11:~# clusterctl init --infrastructure byoh
Fetching providers
Installing cert-manager Version="v1.12.2"
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v1.4.4" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v1.4.4" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v1.4.4" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-byoh" Version="v0.4.0" TargetNamespace="byoh-system"
Kubernetes version (use kubectl version --short): v1.25.11
OS (e.g. from /etc/os-release): Ubuntu 20.04.6 LTS (Focal Fossa)