# addon controller
E0829 14:21:56.708112 1 clusterproxy.go:65] "BuildConfigFromFlags" err="error loading config file \"/tmp/kubeconfig1256201832\": couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string \"json:\\\"apiVersion,omitempty\\\"\"; Kind string \"json:\\\"kind,omitempty\\\"\" }" cluster="default/capi-eks"
I0829 14:21:56.708293 1 resourcesummary_collection.go:55] "failed to collect ResourceSummaries from cluster: default/capi-eks BuildConfigFromFlags: error loading config file \"/tmp/kubeconfig1256201832\": couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string \"json:\\\"apiVersion,omitempty\\\"\"; Kind string \"json:\\\"kind,omitempty\\\"\" }"
E0829 14:22:06.721660 1 clusterproxy.go:65] "BuildConfigFromFlags" err="open /tmp/token-file: no such file or directory" cluster="default/capi-eks"
I0829 14:22:06.721839 1 resourcesummary_collection.go:55] "failed to collect ResourceSummaries from cluster: default/capi-eks BuildConfigFromFlags: open /tmp/token-file: no such file or directory"
E0829 14:22:16.738122 1 clusterproxy.go:65] "BuildConfigFromFlags" err="open /tmp/token-file: no such file or directory" cluster="default/capi-eks"
I0829 14:22:16.738204 1 resourcesummary_collection.go:55] "failed to collect ResourceSummaries from cluster: default/capi-eks BuildConfigFromFlags: open /tmp/token-file: no such file or directory"
E0829 14:22:26.752496 1 clusterproxy.go:65] "BuildConfigFromFlags" err="open /tmp/token-file: no such file or directory" cluster="default/capi-eks"
I0829 14:22:26.752795 1 resourcesummary_collection.go:55] "failed to collect ResourceSummaries from cluster: default/capi-eks BuildConfigFromFlags: open /tmp/token-file: no such file or directory"
I0829 14:22:36.110929 1 main.go:562] "memory stats" logger="memory-usage" Alloc (MiB)=31 TotalAlloc (MiB)=142 Sys (MiB)=63 NumGC=19
I0829 14:22:42.079536 1 reflector.go:808] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:232: Watch close - *v1beta1.Machine total
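The first error above suggests the value the controller is reading is not a parseable kubeconfig at all. As a quick sanity check, you can decode the CAPI-generated kubeconfig secret on the management cluster and try to use it directly. The `capi-eks-kubeconfig` secret name and the `value` data key below are assumptions based on the usual CAPI conventions for a cluster named `capi-eks` in the `default` namespace; adjust them if yours differ:

```bash
# Decode the kubeconfig stored in the CAPI-generated secret
# (secret name and "value" key are assumed from CAPI conventions)
kubectl get secret -n default capi-eks-kubeconfig \
  -o jsonpath='{.data.value}' | base64 -d > /tmp/capi-eks.kubeconfig

# If this does not parse or cannot reach the cluster, Sveltos will fail the same way
kubectl --kubeconfig /tmp/capi-eks.kubeconfig config view
kubectl --kubeconfig /tmp/capi-eks.kubeconfig get nodes
```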
# show addons is blank
sveltosctl show addons
+------------------+---------------+-----------+----------------+---------+-------------------------------+------------------------+
| CLUSTER | RESOURCE TYPE | NAMESPACE | NAME | VERSION | TIME | PROFILES |
+------------------+---------------+-----------+----------------+---------+-------------------------------+------------------------+
+------------------+---------------+-----------+----------------+---------+-------------------------------+------------------------+
# managed cluster status shows failed
Status:
Dependencies: no dependencies
Feature Summaries:
Failure Message: Kubernetes cluster unreachable: open /tmp/token-file: no such file or directory
Feature ID: Helm
Hash: l5DI7wv7fHhZjngMV9n9zKGtfATb2A35ABJ/9uej0bA=
Last Applied Time: 2024-08-29T15:03:25Z
Status: Failed
Helm Release Summaries:
Release Name: podinfo-latest
Release Namespace: podinfo
Status: Managing
Events: <none>
CAPI kubeconfigs
There are 2 kubeconfig secrets created for a workload cluster, and it appears Sveltos uses the first one; that secret has multiple key/value (kv) entries, which is causing the issue. Since only one kv entry in the secret holds the kubeconfig, I suspect Sveltos is reading the wrong entry. This causes the errors seen in the logs and the inconsistent behavior.
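To confirm which kubeconfig secrets exist and which data keys each one carries, you can inspect them on the management cluster. The secret names below are assumptions based on the usual CAPI/CAPA naming for a cluster called `capi-eks` in the `default` namespace (`<cluster>-kubeconfig` and `<cluster>-user-kubeconfig`); adjust them to what you actually see:

```bash
# List the kubeconfig secrets created for the workload cluster
kubectl get secrets -n default | grep -i kubeconfig

# Show the data keys each secret carries; the one Sveltos picks should hold
# the kubeconfig under a single expected key
kubectl get secret -n default capi-eks-kubeconfig -o jsonpath='{.data}' | jq 'keys'
kubectl get secret -n default capi-eks-user-kubeconfig -o jsonpath='{.data}' | jq 'keys'
```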
Trying to deploy various addons to a managed EKS cluster created via CAPI produces inconsistent results.
sveltos version - 0.36.0
cluster-api version - v1.8.1
infrastructure-aws version - v2.6.1
SLACK DISCUSSION/TROUBLESHOOTING
kubectl apply -f capi-eks.yaml
As noted above, this creates two kubeconfig secrets for the workload cluster, and Sveltos appears to pick the first one, which has multiple kv entries.
Registering the cluster explicitly instead results in a secret with a single kv holding the kubeconfig for the workload cluster, which Sveltos then uses for connectivity:
sveltosctl register cluster --namespace=default --cluster=kingston --fleet-cluster-context=arn:aws:eks:us-east-1:XXXXXXXXXX:cluster/default_capi-eks-control-plane --labels=env=prod,region=us
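For comparison, after the explicit registration you can verify the SveltosCluster and the secret it references. The `kingston-sveltos-kubeconfig` name below is an assumption based on how sveltosctl typically names the secret for a cluster registered as `kingston`; adjust if yours differs:

```bash
# Confirm the registered cluster exists and is marked ready
kubectl get sveltosclusters -n default

# The secret created by "sveltosctl register cluster" should contain a single
# data key holding the kubeconfig (secret name assumed)
kubectl get secret -n default kingston-sveltos-kubeconfig -o jsonpath='{.data}' | jq 'keys'
```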