Environmental Info:
K3s Version:
root@k3s01:~# k3s -v
k3s version v1.30.5+k3s1 (9b58670)
go version go1.22.6
Node(s) CPU architecture, OS, and Version:
Linux k3s01 6.1.21-v8+ #1642 SMP PREEMPT Mon Apr 3 17:24:16 BST 2023 aarch64 GNU/Linux
Cluster Configuration:
1 Master, 3 Workers
Describe the bug:
I am trying to run Istio in K3s.
I followed the instructions on the Istio side and set cniConfDir and cniBinDir as suggested (roughly the install command sketched under Steps To Reproduce below). When I try to scale any pod after Istio is deployed, I always get errors regarding the istio-cni plugin, for example:
2s Warning FailedKillPod pod/sealed-secrets-controller-6d56c6b9d9-4z9c9 error killing pod: failed to "KillPodSandbox" for "075856a2-c6a5-4b78-b37f-0bf9e18cfa14" with KillPodSandboxError: "rpc error: code = Unknown desc = failed to destroy network for sandbox \"800a6b83b2f3fb60236d4c213c05a89878f5e298df9257981af807c2818cc1c8\": plugin type=\"istio-cni\" name=\"istio-cni\" failed (delete): failed to find plugin \"istio-cni\" in path [/var/lib/rancher/k3s/data/6e93143ae43124867ba621eb24d033598be45e08808187400be7534906e8f180/bin]"
or when scheduling a pod:
3m29s Warning FailedCreatePodSandBox pod/argocd-dex-server-545985657c-dxx6m Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "3ff6cc434138ac433689a6d38490140e48d3b84bd9f30f0ff7c0e1c23ccd9596": plugin type="istio-cni" name="istio-cni" failed (add): failed to find plugin "istio-cni" in path [/var/lib/rancher/k3s/data/6e93143ae43124867ba621eb24d033598be45e08808187400be7534906e8f180/bin]
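These warnings are taken from the cluster events; roughly how I pull them (the grep is only there to narrow the output to the istio-cni failures):
kubectl get events -A --sort-by=.lastTimestamp | grep istio-cni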
When I SSH into the node that runs the pod, I see this (note the path from the argocd-dex pod above):
ll /var/lib/rancher/k3s/data/6e93143ae43124867ba621eb24d033598be45e08808187400be7534906e8f180/bin | grep istio
-- nothing here --
If I look up the current symlink (after deploying istio-cni):
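Roughly this check; based on what I see, current resolves to a different hash directory than the one in the errors above:
# where does the current data directory point to on this node?
readlink -f /var/lib/rancher/k3s/data/current
# istio-cni shows up under that directory (assumption: the DaemonSet used my cniBinDir setting)
ls -l /var/lib/rancher/k3s/data/current/bin/ | grep istio-cni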
I can see that the DaemonSet deployed istio-cni onto the node, but the pods try to access istio-cni on the old path. Because of that, all pods I try to delete hang in Terminating state unless I force-delete them.
Any idea here?
Workaround: good ol' cp :D
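A minimal sketch of that workaround, assuming the DaemonSet dropped the binary under the current data directory and it only needs to be copied into the hashed path from the errors:
# copy the plugin into the directory containerd is actually searching (the old hashed path from the error messages)
cp /var/lib/rancher/k3s/data/current/bin/istio-cni \
  /var/lib/rancher/k3s/data/6e93143ae43124867ba621eb24d033598be45e08808187400be7534906e8f180/bin/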
With that copy in place, pods get scheduled and can be terminated again. None of my workloads are Istio-enabled right now, but it seems that istio-cni already plays a role; I am not sure, I am pretty new to Istio.
I am using DietPi as the OS, by the way, so a somewhat stripped-down Debian.
Steps To Reproduce:
Install K3s
Install Istio in ambient mode (roughly the command sketched after these steps)
Schedule or delete any pod
Observe the errors above
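Roughly the install from step 2; the ambient profile and the two CNI values are what the Istio instructions for K3s describe, but treat the exact paths here as my reconstruction rather than a verbatim copy of what I ran:
istioctl install --set profile=ambient \
  --set values.cni.cniConfDir=/var/lib/rancher/k3s/agent/etc/cni/net.d \
  --set values.cni.cniBinDir=/var/lib/rancher/k3s/data/current/bin  # bin dir behind the current symlink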
Expected behavior:
Pods can be scheduled/terminated
Actual behavior:
see above
Additional context / logs: