
Missing pod metrics on Docker 1.18 (kubeadm K8s version 1.19) #581

Closed
christianri opened this issue Sep 1, 2020 · 4 comments

Comments

@christianri

Steps to reproduce:

  • Deploy metrics server to a Kubernetes 1.19 cluster
  • Adjust the log level; I use --v=4
  • To suit my cluster, I had to add --kubelet-preferred-address-types=InternalIP and --kubelet-insecure-tls to the args of the deployment (a sketch of these flags is shown after this list)
  • Check the logs of the server pod when running kubectl top nodes and kubectl top pods
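For reference, a minimal sketch of the steps above (not part of the original report; it assumes the standard components.yaml layout with a single container named metrics-server whose args array already exists):

kubectl -n kube-system patch deployment metrics-server --type=json -p='[
  {"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--v=4"},
  {"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-preferred-address-types=InternalIP"},
  {"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-insecure-tls"}
]'

kubectl top nodes
kubectl top pods
kubectl -n kube-system logs deploy/metrics-server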

What happened:

  • kubectl top nodes works fine and as expected.
  • kubectl top pods returns an error:
W0901 14:39:23.635213   27629 top_pod.go:265] Metrics not available for pod kube-system/calico-kube-controllers-647f44cd54-5b488, age: 111h4m23.635187s
error: Metrics not available for pod kube-system/calico-kube-controllers-647f44cd54-5b488, age: 111h4m23.635187s
  • The metrics-server log shows the nodes being scraped, but no pods found:
I0901 12:40:43.943923       1 manager.go:148] ScrapeMetrics: time: 63.912104ms, nodes: 3, pods: 0
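A hedged way to confirm what the Metrics API itself returns, independent of kubectl top (not in the original report; requires jq):

kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes | jq '.items | length'   # node metrics are served
kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods | jq '.items | length'    # expected to print 0 on the affected cluster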

What you expected to happen:

  • I have the exact same deployment (with one more worker node) on Kubernetes 1.18, where kubectl top pods works as expected.
  • The metrics-server log on the 1.18 cluster shows the pods being scraped:
I0901 11:22:50.437123       1 manager.go:148] ScrapeMetrics: time: 84.311507ms, nodes: 4, pods: 56

Anything else we need to know?:

Environment:

  • Kubernetes distribution (GKE, EKS, Kubeadm, the hard way, etc.): kubeadm
  • Container Network Setup (flannel, calico, etc.): calico
  • Kubernetes version (use kubectl version): 1.19
  • Metrics Server manifest: v0.3.7
  • Kubelet config: standard kubeadm config, no new options
  • Metrics Server logs: (see above)

/kind bug

@serathius
Contributor

Looks like a duplicate of kubernetes/kubernetes#94281

@christianri
Author

christianri commented Sep 1, 2020

The Kubernetes 1.18 cluster where it works has the following docker version:

Client:
 Version:           18.09.1
 API version:       1.39
 Go version:        go1.11.6
 Git commit:        4c52b90
 Built:             Sun, 14 Jun 2020 22:12:29 +0200
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          18.09.1
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.11.6
  Git commit:       4c52b90
  Built:            Sun Jun 14 20:12:29 2020
  OS/Arch:          linux/amd64
  Experimental:     false

The cluster where it does not work has the exact same Docker version. Both hosts run Debian 10 Buster.

@christianri
Author

christianri commented Sep 1, 2020

I can confirm that upgrading to Docker 19.03.12 resolves the issue.
Feel free to close, as the Kubernetes documentation states: [Docker] Version 19.03.11 is recommended.
Source: https://kubernetes.io/docs/setup/production-environment/container-runtimes/
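For anyone hitting the same thing, a rough upgrade sketch (an assumption on my part based on Docker's Debian install docs, not steps from this thread; drain the node first on a real cluster):

sudo apt-get update
apt-cache madison docker-ce                                        # list the available 19.03.x package versions
sudo apt-get install --only-upgrade docker-ce docker-ce-cli containerd.io
sudo systemctl restart kubelet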

@serathius serathius changed the title Kubernetes 1.19 Pod metrics not scraped Missing pod metrics on Docker 1.18 (kubeadm K8s version 1.19) Sep 1, 2020
@serathius
Contributor

Thank you for confirming
