
error: metrics not available yet #247

Closed
leeruibin opened this issue Apr 15, 2019 · 42 comments

@leeruibin

ENV:
kubernetes: v1.13.2
docker: 18.06.1-ce
system: ubuntu 18.04
architecture: arm-64
metrics-server: v0.3.1

Before applying the 1.8+/ manifests, I tried the command
kubectl top nodes
and got the error:
Error from server (NotFound): the server could not find the requested resource (get services http:heapster:)

After that I downloaded Heapster from https://github.com/kubernetes-retired/heapster and applied its manifests. Then I ran

kubectl apply -f 1.8+/

and checked the pods with

kubectl get pods --all-namespaces

The results are as follows:

NAMESPACE NAME READY STATUS RESTARTS AGE
default back-flask-86cbc75cd9-jdgwr 1/1 Running 0 3h53m
default demo-flask-756f45c879-68fvk 1/1 Running 0 3h53m
kube-system coredns-86c58d9df4-c4bv4 1/1 Running 0 4h16m
kube-system coredns-86c58d9df4-cbdvq 1/1 Running 0 4h16m
kube-system etcd-robbon-vivobook 1/1 Running 0 4h15m
kube-system heapster-f64999bc-2klfs 1/1 Running 0 91s
kube-system kube-apiserver-robbon-vivobook 1/1 Running 0 4h15m
kube-system kube-controller-manager-robbon-vivobook 1/1 Running 0 4h15m
kube-system kube-flannel-ds-amd64-8fxvb 1/1 Running 0 4h13m
kube-system kube-flannel-ds-amd64-xjsk6 1/1 Running 0 4h14m
kube-system kube-proxy-4smxt 1/1 Running 0 4h13m
kube-system kube-proxy-vv5xh 1/1 Running 0 4h16m
kube-system kube-scheduler-robbon-vivobook 1/1 Running 0 4h15m
kube-system kubernetes-dashboard-697f86d999-jk4b7 1/1 Running 0 3h33m
kube-system metrics-server-7d8db6b444-tzgbk 1/1 Running 0 19s
kube-system monitoring-grafana-57cbc5c659-zj6fq 1/1 Running 0 91s
kube-system monitoring-influxdb-7f897d5cc8-c2fsh 1/1 Running 0 91s

But when I run kubectl top nodes to check, I get another error:

error: metrics not available yet

So, what's wrong with this?

@ymrsmns

ymrsmns commented Apr 22, 2019

Run
helm upgrade --install metrics stable/metrics-server --namespace kube-system
Or
kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/monitoring-standalone/v1.7.0.yaml

Also note the deprecation timeline from the Heapster docs:

Removal (Kubernetes 1.13): Heapster will be migrated to the kubernetes-retired organization. No new code will be merged by the Heapster maintainers.
https://github.com/kubernetes-retired/heapster/blob/master/docs/deprecation.md

@yanghaichao12

Does your apiserver have the aggregation layer enabled?

--requestheader-client-ca-file=/etc/kubernetes/certs/proxy-ca.crt
--requestheader-allowed-names=aggregator
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
--proxy-client-cert-file=/etc/kubernetes/certs/proxy.crt
--proxy-client-key-file=/etc/kubernetes/certs/proxy.key

And what do your role bindings look like?

@latchmihay

latchmihay commented May 29, 2019

It's probably the self-signed cert on your cluster.
Try enabling --kubelet-insecure-tls

As in

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: metrics-server
spec:
  template:
    spec:
      containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.3.3
        imagePullPolicy: Always
        args: [ "--kubelet-insecure-tls" ]
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp

@tomsherrod

I had the "error: metrics not available yet" message. @latchmihay's pointer fixed it. Thank you.

@sreedharbukya

I have a similar problem: it says metrics are not available for nodes or pods. It still reports

kubectl top nodes
error: metrics not available yet

Here is my configuration.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.3.3
        args: [ "--kubelet-insecure-tls" ]
        imagePullPolicy: Always
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp

@sreedharbukya

I got it working.

@regardfs

@sreedharbukya I have the same error and tried all the methods mentioned above, but it still doesn't work. Could you post your solution?

@sreedharbukya

sreedharbukya commented Jul 23, 2019

@regardfs, please follow the instructions below.

If you are using kops to create the cluster:

kops edit cluster --name {cluster_name}

and edit the following part:

  kubelet:
    anonymousAuth: false
    authenticationTokenWebhook: true
    authorizationMode: Webhook

Then run the following commands:

kops update cluster --yes
kops rolling-update cluster --yes

Watch your logs. This should resolve the issue.

Next, update your metrics-server deployment to the following.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: ""
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"extensions/v1beta1","kind":"Deployment","metadata":{"annotations":{},"labels":{"k8s-app":"metrics-server"},"name":"metrics-server","namespace":"kube-system"},"spec":{"selector":{"matchLabels":{"k8s-app":"metrics-server"}},"template":{"metadata":{"labels":{"k8s-app":"metrics-server"},"name":"metrics-server"},"spec":{"containers":[{"image":"k8s.gcr.io/metrics-server-amd64:v0.3.3","imagePullPolicy":"Always","name":"metrics-server","volumeMounts":[{"mountPath":"/tmp","name":"tmp-dir"}]}],"serviceAccountName":"metrics-server","volumes":[{"emptyDir":{},"name":"tmp-dir"}]}}}}
  creationTimestamp: "2019-07-19T10:31:07Z"
  generation: 2
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
  resourceVersion: "8715"
  selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/metrics-server
  uid: 5145dcbb-aa10-11e9-bd85-06b142917002
spec:
  progressDeadlineSeconds: 2147483647
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: metrics-server
      name: metrics-server
    spec:
      containers:
      - command:
        - /metrics-server
        - --v=2
        - --kubelet-preferred-address-types=InternalIP
        - --kubelet-insecure-tls=true
        image: k8s.gcr.io/metrics-server-amd64:v0.3.3
        imagePullPolicy: Always
        name: metrics-server
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: metrics-server
      serviceAccountName: metrics-server
      terminationGracePeriodSeconds: 30
      volumes:
      - emptyDir: {}
        name: tmp-dir
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2019-07-19T10:31:07Z"
    lastUpdateTime: "2019-07-19T10:31:07Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 2
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1




@davydkov

davydkov commented Aug 5, 2019

@sreedharbukya Thank you!

I have an AWS EKS Cluster and default installation of metrics server from Helm.
These two parameters solved the problem:

--kubelet-preferred-address-types=InternalIP
--kubelet-insecure-tls=true
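For anyone applying the same two flags through Helm values instead of editing the Deployment by hand, here is a hedged sketch of a values fragment; the `args` key name assumes the stable/metrics-server chart's values layout, so verify it against your chart version:

```yaml
# values.yaml fragment for the stable/metrics-server chart (a sketch;
# the "args" list key is an assumption about this chart's values layout)
args:
  - --kubelet-preferred-address-types=InternalIP
  - --kubelet-insecure-tls=true
```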

@girol

girol commented Aug 29, 2019

As pointed out in the documentation, using --kubelet-insecure-tls=true is not recommended for production environments.

That said, how do we deploy metrics-server securely using tls?
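One hedged sketch of the TLS-secured direction: serve a certificate signed by a CA the apiserver trusts and validate kubelet serving certs instead of disabling verification. The flag names below come from metrics-server's serving and kubelet-client options; verify them with --help on your metrics-server version, and the /certs paths and secret layout are purely illustrative:

```yaml
# Illustrative container fragment (not a complete Deployment); flag names
# should be checked against your metrics-server version's --help output.
containers:
- name: metrics-server
  image: k8s.gcr.io/metrics-server-amd64:v0.3.3
  args:
  - --tls-cert-file=/certs/tls.crt                         # serving cert signed by a trusted CA
  - --tls-private-key-file=/certs/tls.key
  - --kubelet-certificate-authority=/certs/kubelet-ca.crt  # CA used to verify kubelet serving certs
  volumeMounts:
  - name: certs
    mountPath: /certs
    readOnly: true
```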

@wajdi-datalvo

I have the same error here, "metrics not available yet". The pod logs show this message:

I0925 20:15:33.886593 1 serving.go:312] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
I0925 20:15:34.430030 1 secure_serving.go:116] Serving securely on [::]:443
E0925 20:15:46.541160 1 reststorage.go:135] unable to fetch node metrics for node "k8s-worker-1": no metrics known for node

@ledroide

ledroide commented Nov 8, 2019

I'm facing the same problem for nodes, but not for pods. Kubelet insecure TLS is set: no change.

$ kubectl top pod -n default
NAME                                        CPU(cores)   MEMORY(bytes)   
resource-consumer-5766d495c6-bm47z          15m           5Mi             
resource-consumer-5766d495c6-rtkzk           6m           5Mi             
unprivileged-hello-world-658f6f7f49-dq6js    0m           6Mi             
$ kubectl top node
error: metrics not available yet
$ kubectl get deploy metrics-server -o yaml | grep -B6 insecure
      - command:
        - /metrics-server
        - --logtostderr
        - --cert-dir=/tmp
        - --secure-port=8443
        - --kubelet-preferred-address-types=InternalIP
        - --kubelet-insecure-tls

@serathius
Contributor

"no metrics known for node" by itself doesn't point to a problem. More details in #349.

@ledroide Could you provide metrics-server logs?

@ledroide

ledroide commented Nov 8, 2019

@serathius: Here are logs from metrics-server, filtered to lines matching "node": https://gist.github.com/ledroide/0cbc6750cab7d6ae0371b88c97aee44e

Example of what I'm seeing in these logs:

E1105 13:30:06.940698       1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:kube-poc-compute3: unable to get CPU for container "sentinel" in pod webs/webs-sentinel-redis-slave-0 on node "10.150.233.53", discarding data: missing cpu usage metric
E1105 13:44:06.541969       1 manager.go:111] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:kube-poc-compute4: unable to get CPU for container "sentinel" in pod webs/webs-sentinel-redis-slave-0 on node "10.150.233.54", discarding data: missing cpu usage metric, unable to fully scrape metrics from source kubelet_summary:kube-poc-compute3: unable to get CPU for container "metrics" in pod webs/webs-sentinel-redis-master-0 on node "10.150.233.53", discarding data: missing cpu usage metric]

@serathius
Contributor

serathius commented Nov 8, 2019

Can you verify that kubelet is exposing cpu metrics?
kubectl get --raw /api/v1/nodes/kube-poc-compute3/proxy/stats/summary | jq '.node'

You can skip | jq '.node'; it just filters the node data.
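If jq is not available, a plain-shell check of the same field works too. This is a sketch against an inline sample payload that stands in for the live kubectl call:

```shell
# Extract node-level CPU usage from a kubelet stats/summary payload.
# The inline sample stands in for the output of:
#   kubectl get --raw /api/v1/nodes/<node>/proxy/stats/summary
summary='{"node":{"nodeName":"kube-poc-compute3","cpu":{"usageNanoCores":849755531}}}'
printf '%s\n' "$summary" | grep -o '"usageNanoCores":[0-9]*'
```

A nonzero usageNanoCores here is the signal metrics-server needs in order to report node CPU.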

@ledroide

ledroide commented Nov 8, 2019

I have copied the output of the command kubectl get --raw /api/v1/nodes/kube-poc-compute3/proxy/stats/summary | jq '.node' at the end of the same gist.

Output looks good to me.

@serathius
Contributor

Looks like the node exposes UsageNanoCores correctly, and I don't see a log line mentioning broken node metrics (unable to get CPU for node).

Can you verify that kubectl top node still fails?

@ledroide

ledroide commented Nov 8, 2019

Of course, still failing:

$ kubectl top node
error: metrics not available yet

@serathius
Contributor

One last check before I'd need to deep-dive into the code: we need to rule out that the problem is with kubectl top rather than the API.
Please run

kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes/kube-poc-compute3
kubectl version

Are you running Heapster in the cluster?

@ledroide

ledroide commented Nov 8, 2019

I think we've reached the problem here: the nodes list is empty.

$ kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes/
{"kind":"NodeMetricsList","apiVersion":"metrics.k8s.io/v1beta1","metadata":{"selfLink":"/apis/metrics.k8s.io/v1beta1/nodes/"},"items":[]}
$ kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes/kube-poc-compute3
Error from server (NotFound): nodemetrics.metrics.k8s.io "kube-poc-compute3" not found

By the way, I use Kubernetes 1.16.2, and no Heapster:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-17T17:16:09Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:09:08Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}

$ kubectl api-versions | grep metrics
metrics.k8s.io/v1beta1

$ kubectl api-resources -o wide | egrep '^NAME|metrics'
NAME                              SHORTNAMES   APIGROUP                        NAMESPACED   KIND                             VERBS
nodes                                          metrics.k8s.io                  false        NodeMetrics                      [get list]
pods                                           metrics.k8s.io                  true         PodMetrics                       [get list]
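The empty "items" array above is the failure signature, and it can be detected mechanically. A small sketch, with an inline sample standing in for the live kubectl call:

```shell
# An empty "items" array in NodeMetricsList means the Metrics API answers
# but metrics-server has no node data. The sample stands in for:
#   kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes/
nodes='{"kind":"NodeMetricsList","apiVersion":"metrics.k8s.io/v1beta1","items":[]}'
if printf '%s' "$nodes" | grep -q '"items":\[\]'; then
  echo "Metrics API reachable, but no node metrics reported"
fi
```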

@ledroide

ledroide commented Nov 12, 2019

Solved.
I removed metrics-server and reinstalled it (using Kubespray).
It is ugly, but smoother or cleaner methods did not work at all.
Now it works properly. Thank you very much @serathius for your precious help:

$ kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes/kube-poc-compute3
{"kind":"NodeMetrics","apiVersion":"metrics.k8s.io/v1beta1","metadata":{"name":"kube-poc-compute3","selfLink":"/apis/metrics.k8s.io/v1beta1/nodes/kube-poc-compute3","creationTimestamp":"2019-11-12T09:42:21Z"},"timestamp":"2019-11-12T09:41:57Z","window":"30s","usage":{"cpu":"849755531n","memory":"8506004Ki"}}

@neerajjain92

neerajjain92 commented Nov 16, 2019

Solved by following these steps:

  1. Clone https://github.com/kubernetes-sigs/metrics-server
  2. Disable the metrics-server addon for minikube, in case it was enabled:
     minikube addons disable metrics-server
  3. Deploy the latest metrics-server:
     kubectl create -f deploy/1.8+/
  4. Edit the metrics-server deployment to add the flags:
     $ kubectl edit deploy -n kube-system metrics-server
       args:
       - --kubelet-insecure-tls
       - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
  5. Wait for a minute and boom, metrics are available.


@oceaneLonneux

Hello, I have the same problem...
I deleted metrics-server and then re-installed it with the args mentioned by @neerajjain92, but I'm afraid nothing is happening.
1 desired | 1 updated | 1 total | 0 available | 1 unavailable

@illagrenan

I had the same problem with Kubernetes 1.16 installed by kops on AWS. I found that Heapster was still installed on the cluster. When I removed Heapster (it had been installed using Helm), the command kubectl top nodes started to work.

@terencebor

Thanks @latchmihay, your solution works for me.

@Guervyl

Guervyl commented Jan 27, 2020

This happened to me and none of the suggestions above worked, because the problem wasn't metrics. The problem was that ports 443 and 80 were busy; I had an Apache server running. If you have any application running on those ports, try stopping it before anything else.

@ricpf

ricpf commented Feb 8, 2020

> @regardfs, please follow the instructions below. If you are using kops to create the cluster: kops edit cluster --name {cluster_name} … (quoting @sreedharbukya's full kops and metrics-server deployment fix from above)
I didn't have to update the deployment though; --kubelet-insecure-tls and kops update cluster did the trick for me. Thank you!

@mbabauer

> I'm facing the same problem for nodes, but not for pods. Kubelet insecure TLS is set: no change. … (quoting @ledroide's kubectl top and deployment listing from above)

This setup worked for me, except I left the secure port at 4443, which was how it was installed via the deployment script. Now I can get top for both nodes and pods.

@mayquanxi

> It's probably the self-signed cert on your cluster. Try enabling --kubelet-insecure-tls … (quoting @latchmihay's suggestion from above)

I had the same problem, and I fixed it by adding the args as you did. Thank you!

@regardfs

@sreedharbukya thanks a lot!!!

@kevinsingapore

> Of course, still failing: kubectl top node → error: metrics not available yet (quoting @ledroide from above)

Did you solve it?

@serathius
Contributor

@kevinsingapore

I recommend creating a separate issue, as in this issue we never even managed to fix the original reporter's problem.

error: metrics not available yet

signifies that there is a problem with metrics-server, but says nothing about what the problem is, so this issue isn't really helpful for others (e.g. @ledroide just recreated metrics-server).

You could try some of the suggestions from this issue, but it would be better to tackle each setup separately.

@rpolnx

rpolnx commented May 27, 2020

I had a similar problem on minikube running on Windows 10 Home. Your tutorial helped me.
Thank you, @neerajjain92 !

@Rock981119

> Solved by following steps: clone metrics-server, disable the minikube metrics-server addon, deploy the latest metrics-server, and edit the deployment to add the --kubelet-insecure-tls flags… (quoting @neerajjain92's steps from above)

It worked, Thx.

@zhou880220

I had the same problem. This is my config:
[root@k8s-master data]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default whoami-7c8d684c55-vmc9x 1/1 Running 1 20h
kube-system coredns-6d56c8448f-g98fb 1/1 Running 3 2d
kube-system coredns-6d56c8448f-nhmk7 1/1 Running 3 2d
kube-system etcd-k8s-master 1/1 Running 5 2d
kube-system heapster-7fcb4d8889-nm2ln 1/1 Running 0 15h
kube-system kube-apiserver-k8s-master 1/1 Running 5 2d
kube-system kube-controller-manager-k8s-master 1/1 Running 8 47h
kube-system kube-flannel-ds-662nb 1/1 Running 1 21h
kube-system kube-flannel-ds-8njs6 1/1 Running 0 21h
kube-system kube-flannel-ds-d2z4w 1/1 Running 3 21h
kube-system kube-proxy-2blhp 1/1 Running 3 46h
kube-system kube-proxy-6hjpl 1/1 Running 1 47h
kube-system kube-proxy-zsvrh 1/1 Running 4 2d
kube-system kube-scheduler-k8s-master 1/1 Running 8 47h
kube-system monitoring-grafana-6d69444f6-fr9bp 1/1 Running 2 41h
kube-system monitoring-influxdb-64596c7b6d-jzskg 1/1 Running 1 41h
kubernetes-dashboard dashboard-metrics-scraper-7b59f7d4df-4mkbb 1/1 Running 0 16h
kubernetes-dashboard kubernetes-dashboard-665f4c5ff-9v8kf 1/1 Running 0 16h

[root@k8s-master k8s]# kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.2", GitCommit:"f5743093fd1c663cb0cbc89748f730662345d44d", GitTreeState:"clean", BuildDate:"2020-09-16T13:41:02Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.2", GitCommit:"f5743093fd1c663cb0cbc89748f730662345d44d", GitTreeState:"clean", BuildDate:"2020-09-16T13:32:58Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}

[root@k8s-master k8s]# kubectl top po
error: Metrics API not available

heapster logs:

W0924 01:44:25.002300 1 manager.go:152] Failed to get all responses in time (got 0/3)
E0924 01:45:05.001192 1 manager.go:101] Error in scraping containers from kubelet:192.168.28.143:10250: failed to get all container stats from Kubelet URL "https://192.168.28.143:10250/stats/container/": "https://192.168.28.143:10250/stats/container/" not found
E0924 01:45:05.007179 1 manager.go:101] Error in scraping containers from kubelet:192.168.28.141:10250: failed to get all container stats from Kubelet URL "https://192.168.28.141:10250/stats/container/": "https://192.168.28.141:10250/stats/container/" not found
E0924 01:45:05.015769 1 manager.go:101] Error in scraping containers from kubelet:192.168.28.142:10250: failed to get all container stats from Kubelet URL "https://192.168.28.142:10250/stats/container/":

[root@k8s-master data]# netstat -nultp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:31303 0.0.0.0:* LISTEN 126245/kube-proxy
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 1059/kubelet
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 126245/kube-proxy
tcp 0 0 192.168.28.141:2379 0.0.0.0:* LISTEN 126075/etcd
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 126075/etcd
tcp 0 0 192.168.28.141:2380 0.0.0.0:* LISTEN 126075/etcd
tcp 0 0 127.0.0.1:2381 0.0.0.0:* LISTEN 126075/etcd
tcp 0 0 127.0.0.1:10257 0.0.0.0:* LISTEN 15888/kube-controll
tcp 0 0 127.0.0.1:10259 0.0.0.0:* LISTEN 15947/kube-schedule
tcp 0 0 0.0.0.0:30101 0.0.0.0:* LISTEN 126245/kube-proxy
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1056/sshd
tcp 0 0 127.0.0.1:41111 0.0.0.0:* LISTEN 1059/kubelet
tcp 0 0 0.0.0.0:31001 0.0.0.0:* LISTEN 126245/kube-proxy
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1206/master
tcp 0 0 0.0.0.0:30108 0.0.0.0:* LISTEN 126245/kube-proxy
tcp6 0 0 :::10250 :::* LISTEN 1059/kubelet
tcp6 0 0 :::10251 :::* LISTEN 15947/kube-schedule
tcp6 0 0 :::6443 :::* LISTEN 125805/kube-apiserv
tcp6 0 0 :::10252 :::* LISTEN 15888/kube-controll
tcp6 0 0 :::10256 :::* LISTEN 126245/kube-proxy
tcp6 0 0 :::22 :::* LISTEN 1056/sshd
tcp6 0 0 ::1:25 :::* LISTEN 1206/master
udp 0 0 10.244.0.1:123 0.0.0.0:* 838/ntpd
udp 0 0 10.244.0.0:123 0.0.0.0:* 838/ntpd
udp 0 0 172.17.0.1:123 0.0.0.0:* 838/ntpd
udp 0 0 192.168.28.141:123 0.0.0.0:* 838/ntpd
udp 0 0 127.0.0.1:123 0.0.0.0:* 838/ntpd
udp 0 0 0.0.0.0:123 0.0.0.0:* 838/ntpd
udp 0 0 0.0.0.0:8472 0.0.0.0:* -
udp6 0 0 fe80::98d8:4aff:fef:123 :::* 838/ntpd
udp6 0 0 fe80::888f:dfff:feb:123 :::* 838/ntpd
udp6 0 0 fe80::886a:32ff:fed:123 :::* 838/ntpd
udp6 0 0 fe80::c89e:ecff:fed:123 :::* 838/ntpd
udp6 0 0 fe80::20c:29ff:febc:123 :::* 838/ntpd
udp6 0 0 ::1:123 :::* 838/ntpd
udp6 0 0 :::123 :::* 838/ntpd

Is this a namespace problem?

@debu99

debu99 commented Nov 27, 2020

If you use the bitnami metrics-server chart, enable this even on k8s > 1.18:

apiService:
  create: true

@Type1J

Type1J commented Dec 30, 2020

I only needed --kubelet-preferred-address-types=InternalIP to fix it. With the Terraform Helm provider the value must be an array, which uses { and }, with values separated by ,.

resource "helm_release" "metrics-server" {
  create_namespace = true
  namespace        = "platform"
  name             = "metrics-server"
  repository       = "https://charts.helm.sh/stable"
  chart            = "metrics-server"

  set {
    name  = "args"
    value = "{\"--kubelet-preferred-address-types=InternalIP\"}"
  }
}

@kingdonb

kingdonb commented Feb 2, 2021

Edit: this report definitely does not belong here, sorry for contributing to the noise

I was able to resolve it by switching from the bitnami helm chart for metrics-server to a kustomized deploy of metrics-server from this repo, with kustomize manifests very similar to the "test" ones. Thank you for providing this.

I am on kubeadm v1.20.2 with a matching kubectl and I had to set apiService.create: true as @debu99 suggested.

This conflicts with the docs in values.yaml, which may be incorrect:

## API service parameters
##
apiService:
  ## Specifies whether the v1beta1.metrics.k8s.io API service should be created
  ## This should not be necessary in k8s version >= 1.8, but depends on vendors and cloud providers.
  ##
  create: false

Else I ran into error: Metrics API not available

While that's not the subject of this issue report, this issue is one of the top results for "error: Metrics API not available" and it helped me, so I am highlighting it here.

I am not sure if this information belongs here, I'm using bitnami/metrics-server which has a repo of its own in https://github.com/bitnami/charts/ and so I guess the new issue report should go there, if there is a problem.

@varmasravan

varmasravan commented Jul 5, 2021

I was also getting the same problem; running all the YAML files for metrics-server resolved it:

  1. First, clone this git repository: git clone https://github.com/Sonal0409/DevOps_ClassNotes.git
  2. Go to the KUBERNETES folder.
  3. Inside that, go into the 'hpa' folder.
  4. Then go into the 'metric-server' folder.
  5. From this folder, execute all the YAML files with "kubectl create -f ." (note the trailing dot in the command; don't forget it).
  6. After 2 to 3 minutes you can get the metrics of both pods and nodes with "kubectl top nodes" or "kubectl top pods".

@xinfengliu

I have the same issue but a different cause. I enabled cgroup v2 on the nodes; now kubectl top pods works well, but kubectl top nodes gives error: metrics not available yet. After I added --v=2 to the metrics-server command-line args, I saw the messages below in the metrics-server logs:

I0913 13:33:52.271928       1 scraper.go:136] "Scraping node" node="ub2004"
I0913 13:33:52.279175       1 decode.go:71] "Skipped node CPU metric" node="ub2004" err="Got UsageCoreNanoSeconds equal zero"
I0913 13:33:52.279275       1 decode.go:75] "Skipped node memory metric" node="ub2004" err="Got WorkingSetBytes equal zero"
I0913 13:33:52.279309       1 scraper.go:157] "Scrape finished" duration="9.497485ms" nodeCount=0 podCount=6

Is this a known issue?

@b0nete

b0nete commented Sep 18, 2021

> I was able to resolve it by switching from the bitnami helm chart to a kustomized deploy of metrics-server from this repo, and I had to set apiService.create: true as @debu99 suggested… (quoting @kingdonb's comment from above)

Thank you! I'm using minikube 1.22 with Kubernetes 1.21.2 and this was the solution: I enabled creation of the API service in the chart values and it works fine.

minikube version: v1.22.0

NAME       STATUS   ROLES                  AGE     VERSION
minikube   Ready    control-plane,master   5d12h   v1.21.2

@shubhamsaroj

shubhamsaroj commented Sep 18, 2022

If you are installing a somewhat newer version, update the deployment as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server/metrics-server:v0.3.7
        imagePullPolicy: IfNotPresent
        args:
          - --kubelet-insecure-tls
          - --cert-dir=/tmp
          - --secure-port=4443
        ports:
        - name: main-port
          containerPort: 4443
          protocol: TCP
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
      nodeSelector:
        kubernetes.io/os: linux
