Crash loop: Failed to generate AWS EC2 Instance Types: unable to load EC2 Instance Type list #4036

Closed
fitchtech opened this issue Apr 26, 2021 · 9 comments
Labels
area/cluster-autoscaler, kind/bug (Categorizes issue or PR as related to a bug.), lifecycle/rotten (Denotes an issue or PR that has aged beyond stale and will be auto-closed.)

Comments

@fitchtech

I0426 16:22:12.584855 1 flags.go:52] FLAG: --add-dir-header="false"
I0426 16:22:12.584904 1 flags.go:52] FLAG: --address=":8085"
I0426 16:22:12.584910 1 flags.go:52] FLAG: --alsologtostderr="false"
I0426 16:22:12.584915 1 flags.go:52] FLAG: --aws-use-static-instance-list="false"
I0426 16:22:12.584919 1 flags.go:52] FLAG: --balance-similar-node-groups="false"
I0426 16:22:12.584923 1 flags.go:52] FLAG: --balancing-ignore-label="[]"
I0426 16:22:12.584928 1 flags.go:52] FLAG: --cloud-config=""
I0426 16:22:12.584936 1 flags.go:52] FLAG: --cloud-provider="aws"
I0426 16:22:12.584941 1 flags.go:52] FLAG: --cloud-provider-gce-l7lb-src-cidrs="130.211.0.0/22,35.191.0.0/16"
I0426 16:22:12.584947 1 flags.go:52] FLAG: --cloud-provider-gce-lb-src-cidrs="130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16"
I0426 16:22:12.584953 1 flags.go:52] FLAG: --cluster-name=""
I0426 16:22:12.584963 1 flags.go:52] FLAG: --clusterapi-cloud-config-authoritative="false"
I0426 16:22:12.584968 1 flags.go:52] FLAG: --cores-total="0:320000"
I0426 16:22:12.584972 1 flags.go:52] FLAG: --estimator="binpacking"
I0426 16:22:12.584995 1 flags.go:52] FLAG: --expander="random"
I0426 16:22:12.585000 1 flags.go:52] FLAG: --expendable-pods-priority-cutoff="-10"
I0426 16:22:12.585005 1 flags.go:52] FLAG: --gpu-total="[]"
I0426 16:22:12.585010 1 flags.go:52] FLAG: --ignore-daemonsets-utilization="false"
I0426 16:22:12.585014 1 flags.go:52] FLAG: --ignore-mirror-pods-utilization="false"
I0426 16:22:12.585018 1 flags.go:52] FLAG: --ignore-taint="[]"
I0426 16:22:12.585022 1 flags.go:52] FLAG: --kubeconfig=""
I0426 16:22:12.585026 1 flags.go:52] FLAG: --kubernetes=""
I0426 16:22:12.585030 1 flags.go:52] FLAG: --leader-elect="true"
I0426 16:22:12.585037 1 flags.go:52] FLAG: --leader-elect-lease-duration="15s"
I0426 16:22:12.585043 1 flags.go:52] FLAG: --leader-elect-renew-deadline="10s"
I0426 16:22:12.585047 1 flags.go:52] FLAG: --leader-elect-resource-lock="leases"
I0426 16:22:12.585071 1 flags.go:52] FLAG: --leader-elect-resource-name=""
I0426 16:22:12.585078 1 flags.go:52] FLAG: --leader-elect-resource-namespace=""
I0426 16:22:12.585082 1 flags.go:52] FLAG: --leader-elect-retry-period="2s"
I0426 16:22:12.585087 1 flags.go:52] FLAG: --log-backtrace-at=":0"
I0426 16:22:12.585095 1 flags.go:52] FLAG: --log-dir=""
I0426 16:22:12.585100 1 flags.go:52] FLAG: --log-file=""
I0426 16:22:12.585105 1 flags.go:52] FLAG: --log-file-max-size="1800"
I0426 16:22:12.585109 1 flags.go:52] FLAG: --logtostderr="true"
I0426 16:22:12.585114 1 flags.go:52] FLAG: --max-autoprovisioned-node-group-count="15"
I0426 16:22:12.585118 1 flags.go:52] FLAG: --max-bulk-soft-taint-count="10"
I0426 16:22:12.585122 1 flags.go:52] FLAG: --max-bulk-soft-taint-time="3s"
I0426 16:22:12.585126 1 flags.go:52] FLAG: --max-empty-bulk-delete="10"
I0426 16:22:12.585130 1 flags.go:52] FLAG: --max-failing-time="15m0s"
I0426 16:22:12.585160 1 flags.go:52] FLAG: --max-graceful-termination-sec="600"
I0426 16:22:12.585166 1 flags.go:52] FLAG: --max-inactivity="10m0s"
I0426 16:22:12.585172 1 flags.go:52] FLAG: --max-node-provision-time="15m0s"
I0426 16:22:12.585177 1 flags.go:52] FLAG: --max-nodes-total="0"
I0426 16:22:12.585182 1 flags.go:52] FLAG: --max-total-unready-percentage="45"
I0426 16:22:12.585188 1 flags.go:52] FLAG: --memory-total="0:6400000"
I0426 16:22:12.585193 1 flags.go:52] FLAG: --min-replica-count="0"
I0426 16:22:12.585197 1 flags.go:52] FLAG: --namespace="kube-system"
I0426 16:22:12.585202 1 flags.go:52] FLAG: --new-pod-scale-up-delay="0s"
I0426 16:22:12.585206 1 flags.go:52] FLAG: --node-autoprovisioning-enabled="false"
I0426 16:22:12.585211 1 flags.go:52] FLAG: --node-deletion-delay-timeout="2m0s"
I0426 16:22:12.585216 1 flags.go:52] FLAG: --node-group-auto-discovery="[asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/spireon-dev-eks]"
I0426 16:22:12.585251 1 flags.go:52] FLAG: --nodes="[]"
I0426 16:22:12.585257 1 flags.go:52] FLAG: --ok-total-unready-count="3"
I0426 16:22:12.585261 1 flags.go:52] FLAG: --one-output="false"
I0426 16:22:12.585266 1 flags.go:52] FLAG: --profiling="false"
I0426 16:22:12.585270 1 flags.go:52] FLAG: --regional="false"
I0426 16:22:12.585275 1 flags.go:52] FLAG: --scale-down-candidates-pool-min-count="50"
I0426 16:22:12.585280 1 flags.go:52] FLAG: --scale-down-candidates-pool-ratio="0.1"
I0426 16:22:12.585285 1 flags.go:52] FLAG: --scale-down-delay-after-add="10m0s"
I0426 16:22:12.585290 1 flags.go:52] FLAG: --scale-down-delay-after-delete="0s"
I0426 16:22:12.585295 1 flags.go:52] FLAG: --scale-down-delay-after-failure="3m0s"
I0426 16:22:12.585314 1 flags.go:52] FLAG: --scale-down-enabled="true"
I0426 16:22:12.585322 1 flags.go:52] FLAG: --scale-down-gpu-utilization-threshold="0.5"
I0426 16:22:12.585328 1 flags.go:52] FLAG: --scale-down-non-empty-candidates-count="30"
I0426 16:22:12.585341 1 flags.go:52] FLAG: --scale-down-unneeded-time="10m0s"
I0426 16:22:12.585347 1 flags.go:52] FLAG: --scale-down-unready-time="20m0s"
I0426 16:22:12.585352 1 flags.go:52] FLAG: --scale-down-utilization-threshold="0.5"
I0426 16:22:12.585357 1 flags.go:52] FLAG: --scale-up-from-zero="true"
I0426 16:22:12.585362 1 flags.go:52] FLAG: --scan-interval="10s"
I0426 16:22:12.585367 1 flags.go:52] FLAG: --skip-headers="false"
I0426 16:22:12.585372 1 flags.go:52] FLAG: --skip-log-headers="false"
I0426 16:22:12.585376 1 flags.go:52] FLAG: --skip-nodes-with-local-storage="true"
I0426 16:22:12.585381 1 flags.go:52] FLAG: --skip-nodes-with-system-pods="true"
I0426 16:22:12.585385 1 flags.go:52] FLAG: --stderrthreshold="0"
I0426 16:22:12.585389 1 flags.go:52] FLAG: --unremovable-node-recheck-timeout="5m0s"
I0426 16:22:12.585394 1 flags.go:52] FLAG: --v="4"
I0426 16:22:12.585399 1 flags.go:52] FLAG: --vmodule=""
I0426 16:22:12.585420 1 flags.go:52] FLAG: --write-status-configmap="true"
I0426 16:22:12.585434 1 main.go:379] Cluster Autoscaler 1.20.0
I0426 16:22:12.600957 1 leaderelection.go:243] attempting to acquire leader lease kube-system/cluster-autoscaler...
I0426 16:22:12.612546 1 leaderelection.go:253] successfully acquired lease kube-system/cluster-autoscaler
I0426 16:22:12.612821 1 event_sink_logging_wrapper.go:48] Event(v1.ObjectReference{Kind:"Lease", Namespace:"kube-system", Name:"cluster-autoscaler", UID:"666af171-4bb5-4e49-bba3-c43bf6ddf18b", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"940803", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' cluster-autoscaler-aws-cluster-autoscaler-chart-7cbc7f8f48hbq9b became leader
I0426 16:22:12.615242 1 reflector.go:219] Starting reflector *v1.Pod (1h0m0s) from k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:212
I0426 16:22:12.615264 1 reflector.go:255] Listing and watching *v1.Pod from k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:212
I0426 16:22:12.615425 1 reflector.go:219] Starting reflector *v1.Node (1h0m0s) from k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:246
I0426 16:22:12.615481 1 reflector.go:255] Listing and watching *v1.Node from k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:246
I0426 16:22:12.615575 1 reflector.go:219] Starting reflector *v1.Node (1h0m0s) from k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:246
I0426 16:22:12.615582 1 reflector.go:255] Listing and watching *v1.Node from k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:246
I0426 16:22:12.615659 1 reflector.go:219] Starting reflector *v1beta1.PodDisruptionBudget (1h0m0s) from k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:309
I0426 16:22:12.615666 1 reflector.go:255] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:309
I0426 16:22:12.615781 1 reflector.go:219] Starting reflector *v1.DaemonSet (1h0m0s) from k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:320
I0426 16:22:12.615917 1 reflector.go:255] Listing and watching *v1.DaemonSet from k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:320
I0426 16:22:12.616054 1 reflector.go:219] Starting reflector *v1.Pod (1h0m0s) from k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:188
I0426 16:22:12.616064 1 reflector.go:255] Listing and watching *v1.Pod from k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:188
I0426 16:22:12.616166 1 reflector.go:219] Starting reflector *v1.ReplicationController (1h0m0s) from k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:329
I0426 16:22:12.616174 1 reflector.go:255] Listing and watching *v1.ReplicationController from k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:329
I0426 16:22:12.616250 1 reflector.go:219] Starting reflector *v1.Job (1h0m0s) from k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:338
I0426 16:22:12.616257 1 reflector.go:255] Listing and watching *v1.Job from k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:338
I0426 16:22:12.616407 1 reflector.go:219] Starting reflector *v1.ReplicaSet (1h0m0s) from k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:347
I0426 16:22:12.616419 1 reflector.go:255] Listing and watching *v1.ReplicaSet from k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:347
I0426 16:22:12.616596 1 reflector.go:219] Starting reflector *v1.StatefulSet (1h0m0s) from k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:356
I0426 16:22:12.616605 1 reflector.go:255] Listing and watching *v1.StatefulSet from k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:356
I0426 16:22:12.679516 1 cloud_provider_builder.go:29] Building aws cloud provider.
I0426 16:22:12.679696 1 aws_util.go:76] fetching https://pricing.us-east-1.amazonaws.com/offers/v1.0/aws/AmazonEC2/current/us-west-2/index.json
I0426 16:22:12.679835 1 reflector.go:219] Starting reflector *v1.StatefulSet (0s) from k8s.io/client-go/informers/factory.go:134
I0426 16:22:12.679851 1 reflector.go:255] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
I0426 16:22:12.680228 1 reflector.go:219] Starting reflector *v1.PersistentVolumeClaim (0s) from k8s.io/client-go/informers/factory.go:134
I0426 16:22:12.680246 1 reflector.go:255] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0426 16:22:12.680719 1 reflector.go:219] Starting reflector *v1.PersistentVolume (0s) from k8s.io/client-go/informers/factory.go:134
I0426 16:22:12.680758 1 reflector.go:255] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I0426 16:22:12.681043 1 reflector.go:219] Starting reflector *v1.StorageClass (0s) from k8s.io/client-go/informers/factory.go:134
I0426 16:22:12.681195 1 reflector.go:219] Starting reflector *v1.ReplicationController (0s) from k8s.io/client-go/informers/factory.go:134
I0426 16:22:12.681215 1 reflector.go:255] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
I0426 16:22:12.681713 1 reflector.go:219] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:134
I0426 16:22:12.681802 1 reflector.go:255] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0426 16:22:12.681900 1 reflector.go:219] Starting reflector *v1.ReplicaSet (0s) from k8s.io/client-go/informers/factory.go:134
I0426 16:22:12.681926 1 reflector.go:255] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I0426 16:22:12.681205 1 reflector.go:255] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I0426 16:22:12.682356 1 reflector.go:219] Starting reflector *v1.CSINode (0s) from k8s.io/client-go/informers/factory.go:134
I0426 16:22:12.682422 1 reflector.go:255] Listing and watching *v1.CSINode from k8s.io/client-go/informers/factory.go:134
I0426 16:22:12.682799 1 reflector.go:219] Starting reflector *v1beta1.PodDisruptionBudget (0s) from k8s.io/client-go/informers/factory.go:134
I0426 16:22:12.682902 1 reflector.go:255] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
I0426 16:22:12.683163 1 reflector.go:219] Starting reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:134
I0426 16:22:12.683178 1 reflector.go:255] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
I0426 16:22:12.683379 1 reflector.go:219] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:134
I0426 16:22:12.683460 1 reflector.go:255] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:134
I0426 16:22:12.815447 1 request.go:591] Throttling request took 133.484723ms, request: GET:https://10.100.0.1:443/api/v1/nodes?limit=500&resourceVersion=0
I0426 16:22:13.015483 1 request.go:591] Throttling request took 332.201605ms, request: GET:https://10.100.0.1:443/api/v1/services?limit=500&resourceVersion=0
I0426 16:22:13.215494 1 request.go:591] Throttling request took 531.901777ms, request: GET:https://10.100.0.1:443/api/v1/pods?limit=500&resourceVersion=0
W0426 16:22:42.680123 1 aws_util.go:79] Error fetching https://pricing.us-east-1.amazonaws.com/offers/v1.0/aws/AmazonEC2/current/us-west-2/index.json skipping...
F0426 16:22:42.680176 1 aws_cloud_provider.go:358] Failed to generate AWS EC2 Instance Types: unable to load EC2 Instance Type list
goroutine 68 [running]:
k8s.io/autoscaler/cluster-autoscaler/vendor/k8s.io/klog/v2.stacks(0xc00012a001, 0xc000724000, 0x8a, 0xed)
/gopath/src/k8s.io/autoscaler/cluster-autoscaler/vendor/k8s.io/klog/v2/klog.go:1026 +0xb8
k8s.io/autoscaler/cluster-autoscaler/vendor/k8s.io/klog/v2.(*loggingT).output(0x6169d60, 0xc000000003, 0x0, 0x0, 0xc0007cae70, 0x608993e, 0x15, 0x166, 0x0)
/gopath/src/k8s.io/autoscaler/cluster-autoscaler/vendor/k8s.io/klog/v2/klog.go:975 +0x1a3
k8s.io/autoscaler/cluster-autoscaler/vendor/k8s.io/klog/v2.(*loggingT).printf(0x6169d60, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x3b3f52c, 0x2d, 0xc00100ef90, 0x1, ...)
/gopath/src/k8s.io/autoscaler/cluster-autoscaler/vendor/k8s.io/klog/v2/klog.go:750 +0x18b
k8s.io/autoscaler/cluster-autoscaler/vendor/k8s.io/klog/v2.Fatalf(...)
/gopath/src/k8s.io/autoscaler/cluster-autoscaler/vendor/k8s.io/klog/v2/klog.go:1502
k8s.io/autoscaler/cluster-autoscaler/cloudprovider/aws.BuildAWS(0xa, 0x3fe0000000000000, 0x3fe0000000000000, 0x8bb2c97000, 0x1176592e000, 0x0, 0x4e200, 0x0, 0x186a0000000000, 0x0, ...)
/gopath/src/k8s.io/autoscaler/cluster-autoscaler/cloudprovider/aws/aws_cloud_provider.go:358 +0x290
k8s.io/autoscaler/cluster-autoscaler/cloudprovider/builder.buildCloudProvider(0xa, 0x3fe0000000000000, 0x3fe0000000000000, 0x8bb2c97000, 0x1176592e000, 0x0, 0x4e200, 0x0, 0x186a0000000000, 0x0, ...)
/gopath/src/k8s.io/autoscaler/cluster-autoscaler/cloudprovider/builder/builder_all.go:63 +0x185
k8s.io/autoscaler/cluster-autoscaler/cloudprovider/builder.NewCloudProvider(0xa, 0x3fe0000000000000, 0x3fe0000000000000, 0x8bb2c97000, 0x1176592e000, 0x0, 0x4e200, 0x0, 0x186a0000000000, 0x0, ...)
/gopath/src/k8s.io/autoscaler/cluster-autoscaler/cloudprovider/builder/cloud_provider_builder.go:45 +0x1e6
k8s.io/autoscaler/cluster-autoscaler/core.initializeDefaultOptions(0xc000fdf770, 0xc0000d2b40, 0x140)
/gopath/src/k8s.io/autoscaler/cluster-autoscaler/core/autoscaler.go:101 +0x2fd
k8s.io/autoscaler/cluster-autoscaler/core.NewAutoscaler(0xa, 0x3fe0000000000000, 0x3fe0000000000000, 0x8bb2c97000, 0x1176592e000, 0x0, 0x4e200, 0x0, 0x186a0000000000, 0x0, ...)
/gopath/src/k8s.io/autoscaler/cluster-autoscaler/core/autoscaler.go:65 +0x43
main.buildAutoscaler(0x0, 0x0, 0x0, 0xc00025efc0)
/gopath/src/k8s.io/autoscaler/cluster-autoscaler/main.go:325 +0x364
main.run(0xc000638820)
/gopath/src/k8s.io/autoscaler/cluster-autoscaler/main.go:331 +0x39
main.main.func2(0x41d0280, 0xc00025fa00)
/gopath/src/k8s.io/autoscaler/cluster-autoscaler/main.go:435 +0x2a
created by k8s.io/autoscaler/cluster-autoscaler/vendor/k8s.io/client-go/tools/leaderelection.(*LeaderElector).Run
/gopath/src/k8s.io/autoscaler/cluster-autoscaler/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:207 +0x113

Using the Helm chart from https://kubernetes.github.io/autoscaler/cluster-autoscaler-chart and image us.gcr.io/k8s-artifacts-prod/autoscaling/cluster-autoscaler:v1.20.0, installed with the following settings:

--set cloudProvider="aws"
--set awsRegion="us-west-2"
--set autoDiscovery.clusterName=""
--set autoDiscovery.enabled=true
--set rbac.create=true
--set rbac.serviceAccount.name="cluster-autoscaler-aws-cluster-autoscaler-chart"
--set rbac.serviceAccount.annotations."eks.amazonaws.com/role-arn"=
--namespace="kube-system"

@fitchtech added the kind/bug label on Apr 26, 2021
@tarapon

tarapon commented May 27, 2021

I have the same issue.

It looks like the autoscaler hits an out-of-memory (OOM) error while fetching the pricing file (in my case it's https://pricing.us-east-1.amazonaws.com/offers/v1.0/aws/AmazonEC2/current/us-west-2/index.json).

The file is currently around 130 MB. As a workaround, I increased the memory limit for the autoscaler deployment to 1Gi.
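
For anyone applying the same workaround with the Helm chart above, a minimal sketch (the resources values path is assumed from the chart's standard values.yaml; release and repo names as in the earlier sketch):

# Sketch: give the autoscaler enough memory to download and parse the ~130 MB
# pricing JSON. Verify the "resources" values path against your chart version.
helm upgrade cluster-autoscaler-aws autoscaler/cluster-autoscaler-chart \
  --namespace kube-system \
  --reuse-values \
  --set resources.requests.memory=512Mi \
  --set resources.limits.memory=1Gi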

@metasim

metasim commented Jun 23, 2021

Anyone know if there's an option to disable reading of the pricing file when the price auto-expander is not being used?

@Slach

Slach commented Jul 12, 2021

--aws-use-static-instance-list=true can be added to the container command in spec.containers of your cluster-autoscaler Deployment; see #3095 for details.
Thanks a lot to @shyiko for explaining the details.
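
For Helm installs like the one above, a sketch of passing that flag through the chart (assuming the chart version in use exposes an extraArgs map; otherwise edit the container command in the Deployment directly, as described):

# Sketch: switch the AWS provider to its built-in static instance type list so
# startup no longer depends on fetching the pricing JSON.
# "extraArgs" is assumed to be the chart's map of additional CLI flags.
helm upgrade cluster-autoscaler-aws autoscaler/cluster-autoscaler-chart \
  --namespace kube-system \
  --reuse-values \
  --set extraArgs.aws-use-static-instance-list=true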

@amitpd

amitpd commented Aug 3, 2021

@Slach I am using KOPS and was trying to install the cluster autoscaler as a KOPS addon; see https://kops.sigs.k8s.io/addons/#cluster-autoscaler.
I don't see an option to pass extra args while installing CA as a KOPS addon. Do you happen to know of any workaround for this?

@Slach

Slach commented Aug 3, 2021

@amitbitcse I don't use kops, but according to the kops source (https://github.com/kubernetes/kops/blob/master/addons/cluster-autoscaler/v1.8.0.yaml#L217-L223),
you would need a way to customize the deployment parameters so the kops autoscaler addon passes the static-instance-list flag; feel free to open a pull request.
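
As a stopgap until the addon manifest exposes this, one could patch the flag into the running Deployment. This is only a sketch: the Deployment name "cluster-autoscaler", the container index, and the use of command rather than args are assumptions based on the manifest linked above, and kops may revert manual edits when it reapplies the addon.

# Sketch: append --aws-use-static-instance-list=true to the autoscaler
# container's command in the kops-managed Deployment. Name, namespace, and
# container index are assumptions; check your manifest first.
kubectl -n kube-system patch deployment cluster-autoscaler \
  --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/command/-","value":"--aws-use-static-instance-list=true"}]'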

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Dec 14, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Jan 13, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
