Velero client side throttling errors not going away even after setting qps and burst values #7895

Closed
akshaysgithub opened this issue Jun 17, 2024 · 3 comments
Labels: Helm (Issues related to Helm charts)

akshaysgithub commented Jun 17, 2024

What steps did you take and what happened:

I0617 02:06:14.947801       1 request.go:690] Waited for 1.045616396s due to client-side throttling, not priority and fairness, request: GET:https://10.103.0.1:443/apis/kyverno.io/v2alpha1?timeout=32s
I0617 02:11:19.801533       1 request.go:690] Waited for 1.045695818s due to client-side throttling, not priority and fairness, request: GET:https://10.103.0.1:443/apis/cloud.google.com/v1beta1?timeout=32s
I0617 02:16:24.654408       1 request.go:690] Waited for 1.046240987s due to client-side throttling, not priority and fairness, request: GET:https://10.103.0.1:443/apis/external.metrics.k8s.io/v1beta1?timeout=32s
I0617 02:21:29.506850       1 request.go:690] Waited for 1.044034906s due to client-side throttling, not priority and fairness, request: GET:https://10.103.0.1:443/apis/keda.sh/v1alpha1?timeout=32s
I0617 02:26:34.359538       1 request.go:690] Waited for 1.045469412s due to client-side throttling, not priority and fairness, request: GET:https://10.103.0.1:443/apis/auto.gke.io/v1?timeout=32s
I0617 02:31:39.213488       1 request.go:690] Waited for 1.045155957s due to client-side throttling, not priority and fairness, request: GET:https://10.103.0.1:443/apis/networking.k8s.io/v1?timeout=32s
I0617 02:36:44.066274       1 request.go:690] Waited for 1.044660792s due to client-side throttling, not priority and fairness, request: GET:https://10.103.0.1:443/apis/extensions.istio.io/v1alpha1?timeout=32s
I0617 02:41:48.919167       1 request.go:690] Waited for 1.046081056s due to client-side throttling, not priority and fairness, request: GET:https://10.103.0.1:443/apis/flowcontrol.apiserver.k8s.io/v1beta3?timeout=32s
I0617 02:46:53.771331       1 request.go:690] Waited for 1.04595507s due to client-side throttling, not priority and fairness, request: GET:https://10.103.0.1:443/apis/warden.gke.io/v1?timeout=32s
I0617 02:51:58.623486       1 request.go:690] Waited for 1.045352614s due to client-side throttling, not priority and fairness, request: GET:https://10.103.0.1:443/apis/autoscaling.x-k8s.io/v1beta1?timeout=32s
I0617 02:57:03.477398       1 request.go:690] Waited for 1.04536967s due to client-side throttling, not priority and fairness, request: GET:https://10.103.0.1:443/apis/networking.gke.io/v1?timeout=32s
I0617 03:02:08.330885       1 request.go:690] Waited for 1.045913965s due to client-side throttling, not priority and fairness, request: GET:https://10.103.0.1:443/apis/nodemanagement.gke.io/v1alpha1?timeout=32s
I0617 03:07:13.183382       1 request.go:690] Waited for 1.045527363s due to client-side throttling, not priority and fairness, request: GET:https://10.103.0.1:443/apis/auto.gke.io/v1?timeout=32s
I0617 03:12:18.037064       1 request.go:690] Waited for 1.04525556s due to client-side throttling, not priority and fairness, request: GET:https://10.103.0.1:443/apis/cloud.google.com/v1beta1?timeout=32s
I0617 03:17:22.889794       1 request.go:690] Waited for 1.046095722s due to client-side throttling, not priority and fairness, request: GET:https://10.103.0.1:443/apis/config.istio.io/v1alpha2?timeout=32s
I0617 03:22:27.742132       1 request.go:690] Waited for 1.04503457s due to client-side throttling, not priority and fairness, request: GET:https://10.103.0.1:443/apis/velero.io/v1?timeout=32s
I0617 03:27:32.596016       1 request.go:690] Waited for 1.045757978s due to client-side throttling, not priority and fairness, request: GET:https://10.103.0.1:443/apis/cloud.google.com/v1?timeout=32s
I0617 03:32:37.44840

What did you expect to happen:
These throttling log messages should not appear once the QPS and burst values have been increased.

The following information will help us better understand what's going on:

If you are using velero v1.7.0+:
Please use velero debug --backup <backupname> --restore <restorename> to generate the support bundle and attach it to this issue; for more options, refer to velero debug --help

If you are using earlier versions:
Please provide the output of the following commands (Pasting long output into a GitHub gist or other pastebin is fine.)

  • kubectl logs deployment/velero -n velero
  • velero backup describe <backupname> or kubectl get backup/<backupname> -n velero -o yaml
  • velero backup logs <backupname>
  • velero restore describe <restorename> or kubectl get restore/<restorename> -n velero -o yaml
  • velero restore logs <restorename>

Anything else you would like to add:

backupsEnabled: true
snapshotsEnabled: false
configuration:
  logFormat: json
  logLevel: warning
  backupStorageLocation:
  - name: gcp
    bucket: BUCKET_NAME
    provider: gcp
    clientBurst: 100
    clientQPS: 75.0
  volumeSnapshotLocation:
  - name: gcp
    provider: gcp
    bucket: BUCKET_NAME
    default: true
    clientBurst: 100
    clientQPS: 75.0
  defaultVolumesToFsBackup: false
  annotations:
    meta.helm.sh/release-namespace: 'velero'
    meta.helm.sh/release-name: 'velero'
credentials:
  existingSecret: SA_NAME
  extraEnvVars: {}
  secretContents: {}
  useSecret: true
initContainers:
- image: velero/velero-plugin-for-gcp:v1.8.0
  imagePullPolicy: IfNotPresent
  name: velero-plugin-for-gcp
  volumeMounts:
  - mountPath: /target
    name: plugins
upgradeCRDs: true
metrics:
  enabled: true
  podAnnotations:
    ad.datadoghq.com/velero.check_names: |
      ["openmetrics"]
    ad.datadoghq.com/velero.init_configs: |
      [{}]
    ad.datadoghq.com/velero.instances: |
      [
        {
          "prometheus_url": "http://%%host%%:8085/metrics",
          "namespace": "platform-tooling",
          "metrics": ["*"]
        }
      ]
resources:
  limits:
    cpu: "2"
    memory: 3Gi
  requests:
    cpu: "1"
    memory: 2Gi
schedules:
  default:
    schedule: 0 */2 * * *
    template:
      ttl: 336h
      storageLocation: gcp      

Environment:

  • Velero version (use velero version): 1.13.0 (Helm chart velero-6.0.0, release revision 8, deployed 2024-05-03)
  • Velero features (use velero client config get features):
  • Kubernetes version (use kubectl version): Client v1.28.3 (Kustomize v5.0.4-0.20230601165947-6ce0bf390ce3), Server v1.28.3-gke.1286000
  • Kubernetes installer & version:
  • Cloud provider or hardware configuration: GCP
  • OS (e.g. from /etc/os-release):

Vote on this issue!

This is an invitation to the Velero community to vote on issues; you can see the project's top-voted issues listed here.
Use the "reaction smiley face" up to the right of this comment to vote.

  • 👍 for "I would like to see this bug fixed as soon as possible"
  • 👎 for "There are more important bugs to focus on right now"
@ywk253100 (Contributor) commented:

Is the configuration the Helm chart values.yaml? It seems you have an incorrect indent before the qps and burst settings.
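For reference, a minimal sketch of the corrected layout (untested, and assuming the vmware-tanzu Velero Helm chart, where clientQPS and clientBurst are expected as top-level keys under configuration rather than per-location settings):

configuration:
  logFormat: json
  logLevel: warning
  # client rate limits belong directly under configuration, not under a
  # backupStorageLocation or volumeSnapshotLocation entry
  clientQPS: 75.0
  clientBurst: 100
  backupStorageLocation:
  - name: gcp
    provider: gcp
    bucket: BUCKET_NAME
  volumeSnapshotLocation:
  - name: gcp
    provider: gcp
    default: true

If the chart picks these values up, they should end up as the --client-qps and --client-burst arguments on the velero server container, which you can verify with something like kubectl -n velero get deploy velero -o jsonpath='{.spec.template.spec.containers[0].args}'.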

@dariodsa commented:

#7806, already investigated; waiting for a solution.

@ywk253100 ywk253100 self-assigned this Jun 24, 2024
@ywk253100 ywk253100 added the Helm Issues related to Helm charts label Jun 24, 2024
@ywk253100 (Contributor) commented Jul 22, 2024

This issue is caused by a configuration error; closing it.
