
HTTP 429 "Too Many Requests": Ingestion rate limit exceeded for user default-logs #15140

Open
meSATYA opened this issue Nov 27, 2024 · 0 comments


meSATYA commented Nov 27, 2024

Describe the bug
While exporting to loki-distributed using the Loki exporter in the OpenTelemetry Collector, the export repeatedly fails with the error below.

2024-11-27T00:47:45.243Z info internal/retry_sender.go:126 Exporting failed. Will retry the request after interval. {"kind": "exporter", "data_type": "logs", "name": "loki/default-logs", "error": "HTTP 429 "Too Many Requests": Ingestion rate limit exceeded for user default-logs (limit: 12582912 bytes/sec) while attempting to ingest '60' lines totaling '116866' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased", "interval": "2.537297293s"}
2024-11-27T00:47:45.243Z info internal/retry_sender.go:126 Exporting failed. Will retry the request after interval. {"kind": "exporter", "data_type": "logs", "name": "loki/default-logs", "error": "HTTP 429 "Too Many Requests": Ingestion rate limit exceeded for user default-logs (limit: 12582912 bytes/sec) while attempting to ingest '60' lines totaling '130781' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased", "interval": "4.084086341s"}
2024-11-27T00:47:45.317Z info internal/retry_sender.go:126 Exporting failed. Will retry the request after interval. {"kind": "exporter", "data_type": "logs", "name": "loki/default-logs", "error": "HTTP 429 "Too Many Requests": Ingestion rate limit exceeded for user default-logs (limit: 12582912 bytes/sec) while attempting to ingest '60' lines totaling '104668' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased", "interval": "2.55432931s"}

Surprisingly, Loki doesn't log any corresponding error in the gateway, distributor, or ingester components. It is also not clear where the 12582912 bytes/sec figure (exactly 12 MiB/s) comes from, because no such limit is configured in Loki. And even taking it as a 12 MB limit, the sizes reported in the errors are far smaller, e.g. 116866, 130781, and 104668 bytes.
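For reference, one place a per-tenant limit can come from when it is not set in limits_config is Loki's runtime overrides file (runtime_config.file), whose per-tenant values take precedence over the global limits. The snippet below is only a hypothetical sketch of what such an override for the default-logs tenant could look like; the file path and values are assumptions, not taken from this deployment.

# Hypothetical sketch only -- not taken from this deployment.
# Loki can load per-tenant limits from a runtime overrides file, e.g.:
#
# runtime_config:
#   file: /etc/loki/runtime-config/overrides.yaml
#
# Contents of overrides.yaml; per-tenant values here override limits_config:
overrides:
  default-logs:
    ingestion_rate_mb: 12        # 12 MiB/s, which would match the 12582912 bytes/sec in the error
    ingestion_burst_size_mb: 18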

To Reproduce
Use the OpenTelemetry Collector and Loki configurations below.

Expected behavior
The rate-limit error shouldn't be thrown, or it should at least be clear which Loki configuration option sets this limit.

Environment:

  • Infrastructure: Kubernetes
  • Deployment tool: helm

Screenshots, Promtail config, or terminal output

OpenTelemetry Collector configuration

exporters:
  debug:
    verbosity: basic

  loki/default-logs:
    endpoint: http://loki-loki-distributed-gateway.logs:80/loki/api/v1/push
    headers:
      x-scope-orgid: default-logs
    tls:
      insecure: true

extensions:
  health_check:
    endpoint: ${env:MY_POD_IP}:13133
processors:
  batch: {}

  batch/default-logs:
    send_batch_max_size: 60
    send_batch_size: 50
    timeout: 10s

  memory_limiter:
    check_interval: 5s
    limit_percentage: 80
    spike_limit_percentage: 25
receivers:
  kafka/processor-logs:
    auth:
      sasl:
        mechanism: PLAIN
        password: ${EVENT_HUB_NAMESPACE_LISTEN_CONNECTION_STRING}
        username: $$ConnectionString
      tls:
        insecure: false
    brokers:
    - dev-event-hub-namespace.servicebus.windows.net:9093
    encoding: otlp_proto
    protocol_version: 3.7.0
    topic: dev-otlp-logs
service:
  extensions:
  - health_check
  pipelines:

    logs/default-logs:
      exporters:
      - loki/default-logs
      processors:
      - filter/default-logs
      - batch/default-logs
      receivers:
      - kafka/processor-logs

  telemetry:
    metrics:
      address: ${env:MY_POD_IP}:8888
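As a side note, the '60' lines per failed push in the log output matches send_batch_max_size: 60 in the batch/default-logs processor above. The Loki exporter also accepts the collector's standard retry_on_failure and sending_queue settings; a hedged sketch of tuning them (illustrative values only, not a recommendation for this deployment) would look like:

exporters:
  loki/default-logs:
    endpoint: http://loki-loki-distributed-gateway.logs:80/loki/api/v1/push
    headers:
      x-scope-orgid: default-logs
    tls:
      insecure: true
    # Standard exporterhelper options; the values below are illustrative only.
    retry_on_failure:
      enabled: true
      initial_interval: 5s
      max_interval: 30s
      max_elapsed_time: 300s
    sending_queue:
      enabled: true
      num_consumers: 2
      queue_size: 1000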

Loki Distributed Limits configuration

    limits_config:
      enforce_metric_name: false
      reject_old_samples: true
      reject_old_samples_max_age: 168h
      max_cache_freshness_per_query: 10m
      split_queries_by_interval: 15m
      retention_period: 72h       #new
      max_query_series: 5000
      ingestion_rate_mb: 24           # default = 4 (MB)
      ingestion_burst_size_mb: 36     # default = 6 (MB)
      allow_structured_metadata: true
      per_stream_rate_limit: 15MB
      per_stream_rate_limit_burst: 20MB
      volume_enabled: true

Image

Related issue raised on opentelemetry-collector-contrib:
open-telemetry/opentelemetry-collector-contrib#36558
