0.32.0 caused spike in network traffic #7213

Open
ben-nelson-nbcuni opened this issue Mar 15, 2024 · 8 comments

@ben-nelson-nbcuni

Thanos, Prometheus and Golang version used:

All Thanos components are using 0.32.4, but we've also tested 0.34.1 and the issue persisted.
Prometheus is on version v0.69.1.

Object Storage Provider: AWS S3 bucket

What happened: Upgrading from 0.31.0 to 0.32.0 causes a large spike in network traffic between chained Thanos Query components.

What you expected to happen: Network traffic to be consistent with previous versions.

How to reproduce it (as minimally and precisely as possible):

  1. Set up Prometheus with a 0.31.0 Thanos sidecar. The issue scales with higher cardinality in the Prometheus metrics, so you may need to add mock data.
  2. Set up a 0.31.0 Thanos store gateway. Again, high cardinality and a long time range (1 year+) scale the network traffic spike. The only flag we use is --store.enable-index-header-lazy-reader.
  3. Set up a 0.31.0 Thanos query with --endpoint=dnssrv+_grpc._tcp.thanos-store-gateway.monitoring.svc behind an ALB configured for gRPC traffic.
  4. Set up a central 0.31.0 Thanos query with --endpoint=$GRPC_HOST pointing at the child Thanos query ALB.
  5. View network traffic from the child Thanos query to the central Thanos query.
  6. Upgrade all components to 0.32.0.
  7. View network traffic again. We've seen it spike 100x on large clusters. When this traffic crosses regions and the public internet, the cost increase can be substantial. The cost occurs without any active queries and appears to be caused purely by the central Thanos query's 5s endpoint refresh interval.

Turning on --grpc-compression=snappy helped reduce the spike, but it definitely still exists.
Removing --store.enable-index-header-lazy-reader did not seem to noticeably reduce the network traffic spike.
If either the child Thanos query or the store gateway is rolled back to 0.31.0, network traffic returns to pre-upgrade levels.

Full logs to relevant components:

No relevant logs. Only screenshots of prometheus.

We occasionally get a warning on one of the Thanos query pods saying "detecting store that does not support without replica label setting. Falling back to eager retrieval with additional sort. Make sure your storeAPI supports it to speed up your queries", but it isn't frequent and doesn't seem to explain an increase in network traffic.

Anything else we need to know:

Graph of network traffic transmitted out from the child Thanos query instances when upgrading from 0.31.0 to 0.32.0:
[screenshot]

@fpetkovski
Contributor

This could be related to #6329. Do you know approximately how many blocks you have in object storage?

@ben-nelson-nbcuni
Author

That looks like the right addition. The metric thanos_bucket_store_blocks_loaded at its highest is 35,433. That value and the others near it are on dev clusters whose Prometheus/Thanos components have been unstable during large performance tests. I'm not sure if that's contributing to the high block count. Does each interruption in Prometheus service result in a new block?
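For a rough sense of scale, here is a back-of-envelope sketch (the per-block payload size below is a guess rather than anything I've measured) of what that block count could mean if every loaded block's info is shipped on each 5s refresh:

```go
package main

import "fmt"

func main() {
	// Back-of-envelope only. blocksLoaded comes from our metric above;
	// bytesPerBlockInfo is an assumed figure, not a measured one.
	const (
		blocksLoaded      = 35433 // peak thanos_bucket_store_blocks_loaded
		bytesPerBlockInfo = 200.0 // guessed payload per block (labels + min/max time)
		refreshSeconds    = 5.0   // endpoint info refresh interval from the central querier
	)
	perRefresh := blocksLoaded * bytesPerBlockInfo // bytes per info refresh
	perSecond := perRefresh / refreshSeconds       // sustained rate with zero queries running
	fmt.Printf("~%.1f MB per refresh, ~%.2f MB/s sustained\n", perRefresh/1e6, perSecond/1e6)
}
```

The real payload per block could obviously be quite different, but it shows why the block count alone might matter here.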

Is there a way to cache these lookups for older blocks that are unlikely to change? Or can you add a mechanism to turn off this information either on a particular store or query component?

@jtb-sre

jtb-sre commented Apr 4, 2024

I can reproduce Ben's findings -- I have a development environment on Thanos 0.34.1 and was experiencing the high network traffic noted above. The 100x factor is also true in my environment -- running an intensive query on 0.34.1 generates peak network activity of 40MB/s. I downgraded to 0.31.0 and the same query peaked at about 480 KB/s.

My Thanos queriers have three gRPC endpoints (two TLS/gRPC ingresses for Thanos sidecars, and a TLS/gRPC ingress for a Thanos store service). The development environment I reproduced this on has a small number of blocks in object storage due to limited retention (230 blocks, each containing 1-4 chunks; 924 objects in total dating back to 03/12), but relatively high series cardinality (prometheus_tsdb_head_series totals 500,000 across 2 K8s clusters).

@jtb-sre

jtb-sre commented Apr 6, 2024

I was able to do a little bit more digging and think I found the cause!

I think the cause is actually #6317 -- as Douglas notes, this change causes the store/sidecar instances to send labels in their responses for filtering purposes, which seems a likely cause of the extra traffic we're seeing. Digging through the PR a bit further, I noticed that the newFlushableServer function skips label flushing if --query.replica-label isn't specified. I verified that I could return to the pre-0.32 traffic volume by removing --query.replica-label.
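To illustrate the branching I'm describing, here is a made-up Go sketch (invented type names, not the actual newFlushableServer code from the PR): only when replica labels are configured does the server take the more expensive buffering path.

```go
package main

import "fmt"

// Sketch types only; these are not the real Thanos interfaces.
type seriesServer interface {
	Send(series string)
}

// passthrough streams responses through unchanged.
type passthrough struct{}

func (passthrough) Send(series string) { fmt.Println("stream:", series) }

// resorting buffers responses so they can be re-sorted once replica labels
// are stripped; the full label sets still travel over the wire first.
type resorting struct{ buf []string }

func (r *resorting) Send(series string) { r.buf = append(r.buf, series) }

// newServer mirrors the conditional described above: only pay the extra
// buffering/label cost when replica labels (dedup) are configured.
func newServer(replicaLabels []string) seriesServer {
	if len(replicaLabels) == 0 {
		return passthrough{}
	}
	return &resorting{}
}

func main() {
	s := newServer(nil) // no --query.replica-label: cheap passthrough path
	s.Send(`up{instance="a"}`)
}
```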

In my case, the development environment is not using HA Prometheus and I do not need to use dedup. It may be worth calling out the network impacts of dedup because they were significant enough to be the cause of some instability in my development clusters. It's also not clear to me why removing --query.replica-label works in light of the changes made in #6706 -- I guess the label check ultimately moved from flushable.go to proxy_heap.go?

@ben-nelson-nbcuni Would you be willing to test whether removing dedup improves matters for your development cluster?

@fpetkovski Am I right in understanding that a feature flag to disable the cuckoo filter would be duplicative, because without it you can't rely on --query.replica-label for deduplication? Also, that it should be sufficient to remove --query.replica-label from our deployments as long as our pods are uniquely identified once external labels are taken into account?

Thanks!
jtb

@ben-nelson-nbcuni
Author

We have two Thanos queriers in the chain, one local and one central. Removing --query.replica-label from both the local and the central querier did not have any effect on the traffic spike. For this round of testing, I've attached all of our settings.

Local:

  - args:
    - query
    - --log.level=info
    - --log.format=json
    - --grpc-address=0.0.0.0:10901
    - --http-address=0.0.0.0:10902
    - --query.auto-downsampling
    - --endpoint=dnssrv+_grpc._tcp.thanos-store-gateway.monitoring.svc
    - --endpoint=dnssrv+_grpc._tcp.prometheus-operated.monitoring.svc

Central:

  - args:
    - query
    - --log.level=info
    - --log.format=logfmt
    - --grpc-address=0.0.0.0:10901
    - --http-address=0.0.0.0:10902
    - --query.auto-downsampling
    - --grpc-client-tls-secure
    - --grpc-compression=snappy
    - --endpoint=...

@ben-nelson-nbcuni
Author

Here is a screenshot of prometheus metrics.

  1. At 12:44, I upgraded the local thanos-query to 0.32.4 from 0.28.0 and removed the --query.replica-label.
  2. At 12:50 (once it was clear network was still spiking), I updated the central thanos-query to remove --query.replica-label (the central is always on version 0.32.4).
  3. At 12:59, I downgraded the local thanos-query and re-added --query.replica-label.
  4. As of 13:04, the central thanos still doesn't have --query.replica-label.
[screenshot]

@fpetkovski
Contributor

fpetkovski commented Apr 26, 2024

I suggest we group all blocks by labels here https://github.com/thanos-io/thanos/blob/main/pkg/store/bucket.go#L873-L889 and return one TSDBInfo per stream rather than per block. @MichaHoffmann has noticed network usage trending down as the number of blocks is reduced.
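Roughly along these lines (a simplified sketch with stand-in types, not the actual infopb/bucket.go code): group blocks by their external label set and widen the time range per group.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// Stand-in types for the sketch; not the real Thanos protobuf messages.
type Block struct {
	Labels  map[string]string // external labels of the producing Prometheus
	MinTime int64
	MaxTime int64
}

type TSDBInfo struct {
	Labels  map[string]string
	MinTime int64
	MaxTime int64
}

// key builds a stable identity for a label set so blocks from the same
// stream collapse into one entry.
func key(lbls map[string]string) string {
	parts := make([]string, 0, len(lbls))
	for k, v := range lbls {
		parts = append(parts, k+"="+v)
	}
	sort.Strings(parts)
	return strings.Join(parts, ",")
}

// tsdbInfosPerStream returns one TSDBInfo per distinct external label set,
// with the time range widened to cover all blocks of that stream, instead
// of one TSDBInfo per block.
func tsdbInfosPerStream(blocks []Block) []TSDBInfo {
	byStream := map[string]*TSDBInfo{}
	for _, b := range blocks {
		k := key(b.Labels)
		info, ok := byStream[k]
		if !ok {
			byStream[k] = &TSDBInfo{Labels: b.Labels, MinTime: b.MinTime, MaxTime: b.MaxTime}
			continue
		}
		if b.MinTime < info.MinTime {
			info.MinTime = b.MinTime
		}
		if b.MaxTime > info.MaxTime {
			info.MaxTime = b.MaxTime
		}
	}
	out := make([]TSDBInfo, 0, len(byStream))
	for _, info := range byStream {
		out = append(out, *info)
	}
	return out
}

func main() {
	blocks := []Block{
		{Labels: map[string]string{"cluster": "a"}, MinTime: 0, MaxTime: 100},
		{Labels: map[string]string{"cluster": "a"}, MinTime: 100, MaxTime: 200},
		{Labels: map[string]string{"cluster": "b"}, MinTime: 0, MaxTime: 50},
	}
	fmt.Println(tsdbInfosPerStream(blocks)) // 2 infos instead of 3
}
```

In this toy example three blocks collapse into two infos; with tens of thousands of blocks per store, the reduction in the info payload would be much larger.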

@MichaHoffmann
Contributor

@ben-nelson-nbcuni are you able to test #7308 by any chance?
