0.32.0 caused spike in network traffic #7213
This could be related to #6329. Do you know approximately how many blocks you have in object storage?
That looks like the right addition. The metric … Is there a way to cache these lookups for older blocks that are unlikely to change? Or can you add a mechanism to turn off this information, either on a particular …
I can reproduce Ben's findings -- I have a development environment on Thanos 0.34.1 and was experiencing the high network traffic noted above. The 100x factor also holds in my environment -- running an intensive query on 0.34.1 generates peak network activity of 40 MB/s. I downgraded to 0.31.0 and the same query peaked at about 480 KB/s. My Thanos queriers have three gRPC endpoints (two TLS/gRPC ingresses for Thanos sidecars, and a TLS/gRPC ingress for a Thanos store service). The development environment I reproduced this on has a small number of blocks in object storage due to limited retention time (230 blocks, each containing 1-4 chunks, 924 objects in total dating back to 03/12), but relatively high series cardinality (…).
I was able to do a little more digging and think I found the cause! I believe it is actually #6317 -- as Douglas notes, this change causes the store/sidecar instances to send labels in their responses for filtering purposes, which seems a likely cause for the extra traffic we're seeing. Digging through the PR a bit further, I noticed that the newFlushableServer function skips label flushing if --query.replica-label isn't specified. I verified that I could return to the pre-0.32 traffic volume by removing --query.replica-label. In my case, the development environment is not using HA Prometheus and I do not need dedup. It may be worth calling out the network impact of dedup, because it was significant enough to cause some instability in my development clusters. It's also not clear to me why removing --query.replica-label works in light of the changes made in #6706 -- I guess the label check ultimately moved from flushable.go to proxy_heap.go?

@ben-nelson-nbcuni Would you be willing to test whether removing dedup improves matters for your development cluster?

@fpetkovski Am I right in understanding that a feature flag to disable the cuckoo filter would be duplicative, because without it you can't rely on --query.replica-label for deduplication? Also, that it should be sufficient to remove --query.replica-labels from our deployments as long as our pods are uniquely identified including external labels? Thanks!
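To make the replica-label effect described above concrete, here is a minimal, hypothetical Go sketch of the passthrough-versus-resort distinction (the `series` type and `flush` function are illustrative stand-ins, not the actual newFlushableServer/proxy_heap code):

```go
package main

import (
	"fmt"
	"sort"
)

// series stands in for a StoreAPI series response; key is a sort key derived
// from its labels with any replica labels stripped (hypothetical simplification).
type series struct {
	key string
}

// flush sketches the distinction: with no replica labels configured, responses
// can be streamed through as-is; with replica labels set, they are buffered and
// re-sorted so duplicates from HA replicas end up adjacent for deduplication.
func flush(buffered []series, replicaLabels []string) []series {
	if len(replicaLabels) == 0 {
		// Passthrough path: comparable to the pre-0.32 behaviour.
		return buffered
	}
	// Eager path: re-sort ignoring replica labels before flushing downstream.
	sort.Slice(buffered, func(i, j int) bool { return buffered[i].key < buffered[j].key })
	return buffered
}

func main() {
	fmt.Println(flush([]series{{key: "b"}, {key: "a"}}, []string{"prometheus_replica"}))
}
```

The sketch only illustrates why the replica-label path does more work per response and carries more label data, which is consistent with the extra traffic reported above.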
We have 2 thanos queries in the chain, one local and one central. Removing …

Local:

    - args:
        - query
        - --log.level=info
        - --log.format=json
        - --grpc-address=0.0.0.0:10901
        - --http-address=0.0.0.0:10902
        - --query.auto-downsampling
        - --endpoint=dnssrv+_grpc._tcp.thanos-store-gateway.monitoring.svc
        - --endpoint=dnssrv+_grpc._tcp.prometheus-operated.monitoring.svc

Central:

    - args:
        - query
        - --log.level=info
        - --log.format=logfmt
        - --grpc-address=0.0.0.0:10901
        - --http-address=0.0.0.0:10902
        - --query.auto-downsampling
        - --grpc-client-tls-secure
        - --grpc-compression=snappy
        - --endpoint=...
I suggest we group all blocks by labels here https://github.com/thanos-io/thanos/blob/main/pkg/store/bucket.go#L873-L889 and return one …
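For illustration, a minimal stdlib-only sketch of that grouping idea, assuming the goal is to emit one entry per distinct external label set rather than one per block (the `block` type and the `labelKey`/`groupByLabels` helpers are hypothetical, not the actual bucket.go code):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// block is a hypothetical stand-in for a bucket store block and its external labels.
type block struct {
	id     string
	labels map[string]string
}

// labelKey builds a canonical key for a label set so blocks with identical
// labels land in the same group.
func labelKey(lbls map[string]string) string {
	names := make([]string, 0, len(lbls))
	for name := range lbls {
		names = append(names, name)
	}
	sort.Strings(names)
	parts := make([]string, 0, len(names))
	for _, name := range names {
		parts = append(parts, name+"="+lbls[name])
	}
	return strings.Join(parts, ",")
}

// groupByLabels returns one entry per distinct label set instead of one per
// block, which is the payload reduction the suggestion above is after.
func groupByLabels(blocks []block) map[string][]block {
	groups := make(map[string][]block)
	for _, b := range blocks {
		key := labelKey(b.labels)
		groups[key] = append(groups[key], b)
	}
	return groups
}

func main() {
	blks := []block{
		{id: "01A", labels: map[string]string{"cluster": "dev", "replica": "a"}},
		{id: "01B", labels: map[string]string{"cluster": "dev", "replica": "a"}},
		{id: "01C", labels: map[string]string{"cluster": "dev", "replica": "b"}},
	}
	for key, group := range groupByLabels(blks) {
		fmt.Printf("%s -> %d blocks\n", key, len(group))
	}
}
```

In an environment like the one described above, with hundreds of blocks sharing a handful of label sets, collapsing the per-block entries this way would be expected to shrink the response roughly in proportion.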
@ben-nelson-nbcuni are you able to test #7308 by any chance?
Thanos, Prometheus and Golang version used:
All Thanos components are using 0.32.4, but we've tested using 0.34.1 and the issue persisted. Prometheus is on version v0.69.1.
.Object Storage Provider: AWS S3 bucket
What happened: Upgrading from 0.31.0 to 0.32.0 causes a large spike in network traffic between chained Thanos Query components.

What you expected to happen: Network traffic to be consistent with previous versions.
How to reproduce it (as minimally and precisely as possible):
1. A 0.31.0 thanos sidecar. The issue scales with higher cardinality in Prometheus metrics, so you may need to add mock data.
2. A 0.31.0 thanos store gateway. Once again, high cardinality and a long time range (1 year+) scale the network traffic spike. The only flag we use is --store.enable-index-header-lazy-reader.
3. A 0.31.0 thanos query with --endpoint=dnssrv+_grpc._tcp.thanos-store-gateway.monitoring.svc behind an ALB configured for gRPC traffic.
4. A 0.31.0 thanos query with --endpoint=$GRPC_HOST pointing to the child thanos query ALB.
5. Upgrade to 0.32.0, with 5s interval refreshes of endpoints from the central thanos.

Turning on --grpc-compression=snappy helped reduce the spike, but it definitely still exists. Removing --store.enable-index-header-lazy-reader did not seem to noticeably reduce the network traffic spike. If the child thanos query or store are rolled back to 0.31.0, the network traffic returns to pre-upgrade levels.

Full logs to relevant components:
No relevant logs, only screenshots of Prometheus. We will occasionally get a warning on one of the thanos query pods saying "detecting store that does not support without replica label setting. Falling back to eager retrieval with additional sort. Make sure your storeAPI supports it to speed up your queries", but it's not frequent enough and doesn't seem to indicate that an increase in network traffic would occur.

Anything else we need to know:
Graph of network transmitted out from the child thanos query instances when upgrading from 0.31.0 to 0.32.0.