Improve store timeouts #1789
Conversation
This PR fixes the case when two or more stores are responding slowly. It also fixes a double timeout in the case of warnings. In addition, we've separated RT metrics with/without payload. This change works well in most cases because if a store responds slowly, it's usually slow from the first data chunk onwards. Example:
Seems like you have rebased this on
@GiedriusS sure
Nice, I think I like it, but it would be nice to have a commit title that is open about what we fix, which is:
- the client timeout was only used when Next for the corresponding store was called, which might be after another slow store.
However, I have some comments and suggestions (:
pkg/store/proxy.go
Outdated
@@ -41,6 +41,20 @@ type Client interface {
Addr() string
}

const WITH_PAYLOAD_LABEL = "with_payload"
Go constants should still be camel case (:
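For example (just to illustrate the convention, not a prescribed name):

const withPayloadLabel = "with_payload" // instead of WITH_PAYLOAD_LABEL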
pkg/store/proxy.go
Outdated
func newProxyStoreMetrics(reg *prometheus.Registry) *proxyStoreMetrics {
var m proxyStoreMetrics

m.firstRecvDuration = prometheus.NewSummaryVec(prometheus.SummaryOpts{
Should we really use summaries here? (: Can we switch to histograms, maybe?
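A rough sketch of what the histogram variant could look like - the _seconds suffix and the default buckets are illustrative, not taken from this PR:

m.firstRecvDuration = prometheus.NewHistogramVec(prometheus.HistogramOpts{
    Name:    "thanos_proxy_first_recv_duration_seconds",
    Help:    "Time to get the first frame of data from a store, in seconds.",
    Buckets: prometheus.DefBuckets,
}, []string{"store", "payload"})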
pkg/store/proxy.go
Outdated
m.timeoutRecvCount = prometheus.NewCounterVec(prometheus.CounterOpts{
Name: "thanos_proxy_timeout_recv_count",
Help: "Timeout recv count.",
Not really helpful help (:
pkg/store/proxy.go
Outdated
m.firstRecvDuration = prometheus.NewSummaryVec(prometheus.SummaryOpts{
Name: "thanos_proxy_first_recv_duration",
Help: "Time to get first part data from store(ms).",
Probably should be float64 seconds
Time to first byte is what we can call it.
pkg/store/proxy.go
Outdated
Help: "Time to get first part data from store(ms).", | ||
Objectives: map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001}, | ||
MaxAge: 2 * time.Minute, | ||
}, []string{"store", "payload"}) |
We talked about store in metrics - this might leak cardinality (changing IP address), so I think we have to hook it to external labels and store TYPE as we do in storeset.
We have very large external labels on our thanos-stores, so using external labels is not convenient. Also, external_labels can appear in a different order from time to time. Maybe we can think about a more convenient way to identify stores? For example, by bucket_name?
See storeset, we already do that (we also sort it), so if that's the problem we need to fix it everywhere (:
Can you show the querier /metrics page?
In this case, we need to fix this in a separate PR (stores hash), as you already have a metric with this label:
Line 94 in 7e11afe
[]string{"external_labels", "store_type"}, nil,
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-0",prometheus_replica="A"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-0",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-0",prometheus_replica="B"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-1",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-1",prometheus_replica="B"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-2",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-2",prometheus_replica="B"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-3",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-3",prometheus_replica="B"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-4",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-4",prometheus_replica="B"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-5",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-5",prometheus_replica="B"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-0",prometheus_replica="B"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-1",prometheus_replica="A"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-1",prometheus_replica="B"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-2",prometheus_replica="A"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-2",prometheus_replica="B"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-3",prometheus_replica="A"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-3",prometheus_replica="B"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-4",prometheus_replica="A"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-4",prometheus_replica="B"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-5",prometheus_replica="A"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-5",prometheus_replica="B"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-hardware-0",prometheus_replica="A"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-hardware-0",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-hardware-0",prometheus_replica="B"},{dc="mts",env="production",prometheus_replica="prometheus01z1.h.o3.ru"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-hardware-0",prometheus_replica="B"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-ingress-0",prometheus_replica="A"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-ingress-0",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-ingress-0",prometheus_replica="B"},{dc="m9",env="production",prometheus="monitoring/prometheus-ingress-1",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-ingress-1",prometheus_replica="B"},{dc="m9",env="production",prometheus="monitoring/prometheus-other-0",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-other-0",prometheus_replica="B"},{dc="m9",env="production",prometheus="monitoring/prometheus-system-0",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-system-0",prometheus_replica="B"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-ingress-0",prometheus_replica="B"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-ingress-1",prometheus_replica="A"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-ingress-1",prometheus_replica="B"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-other-0",prometheus_replica="A"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-other-0",prometheus_replica="B"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-system-0",prometheus_replica="A"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-system-0",prometheus_replica="B"}"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-0",prometheus_replica="A"}",store_type="sidecar"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-0",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-0",prometheus_replica="B"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-1",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-1",prometheus_replica="B"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-2",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-2",prometheus_replica="B"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-3",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-3",prometheus_replica="B"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-4",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-4",prometheus_replica="B"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-5",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-5",prometheus_replica="B"}",store_type="store"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-0",prometheus_replica="B"}",store_type="sidecar"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-1",prometheus_replica="A"}",store_type="sidecar"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-1",prometheus_replica="B"}",store_type="sidecar"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-2",prometheus_replica="A"}",store_type="sidecar"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-2",prometheus_replica="B"}",store_type="sidecar"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-3",prometheus_replica="A"}",store_type="sidecar"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-3",prometheus_replica="B"}",store_type="sidecar"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-4",prometheus_replica="A"}",store_type="sidecar"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-4",prometheus_replica="B"}",store_type="sidecar"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-5",prometheus_replica="A"}",store_type="sidecar"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-5",prometheus_replica="B"}",store_type="sidecar"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-hardware-0",prometheus_replica="A"}",store_type="sidecar"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-hardware-0",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-hardware-0",prometheus_replica="B"},{dc="mts",env="production",prometheus_replica="prometheus01z1.h.o3.ru"}",store_type="store"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-hardware-0",prometheus_replica="B"}",store_type="sidecar"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-ingress-0",prometheus_replica="A"}",store_type="sidecar"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-ingress-0",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-ingress-0",prometheus_replica="B"},{dc="m9",env="production",prometheus="monitoring/prometheus-ingress-1",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-ingress-1",prometheus_replica="B"},{dc="m9",env="production",prometheus="monitoring/prometheus-other-0",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-other-0",prometheus_replica="B"},{dc="m9",env="production",prometheus="monitoring/prometheus-system-0",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-system-0",prometheus_replica="B"}",store_type="store"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-ingress-0",prometheus_replica="B"}",store_type="sidecar"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-ingress-1",prometheus_replica="A"}",store_type="sidecar"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-ingress-1",prometheus_replica="B"}",store_type="sidecar"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-other-0",prometheus_replica="A"}",store_type="sidecar"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-other-0",prometheus_replica="B"}",store_type="sidecar"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-system-0",prometheus_replica="A"}",store_type="sidecar"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-system-0",prometheus_replica="B"}",store_type="sidecar"} 1
All metrics come with external_labels. It looks ugly and unreadable, and the long label string is too hard to process in Grafana (for naming graphs).
It's crazy!
I still think this should be resolved outside of this PR (:
Again, I think address might add some cardinality, but I am tempted to allow it under some flag...
pkg/store/proxy.go
Outdated
if r.Size() > 0 {
metrics.withPayload.Observe(time.Since(t0).Seconds())
} else {
metrics.withoutPayload.Observe(time.Since(t0).Seconds())
Can we just skip the recv without payload? We care about time to first byte IMO
Wait, this is because it's EOF, which we should filter out. It is the normal response when the server closes the stream, so it should be filtered out. It is also received when the context is canceled or the timeout was triggered; that's why you see 10s, I think.
Sorry, I don't understand... I think we should see the minimal RT for the first byte without payload...
No, EOF is when the server closes a stream, but we account that as first byte without payload - I believe that's wrong.
I think that without_payload == EOF on first byte, and it's a representative metric...
Could you add a check for err here before sending it back? It is probably equal to io.EOF in such a case and should be filtered out. In fact, on line 448 there is: if rr.err == io.EOF { ... }. We probably shouldn't check for this in two places either.
I don't see a reason to check for specific errors here. All errors are processed outside of this goroutine. This goroutine is only needed to get data asynchronously.
I also think we don't need the without_payload metrics, let's just have time_to_first_byte metrics. What are you going to do differently when you see without_payload vs with_payload metrics increasing?
I agree here. This metric roughly says "time until a node tells us that it doesn't have any metrics". In what cases do you think it could be useful? payload is a bit misleading as well, IMHO. Payload is "the actual information or message in transmitted data", and even if we get EOF it is still some kind of data. Even if we keep this around I'd suggest renaming it to perhaps response_kind, which could be no_metrics or metrics.
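If the label is kept, the rename could look roughly like this (storeAddr and the metric shape are placeholders for illustration, not names from this PR):

kind := "metrics"
if r.Size() == 0 {
    kind = "no_metrics"
}
metrics.firstRecvDuration.WithLabelValues(storeAddr, kind).Observe(time.Since(t0).Seconds())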
pkg/store/proxy.go
Outdated
ctx, cancel = context.WithTimeout(ctx, s.responseTimeout)
defer cancel()
}
rCh := make(chan *recvResponse, 1)
Instead of allocating this every time we can reuse this channel
We need to wait for the last recv on timeout before closing the channel, so the unused data frame can still be sent; therefore we can't reuse the first channel.
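In other words, the buffer of one is what lets the recv goroutine finish its last send even after the reader has timed out, and a fresh channel per frame keeps that invariant simple. A minimal sketch of the idea (names follow the diff; the r field of recvResponse is an assumption, details simplified):

rCh := make(chan *recvResponse, 1) // capacity 1: a late send never blocks the recv goroutine
go func() {
    r, err := s.stream.Recv()
    rCh <- &recvResponse{r: r, err: err} // lands in the buffer even if the reader already gave up
}()
select {
case <-frameTimeoutCtx.Done():
    // Timed out waiting for this frame; the pending send above still completes,
    // so the channel can be drained and closed afterwards.
case rr := <-rCh:
    _ = rr // frame received in time; handle it as usual
}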
pkg/store/proxy.go
Outdated
for {
r, err := s.stream.Recv()
var cancel context.CancelFunc
if s.responseTimeout != 0 {
Just curious: can we have this timeout for a whole response? Do we really need it per frame? Plus we might allocate a lot here so canceling per stream frame would be nice.
If we want to use a timeout for the whole response:
- We would have to set such a timeout to roughly the query response time; in that case we lose the opportunity to quickly cancel a response from a slow store (for example).
- We would need to increase the channel buffer (for truly parallel reads from the stores).
Yes, you are right, it's all connected and interdependent.
I actually think we should increase the buffer at some point, to some degree, but not here. Let's keep it per frame.
It would be nice to have the timeout naming adjusted then to mention frame; right now it is:
If a Store doesn't send any data in this specified duration then a Store will be ignored and partial data will be returned if it's enabled. 0 disables timeout.
Maybe to:
If a Store doesn't send any frame in this specified duration then a Store will be ignored and partial data will be returned if it's enabled. 0 disables timeout.
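For illustration only, assuming a kingpin-style flag registration (the flag name here is a guess, the help wording is the point):

storeResponseTimeout := cmd.Flag("store.response-timeout",
    "If a Store doesn't send any frame in this specified duration then a Store will be ignored and partial data will be returned if it's enabled. 0 disables timeout.").
    Default("0ms").Duration()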
I tried increasing the buffer to fit the whole response.
Increasing the buffer only affected memory usage; it did not improve RT, regardless of slow/fast stores )
In my opinion, this memory will be buffered anyway at some point, but let's not introduce this here. I think we all like this PR with the following logic: make sure we time out precisely on the first byte from the slow store instead.
pkg/store/proxy.go
Outdated
case rr = <-rCh:
}
close(rCh)
err := rr.err
Can we use those directly? Do we need those local variables?
I think I addressed all the comments (:
I think this PR makes sense, just some suggestions, any movement here? (:
I suppose we'll continue with this PR next week, @IKSIN is on vacation currently :)
780cc1d to 0a287a3
PR updated
pkg/store/proxy.go
Outdated
queryTimeoutCount: s.metrics.queryTimeoutCount.WithLabelValues(st.LabelSetsString(), storeTypeStr),
}

seriesSet = append(seriesSet, startStreamSeriesSet(seriesCtx, s.logger, closeSeries,
This should be under the comment, no? (:
No, it's not meant to be commented out. Maybe you mean adding a comment?
There is a comment on line 317:
// Schedule streamSeriesSet that translates gRPC streamed response
...
Could this be moved under that comment or removed?
Moved under the comment
pkg/store/proxy.go
Outdated
frameTimeoutCtx := context.Background()
var cancel context.CancelFunc
if s.responseTimeout != 0 {
frameTimeoutCtx, cancel = context.WithTimeout(frameTimeoutCtx, s.responseTimeout)
Could we move the construction of this context out of this function? It would probably make things easier to understand. WDYT, @bwplotka? This part is really becoming complex :(
What are your thoughts on this, @IKSIN ?
Moved to a function.
updated
cca28ff to b5313d5
It looks ok to me. Thanks for the work 🥇 Let's wait for other maintainers' opinions.
pkg/store/proxy.go
Outdated
err error
}

func startFrameCtx(responseTimeout time.Duration) (context.Context, context.CancelFunc) {
func startFrameCtx(responseTimeout time.Duration) (context.Context, context.CancelFunc) {
func frameCtx(responseTimeout time.Duration) (context.Context, context.CancelFunc) {
pkg/store/proxy.go
Outdated
@@ -384,14 +394,34 @@ func startStreamSeriesSet(
}
}()
for {
r, err := s.stream.Recv()
frameTimeoutCtx, cancel := startFrameCtx(s.responseTimeout)
if cancel != nil { |
Just defer cancel always and return func() {} instead of nil.
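i.e. something along these lines (a sketch of the suggestion, based on the frameCtx helper above):

func frameCtx(responseTimeout time.Duration) (context.Context, context.CancelFunc) {
    if responseTimeout != 0 {
        return context.WithTimeout(context.Background(), responseTimeout)
    }
    // No per-frame timeout configured: return a no-op cancel so callers can always defer cancel().
    return context.Background(), func() {}
}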
pkg/store/proxy.go
Outdated
}
rCh := make(chan *recvResponse, 1)
var rr *recvResponse
go func() {
So we care about the first frame, right? Or a timeout for all frames?
In any case I think we need to start this goroutine before the for loop and have a for loop in it as well.
This will make sure that we only have 2 goroutines running: one waiting for recv or a context cancel, and a second one for reading.
The current implementation will constantly allocate a new channel and goroutine. For 1000 frames x 100 concurrent queries this might matter.
I wish we had benchmarks for the querier ))): Like we do for Store now:
thanos/pkg/store/bucket_test.go
Line 1071 in 88f6be8
func BenchmarkSeries(b *testing.B) {
Actually added issue: #2105
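A rough sketch of the suggested shape - one recv goroutine per stream plus a reading loop - with names following the diff (frameCtx as sketched earlier; ctx stands for the per-series context passed into startStreamSeriesSet; the recvResponse field names and s.closeSeries are assumptions, error handling elided):

rCh := make(chan *recvResponse, 1)
go func() {
    defer close(rCh)
    for {
        r, err := s.stream.Recv()
        select {
        case rCh <- &recvResponse{r: r, err: err}:
        case <-ctx.Done():
            return // reader gave up; exit without blocking on the send
        }
        if err != nil {
            return
        }
    }
}()
for {
    frameTimeoutCtx, cancel := frameCtx(s.responseTimeout)
    select {
    case <-frameTimeoutCtx.Done():
        cancel()
        s.closeSeries() // cancels ctx so the goroutine above can exit
        return
    case rr, ok := <-rCh:
        cancel()
        if !ok || rr.err != nil {
            return
        }
        // ... handle the frame rr.r as before ...
    }
}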
pkg/store/proxy.go
Outdated
func (s *streamSeriesSet) timeoutHandling(isQueryTimeout bool, ctx context.Context) {
var err error
if isQueryTimeout {
err = errors.Wrap(ctx.Err(), fmt.Sprintf("failed to receive any data from %s", s.name))
What about just passing the error to propagate instead here, and renaming the method to handleErr?
b6b13a1 to d850c92
Signed-off-by: Aleskey Sin <[email protected]>
Signed-off-by: Aleskey Sin <[email protected]>
Signed-off-by: Aleskey Sin <[email protected]>
@bwplotka PR updated) w/o benchmarks(
Signed-off-by: Aleskey Sin <[email protected]>
Looking!
👍
LGTM, just small nits.
@@ -383,78 +393,79 @@ func startStreamSeriesSet(
emptyStreamResponses.Inc()
I don't think we want to increment this if the context was actually cancelled.. right?
Yes, that's why numResponses++ happens only when a recv is processed.
pkg/store/proxy.go
Outdated
select {
case <-ctx.Done():
close(done)
err = errors.Wrap(ctx.Err(), fmt.Sprintf("failed to receive any data from %s", s.name))
Why a separate err variable?
Also, can we use Wrapf instead of sprintf?
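i.e., with github.com/pkg/errors, which the diff already uses:

err = errors.Wrapf(ctx.Err(), "failed to receive any data from %s", s.name)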
pkg/store/proxy.go
Outdated
return
case <-frameTimeoutCtx.Done():
close(done)
err = errors.Wrap(frameTimeoutCtx.Err(), fmt.Sprintf("failed to receive any data in %s from %s", s.responseTimeout.String(), s.name))
ditto
return
}

if rr.err != nil {
wrapErr := errors.Wrapf(rr.err, "receive series from %s", s.name)
ditto
Why another var? (: We can inline this.. not a blocker though.
pkg/store/proxy.go
Outdated
if rr.err != nil {
wrapErr := errors.Wrapf(rr.err, "receive series from %s", s.name)
s.handleErr(wrapErr)
close(done)
Can we close done in handleErr?
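Something along these lines, perhaps (a sketch only - the actual handleErr body isn't shown in this diff, and the done parameter is an assumption):

func (s *streamSeriesSet) handleErr(err error, done chan struct{}) {
    defer close(done) // handleErr owns closing done, so callers don't repeat it
    if err == io.EOF {
        return // normal end of stream, nothing to report
    }
    // ... existing warning / partial-response handling for err ...
}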
Signed-off-by: Aleskey Sin <[email protected]>
@bwplotka Updated!)
Let's go! 💪
Thank you for this!
Hooray! :)
* Improve proxyStore timeouts. Signed-off-by: Aleskey Sin <[email protected]>
* Fix send to closed channel. Signed-off-by: Aleskey Sin <[email protected]>
* Update for PR. Signed-off-by: Aleskey Sin <[email protected]>
* Fix recv done channel. Signed-off-by: Aleskey Sin <[email protected]>
* PR fixes. Signed-off-by: Aleskey Sin <[email protected]>
We want to increase the stability of a Thanos infrastructure with many stores/sidecars (49 in our case) when some stores respond slowly.
Changes
More details in comments.
Verification