
Improve store timeouts #1789

Merged (5 commits) Feb 18, 2020
Conversation

@IKSIN (Contributor) commented Nov 26, 2019:

We want to increase the stability of a Thanos infrastructure with many stores/sidecars (49 in our case) when some of the stores respond slowly.

Changes

  • Fix and improve TestProxyStore_SeriesSlowStores:
    • Fixed the broken test.
    • Added some use cases.
    • Added a check on the elapsed time to get data from the stores.
  • Change the logic in ProxyStore:
    • The store response timeout moved from the MergedSet.Next() function to a goroutine in startStreamSeriesSet, so that timeouts are checked in parallel.
    • Added ProxyStore metrics.

More details in comments.

Verification

  • Tests
  • Running in production in our system with 49 store backends for 2 weeks already.

@IKSIN (Contributor, Author) commented Nov 26, 2019:

Old ProxyStore logic:

[diagram: old ProxyStore logic]

New ProxyStore logic:

[diagram: new ProxyStore logic]

@IKSIN (Contributor, Author) commented Nov 26, 2019:

This PR fixes the case when two or more stores respond slowly.
This happens if, for example:
a) the shared S3 storage is degraded -> all stores become slow, or
b) multiple Prometheus instances are deployed on the same host and the host is under high CPU pressure.

This PR also fixes a double timeout in the case of warnings.

Also, we've separated the RT metrics with/without payload:
before this change we saw pretty low RT for all stores, because most queries don't return any data. But if a store returns any payload, the RT is usually much higher.

This change works well in most cases, because if a store responds slowly, it is usually slow starting from the first data chunk.

Example:
You have 3 stores backed by Ceph and 3 fast Prometheus instances/sidecars.
Ceph is degraded and responds very slowly (~infinite time).
Query timeout = 2m, response timeout = 10s.
Before: we would wait at least 10s × 3, because the response timeout timer was started inside MergedSet -> Next().
Now: we wait at least 10s, because the timeout runs in parallel for all stores.
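A minimal sketch of the per-store pattern, not the PR's exact code (recvResponse mirrors the name used in the diffs below; error handling and metrics are omitted):

package store

import (
	"context"
	"time"

	"github.com/thanos-io/thanos/pkg/store/storepb"
)

type recvResponse struct {
	r   *storepb.SeriesResponse
	err error
}

// recvWithTimeout waits for one frame from one store, bounded by a
// per-frame timeout. Every store runs this concurrently in its own
// goroutine, so one slow store costs at most one responseTimeout overall,
// instead of one responseTimeout per slow store as before.
func recvWithTimeout(ctx context.Context, stream storepb.Store_SeriesClient, responseTimeout time.Duration) (*recvResponse, error) {
	frameTimeoutCtx, cancel := context.WithTimeout(ctx, responseTimeout)
	defer cancel()

	rCh := make(chan *recvResponse, 1) // buffered: a late Recv never blocks the sender
	go func() {
		r, err := stream.Recv()
		rCh <- &recvResponse{r: r, err: err}
	}()

	select {
	case <-frameTimeoutCtx.Done():
		return nil, frameTimeoutCtx.Err() // store is too slow: drop it; other stores are unaffected
	case rr := <-rCh:
		return rr, nil
	}
}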

@d-ulyanov (Contributor):
mishka_vodka_balalayka_

@GiedriusS (Member):
It seems like you rebased this on master but recreated all of the commits with yourself as the author. Could we please fix this and leave only the proper commits before a review?

@d-ulyanov (Contributor):
@GiedriusS sure

@IKSIN closed this Nov 27, 2019
@IKSIN mentioned this pull request Nov 27, 2019
@bwplotka (Member) left a comment:

Nice, I think I like it, but it would be nice for the commit title to be open about what we fix, which is:

  • the client timeout was only applied when Next was called for the corresponding store, which might happen only after another slow store.

However, I have some comments and suggestions (:

@@ -41,6 +41,20 @@ type Client interface {
Addr() string
}

const WITH_PAYLOAD_LABEL = "with_payload"
Member:

Go constants should still be camelCase (:
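For instance, a direct rename keeping the same label value:

const withPayloadLabel = "with_payload"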

func newProxyStoreMetrics(reg *prometheus.Registry) *proxyStoreMetrics {
var m proxyStoreMetrics

m.firstRecvDuration = prometheus.NewSummaryVec(prometheus.SummaryOpts{
Member:

Should we really use summaries here? (: Can we switch to histograms, maybe?


m.timeoutRecvCount = prometheus.NewCounterVec(prometheus.CounterOpts{
Name: "thanos_proxy_timeout_recv_count",
Help: "Timeout recv count.",
Member:

Not really helpful help (:


m.firstRecvDuration = prometheus.NewSummaryVec(prometheus.SummaryOpts{
Name: "thanos_proxy_first_recv_duration",
Help: "Time to get first part data from store(ms).",
Member:

Probably should be float64 seconds

Member:

Time to first byte is what we can call it

Help: "Time to get first part data from store(ms).",
Objectives: map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001},
MaxAge: 2 * time.Minute,
}, []string{"store", "payload"})
Member:

We talked about the store label in metrics: it might leak cardinality (changing IP addresses), so I think we have to hook it to the external labels and store TYPE, as we do in storeset.

Contributor Author:

Our thanos-store instances have very large external labels, so external labels are not convenient here. Also, external_labels can appear in a different order from time to time. Maybe we can think of a more convenient way to identify stores? For example, by bucket_name?

Member:

See storeset, we already do that (we also sort it), so if that's the problem we need to fix it everywhere (:

@IKSIN (Contributor, Author) commented Nov 27, 2019:

[screenshot of the querier's external_labels, 2019-11-27]
I can't imagine working comfortably with such external_labels presented as a JSON string =)

Member:

Can you show the querier /metrics page?

In this case, we need to fix this in a separate PR (stores hash), as you already have a metric with this label:

[]string{"external_labels", "store_type"}, nil,
(:

@IKSIN (Contributor, Author) commented Nov 28, 2019:

thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-0",prometheus_replica="A"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-0",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-0",prometheus_replica="B"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-1",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-1",prometheus_replica="B"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-2",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-2",prometheus_replica="B"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-3",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-3",prometheus_replica="B"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-4",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-4",prometheus_replica="B"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-5",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-5",prometheus_replica="B"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-0",prometheus_replica="B"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-1",prometheus_replica="A"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-1",prometheus_replica="B"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-2",prometheus_replica="A"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-2",prometheus_replica="B"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-3",prometheus_replica="A"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-3",prometheus_replica="B"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-4",prometheus_replica="A"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-4",prometheus_replica="B"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-5",prometheus_replica="A"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-5",prometheus_replica="B"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-hardware-0",prometheus_replica="A"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-hardware-0",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-hardware-0",prometheus_replica="B"},{dc="mts",env="production",prometheus_replica="prometheus01z1.h.o3.ru"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-hardware-0",prometheus_replica="B"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-ingress-0",prometheus_replica="A"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-ingress-0",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-ingress-0",prometheus_replica="B"},{dc="m9",env="production",prometheus="monitoring/prometheus-ingress-1",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-ingress-1",prometheus_replica="B"},{dc="m9",env="production",prometheus="monitoring/prometheus-other-0",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-other-0",prometheus_replica="B"},{dc="m9",env="production",prometheus="monitoring/prometheus-system-0",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-system-0",prometheus_replica="B"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-ingress-0",prometheus_replica="B"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-ingress-1",prometheus_replica="A"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-ingress-1",prometheus_replica="B"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-other-0",prometheus_replica="A"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-other-0",prometheus_replica="B"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-system-0",prometheus_replica="A"}"} 1
thanos_store_node_info{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-system-0",prometheus_replica="B"}"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-0",prometheus_replica="A"}",store_type="sidecar"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-0",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-0",prometheus_replica="B"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-1",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-1",prometheus_replica="B"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-2",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-2",prometheus_replica="B"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-3",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-3",prometheus_replica="B"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-4",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-4",prometheus_replica="B"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-5",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-apps-5",prometheus_replica="B"}",store_type="store"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-0",prometheus_replica="B"}",store_type="sidecar"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-1",prometheus_replica="A"}",store_type="sidecar"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-1",prometheus_replica="B"}",store_type="sidecar"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-2",prometheus_replica="A"}",store_type="sidecar"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-2",prometheus_replica="B"}",store_type="sidecar"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-3",prometheus_replica="A"}",store_type="sidecar"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-3",prometheus_replica="B"}",store_type="sidecar"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-4",prometheus_replica="A"}",store_type="sidecar"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-4",prometheus_replica="B"}",store_type="sidecar"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-5",prometheus_replica="A"}",store_type="sidecar"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-apps-5",prometheus_replica="B"}",store_type="sidecar"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-hardware-0",prometheus_replica="A"}",store_type="sidecar"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-hardware-0",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-hardware-0",prometheus_replica="B"},{dc="mts",env="production",prometheus_replica="prometheus01z1.h.o3.ru"}",store_type="store"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-hardware-0",prometheus_replica="B"}",store_type="sidecar"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-ingress-0",prometheus_replica="A"}",store_type="sidecar"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-ingress-0",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-ingress-0",prometheus_replica="B"},{dc="m9",env="production",prometheus="monitoring/prometheus-ingress-1",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-ingress-1",prometheus_replica="B"},{dc="m9",env="production",prometheus="monitoring/prometheus-other-0",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-other-0",prometheus_replica="B"},{dc="m9",env="production",prometheus="monitoring/prometheus-system-0",prometheus_replica="A"},{dc="m9",env="production",prometheus="monitoring/prometheus-system-0",prometheus_replica="B"}",store_type="store"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-ingress-0",prometheus_replica="B"}",store_type="sidecar"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-ingress-1",prometheus_replica="A"}",store_type="sidecar"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-ingress-1",prometheus_replica="B"}",store_type="sidecar"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-other-0",prometheus_replica="A"}",store_type="sidecar"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-other-0",prometheus_replica="B"}",store_type="sidecar"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-system-0",prometheus_replica="A"}",store_type="sidecar"} 1
thanos_store_nodes_grpc_connections{external_labels="{dc="m9",env="production",prometheus="monitoring/prometheus-system-0",prometheus_replica="B"}",store_type="sidecar"} 1

These are all the metrics with external_labels. It looks ugly and unreadable, and the JSON string is also hard to process in Grafana (e.g. for naming graphs).

Member:

It's crazy!

I still think this should be resolved outside of this PR (:

Again, I think the address might add some cardinality, but I am tempted to allow it under some flag...

if r.Size() > 0 {
metrics.withPayload.Observe(time.Since(t0).Seconds())
} else {
metrics.withoutPayload.Observe(time.Since(t0).Seconds())
Member:

Can we just skip the recv without payload? We care about time to first byte IMO

Contributor Author:

We observe strange data from these metrics (we set a 10s timeout):
[screenshot of the metric, 2019-11-27]
Therefore, we want to keep the without_payload metrics to investigate problems in the future.

Member:

Wait, this is because it's an EOF, which we should filter out. It is the normal response when the server closes the stream. It is also received when the context is canceled or the timeout is triggered; that's why you see 10s, I think.

Contributor Author:

Sorry, I don't understand... I think we should see a minimal RT for the first byte without payload...

Member:

No, EOF is when the server closes a stream, but we account for that as a first byte without payload; I believe that's wrong.

Contributor Author:

I think that without_payload == EOF on the first byte, and it's a representative metric...

@GiedriusS (Member) commented Dec 15, 2019:

Could you add a check for err here before sending it back? It is probably equal to io.EOF in such a case, and it should be filtered out. In fact, on line 448 there is: if rr.err == io.EOF { ... }. We probably shouldn't check for this in two places either.

Contributor Author:

I don't see a reason to check for specific errors here. All errors are processed outside of this goroutine; this goroutine is only needed to fetch data asynchronously.

Member:

I also think we don't need the without_payload metrics; let's just have time_to_first_byte metrics. What are you going to do differently when you see without_payload vs with_payload metrics increasing?

Member:

I agree here. This metric roughly says "time until a node tells us that it doesn't have any metrics". In what cases do you think it could be useful? payload is a bit misleading as well, IMHO. Payload is "the actual information or message in transmitted data", and even if we get an EOF it is still some kind of data. Even if we keep this around, I'd suggest renaming it to perhaps response_kind, which could be no_metrics or metrics.
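A minimal sketch of the filtering being discussed (t0, numResponses, and the single time-to-first-byte metric are assumptions for illustration, not the merged code):

r, err := s.stream.Recv()
if err == io.EOF {
	// Server closed the stream: this is not a data frame, so don't
	// record time-to-first-byte for it (or label it "without payload").
} else if err == nil && numResponses == 0 {
	s.metrics.timeToFirstByte.Observe(time.Since(t0).Seconds())
}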

ctx, cancel = context.WithTimeout(ctx, s.responseTimeout)
defer cancel()
}
rCh := make(chan *recvResponse, 1)
Member:

Instead of allocating this every time we can reuse this channel

Contributor Author:

On timeout, we still need the last Recv to be able to send its (now unused) data frame before the channel can be closed, so we can't reuse the first channel.

for {
r, err := s.stream.Recv()
var cancel context.CancelFunc
if s.responseTimeout != 0 {
Member:

Just curious: can we have this timeout for a whole response? Do we really need it per frame? Plus we might allocate a lot here so canceling per stream frame would be nice.

Contributor Author:

If we want to use a timeout for the whole response:

  1. We would have to set such a timeout to roughly the query timeout; in that case we lose the opportunity to quickly cancel a response from a slow store (for example).
  2. We would need to increase the channel buffer (for truly parallel reads from the stores).

Member:

Yes, you are right, it is all connected and interdependent.

I actually think we should increase the buffer at some point, to some degree, but not here. Let's keep it per frame.

Member:

It would be nice to adjust the timeout naming to mention frame, then. Right now it is:

If a Store doesn't send any data in this specified duration then a Store will be ignored and partial data will be returned if it's enabled. 0 disables timeout.

Maybe change it to:

If a Store doesn't send any frame in this specified duration then a Store will be ignored and partial data will be returned if it's enabled. 0 disables timeout.

Contributor Author:

I tried increasing the buffer to fit the whole response.
Increasing the buffer only affected memory usage; it did not improve RT, regardless of slow/fast stores )

Member:

In my opinion, this memory will be buffered at some point anyway, but let's not introduce that here. I think we all like this PR with its current logic: making sure we time out precisely on the first byte from the slow store.

case rr = <-rCh:
}
close(rCh)
err := rr.err
Member:

Can we use those directly? Do we need those local variables?

[resolved discussion on pkg/store/proxy.go]
@bwplotka (Member):
I think I addressed all the comments (:

@bwplotka (Member) commented Dec 3, 2019:

I think this PR makes sense, just some suggestions, any movement here? (:

@d-ulyanov (Contributor):
I suppose we'll continue with this PR next week; @IKSIN is on vacation currently :)

@IKSIN force-pushed the slow_tests branch 2 times, most recently from 780cc1d to 0a287a3 (December 13, 2019 10:49)
@IKSIN requested a review from bwplotka (December 13, 2019 13:05)
@IKSIN (Contributor, Author) commented Dec 13, 2019:

PR updated

queryTimeoutCount: s.metrics.queryTimeoutCount.WithLabelValues(st.LabelSetsString(), storeTypeStr),
}

seriesSet = append(seriesSet, startStreamSeriesSet(seriesCtx, s.logger, closeSeries,
Member:

This should be under the comment, no? (:

Contributor Author:

No, it's not supposed to be commented out. Or do you mean adding a comment?

Member:

There is a comment on line 317:

// Schedule streamSeriesSet that translates gRPC streamed response
...

Could this be moved under that comment, or removed?

Contributor Author:

Moved under the comment

[resolved discussion on pkg/store/proxy.go (outdated)]
frameTimeoutCtx := context.Background()
var cancel context.CancelFunc
if s.responseTimeout != 0 {
frameTimeoutCtx, cancel = context.WithTimeout(frameTimeoutCtx, s.responseTimeout)
Member:

Could we move the construction of this context out of this function? It would probably make things easier to understand. WDYT, @bwplotka? This part is really becoming complex :(

Member:

What are your thoughts on this, @IKSIN ?

Contributor Author:

Moved it to a function.

[resolved discussion on pkg/store/proxy.go (outdated)]
@IKSIN (Contributor, Author) commented Jan 13, 2020:

updated

@IKSIN force-pushed the slow_tests branch 2 times, most recently from cca28ff to b5313d5 (February 3, 2020 10:29)
@IKSIN requested a review from povilasv (February 3, 2020 11:18)
@IKSIN (Contributor, Author) commented Feb 6, 2020:

@povilasv @bwplotka Any comments?

@povilasv (Member) left a comment:

It looks OK to me. Thanks for the work 🥇 Let's wait for the other maintainers' opinions.

@bwplotka (Member) left a comment:

Super nice, I love it!

I have some suggestions though, but it looks almost ready to go ❤️
Thanks!

BTW: It was super nice to see you at FOSDEM! (: 👍

[photo]

[resolved discussion on pkg/store/proxy.go]
err error
}

func startFrameCtx(responseTimeout time.Duration) (context.Context, context.CancelFunc) {
Member:

Suggested change
func startFrameCtx(responseTimeout time.Duration) (context.Context, context.CancelFunc) {
func frameCtx(responseTimeout time.Duration) (context.Context, context.CancelFunc) {

@@ -384,14 +394,34 @@ func startStreamSeriesSet(
}
}()
for {
r, err := s.stream.Recv()
frameTimeoutCtx, cancel := startFrameCtx(s.responseTimeout)
if cancel != nil {
Member:

Just defer cancel always, and return func() {} instead of nil.
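Combining that with the frameCtx rename suggested above, the helper might look like this (a sketch, not the merged code):

func frameCtx(responseTimeout time.Duration) (context.Context, context.CancelFunc) {
	if responseTimeout != 0 {
		return context.WithTimeout(context.Background(), responseTimeout)
	}
	return context.Background(), func() {} // no-op cancel: always safe to call
}

Callers can then write frameTimeoutCtx, cancel := frameCtx(s.responseTimeout) and call cancel unconditionally, with no nil check.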

}
rCh := make(chan *recvResponse, 1)
var rr *recvResponse
go func() {
Member:

So we care about the first frame, right? Or is the timeout for all frames?

Member:

In any case, I think we need to start this goroutine before the for loop, and have a for loop in it as well.

This will make sure that we only have 2 goroutines running: one waiting for recv or context cancellation, and a second one for reading.

The current implementation will constantly allocate a new channel and goroutine. For 1000 frames × 100 concurrent queries this might matter.

I wish we had benchmarks for the querier ))) like we do for Store now:

func BenchmarkSeries(b *testing.B) {

Actually, I added an issue: #2105
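A sketch of that restructuring (the done channel, which the consumer closes when it gives up, is an assumption for illustration):

rCh := make(chan *recvResponse, 1)
go func() {
	defer close(rCh)
	for {
		r, err := s.stream.Recv()
		select {
		case rCh <- &recvResponse{r: r, err: err}:
		case <-done:
			return // consumer timed out or was cancelled; stop receiving
		}
		if err != nil {
			return // io.EOF or a real error ends the stream
		}
	}
}()

This keeps one long-lived goroutine and one channel per store, instead of allocating one of each per frame.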

func (s *streamSeriesSet) timeoutHandling(isQueryTimeout bool, ctx context.Context) {
var err error
if isQueryTimeout {
err = errors.Wrap(ctx.Err(), fmt.Sprintf("failed to receive any data from %s", s.name))
Member:

What about just passing the error to propagate here instead? And renaming the method to handleErr?

@IKSIN force-pushed the slow_tests branch 4 times, most recently from b6b13a1 to d850c92 (February 13, 2020 16:10)
Aleskey Sin and others added 3 commits February 13, 2020 19:11
Signed-off-by: Aleskey Sin <[email protected]>
Signed-off-by: Aleskey Sin <[email protected]>
Signed-off-by: Aleskey Sin <[email protected]>
@IKSIN (Contributor, Author) commented Feb 13, 2020:

@bwplotka PR updated) w/o benchmarks(

@IKSIN requested a review from bwplotka (February 13, 2020 18:28)
[resolved discussion on pkg/store/proxy.go (outdated)]
Signed-off-by: Aleskey Sin <[email protected]>
@IKSIN requested a review from povilasv (February 14, 2020 09:01)
@bwplotka (Member):
Looking!

@povilasv (Member) left a comment:

👍

@bwplotka (Member) left a comment:

LGTM, just small nits.

@@ -383,78 +393,79 @@ func startStreamSeriesSet(
emptyStreamResponses.Inc()
Member:

I don't think we want to increment this if the context was actually cancelled... right?

Contributor Author:

Yes; that's why numResponses++ happens only when a recv is processed.

select {
case <-ctx.Done():
close(done)
err = errors.Wrap(ctx.Err(), fmt.Sprintf("failed to receive any data from %s", s.name))
Member:

Why a separate err variable?

Also, can we use Wrapf instead of Sprintf?
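For reference, errors.Wrapf (from github.com/pkg/errors, which this code already uses) folds the formatting into the wrap:

err := errors.Wrapf(ctx.Err(), "failed to receive any data from %s", s.name)
// instead of:
// err := errors.Wrap(ctx.Err(), fmt.Sprintf("failed to receive any data from %s", s.name))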

return
case <-frameTimeoutCtx.Done():
close(done)
err = errors.Wrap(frameTimeoutCtx.Err(), fmt.Sprintf("failed to receive any data in %s from %s", s.responseTimeout.String(), s.name))
Member:

ditto

return
}

if rr.err != nil {
wrapErr := errors.Wrapf(rr.err, "receive series from %s", s.name)
Member:

ditto

Member:

Why another var? (: We can inline this... not a blocker though.

if rr.err != nil {
wrapErr := errors.Wrapf(rr.err, "receive series from %s", s.name)
s.handleErr(wrapErr)
close(done)
Member:

can we close done in handleErr?
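A sketch of what that might look like (the closeSeries, partialResponse, logger, and errMtx fields are assumed from context, not copied from the merged code):

func (s *streamSeriesSet) handleErr(err error, done chan struct{}) {
	defer close(done) // every error path closes done exactly once
	s.closeSeries()

	if s.partialResponse {
		level.Warn(s.logger).Log("msg", "returning partial response", "err", err)
		return
	}

	s.errMtx.Lock()
	s.err = err
	s.errMtx.Unlock()
}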

Signed-off-by: Aleskey Sin <[email protected]>
@IKSIN (Contributor, Author) commented Feb 18, 2020:

@bwplotka Updated!)

@bwplotka (Member) left a comment:

Let's go! 💪

Thank you for this!


@bwplotka merged commit a354bfb into thanos-io:master on Feb 18, 2020
@d-ulyanov (Contributor):
Hooray! :)
Thank you for the review, @bwplotka @GiedriusS @povilasv!
🍻

vankop pushed a commit to monitoring-tools/thanos that referenced this pull request Feb 28, 2020
* Improve proxyStore timeouts.

Signed-off-by: Aleskey Sin <[email protected]>

* Fix send to closed channel.

Signed-off-by: Aleskey Sin <[email protected]>

* Update for PR.

Signed-off-by: Aleskey Sin <[email protected]>

* Fix recv done channel.

Signed-off-by: Aleskey Sin <[email protected]>

* PR fixes.

Signed-off-by: Aleskey Sin <[email protected]>