distsqlrun: remove queuing behavior from flowScheduler #34430
Conversation
I agree that queueing at this layer is a bad idea, we don't want each node to queue flows for the same query independently. Ideally we'd keep track of relevant stats (eg # of flows running on each node) and queue the entire query if needed before setting up any flows.
Reviewable status: complete! 0 of 0 LGTMs obtained (waiting on @ajwerner and @asubiotto)
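For concreteness, here is a rough, self-contained sketch of that gateway-level direction. Everything in it (`queryAdmitter`, `nodeID`, the per-node cap) is a hypothetical illustration, not anything in this PR: the gateway would consult approximate per-node running-flow counts and decide about the whole query before any flows are set up.

```go
package admission

import "sync"

// nodeID is a stand-in for the cluster's node identifier type.
type nodeID int

// queryAdmitter is a hypothetical gateway-side gate: it admits or holds back
// an entire query, based on rough per-node running-flow counts, before any
// per-node flow setup happens.
type queryAdmitter struct {
	mu         sync.Mutex
	perNode    map[nodeID]int // flows believed to be running on each node
	maxPerNode int
}

func newQueryAdmitter(maxPerNode int) *queryAdmitter {
	return &queryAdmitter{perNode: make(map[nodeID]int), maxPerNode: maxPerNode}
}

// tryAdmit reports whether every node the query needs has spare capacity and,
// if so, reserves a slot on each. A real implementation would queue the query
// (or return a retryable error) rather than simply refusing.
func (a *queryAdmitter) tryAdmit(nodes []nodeID) bool {
	a.mu.Lock()
	defer a.mu.Unlock()
	for _, n := range nodes {
		if a.perNode[n] >= a.maxPerNode {
			return false
		}
	}
	for _, n := range nodes {
		a.perNode[n]++
	}
	return true
}

// release gives the slots back once the query's flows have finished.
func (a *queryAdmitter) release(nodes []nodeID) {
	a.mu.Lock()
	defer a.mu.Unlock()
	for _, n := range nodes {
		a.perNode[n]--
	}
}
```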
pkg/sql/distsqlrun/flow_scheduler.go, line 126 at r1 (raw file):
}
log.VEventf(ctx, 1, "flow scheduler enqueuing flow %s to be run later", f.id)
fs.metrics.FlowsQueued.Inc(1)
Maybe we want a metric of how many flows errored out this way
Reviewable status: complete! 0 of 0 LGTMs obtained (waiting on @asubiotto and @RaduBerinde)
pkg/sql/distsqlrun/flow_scheduler.go, line 126 at r1 (raw file):
Previously, RaduBerinde wrote…
Maybe we want a metric of how many flows errored out this way
Definitely a good thing to track, will add.
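For concreteness, a counter along these lines might look roughly like the following. This is a sketch only: the metric name, help text, and variable names are assumptions rather than what the PR actually registers; it just assumes the `util/metric` package's `NewCounter`/`Inc` interface already used by the existing FlowsQueued counter.

```go
package distsqlrun

import "github.com/cockroachdb/cockroach/pkg/util/metric"

// metaFlowsRejected is placeholder metadata for a counter of flows that
// errored out because max_running_flows was exceeded.
var metaFlowsRejected = metric.Metadata{
	Name: "sql.distsql.flows.rejected",
	Help: "Flows rejected because max_running_flows was exceeded",
}

// flowsRejected would live next to the existing FlowsQueued counter on the
// scheduler's metrics struct; it's a package-level variable here only to keep
// the sketch self-contained.
var flowsRejected = metric.NewCounter(metaFlowsRejected)

// onFlowRejected is the hook the scheduler would call on the path that now
// returns an error instead of enqueueing the flow.
func onFlowRejected() {
	flowsRejected.Inc(1)
}
```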
@RaduBerinde, good suggestion, but I think we need to clarify our desired behavior in these scenarios. If a user sends a query and we cannot run it due to overload of some sort, should we immediately return an error, or do we still want to queue the query on the gateway? Queuing on the gateway would get rid of inbound stream connection errors as well, but might still have undesirable behavior in the form of long wait times (possibly bounded, though, if we do introduce a notion of a "full" query queue).

@ajwerner, I think this change is definitely a good place to start; thank you for taking the initiative. I think that, apart from unit tests, we might want to add some integration tests which exercise scenarios in which a […]
I totally buy the idea that a queuing or backpressure mechanism, combined with some retry logic in the face of failure due to a resource constraint, would be a better and more robust solution. My feeling is that this change shouldn't be blocked on the more complete solution, though it is certainly worth verifying that it doesn't break existing workloads that may have relied on the queuing behavior. I've been giving some thought to admission control, backpressure, and queueing lately, and this is an obvious area for improvement, both at the top level of the query graph and at the leaves here. For now this is a generally inadequate stopgap that hopefully has easier-to-understand failure properties than the previous inadequate stopgap.
@asubiotto, any chance I could bother you some time this week for advice on how to contrive such flows in a testing setting? As of now I've added the metric, but I haven't gotten around to the testing. I'm digging into the flows and all that they entail; it looks like they aren't super straightforward to mock out, but I may be missing something.
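For what it's worth, one common way to contrive long-running flows in a test is to have each one block on a channel so it pins a scheduler slot, then assert that the submission over the limit fails immediately. A self-contained sketch of that pattern (toyScheduler and everything in it is made up and has no relation to the real flowScheduler internals):

```go
package scheduler_test

import (
	"errors"
	"sync/atomic"
	"testing"
)

// toyScheduler stands in for the flow scheduler; it only models
// "run up to max concurrently, reject the rest".
type toyScheduler struct {
	max     int64
	running int64
}

var errTooManyFlows = errors.New("too many running flows")

func (s *toyScheduler) schedule(run func()) error {
	if atomic.AddInt64(&s.running, 1) > s.max {
		atomic.AddInt64(&s.running, -1)
		return errTooManyFlows
	}
	go func() {
		defer atomic.AddInt64(&s.running, -1)
		run()
	}()
	return nil
}

// TestRejectsWhenFull contrives "long-running flows" by blocking each one on a
// channel, then checks that the submission over the limit fails immediately
// instead of being queued.
func TestRejectsWhenFull(t *testing.T) {
	s := &toyScheduler{max: 2}
	block := make(chan struct{})
	for i := 0; i < 2; i++ {
		if err := s.schedule(func() { <-block }); err != nil {
			t.Fatalf("unexpected error: %v", err)
		}
	}
	if err := s.schedule(func() {}); !errors.Is(err, errTooManyFlows) {
		t.Fatalf("expected rejection, got %v", err)
	}
	close(block) // release the blocked "flows"
}
```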
Prior to this PR, when SetupFlowRequests in excess of `max_running_flows` were received, they would be queued for processing. This queueing was unbounded and had no notion of a "full" queue. While the queue was well intentioned and may have led to desirable behavior in bursty, short-lived workloads, with high volumes of longer-running flows the system would observe arbitrarily long queuing delays. These delays would ultimately propagate to the user in the form of a timeout while setting up the other side of the connection, an error that was both delayed and difficult for customers to interpret.

The immediate action is to remove the queuing behavior and increase the flow limit. It is worth noting that the new limit is still arbitrary and unjustified except to say that it's larger than the previous limit by a factor of 2. After this PR, when the `max_running_flows` limit is hit, clients will receive a more meaningful error sooner, which hopefully will guide them toward adjusting the setting when appropriate or scaling out.

This commit could use a unit test to ensure that we do indeed see the newly introduced error, and I intend to write one soon. The machinery around setting up a unit test seemed much more daunting than this change, but I wanted to get this out to start a discussion about whether this is the right thing to do and, if not, what better approaches y'all might see.

Fixes cockroachdb#34229.

Release note: None
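A minimal, self-contained sketch of the shape of the change described above; none of these identifiers are the ones in the actual diff, and the real scheduler does considerably more than this:

```go
package flowsketch

import (
	"context"
	"errors"
	"sync"
)

// errTooManyFlows is what callers now see immediately, instead of their
// SetupFlowRequest sitting in an unbounded queue until the far side of the
// connection times out.
var errTooManyFlows = errors.New(
	"too many running flows; consider raising max_running_flows or adding nodes")

// scheduler models the post-PR behavior: a hard cap and no queue.
type scheduler struct {
	mu         sync.Mutex
	numRunning int
	maxRunning int // corresponds to the max_running_flows setting
}

// ScheduleFlow either starts the flow or fails fast. The pre-PR version would
// have appended the flow to an unbounded queue at the point where this
// version returns an error.
func (s *scheduler) ScheduleFlow(ctx context.Context, run func(context.Context)) error {
	s.mu.Lock()
	if s.numRunning >= s.maxRunning {
		s.mu.Unlock()
		return errTooManyFlows
	}
	s.numRunning++
	s.mu.Unlock()

	go func() {
		defer func() {
			s.mu.Lock()
			s.numRunning--
			s.mu.Unlock()
		}()
		run(ctx)
	}()
	return nil
}
```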
Force-pushed from 1ca6cc2 to cb3612b
This happened at some point.