Commit
61207: cli: don't refresh prompt on line continuations r=jordanlewis a=jordanlewis

Previously, the CLI would recalculate its prompt (including any required network roundtrips to determine transaction status and current database) on every newline, even if the newline came in the middle of a SQL statement. This was wasteful, of course, but it is also very confusing for users entering a large multi-line statement via paste: the prompt refreshes are deferred until Enter is pressed for the first time, and there is a lot of unexplained latency during the statement that doesn't reflect the actual SQL being executed. It also doesn't match the reported execution time, leading to mass confusion and sadness.

Now, we don't bother refreshing the prompt during line continuations.

Closes #61095

Release note (cli change): optimize handling of multi-line SQL strings to avoid unwanted extra server roundtrips.

Release justification: bug fixes and low-risk updates to new functionality

61305: kvserver: add closedts side-transport consumer r=andreimatei a=andreimatei

Add the consumer of closed timestamps communicated by the side transport (i.e. the gRPC server for our new push-based streaming protocol). This side-transport consumer accumulates closed timestamps communicated to it by other nodes (the leaseholders of the respective ranges). Its state is queried whenever a range needs a higher closed timestamp than what it has locally in the Replica state, at which point the Replica's state is lazily updated.

Release note: None

Release justification: Needed for GLOBAL tables.
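The consumer described above is easiest to picture as a small piece of per-node bookkeeping that is written to by the push-based stream and read lazily by replicas. Below is a minimal Go sketch of that idea, not the actual kvserver code: the names (`consumer`, `onUpdate`, `closedTimestamp`) and the flat per-node map are illustrative assumptions, and the real implementation is considerably more involved.

```go
// Package sidetransport sketches the consumer-side bookkeeping described
// above. This is NOT the actual kvserver implementation: the names and the
// flat per-node map are illustrative assumptions.
package sidetransport

import (
	"sync"
	"time"
)

type nodeID int

// consumer accumulates the latest closed timestamp pushed to this node by
// each remote leaseholder node over the side-transport stream.
type consumer struct {
	mu     sync.Mutex
	closed map[nodeID]time.Time // highest closed timestamp received per node
}

func newConsumer() *consumer {
	return &consumer{closed: make(map[nodeID]time.Time)}
}

// onUpdate records a closed timestamp pushed by a remote node; updates only
// ever move the known closed timestamp forward.
func (c *consumer) onUpdate(from nodeID, ts time.Time) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if ts.After(c.closed[from]) {
		c.closed[from] = ts
	}
}

// closedTimestamp is what a replica consults lazily, only when it needs a
// higher closed timestamp than the one in its local state.
func (c *consumer) closedTimestamp(leaseholder nodeID) time.Time {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.closed[leaseholder]
}
```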
61386: kv: configure leading closed timestamp target for global_read ranges r=nvanbenschoten a=nvanbenschoten

Informs #59680.
Informs #52745.

This commit updates `closedts.TargetForPolicy` to calculate a target closed timestamp that leads present time for ranges with the LEAD_FOR_GLOBAL_READS closed timestamp policy. This is needed for non-blocking transactions, which require ranges to close time in the future.

TargetForPolicy's LEAD_FOR_GLOBAL_READS calculation is more complex than its LAG_BY_CLUSTER_SETTING calculation. Instead of the policy defining an offset from the publisher's perspective, the policy defines a goal from the consumer's perspective - the goal being that present-time reads (with a possible uncertainty interval) can be served from all followers. To accomplish this, we must work backwards to establish a lead time at which to publish closed timestamps. The calculation looks something like the following:

```
// this should be sufficient for any present-time transaction,
// because its global uncertainty limit should be <= this time.
// For more, see (*Transaction).RequiredFrontier.
closed_ts_at_follower = now + max_offset

// the sender must account for the time it takes to propagate a
// closed timestamp update to its followers.
closed_ts_at_sender = closed_ts_at_follower + propagation_time

// closed timestamps propagate in two ways. Both need to make it to
// followers in time.
propagation_time = max(raft_propagation_time, side_propagation_time)

// raft propagation takes 3 network hops to go from a leader proposing
// a write (with a closed timestamp update) to the write being applied.
// 1. leader sends MsgProp with entry
// 2. followers send MsgPropResp with vote
// 3. leader sends MsgProp with higher commit index
//
// we also add on a small bit of overhead for request evaluation, log
// sync, and state machine apply latency.
raft_propagation_time = max_network_rtt * 1.5 + raft_overhead

// side-transport propagation takes 1 network hop, as there is no voting.
// However, it is delayed by the full side_transport_close_interval in
// the worst case.
side_propagation_time = max_network_rtt * 0.5 + side_transport_close_interval

// put together, we get the following result
closed_ts_at_sender = now + max_offset + max(
    max_network_rtt * 1.5 + raft_overhead,
    max_network_rtt * 0.5 + side_transport_close_interval,
)
```

While writing this, I explored what it would take to use dynamic network latency measurements in this calculation to complete #59680. The code for that wasn't too bad, but it brought up a number of questions, including how far into the tail we care about and whether we should place upper and lower bounds on this value. To avoid needing to answer these questions immediately, the commit hardcodes a maximum network RTT of 150ms, which should be an overestimate for almost any cluster we expect to run on.

The commit also adds a new `kv.closed_timestamp.lead_for_global_reads_override` cluster setting which, if nonzero, overrides the lead time that global_read ranges use to publish closed timestamps. The cluster setting is hidden, but should provide an escape hatch for cases where we get the calculation (especially once it becomes dynamic) wrong.

Release justification: needed for new functionality
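For concreteness, here is a small, self-contained Go sketch of the arithmetic in the block above. It is not the actual `closedts.TargetForPolicy` implementation: the 150ms maximum network RTT is taken from this commit message, while `raftOverhead`, `sideTransportCloseInterval`, and the 500ms `maxOffset` in the example are placeholder values chosen purely for illustration.

```go
// A rough illustration of the lead-time calculation for LEAD_FOR_GLOBAL_READS
// ranges. Not the real closedts.TargetForPolicy code; constants other than the
// 150ms RTT are placeholder assumptions.
package main

import (
	"fmt"
	"time"
)

func leadTargetForGlobalReads(now time.Time, maxOffset time.Duration) time.Time {
	const (
		maxNetworkRTT              = 150 * time.Millisecond // hardcoded overestimate, per the commit
		raftOverhead               = 20 * time.Millisecond  // placeholder: evaluation + log sync + apply
		sideTransportCloseInterval = 200 * time.Millisecond // placeholder: side-transport close interval
	)

	// Raft propagation: 1.5 network round-trips plus fixed overhead.
	raftPropagation := maxNetworkRTT*3/2 + raftOverhead
	// Side-transport propagation: half a round-trip plus, in the worst case,
	// one full close interval of delay.
	sidePropagation := maxNetworkRTT/2 + sideTransportCloseInterval

	propagation := raftPropagation
	if sidePropagation > propagation {
		propagation = sidePropagation
	}

	// closed_ts_at_sender = now + max_offset + max(raft, side) propagation.
	return now.Add(maxOffset + propagation)
}

func main() {
	target := leadTargetForGlobalReads(time.Now(), 500*time.Millisecond)
	fmt.Println("publish closed timestamps at:", target)
}
```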
61499: sql: Require FORCE to modify protected zone config fields r=otan a=ajstorm

With the introduction of the multi-region simplification in 21.1, there is a set of fields in the zone configurations which are protected by the system. These fields are transparently modified by the system when certain multi-region abstractions are used (e.g. when a region is added to the database, the constraints field is modified to add the new region). Due to the protected status of these fields, we want to prevent users from setting them unless they are aware of the impact of doing so (namely, that the fields could be overwritten by the system and that the changes could result in sub-optimal performance). To make this explicit, we block modifications to these fields; if users wish to modify them anyway, they must provide a FORCE argument along with the modification statement. This applies both to setting the field in the zone configuration and to any eventual multi-region statement which follows and would overwrite the user's zone configuration update.

Release justification: Prevent bad user experience with new feature.

Release note (sql change): Updates to certain fields in the zone configurations are blocked for multi-region enabled databases. This block can be overridden through the use of the FORCE keyword on the blocked statement.

Resolves #60447

61556: sql: read db descriptor from store when constructing mr zone configs r=arulajmani a=arulajmani

Previously, when constructing multi-region zone configs for a database in the type schema changer, we did so using a leased version of the database descriptor. This meant we could be constructing a stale zone configuration, which manifested itself in some CI failures. This patch fixes the issue by always reading the database descriptor from the store when constructing zone configurations in the type schema changer. Previously, this failed consistently under stress in ~30s; with the fix, it has been running perfectly for the last 10 minutes.

Fixes: #61320

Release justification: bug fix to new functionality

Release note: None

Co-authored-by: Jordan Lewis <[email protected]>
Co-authored-by: Andrei Matei <[email protected]>
Co-authored-by: Nathan VanBenschoten <[email protected]>
Co-authored-by: Adam Storm <[email protected]>
Co-authored-by: arulajmani <[email protected]>