ui: fix inflight request replacement, cleanup database details api #99095
Conversation
Force-pushed from bb78837 to c3aa776
Thanks for fixing this quickly!
Mind adding a loom to show it working?
Assuming all tests pass and you add the loom, LGTM.
Reviewed 6 of 6 files at r1, all commit messages.
Reviewable status: complete! 1 of 0 LGTMs obtained (waiting on @xinhaoz)
Changes to allow in-flight request replacement in cockroachdb#98331 caused breaking changes to `KeyedCachedDataReducer`s using the sql api. A `KeyedCachedDataReducer` using the sql api would issue its requests to the same endpoint, and despite the added `allowReplacementRequests` flag being `false`, in-flight requests would be replaced (specifically, when fetching initial state). This would prevent all requests but the last one from being stored in the reducer state and leave them in a permanent `inFlight` status. Consequently, our pages would break, as the `inFlight` status would prevent us from ever fetching data, putting these pages in a permanent loading state.

This PR adds checks to ensure `allowReplacementRequests` is enabled when we receive a response, before deciding whether to omit it from the reducer state.

It's important to note that this is still not an ideal solution for `KeyedCachedDataReducer`s, should we ever want to replace in-flight requests for one. The checks to replace an in-flight request still occur at the reducer level, whereas in a `KeyedCachedDataReducer` we'd like them to occur at the state level (i.e. instead of replacement checks for all requests across the reducer, we should check for requests that impact a specific key's state). This way we scope replacements to each key in the reducer, allowing us to fetch data for each key independently (which is the intended functionality). Enabling this for a `KeyedCachedDataReducer` without making this change will cause the breaking behaviour above.

This PR also includes some cleanup to the database details API, namely:
- camel-casing response fields
- encapsulating the database span stats response in its own object (instead of across multiple objects); this helps scope any errors deriving from the query to the single stats object

Release note (bug fix): Fixed replacement of in-flight requests for `KeyedCachedDataReducer`s to prevent permanent loading on requests stuck in an `inFlight` status.
https://cockroachlabs.atlassian.net/browse/DOC-1355
Informs: cockroachdb#33316
Epic: CRDB-8035
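The fix described above can be sketched roughly as follows. This is a minimal illustration, not the actual reducer code; the interface and function names are simplified stand-ins for the real `CachedDataReducer` implementation in the db-console codebase:

```typescript
// Simplified sketch of the response-handling check described in the PR.
// All names here are illustrative, not the actual cockroachdb identifiers.

interface CachedDataReducerState<T> {
  data?: T;
  inFlight: boolean;
  requestedAt?: number; // timestamp of the request this state is waiting on
}

function receiveData<T>(
  state: CachedDataReducerState<T>,
  payload: { data: T; requestedAt?: number },
  allowReplacementRequests: boolean,
): CachedDataReducerState<T> {
  // Only treat a response as "stale" when replacement is explicitly enabled.
  // Before the fix, this timestamp comparison ran unconditionally, so keyed
  // reducers (which share one pending timestamp across all keys) dropped
  // every response but the last, leaving entries stuck in `inFlight`.
  if (allowReplacementRequests && payload.requestedAt !== state.requestedAt) {
    return state; // response belongs to a replaced request: keep waiting
  }
  return { ...state, data: payload.data, inFlight: false };
}
```

With `allowReplacementRequests` disabled, every response is accepted and `inFlight` clears, which is the behaviour the keyed db-details reducers rely on.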
Force-pushed from c3aa776 to 916b355
Added before/after demos :)
Reviewable status: complete! 0 of 0 LGTMs obtained (and 1 stale) (waiting on @maryliag and @xinhaoz)
Ah, this is my bad -- this was due to my incorrect assumption that a CachedDataReducer would manage one CachedDataReducerState; I didn't realize there was a single CachedDataReducer managing all requests in the keyed one. It looks like this affects the db pages since we fire a lot of requests there in parallel that each take a while to resolve. So the problem is that we always compare the request time of the current in-flight request with the one managed by the CachedDataReducer, and since there's one CachedDataReducer issuing all requests in the keyed one, we always rewrite the pending request time for each new request. So multiple requests managed by one KeyedCachedDataReducer will all use the same pending request time. There are a couple of different things we could do as alternatives to the fix here:
WDYT? I'm open to merging this as a quick fix and investing some more time in #2 with you when we actually need to replace requests in keyed data requests.
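The per-key approach floated here (option 2) could look something like the following. This is a hypothetical sketch of the idea only; the class and method names are invented for illustration and do not exist in the codebase:

```typescript
// Hypothetical sketch of "option 2": track the pending request timestamp
// per key rather than once per reducer, so replacement checks are scoped
// to each key's state instead of to the whole KeyedCachedDataReducer.

class KeyedPendingTracker {
  private pending = new Map<string, number>();

  request(key: string, requestedAt: number): void {
    // Each key records its own pending timestamp, so parallel requests
    // for different keys (e.g. one per database) no longer overwrite
    // each other's pending request time.
    this.pending.set(key, requestedAt);
  }

  isStale(key: string, requestedAt: number): boolean {
    // A response is stale only if a newer request was issued for the
    // SAME key, not for any key managed by the reducer.
    return this.pending.get(key) !== requestedAt;
  }
}
```

Under this scheme, enabling `allowReplacementRequests` on a keyed reducer would only ever drop responses superseded within their own key, which matches the intended per-key fetch behaviour.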
Given branch cut + stability, let's merge these changes and we can conform KeyedCachedDataReducer with something like approach 2 later. Thanks for catching this!
Can someone create an issue to track the changes/improvements that need to be done for option 2? Just want to make sure it's being addressed.
Reviewed 1 of 1 files at r2, all commit messages.
Reviewable status: complete! 0 of 0 LGTMs obtained (and 1 stale) (waiting on @THardy98)
Tracking issue: #99134
bors-ing this now
TYFR
bors r+
Build failed (retrying...): |
Build succeeded: |
BEFORE: https://www.loom.com/share/57c9f278376a4551977398316d6ed7b6
AFTER: https://www.loom.com/share/b2d4e074afca4b42a4f711171de3fd72