Timelock: compress lock and unlock requests #6196
Closed
Conversation
Lock and unlock requests can have an unbounded number of tokens, as per PDS-293365.
OK for RC!
# Conflicts:
#   atlasdb-impl-shared/src/main/java/com/palantir/atlasdb/sweep/queue/SweepQueue.java
I don't know what happened but this branch is a bit screwed now. Re-made on #6253.
General
Before this PR:
As per PDS-293365, lock and unlock requests can have an unbounded number of tokens. When many files are being referenced at once, this leads to a very large request body, which then results in the RequestEntityTooLarge exception.
Note that even with compressed requests, we might still hit the RequestEntityTooLarge issue, but this approach should give us a lot more runway. We did consider the possibility of implementing new lock/unlock endpoints that support streaming entities, but this would be a much larger effort from our side, and the internal shopping product has a long-term fix for this in the pipeline. If compressing requests turns out to be insufficient and the long-term fix is too far away, we can reconsider this.
After this PR:
==COMMIT_MSG==
Added the compress-request tag to lock and unlock endpoints, in order to support larger request bodies.
==COMMIT_MSG==
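
To make the shape of this change concrete, below is a rough sketch of what tagging the endpoints might look like in a Conjure service definition. This is not the actual diff: the service name, package, paths, and argument types are illustrative assumptions; only the compress-request tag itself comes from this PR.

```yaml
# Illustrative sketch only: the shape of a Conjure service definition with the
# compress-request tag added to the lock and unlock endpoints. The service
# name, package, paths and argument types are assumptions; only the
# compress-request tag itself comes from this PR.
services:
  ConjureTimelockService:
    name: Conjure Timelock Service
    package: com.palantir.atlasdb.timelock.api
    default-auth: header
    base-path: /tl
    endpoints:
      lock:
        http: POST /lock
        args:
          request: ConjureLockRequest
        returns: ConjureLockResponse
        tags:
          - compress-request   # added: client compresses the request body
      unlock:
        http: POST /unlock
        args:
          request: ConjureUnlockRequest
        returns: ConjureUnlockResponse
        tags:
          - compress-request   # added: client compresses the request body
```

The appeal of this approach is that it should be a pure definition change: clients and servers built with a Conjure runtime that understands the tag should pick it up without any changes to call sites, which is consistent with the point below that this should be invisible to clients.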
Priority: Medium
Concerns / possible downsides (what feedback would you like?):
Rollout plan
If the approach sounds right, the next move would be to tag an RC of this and then of TimeLock, and deploy to test stacks to check performance.
Is documentation needed?: No, this should be invisible to clients.
Compatibility
Does this PR create any API breaks (e.g. at the Java or HTTP layers) - if so, do we have compatibility?: Changing the tags should not present a break.
Does this PR change the persisted format of any data - if so, do we have forward and backward compatibility?: No
The code in this PR may be part of a blue-green deploy. Can upgrades from previous versions safely coexist? (Consider restarts of blue or green nodes.): Yes
Does this PR rely on statements being true about other products at a deployment - if so, do we have correct product dependencies on these products (or other ways of verifying that these statements are true)?: We may need to bump the timelock dependency before the AtlasDB library is further deployable.
Does this PR need a schema migration? No
Testing and Correctness
What, if any, assumptions are made about the current state of the world? If they change over time, how will we find out?: Assumed that tags can happily be added to requests like this
What was existing testing like? What have you done to improve it?: No changes
If this PR contains complex concurrent or asynchronous code, is it correct? The onus is on the PR writer to demonstrate this.: N/A
If this PR involves acquiring locks or other shared resources, how do we ensure that these are always released?: N/A (ironically!)
Execution
How would I tell this PR works in production? (Metrics, logs, etc.): Request size of lock and unlock endpoints will shrink (hopefully dramatically!); RequestEntityTooLarge issues will be seen less often.
Has the safety of all log arguments been decided correctly?: N/A
Will this change significantly affect our spending on metrics or logs?: No
How would I tell that this PR does not work in production? (monitors, etc.): Request size of lock and unlock endpoints will not shrink. In extreme cases, lock and unlock will simply stop working, causing very obvious issues.
If this PR does not work as expected, how do I fix that state? Would rollback be straightforward?: Rollback, but this is AtlasDB so a wider recall would be needed.
If the above plan is more complex than “recall and rollback”, please tag the support PoC here (if it is the end of the week, tag both the current and next PoC): (that's me! but @mdaudali is next)
Scale
Would this PR be expected to pose a risk at scale? Think of the shopping product at our largest stack.: Hopefully less risk at scale
Would this PR be expected to perform a large number of database calls, and/or expensive database calls (e.g., row range scans, concurrent CAS)?: Decompression could turn out to be expensive - we will have to monitor endpoint p99s closely.
Would this PR ever, with time and scale, become the wrong thing to do - and if so, how would we know that we need to do something differently?: Possibly - the other option would be to switch to fully streaming endpoints. We'd find out via either (a) a regression in these endpoints; (b) RequestEntityTooLarge errors persisting.
Development Process
Where should we start reviewing?: +4/-0
If this PR is in excess of 500 lines excluding versions lock-files, why does it not make sense to split it?: +4/-0
Please tag any other people who should be aware of this PR:
@jeremyk-91
@Dgleish
@carterkozak