timelock-impl/src/main/java/com/palantir/atlasdb/timelock/lock/watch/LockEventLogImpl.java
To be fair, I think I might do another PR where we actually keep the still-open locks in the log instead of using a fixed size. Snapshot calculation would then just be a matter of copying the whole log; the locks are already in memory, so that shouldn't be a big deal. To account for slow consumers we would likely still need to keep some backlog, but at least the snapshot calculation would be easier and less surprising.
Broadly looks good! Minor questions on some edge cases of the array window's semantics.
I can also imagine a version of this with read-write locks (upside: decent concurrency if, as intended, watched elements are hardly ever written to), but let's stick with synchronized for now. Having all accesses synchronized makes it straightforward to convince myself that our usage is safe.
Tracking the open locks in memory seems reasonable (as before, assuming we're watching things that aren't meant to be frequently updated).
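To illustrate the design being discussed, here is a minimal sketch of a fixed-size sliding window where every access goes through the same monitor, as the review suggests. This is not the actual `ArrayLockEventSlidingWindow` implementation; the class name, event representation, and method signatures are hypothetical, and real events would carry sequence numbers rather than plain strings.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

// Hypothetical sketch: a ring buffer of versioned events where all access is
// synchronized, trading concurrency for easy-to-verify thread safety.
final class SlidingWindowSketch {
    private final String[] buffer;
    private long nextSequence = 0;

    SlidingWindowSketch(int maxSize) {
        this.buffer = new String[maxSize];
    }

    synchronized void add(String event) {
        buffer[(int) (nextSequence % buffer.length)] = event;
        nextSequence++;
    }

    /**
     * Returns the events after lastKnownVersion, or empty if the client has
     * fallen out of the window and must recover via a snapshot instead.
     */
    synchronized Optional<List<String>> getFromVersion(long lastKnownVersion) {
        long oldestRetained = Math.max(0, nextSequence - buffer.length);
        if (lastKnownVersion + 1 < oldestRetained) {
            return Optional.empty(); // too far behind: window no longer covers the gap
        }
        List<String> result = new ArrayList<>();
        for (long seq = lastKnownVersion + 1; seq < nextSequence; seq++) {
            result.add(buffer[(int) (seq % buffer.length)]);
        }
        return Optional.of(result);
    }
}
```

A read-write lock variant would replace `synchronized` with a `ReentrantReadWriteLock`, taking the read lock in `getFromVersion` and the write lock in `add`; the synchronized version is kept here because, as noted above, it is easier to reason about.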
...impl/src/main/java/com/palantir/atlasdb/timelock/lock/watch/ArrayLockEventSlidingWindow.java
👍 looks good!
Released 0.234.0
Goals (and why):
Try to simplify the code by removing a response that we should be able to not have.
Implementation Description (bullets):
If the client is too far behind, calculate a snapshot instead. Operations that touch the log always lock it exclusively (whether reading or writing), so we can always calculate a cohesive snapshot: all mutations wait while the snapshot is being computed, and the client can then catch up on all subsequent events. A client may receive the same "event" more than once (because HeldLocks iteration is not blocking), but the ordering of events is always causal (e.g. an unlock always comes after its lock), so clients should be able to deal with duplicates.
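The point above can be sketched as follows: because every mutation and the snapshot computation take the same monitor, the snapshot always sees a consistent view of the held locks, with no mutation interleaved mid-computation. This is an illustrative sketch under assumed names, not the actual LockEventLogImpl API.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch: all mutations and the snapshot share one monitor, so
// the snapshot is cohesive — no lock or unlock can be logged while it runs.
final class SnapshottingLogSketch {
    private final Set<String> heldLocks = new HashSet<>();
    private long version = 0;

    synchronized void lock(String descriptor) {
        heldLocks.add(descriptor);
        version++;
    }

    synchronized void unlock(String descriptor) {
        heldLocks.remove(descriptor);
        version++;
    }

    /** Copies all currently held locks while mutations are blocked on the monitor. */
    synchronized Snapshot snapshot() {
        return new Snapshot(version, new HashSet<>(heldLocks));
    }

    record Snapshot(long version, Set<String> locks) {}
}
```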
Testing (What was existing testing like? What have you done to improve it?):
Existing unit tests
Concerns (what feedback would you like?):
Whether there are any problems with this approach.
Where should we start reviewing?:
Priority (whenever / two weeks / yesterday):