Mechanism for sharding:
The mechanism for sharding first verifies that every shard can be assigned at
least 2 nodes, i.e., that the node count is at least twice the requested shard
count. If it is not, fault tolerance cannot be guaranteed after the reshard, so
an error is returned. If there are enough nodes for the distribution, the next
step is to merge the keys from every shard into a single large key-value store.
Every node in every shard then has its local key-value store erased, since the
keys are no longer hashed to the correct shards. Finally, the keys are
redistributed by hashing against the new shard count, and the reshard is
complete.
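Below is a minimal sketch of that flow in Python. The function name, the
dict-based stores, and the use of zlib.crc32 as the stable key hash are
illustrative assumptions, not the project's actual code.

import zlib

def reshard(nodes, new_shard_count, shard_stores):
    # Refuse the reshard if any shard would be left with fewer than
    # 2 nodes, since fault tolerance could not be guaranteed.
    if len(nodes) < 2 * new_shard_count:
        raise ValueError("not enough nodes to give every shard 2 nodes")

    # Merge every shard's store into one large key-value store.
    merged = {}
    for store in shard_stores:
        merged.update(store)

    # Erase the old stores; the keys are no longer hashed correctly.
    for store in shard_stores:
        store.clear()

    # Redistribute the keys by hashing against the new shard count.
    # Keys are assumed to be strings here.
    new_stores = [dict() for _ in range(new_shard_count)]
    for key, value in merged.items():
        shard = zlib.crc32(key.encode()) % new_shard_count
        new_stores[shard][key] = value
    return new_stores
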
Mechanism for causal dependency tracking:
Our methodology for tracking causal dependencies uses a Counter that maps a
replica's socket address to a vector clock. Our implementation resembles the
Birman-Schiper-Stephenson protocol: when a replica broadcasts a request from a
client to the other replicas, it increments the vector clock entry at its own
socket's position. We also maintain a queue for requests that must wait on
earlier requests in order to preserve causal consistency. On every request
received from a client, the replica compares its local clock with the clock in
the client's metadata. If the request cannot be delivered yet, its clock is
compared with the clocks of the queued requests, which may allow an earlier
request to be delivered; otherwise the request is simply added to the queue.
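The sketch below shows the receive-side delivery check in the
Birman-Schiper-Stephenson style the text describes, assuming vector clocks are
dicts keyed by socket address; the helper names are hypothetical.

def can_deliver(local_clock, msg_clock, sender):
    # Birman-Schiper-Stephenson delivery rule: the sender's entry must
    # be exactly one ahead of ours, and no other entry may exceed what
    # we have already delivered.
    if msg_clock.get(sender, 0) != local_clock.get(sender, 0) + 1:
        return False
    return all(count <= local_clock.get(addr, 0)
               for addr, count in msg_clock.items()
               if addr != sender)

def on_request(local_clock, queue, msg_clock, sender, apply_fn):
    # Queue the request, then deliver everything that has become
    # deliverable; one delivery can unblock queued requests, so rescan
    # until the queue stops changing.
    queue.append((msg_clock, sender, apply_fn))
    delivered = True
    while delivered:
        delivered = False
        for item in list(queue):
            clock, origin, fn = item
            if can_deliver(local_clock, clock, origin):
                fn()
                local_clock[origin] = local_clock.get(origin, 0) + 1
                queue.remove(item)
                delivered = True
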
Mechanism for detecting a downed replica:
We detect a downed replica using a try/except block around broadcasted
requests. If a replica fails to respond, we broadcast a request to all
remaining replicas telling them to remove its socket address from their view
along with its causal metadata.
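A minimal sketch of that detection path, assuming the requests library is used
for the broadcast; the endpoint paths and the view structure are hypothetical.

import requests

def broadcast(view, self_addr, payload):
    # Send the request to every other replica, collecting the socket
    # addresses of any replica that does not respond.
    down = []
    for addr in view:
        if addr == self_addr:
            continue
        try:
            requests.put(f"http://{addr}/kvs", json=payload, timeout=2)
        except requests.exceptions.RequestException:
            down.append(addr)
    # Tell every replica that is still up to drop the downed replicas
    # from its view and discard their causal metadata.
    for dead in down:
        view.remove(dead)
        for addr in view:
            if addr == self_addr:
                continue
            try:
                requests.delete(f"http://{addr}/view",
                                json={"socket-address": dead}, timeout=2)
            except requests.exceptions.RequestException:
                pass
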
When a downed replica comes back online, it first refreshes its key-value
store from a replica that is still up. It then updates the causal metadata it
receives from the client with its own vector clock. Finally, the reconnected
replica broadcasts to the others so they place it back in their view.
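A minimal sketch of that rejoin sequence; the endpoint paths and the
pointwise-max clock merge are assumptions about the design, not the project's
actual API.

import requests

def merge_clocks(local, incoming):
    # Pointwise max: keep the larger counter for every socket address,
    # so the rejoined replica's clock dominates both histories.
    merged = dict(local)
    for addr, count in incoming.items():
        merged[addr] = max(merged.get(addr, 0), count)
    return merged

def rejoin(self_addr, view):
    # Copy the key-value store from the first replica that answers.
    store = {}
    for addr in view:
        if addr == self_addr:
            continue
        try:
            store = requests.get(f"http://{addr}/kvs/all", timeout=2).json()
            break
        except requests.exceptions.RequestException:
            continue
    # Announce ourselves so the other replicas re-add us to their view.
    for addr in view:
        if addr == self_addr:
            continue
        try:
            requests.put(f"http://{addr}/view",
                         json={"socket-address": self_addr}, timeout=2)
        except requests.exceptions.RequestException:
            pass
    return store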