Add operational guides for M3DB (topology and bootstrapping) #924
Conversation
@@ -6,7 +6,7 @@ We recommend reading the [topology operational guide](topology.md) before readin | |||
|
|||
When an M3DB node is turned on (or experiences a topology change) it needs to go through a bootstrapping process to determine the integrity of the data that it has, replay writes from the commit log, and/or stream missing data from its peers. In most cases, as long as you're running with the default and recommended bootstrapper configuration of "filesystem,commitlog,peers,uninitialized_topology", you should not need to worry about the bootstrapping process at all; M3DB will take care of doing the right thing so that you don't lose data and consistency guarantees are met. | |||
|
|||
In some rare cases, you may want to modify the bootstrapper configuration. The purpose of this document is to explain how all the different bootstrappers work, and what the implications of changing the bootstrapper order are. | |||
Generally speaking, we recommend that operators do not modify the bootstrappers configuration, but in the rare case that you need to, this document is designed to help you understand the implications of doing so. |
nit:
... in the rare case that you need to, this document...
Codecov Report
@@ Coverage Diff @@
## master #924 +/- ##
==========================================
+ Coverage 77.95% 77.95% +<.01%
==========================================
Files 410 410
Lines 34401 34401
==========================================
+ Hits 26816 26817 +1
+ Misses 5741 5737 -4
- Partials 1844 1847 +3
Continue to review full report at Codecov.
|
docs/operational_guide/topology.md
Outdated
|
||
## Overview | ||
|
||
M3DB stores its topology (mapping of which hosts are responsible for which shards) in etcd. There are three possible states that each host/shard pair can be in: |
Nit: s/EtcD/etcd. Maybe link to their docs as well? https://coreos.com/etcd/
I wonder if we should use the word "placement" more often here instead of, or in addition to, topology. For example in m3cluster everything is a placement, and we're storing placements in etcd.
docs/operational_guide/topology.md
Outdated
2. Available | ||
3. Leaving | ||
|
||
Note that these states are not a reflection of the current status of an M3DB node, but an indication of whether a given node has ever successfully bootstrapped and taken ownership of a given shard. For example, in a new cluster all the nodes will begin with all of their shards in the Initializing state. Once all the nodes finish bootstrapping, they will mark all of their shards as Available. If all the M3DB nodes are stopped at the same time, the cluster topology will still show all of the shards for all of the hosts as Available. |
s/indicating/indication/?
docs/operational_guide/topology.md
Outdated
|
||
## Leaving State | ||
|
||
The leaving state indicates that a node is attempting to leave the cluster. The purpose of this state is to allow the node to remain in the cluster long enough for the nodes that are taking over its responsibilities to stream data from it. |
s/thhe/the/
docs/operational_guide/topology.md
Outdated
|
||
Replication factor: 3 | ||
|
||
### Initial Topology |
Should the first state be all `I`? That's how we phrase it above.
EDIT: Just realized this is for a node add. Maybe have another sample transition for a cluster init, but that might be pedantic
Yeah I wanted to put that in, but it seemed like it might just be more confusing than anything because it starts with all I's and then transitions to all A's but it's not really clear what happened or why it happened
|
||
The commitlog bootstrapper's responsibility is to read the commitlog and snapshot (compacted commitlogs) files on disk and recover any data that has not yet been written out as an immutable fileset file. Unlike the filesystem bootstrapper, the commit log bootstrapper cannot simply check which files are on disk in order to determine if it can satisfy a bootstrap request. Instead, the commitlog bootstrapper determines whether it can satisfy a bootstrap request using a simple heuristic. | ||
|
||
On a shard-by-shard basis, the commitlog bootstrapper will consult the cluster topology to see if the host it is running on has ever achieved the "Available" status for the specified shard. If so, then the commit log bootstrapper should have all the data since the last fileset file was flushed and will return that it can satisfy any time range for that shard. In other words, the commit log bootstrapper is all-or-nothing for a given shard: it will either return that it can satisfy any time range for a given shard or none at all. In addition, the commitlog bootstrapper *assumes* it is running after the filesystem bootstrapper. M3DB will not allow you to run with a configuration where the filesystem bootstrapper is placed after the commitlog bootstrapper, but it will allow you to run the commitlog bootstrapper without the filesystem bootstrapper which can result in loss of data, depending on the workload. |
s/bootstrappe/bootstrapper/
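The all-or-nothing heuristic described in this hunk can be sketched roughly as follows. The function and topology interface names are hypothetical illustrations, not M3DB's actual (Go) implementation:

```python
# Sketch of the commitlog bootstrapper's heuristic: on a per-shard basis,
# satisfy *every* requested time range if this host has ever reached the
# "Available" state for that shard in the topology, otherwise satisfy none.
# `topology.has_ever_been_available` is an assumed/hypothetical interface.

def commitlog_can_satisfy(topology, host, shard, requested_ranges):
    if topology.has_ever_been_available(host, shard):
        return set(requested_ranges)  # all ranges for this shard
    return set()  # none at all
```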
| 3 | 2 | A | | ||
| 3 | 3 | A | | ||
|
||
Note that a bootstrap consistency level of majority is the default value, but can be modified by changing the value of the key "m3db.client.bootstrap-consistency-level" in etcd to one of: "none", "one", "unstrict_majority" (attempt to read from majority, but settle for less if any errors occur), "majority" (strict majority), and "all". |
s/EtcD/etcd/
|
||
### Uninitialized Topology Bootstrapper | ||
|
||
The purpose of the uninitialized topology bootstrapper is to succeed bootstraps for all time ranges for shards that have never been completely bootstrapped. This allows us to run the default bootstrapper configuration of: filesystem,commitlog,peers,uninitialized_topology such that filesystem and commitlog are used by default in node restarts, the peer bootstrapper is only used for node adds/removes/replaces, and bootstraps still succeed for brand new topologies where both the commitlog and peers bootstrappers will be unable to succeed any bootstraps. In other words, the uninitialized topology bootstrapper allows us to place the commitlog bootstrapper *before* the peers bootstrapper and still succeed bootstraps with brand new topologies without resorting to using the `noop_all` bootstrapper which succeeds bootstraps for all shard/time-ranges regardless of the status of the topology. |
s/uninitialzied/uninitialized/
|
||
### No Operational All Bootstrapper | ||
|
||
The `noop_all` bootstrapper succeeds all bootstraps regardless of the requested shards/time ranges. |
Nit: can you code highlight `noop_all`?
1. What shards they should bootstrap, which can be determined from the cluster topology. | ||
2. What time-ranges they need to bootstrap those shards for, which can be determined from the namespace retention. | ||
|
||
For example, imagine an M3DB node that is responsible for shards 1, 5, 13, and 25 according to the cluster topology. In addition, it has a single namespace called "metrics" with a retention of 48 hours. When the M3DB node is started, the node will determine that it needs to bootstrap shards 1, 5, 13, and 25 for the time range starting 48 hours ago and ending at the current time. In order to obtain all this data, it will run the configured bootstrappers in the specified order. Every bootstrapper will notify the bootstrapping process of which shard/ranges it was able to bootstrap and the bootstrapping process will continue working its way through the list of bootstrappers until all the shards/ranges required have been marked as fulfilled. Otherwise the M3DB node will fail to start. |
starting 48 hours ago and ending at the current time
shards/ranges it requires
-> shards/ranges required
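The flow described in the quoted paragraph — shards from the cluster topology, time ranges from the namespace retention, then bootstrappers run in order until everything is fulfilled — can be sketched like this. All names are hypothetical; M3DB's actual implementation is in Go:

```python
# Illustrative sketch of the bootstrap fulfillment loop described above.
# Each "bootstrapper" is modeled as a callable that receives a shard and its
# still-pending time ranges and returns the subset of ranges it satisfied.

def bootstrap(shards, retention_ranges, bootstrappers):
    """Run bootstrappers in order until every shard/range is fulfilled."""
    # Everything starts unfulfilled: one set of pending ranges per shard.
    pending = {shard: set(retention_ranges) for shard in shards}
    for bootstrapper in bootstrappers:
        for shard, ranges in pending.items():
            fulfilled = bootstrapper(shard, ranges)  # ranges it satisfied
            ranges -= fulfilled
        if all(len(ranges) == 0 for ranges in pending.values()):
            return True  # fully bootstrapped; the node can start serving
    # Some shard/range was never fulfilled: the node fails to start.
    unfulfilled = {s: r for s, r in pending.items() if r}
    raise RuntimeError("bootstrap incomplete: %s" % unfulfilled)
```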
|
||
The commitlog bootstrapper's responsibility is to read the commitlog and snapshot (compacted commitlogs) files on disk and recover any data that has not yet been written out as an immutable fileset file. Unlike the filesystem bootstrapper, the commit log bootstrapper cannot simply check which files are on disk in order to determine if it can satisfy a bootstrap request. Instead, the commitlog bootstrapper determines whether it can satisfy a bootstrap request using a simple heuristic. | ||
|
||
On a shard-by-shard basis, the commitlog bootstrapper will consult the cluster topology to see if the host it is running on has ever achieved the "Available" status for the specified shard. If so, then the commit log bootstrapper should have all the data since the last fileset file was flushed and will return that it can satisfy any time range for that shard. In other words, the commit log bootstrapper is all-or-nothing for a given shard: it will either return that it can satisfy any time range for a given shard or none at all. In addition, the commitlog bootstrapper *assumes* it is running after the filesystem bootstrapper. M3DB will not allow you to run with a configuration where the filesystem bootstrapper is placed after the commitlog bootstrapper, but it will allow you to run the commitlog bootstrapper without the filesystem bootstrapper which can result in loss of data, depending on the workload. |
where the filesystem bootstrappe is placed after
-> where the filesystem bootstrapper is placed after
|
||
The purpose of the uninitialized topology bootstrapper is to succeed bootstraps for all time ranges for shards that have never been completely bootstrapped (at a cluster level). This allows us to run the default bootstrapper configuration of: `filesystem,commitlog,peers,uninitialized_topology` such that the filesystem and commitlog bootstrappers are used by default in node restarts, the peer bootstrapper is used for node adds/removes/replaces, and bootstraps still succeed for brand new topologies where both the commitlog and peers bootstrappers will be unable to succeed any bootstraps. In other words, the uninitialized topology bootstrapper allows us to place the commitlog bootstrapper *before* the peers bootstrapper and still succeed bootstraps with brand new topologies without resorting to using the `noop_all` bootstrapper which succeeds bootstraps for all shard/time-ranges regardless of the status of the topology. | ||
|
||
The uninitialized topology bootstrapper determines whether a topology is "new" for a given shard by counting the number of hosts in the Initializing and Leaving states; if there are more hosts in the Initializing state than in the Leaving state, then it succeeds the bootstrap, because that means the topology has never reached a state in which all hosts are Available. |
if the number of Initializing - Leaving > 0 than it succeeds
-> if there are more Initializing than Leaving, then it succeeds
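The "new topology" heuristic from this hunk reduces to a simple count. A minimal sketch, assuming the per-host states for one shard are given as plain strings (the function name is illustrative, not M3DB's API):

```python
# Sketch of the uninitialized topology bootstrapper's check: a shard has
# never been fully bootstrapped at the cluster level if more hosts hold it
# in the Initializing state than in the Leaving state.

def topology_is_uninitialized_for_shard(host_shard_states):
    initializing = sum(1 for s in host_shard_states if s == "Initializing")
    leaving = sum(1 for s in host_shard_states if s == "Leaving")
    return initializing - leaving > 0
```

Note that a node replace (one `Initializing`, one `Leaving`) yields a difference of zero, so it is correctly *not* treated as a brand new topology.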
|
||
### Bootstrappers Configuration | ||
|
||
Now that we've gone over the various bootstrappers, let's consider how M3DB will behave in different configurations. Note that we include uninitialized_topology at the end of all the lists of bootstrappers because it's required to get a new topology up and running in the first place, but is not required after that (although leaving it in has no detrimental effects). Also note that any configuration that does not include the peers bootstrapper will not be able to handle dynamic topology changes like node adds/removes/replaces. |
lets
-> let's
|
||
#### peers,uninitialized_topology | ||
|
||
Every time a node is restarted, it will attempt to stream in *all* of the data that it is responsible for from its peers, completely ignoring the immutable fileset files it already has on disk. We do not recommend running in this mode as it can lead to violations of M3DB's consistency guarantees because the commit logs are ignored; however, it *can* be useful if you want to repair the data on a node by forcing it to stream from its peers. |
Here, and elsewhere, Everytime a node is restarted ..
-> Every time a node is restarted, ...
docs/operational_guide/topology.md
Outdated
|
||
The leaving state indicates that a node is attempting to leave the cluster. The purpose of this state is to allow the node to remain in the cluster long enough for the nodes that are taking over its responsibilities to stream data from it. | ||
|
||
|
Double space here
|
||
We recommend reading the [topology operational guide](topology.md) before reading the rest of this document. | ||
|
||
When an M3DB node is turned on (or experiences a topology change) it needs to go through a bootstrapping process to determine the integrity of data that it has. In most cases, as long as you're running with the default and recommended bootstrapper configuration of: "filesystem,commitlog,peers,uninitialized_topology" then you should not need to worry about the bootstrapping process at all and M3DB will take care of doing the right thing such that you don't lose data and its consistency guarantees are met. |
does ordering of "filesystem,commitlog,peers,uninitialized_topology"
matter? If so that should be called out?
good call
1. What shards they should bootstrap, which can be determined from the cluster topology | ||
2. What time-ranges they need to bootstrap those shards for, which can be determined from the namespace retention | ||
|
||
For example, imagine an M3DB node that is responsible for shards 1, 5, 13, and 25 according to the cluster topology. In addition, it has a single namespace called "metrics" with a retention of 48 hours. When the M3DB node is started, the node will determine that it needs to bootstrap shards 1, 5, 13, and 25 for the time range starting 48 hours ago and ending at the current time. In order to obtain all this data, it will run the configured bootstrappers in the specified order. Every bootstrapper will notify the bootstrapping process of which shard/ranges it was able to bootstrap and the bootstrapping process will continue working its way through the list of bootstrappers until all the shards/ranges required have been marked as fulfilled; otherwise, the M3DB node will fail to start. |
an M3DB node
-> a M3DB node
|
||
We recommend reading the [topology operational guide](topology.md) before reading the rest of this document. | ||
|
||
When an M3DB node is turned on (or experiences a topology change) it needs to go through a bootstrapping process to determine the integrity of data that it has, replay writes from the commit log, and/or stream missing data from its peers. In most cases, as long as you're running with the default and recommended bootstrapper configuration of: "filesystem,commitlog,peers,uninitialized_topology" then you should not need to worry about the bootstrapping process at all and M3DB will take care of doing the right thing such that you don't lose data and consistency guarantees are met. |
does ordering matter for "filesystem,commitlog,peers,uninitialized_topology"? If so, perhaps a mention couldn't hurt.
|
||
Now that we've gone over the various bootstrappers, let's consider how M3DB will behave in different configurations. Note that we include uninitialized_topology at the end of all the lists of bootstrappers because it's required to get a new topology up and running in the first place, but is not required after that (although leaving it in has no detrimental effects). Also note that any configuration that does not include the peers bootstrapper will not be able to handle dynamic topology changes like node adds/removes/replaces. | ||
|
||
#### filesystem,commitlog,peers,uninitialized_topology (default) |
should these be represented as yaml arrays just as they are in the configuration?
| 3 | 2 | A | | ||
| 3 | 3 | A | | ||
|
||
In this case, the peer bootstrapper running on node 1 will not be able to fulfill any requests because node 2 is in the Initializing state for all of its shards and cannot fulfill bootstrap requests. This means that node 1's peer bootstrapper cannot meet its default consistency level of majority for bootstrapping (1 < 2 which is majority with a replication factor of 3). On the other hand, node 1 would be able to peer bootstrap in the following topology because its peers (nodes 2/3) are available for all of their shards: |
Is this case even possible? Like 2 replicas in `Initializing` state and 1 in `Available` state? Also, whenever there is a shard in `I` state, there should be a shard in `L` state; can we have that reflected in the examples if possible? I feel it's helpful for users to get a better picture of how the topology looks.
yeah it's totally possible to have 2 replicas in Initializing and 1 in Available and it's not necessarily true that whenever there is a shard in I state there is a corresponding shard in L state (for example, when creating a new cluster)
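The majority arithmetic discussed in this thread (1 < 2 with a replication factor of 3) can be made concrete with a small sketch; the function names are illustrative, not M3DB's actual API:

```python
# Sketch of the peers bootstrapper's default "majority" consistency check:
# with replication factor 3, a strict majority is 2, so a single Available
# peer cannot satisfy a peers bootstrap at the default level.

def majority(replication_factor):
    """Smallest number of replicas that constitutes a strict majority."""
    return replication_factor // 2 + 1

def peers_can_bootstrap(available_peers, replication_factor):
    return available_peers >= majority(replication_factor)
```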
docs/operational_guide/topology.md
Outdated
|
||
## Overview | ||
|
||
M3DB stores its topology (mapping of which hosts are responsible for which shards) in [etcd](https://coreos.com/etcd/). There are three possible states that each host/shard pair can be in: |
Maybe also say we call topology a `Placement` here?
also, it'd be good to explain the relationship between cluster/namespace/shard & topology here.
i.e.
- A cluster has exactly 1 Topology/Placement. The Topology/Placement maps Cluster shard replicas to hosts.
- A cluster has 0 or more namespaces. Each host serves every namespace for the shards it owns
done
2. commitlog | ||
3. peers | ||
4. uninitialized_topology | ||
5. noop_all |
`noop_none` too?
I think we agreed offline to leave this one out
docs/operational_guide/topology.md
Outdated
2. Available | ||
3. Leaving | ||
|
||
Note that these states are not a reflection of the current status of an M3DB node, but an indication of whether a given node has ever successfully bootstrapped and taken ownership of a given shard. For example, in a new cluster all the nodes will begin with all of their shards in the Initializing state. Once all the nodes finish bootstrapping, they will mark all of their shards as Available. If all the M3DB nodes are stopped at the same time, the cluster topology will still show all of the shards for all of the hosts as Available. |
Can we just call it a goal state?
I can add that at the beginning, but I think the explanation is important that it's not a reflection of the health of the nodes
docs/operational_guide/topology.md
Outdated
|
||
## Initializing State | ||
|
||
The initializing state is the state in which all new host/shard combinations begin. For example, upon creating a new topology all the host/shard pairs will begin in the "Initializing" state and only once they have successfully bootstrapped will they transition to the "Available" state. |
nit about rendering/casing: whenever you're referring to a state (except for in a header), use `Initializing` instead of initializing/Initializing/"Initializing"/etc
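The state transitions discussed throughout these hunks (new host/shard pairs start in `Initializing`, move to `Available` after a successful bootstrap, and are marked `Leaving` when removed from the placement) can be summarized as a tiny state machine. This is a sketch with made-up event names, not M3DB's actual API:

```python
# Illustrative host/shard state lifecycle from the operational guide:
# Initializing -> Available (successful bootstrap)
# Available    -> Leaving   (operator marks the node for removal)
TRANSITIONS = {
    ("Initializing", "bootstrap_complete"): "Available",
    ("Available", "marked_for_removal"): "Leaving",
}

def next_state(state, event):
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError("invalid transition: %s on %s" % (event, state))
```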
docs/operational_guide/topology.md
Outdated
|
||
## Leaving State | ||
|
||
The leaving state indicates that a node is attempting to leave the cluster. The purpose of this state is to allow the node to remain in the cluster long enough for the nodes that are taking over its responsibilities to stream data from it. |
instead of "is attempting to leave" - could you say "has been marked for removal from"
These are hard to follow unless you know what to look for. Would be much better in a visual form, e.g.
[ASCII diagram omitted: "Initial Topology" shows two nodes, each with "Shard 1: AVAILABLE" and "Shard 2: AVAILABLE"; a horizontal rule, then "Begin Node Add", then another rule.]
digram.rename-extension-monopic.txt <-- file needs its extension changed to use in monopic
docs/operational_guide/topology.md
Outdated
@@ -0,0 +1,173 @@ | |||
# Topology | |||
|
|||
## Overview |
could you add lifecycle diagrams for state transitions along with arrows for who triggers any transition - e.g. an operator adds a shard to a node when it's being added in the `Initializing` state, the node transitions to `Available` once it's bootstrapped, it goes to `Leaving` when an operator marks it for removal, and it's removed from the placement entirely once the replacing replica during a join goes from `Initializing` to `Available`.
docs/operational_guide/topology.md
Outdated
@@ -0,0 +1,173 @@ | |||
# Topology |
Maybe just call this Placement to stay consistent with code and say that it's the cluster topology in a note?
|
||
### Filesystem Bootstrapper | ||
|
||
The filesystem bootstrapper's responsibility is to determine which immutable [fileset files](../m3db/architecture/storage.md) exist on disk, and if so, mark them as fulfilled. The filesystem bootstrapper achieves this by scanning M3DB's directory structure and determining which fileset files already exist on disk. Unlike the other bootstrappers, the filesystem bootstrapper does not need to load any data into memory; it simply verifies the checksums of the data on disk, and the M3DB node itself will handle reading (and caching) the data dynamically once it begins to serve reads. |
I don't like "the M3DB node itself will handle reading (and caching) the data dynamically once it begins to serve reads". The bootstrapper is a part of an M3DB node; instead maybe say "other components of the M3DB node will handle the reading and caching ..."?
|
||
## Overview | ||
|
||
**Note**: The words *placement* and *topology* are used interchangeably throughout the M3DB documentation and codebase. |
not a 100% but i think `Topology` and `Placement` are proper nouns for the sake of this document - i.e. they should always start Topology not topology (in the whole document)
spoke offline, gonna stick with lowercase
|
||
Note that these states are not a reflection of the current status of an M3DB node, but an indication of whether a given node has ever successfully bootstrapped and taken ownership of a given shard (achieved goal state). For example, in a new cluster all the nodes will begin with all of their shards in the `Initializing` state. Once all the nodes finish bootstrapping, they will mark all of their shards as `Available`. If all the M3DB nodes are stopped at the same time, the cluster placement will still show all of the shards for all of the hosts as `Available`. | ||
|
||
## Initializing State |
take a look at the rendering of this readme, you're missing a few block quotes in places -- https://github.com/m3db/m3/blob/a0b6410e8151927b128c8e68c3b5b1ac1bd0d15a/docs/operational_guide/placement.md
fixed
docs/operational_guide/placement.md
Outdated
|
||
## Initializing State | ||
|
||
The `Initializing` state is the state in which all new host/shard combinations begin. For example, upon creating a new placement all the host/shard pairs will begin in the `Initializing` state and only once they have successfully bootstrapped will they transition to the `Available` state. |
end of sentence has an extra '`' around Available
fixed
No description provided.