Commit 4d7e392: Typos

Simran-B committed Jun 27, 2018
1 parent 69775c2
Showing 5 changed files with 8 additions and 8 deletions.
docs/Manual/Deployment/Kubernetes/Authentication.md (2 changes: 1 addition & 1 deletion)
@@ -10,7 +10,7 @@ as well as access from the ArangoDB Operator to the deployment.
To disable authentication, set `spec.auth.jwtSecretName` to `None`.

Initially the deployment is accessible through the web user-interface and
-API's, using the user `root` with an empty password.
+APIs, using the user `root` with an empty password.
Make sure to change this password immediately after starting the deployment!

## See also
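
As an aside on the setting this hunk touches: a minimal sketch of an `ArangoDeployment` manifest with authentication disabled as the documentation describes. The resource name and `mode` are hypothetical, and the `apiVersion` is an assumption about the operator version in use, not taken from this diff.

```yaml
apiVersion: "database.arangodb.com/v1alpha"   # assumed operator API version
kind: "ArangoDeployment"
metadata:
  name: example-no-auth                       # hypothetical name
spec:
  mode: Cluster                               # assumed deployment mode
  auth:
    jwtSecretName: None                       # disables authentication, per the text above
```

Leaving `jwtSecretName` unset keeps authentication enabled, which is the default the surrounding text assumes.
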
@@ -51,7 +51,7 @@ spec:
This definition results in:

- the arangosync `SyncMaster` in deployment `cluster-b` is called to configure a synchronization
-from the syncmasters located at the given list of endpoint URL's to the syncmasters `cluster-b`,
+from the syncmasters located at the given list of endpoint URLs to the syncmasters `cluster-b`,
using the client authentication certificate stored in `Secret` `cluster-a-sync-auth`.
To access `cluster-a`, the keyfile (containing a client authentication certificate) is used.
To access `cluster-b`, the JWT secret found in the deployment of `cluster-b` is used.
@@ -69,7 +69,7 @@ This cluster configured as the replication source.

### `spec.source.masterEndpoint: []string`

-This setting specifies zero or more master endpoint URL's of the source cluster.
+This setting specifies zero or more master endpoint URLs of the source cluster.

Use this setting if the source cluster is not running inside a Kubernetes cluster
that is reachable from the Kubernetes cluster the `ArangoDeploymentReplication` resource is deployed in.
@@ -110,7 +110,7 @@ This cluster configured as the replication destination.

### `spec.destination.masterEndpoint: []string`

-This setting specifies zero or more master endpoint URL's of the destination cluster.
+This setting specifies zero or more master endpoint URLs of the destination cluster.

Use this setting if the destination cluster is not running inside a Kubernetes cluster
that is reachable from the Kubernetes cluster the `ArangoDeploymentReplication` resource is deployed in.
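
Because both corrected lines document the same `masterEndpoint` field on the source and destination sides, here is a hedged sketch of an `ArangoDeploymentReplication` resource combining them. It reuses `cluster-b` and the `cluster-a-sync-auth` `Secret` from the first hunk; the `apiVersion`, the `auth.keyfileSecretName` and `destination.deploymentName` field names, the resource name, and the endpoint URL are all assumptions on my part.

```yaml
apiVersion: "replication.database.arangodb.com/v1alpha"  # assumed API group/version
kind: "ArangoDeploymentReplication"
metadata:
  name: replication-a-to-b                               # hypothetical name
spec:
  source:
    # zero or more master endpoint URLs of the source cluster; placeholder URL
    masterEndpoint: ["https://cluster-a-sync.example.com:8629"]
    auth:
      # Secret holding the client authentication certificate (keyfile)
      keyfileSecretName: cluster-a-sync-auth             # assumed field name
  destination:
    # the destination runs in this Kubernetes cluster, so it is referenced by name
    deploymentName: cluster-b                            # assumed field name
```
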
docs/Manual/Deployment/Kubernetes/DeploymentResource.md (2 changes: 1 addition & 1 deletion)
@@ -126,7 +126,7 @@ cluster is down, or in a bad state, irrespective of the value of this setting.

### `spec.rocksdb.encryption.keySecretName`

-This setting specifies the name of a kubernetes `Secret` that contains
+This setting specifies the name of a Kubernetes `Secret` that contains
an encryption key used for encrypting all data stored by ArangoDB servers.
When an encryption key is used, encryption of the data in the cluster is enabled,
without it encryption is disabled.
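
To make the corrected sentence concrete, a brief sketch of such a `Secret` plus the deployment field that references it. The secret name is hypothetical, and the `data` key name (`key`) and the 32-byte key size are assumptions about what the operator expects.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-encryption-key                      # hypothetical name
data:
  key: <base64-encoded-32-byte-key>            # placeholder value; assumed key name/size
---
apiVersion: "database.arangodb.com/v1alpha"    # assumed operator API version
kind: "ArangoDeployment"
metadata:
  name: example-encrypted                      # hypothetical name
spec:
  rocksdb:
    encryption:
      keySecretName: my-encryption-key         # the setting described above
```
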
docs/Manual/Deployment/Kubernetes/DriverConfiguration.md (2 changes: 1 addition & 1 deletion)
@@ -124,5 +124,5 @@ The easiest way to work around it, is by making sure that the query results are
enough.
When that is not feasible, it is also possible to resolve this
when the internal DNS names of your Kubernetes cluster are exposed to your client application
-and the resuling IP addresses are routeable from your client application.
+and the resulting IP addresses are routable from your client application.
To expose internal DNS names of your Kubernetes cluster, your can use [CoreDNS](https://coredns.io).
docs/Manual/Deployment/Kubernetes/Troubleshooting.md (4 changes: 2 additions & 2 deletions)
@@ -68,7 +68,7 @@ those replicas.
There are two common causes for this.

1) The `Pods` cannot be scheduled because there are not enough nodes available.
-This is usally only the case with a `spec.environment` setting that has a value of `Production`.
+This is usually only the case with a `spec.environment` setting that has a value of `Production`.

Solution:
Add more nodes.
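
Since the fix points at `spec.environment`, here is a minimal hedged sketch of where that value lives. The name and `mode` are hypothetical, and the comment gives my understanding of why `Production` is node-hungry rather than wording from these docs.

```yaml
apiVersion: "database.arangodb.com/v1alpha"   # assumed operator API version
kind: "ArangoDeployment"
metadata:
  name: example-prod                          # hypothetical name
spec:
  mode: Cluster                               # assumed deployment mode
  # Production scheduling is stricter than Development (typically spreading
  # server Pods over distinct nodes), so a shortage of nodes leaves Pods pending.
  environment: Production
```
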
@@ -106,7 +106,7 @@ those `PersistentVolumes`, it depends on the type of server that was using the v
- If a `DBServer` was using the volume, and the replication factor of all database
collections is 2 or higher, and the remaining dbservers are still healthy,
the cluster will duplicate the remaining replicas to
-bring the number of replicases back to the original number.
+bring the number of replicas back to the original number.
- If a `DBServer` was using the volume, and the replication factor of a database
collection is 1 and happens to be stored on that dbserver, the data is lost.
- If a single server of an `ActiveFailover` deployment was using the volume, and the
Expand Down
