diff --git a/docs/Manual/Deployment/Kubernetes/Authentication.md b/docs/Manual/Deployment/Kubernetes/Authentication.md
index d9bff945f..b20d1e67d 100644
--- a/docs/Manual/Deployment/Kubernetes/Authentication.md
+++ b/docs/Manual/Deployment/Kubernetes/Authentication.md
@@ -10,7 +10,7 @@
 as well as access from the ArangoDB Operator to the deployment.
 To disable authentication, set `spec.auth.jwtSecretName` to `None`.
 Initially the deployment is accessible through the web user-interface and
-API's, using the user `root` with an empty password.
+APIs, using the user `root` with an empty password.
 Make sure to change this password immediately after starting the deployment!
 
 ## See also
diff --git a/docs/Manual/Deployment/Kubernetes/DeploymentReplicationResource.md b/docs/Manual/Deployment/Kubernetes/DeploymentReplicationResource.md
index 2a1a23cc2..f0538d287 100644
--- a/docs/Manual/Deployment/Kubernetes/DeploymentReplicationResource.md
+++ b/docs/Manual/Deployment/Kubernetes/DeploymentReplicationResource.md
@@ -51,7 +51,7 @@ spec:
 This definition results in:
 
 - the arangosync `SyncMaster` in deployment `cluster-b` is called to configure a synchronization
-  from the syncmasters located at the given list of endpoint URL's to the syncmasters `cluster-b`,
+  from the syncmasters located at the given list of endpoint URLs to the syncmasters `cluster-b`,
   using the client authentication certificate stored in `Secret` `cluster-a-sync-auth`.
 To access `cluster-a`, the keyfile (containing a client authentication certificate) is used.
 To access `cluster-b`, the JWT secret found in the deployment of `cluster-b` is used.
@@ -69,7 +69,7 @@ This cluster configured as the replication source.
 
 ### `spec.source.masterEndpoint: []string`
 
-This setting specifies zero or more master endpoint URL's of the source cluster.
+This setting specifies zero or more master endpoint URLs of the source cluster.
 Use this setting if the source cluster is not running inside a Kubernetes cluster
 that is reachable from the Kubernetes cluster the `ArangoDeploymentReplication`
 resource is deployed in.
@@ -110,7 +110,7 @@ This cluster configured as the replication destination.
 
 ### `spec.destination.masterEndpoint: []string`
 
-This setting specifies zero or more master endpoint URL's of the destination cluster.
+This setting specifies zero or more master endpoint URLs of the destination cluster.
 Use this setting if the destination cluster is not running inside a Kubernetes cluster
 that is reachable from the Kubernetes cluster the `ArangoDeploymentReplication`
 resource is deployed in.
diff --git a/docs/Manual/Deployment/Kubernetes/DeploymentResource.md b/docs/Manual/Deployment/Kubernetes/DeploymentResource.md
index d0fbf6e83..01a3d80ab 100644
--- a/docs/Manual/Deployment/Kubernetes/DeploymentResource.md
+++ b/docs/Manual/Deployment/Kubernetes/DeploymentResource.md
@@ -126,7 +126,24 @@ cluster is down, or in a bad state, irrespective of the value of this setting.
 
 ### `spec.rocksdb.encryption.keySecretName`
 
-This setting specifies the name of a kubernetes `Secret` that contains
+This setting specifies the name of a Kubernetes `Secret` that contains
 an encryption key used for encrypting all data stored by ArangoDB servers.
 When an encryption key is used, encryption of the data in the cluster is enabled, without it
 encryption is disabled.
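+
+For example, a deployment that references such a key could look like this
+(a minimal sketch; the names `cluster-with-encryption` and `my-encryption-key`
+are placeholders, and the `database.arangodb.com/v1alpha` API version is
+assumed from the other examples in these docs):
+
+```yaml
+apiVersion: "database.arangodb.com/v1alpha"
+kind: "ArangoDeployment"
+metadata:
+  name: "cluster-with-encryption"
+spec:
+  mode: Cluster
+  rocksdb:
+    encryption:
+      keySecretName: my-encryption-key
+```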
diff --git a/docs/Manual/Deployment/Kubernetes/DriverConfiguration.md b/docs/Manual/Deployment/Kubernetes/DriverConfiguration.md
index 483d9de92..8b4320669 100644
--- a/docs/Manual/Deployment/Kubernetes/DriverConfiguration.md
+++ b/docs/Manual/Deployment/Kubernetes/DriverConfiguration.md
@@ -124,5 +124,5 @@ The easiest way to work around it, is by making sure that the query results are
 enough.
 When that is not feasible, it is also possible to resolve this when
 the internal DNS names of your Kubernetes cluster are exposed to your client application
-and the resuling IP addresses are routeable from your client application.
-To expose internal DNS names of your Kubernetes cluster, your can use [CoreDNS](https://coredns.io).
+and the resulting IP addresses are routable from your client application.
+To expose internal DNS names of your Kubernetes cluster, you can use [CoreDNS](https://coredns.io).
diff --git a/docs/Manual/Deployment/Kubernetes/Troubleshooting.md b/docs/Manual/Deployment/Kubernetes/Troubleshooting.md
index 25363301f..4bfda279c 100644
--- a/docs/Manual/Deployment/Kubernetes/Troubleshooting.md
+++ b/docs/Manual/Deployment/Kubernetes/Troubleshooting.md
@@ -68,7 +68,7 @@ those replicas.
 There are two common causes for this.
 
 1) The `Pods` cannot be scheduled because there are not enough nodes available.
-   This is usally only the case with a `spec.environment` setting that has a value of `Production`.
+   This is usually only the case with a `spec.environment` setting that has a value of `Production`.
 
    Solution:
    Add more nodes.
@@ -106,7 +106,7 @@ those `PersistentVolumes`, it depends on the type of server that was using the v
 - If a `DBServer` was using the volume, and the replication factor of all database
   collections is 2 or higher, and the remaining dbservers are still healthy,
   the cluster will duplicate the remaining replicas to
-  bring the number of replicases back to the original number.
+  bring the number of replicas back to the original number.
 - If a `DBServer` was using the volume, and the replication factor of a database
   collection is 1 and happens to be stored on that dbserver, the data is lost.
 - If a single server of an `ActiveFailover` deployment was using the volume, and the