
Promoted Helm chart version to 1.0.0 (#2)
* Promoted Helm chart version to 1.0.0
solovyovk authored Jun 12, 2024
1 parent 0cbc1bc commit c97d083
Showing 16 changed files with 374 additions and 119 deletions.
144 changes: 102 additions & 42 deletions README.md
@@ -15,39 +15,23 @@ Before you begin, make sure you have the required environment:

## Basic installation

The chart uses `nodeAffinity` for mounting Persistent Volume of type `local`.
This also allows the user to specify which node will host the WProofreader Server
on a cluster (even a single-node one).

To assign this role to a node, one has to attach a label to it. It can be whatever you want it to be,
e.g. `proofreader.your-company.com/app`:
```shell
kubectl label node <name-of-the-node> proofreader.company-domain.com/app=
```
Note that `=` is required, but the value after it is not important (empty in this example).

Keep in mind that your custom label has to be either updated in `values.yaml`
(`affinityLabel` key, recommended), or passed to `helm` calls using
`--set affinityLabel=proofreader.company-domain.com/app`.

Now, the chart can be installed the usual way using all the defaults:
The Chart can be installed the usual way using all the defaults:
```shell
git clone https://github.com/WebSpellChecker/wproofreader-helm.git
cd wproofreader-helm
helm install --create-namespace --namespace wsc wsc-app-5-x-x wproofreader --set affinityLabel=proofreader.company-domain.com/app
helm install --create-namespace --namespace wsc wproofreader-app wproofreader
```
where `wsc` is the namespace the app should be installed to,
`wsc-app-5-25-0` – the release name, where we specifically mention the product version 5.25.0,
`wproofreader` – local chart directory,
`--set affinityLabel=proofreader.company-domain.com/app` – optional affinity label, see previous paragraph.
where `wsc` is the namespace where the app should be installed,
`wproofreader-app` is the Helm release name,
`wproofreader` is the local Chart directory.
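
To verify that the release came up, you can optionally list its pods; this check is not required by the Chart and simply relies on the standard instance label used elsewhere in this README:
```shell
# List the pods created by the wproofreader-app release
kubectl get pods -n wsc -l app.kubernetes.io/instance=wproofreader-app
```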

API requests should be sent to the Kubernetes Service instance, reachable at
```text
http(s)://<service-name>.<namespace>.svc:<service-port>
```
where
- `http` or `https` depends on the protocol used;
- `<service-name>` is the name of the Service instance, which would be `wsc-app-5-25-0-wproofreader` with the above
- `<service-name>` is the name of the Service instance, which would be `wproofreader-app` with the above
command, unless overridden using the `fullnameOverride` parameter in `values.yaml`;
- `<namespace>` is the namespace where the chart was installed;
- `.svc` can be omitted in most cases, but is recommended to keep;
@@ -88,30 +72,102 @@ The defaults for the DockerHub image are `cert.pem`, `key.pem`, and `/certificat

## Custom dictionaries

To allow WProofreader Server to use your custom dictionaries, you have to do the following:
1. Upload the files to some directory on the node, where the chart will be deployed
(remember, it's the one with the `proofreader.company-domain.com/app` label).
To enable WProofreader Server to use your custom dictionaries, follow these steps:
1. Upload the files to a directory on the node where the chart will be deployed.
Ensure this node has the `wproofreader.domain-name.com/app` label.
2. Set the `dictionaries.localPath` parameter to the absolute path of this directory.
3. Optionally, edit the `dictionaries.mountPath` value if a non-default one was used in the `Dockerfile`,
as well as other `dictionaries` parameters if needed.
4. Install the chart normally.
4. Install the chart as usual.

The Chart uses `nodeAffinity` for mounting a Persistent Volume of type `local`.
This allows the user to specify which node will host the WProofreader Server
on a cluster, even a single-node one.

To assign this role to a node, you need to attach a label to it. It can be any label you choose,
e.g. `wproofreader.domain-name.com/app`:
```shell
kubectl label node <name-of-the-node> wproofreader.domain-name.com/app=
```
Note that `=` is required but the value after it is not important (empty in this example).

Keep in mind that your custom label has to be either set in `values.yaml`
(the `nodeAffinityLabel` key, recommended) or passed to `helm` calls using
`--set nodeAffinityLabel=wproofreader.domain-name.com/app`.

To install the Chart with the custom dictionaries feature enabled and the local path set to the directory on the node where the dictionaries are stored:
```shell
helm install --create-namespace --namespace wsc wproofreader-app wproofreader --set nodeAffinityLabel=wproofreader.domain-name.com/app --set dictionaries.enabled=true --set dictionaries.localPath=/dictionaries
```
The dictionary files can be uploaded after the chart installation, but the `dictionaries.localPath`
folder has to exist on the node beforehand.
Dictionaries can be uploaded to the node VM either the usual way (`scp`, `rsync`, `FTP` etc), or
using `kubectl cp` command. With `kubectl cp` we have to use one of pods of the deployment.
Once the files are uploaded, they will appear on all the pods automatically, and will persist
if any or all the pods are restarted. The workflow for this would look something like this:
1. Get the name of one of the pods. For the Helm release named `wsc-app-5-25-0` installed in the `wsc` namespace, we can use
folder must exist on the node beforehand.
Dictionaries can be uploaded to the node VM using standard methods (`scp`, `rsync`, `FTP`, etc.) or
the `kubectl cp` command. With `kubectl cp`, you need to use one of the deployment's pods.
Once uploaded, the files will automatically appear on all pods and persist
even if the pods are restarted. Follow these steps:
1. Get the name of one of the pods. For the Helm release named `wproofreader-app` in the `wsc` namespace, use
```shell
POD=$(kubectl get pods -n wsc -l app.kubernetes.io/instance=wsc-app-5-25-0 -o jsonpath="{.items[0].metadata.name}")
POD=$(kubectl get pods -n wsc -l app.kubernetes.io/instance=wproofreader-app -o jsonpath="{.items[0].metadata.name}")
```
2. Upload the files to the pod
```shell
kubectl cp -n wsc <local path to files> $POD:/dictionaries
```
where `/dictionaries` should be changed to whatever non-default `dictionaries.mountPath` value was used if applicable.
Replace `/dictionaries` with your custom `dictionaries.mountPath` value if applicable.
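
Optionally, you can confirm the upload by listing the mounted directory from the same pod; this is a quick sanity check assuming the default `/dictionaries` mount path:
```shell
# List the dictionary files visible inside the pod
kubectl exec -n wsc $POD -- ls /dictionaries
```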

The Chart also allows you to specify an already existing Persistent Volume Claim (PVC) with dictionaries, backed by storage that can operate across multiple nodes (e.g., NFS). To do this, enable the custom dictionaries feature by setting the `dictionaries.enabled` parameter to `true` and specify the name of the existing PVC in the `dictionaries.existingClaim` parameter.

**Recommended approach:** Using an existing PVC is recommended because it ensures that your data persists even if the Chart is uninstalled, providing a reliable way to maintain data integrity and availability across deployments.

However, please note that provisioning the Persistent Volume (PV) and PVC for storage backends like NFS is outside the scope of this Chart. You will need to provision the PV and PVC separately according to your storage backend's documentation before using the `dictionaries.existingClaim` parameter.
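
For illustration, installing with an existing claim could look like this sketch, where the PVC name `wproofreader-dictionaries` is a placeholder for your own pre-provisioned claim:
```shell
# "wproofreader-dictionaries" is a placeholder for the name of your pre-provisioned PVC
helm install --create-namespace --namespace wsc wproofreader-app wproofreader \
  --set dictionaries.enabled=true \
  --set dictionaries.existingClaim=wproofreader-dictionaries
```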

## Use in production

For production deployments, it is highly recommended **to specify resource requests and limits for your Kubernetes pods**. This helps ensure that your applications have the necessary resources to run efficiently while preventing them from consuming excessive resources on the cluster, which can impact other applications.
This can be configured in the `values.yaml` file under the `resources` section.

### Recommended resource requests and limits

Below are the recommended resource requests and limits for deploying WProofreader Server v5.34.x with the English dialects (en_US, en_GB, en_CA, and en_AU) enabled for spelling and grammar checks, using the English AI language model for enhanced, more accurate proofreading. This configuration also includes features such as a style guide, spelling autocorrect, named-entity recognition (NER), and text autocomplete suggestions (text prediction). These values represent the minimum requirements for running WProofreader Server in a production environment.

**Note:** Depending on your specific needs and usage patterns, especially when deploying AI language models for enhanced proofreading in other languages, you may need to adjust these values to ensure optimal performance and resource utilization. Alternatively, you can choose the bare-minimum configuration without AI language models. In this case, only algorithmic engines will be used to provide basic spelling and grammar checks.

```yaml
resources:
  requests:
    memory: "4Gi"
    cpu: "1"
  limits:
    memory: "8Gi"
    cpu: "4"
```
### Readiness and liveness probes
The Helm chart includes readiness and liveness probes to help Kubernetes manage the lifecycle of the WProofreader Server pods. These probes are used to determine when the pod is ready to accept traffic and when it should be restarted if it becomes unresponsive.
You can adjust the Chart's default values to match your environment's resources and application needs in the `values.yaml` file, under the `readinessProbeOptions` and `livenessProbeOptions` sections.
Example:
```yaml
readinessProbeOptions:
  initialDelaySeconds: 10
  periodSeconds: 10
  timeoutSeconds: 5
  successThreshold: 1
  failureThreshold: 3
```

### Application scaling
WProofreader Server can be scaled horizontally by changing the number of replicas.
This can be done by setting the `replicaCount` parameter in the `values.yaml` file.
The default value is `1`. For example, to scale the application to 3 replicas, pass the `--set replicaCount=3` flag when installing the Helm chart.
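
For a release that is already installed, the same parameter can be applied with `helm upgrade`; this is a sketch that assumes the release name, namespace, and chart directory used in the examples above:
```shell
# Scale the existing release to 3 replicas, keeping the values set at install time
helm upgrade --namespace wsc wproofreader-app wproofreader --reuse-values --set replicaCount=3
```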

For dynamic scaling based on resource utilization, you can use the Kubernetes Horizontal Pod Autoscaler (HPA).
To use the HPA, you need to enable the metrics server in your Kubernetes cluster. The HPA will then automatically adjust the number of pods in the deployment based on CPU utilization.
The HPA is not enabled by default in the Helm chart. To enable it, set the `autoscaling.enabled` parameter to `true` in the `values.yaml` file.

**Important Note:** WProofreader Server can be scaled only based on the CPU usage metric; `targetMemoryUtilizationPercentage` is not supported.
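
As an illustration, enabling the HPA through `values.yaml` might look like the sketch below. Only `autoscaling.enabled` is explicitly documented here; the other keys (`minReplicas`, `maxReplicas`, `targetCPUUtilizationPercentage`) follow common Helm chart scaffolding and mirror `manifests/hpa.yaml`, so verify them against your chart version:
```yaml
# Assumed key names; only autoscaling.enabled is confirmed by this README
autoscaling:
  enabled: true                       # turn on the HorizontalPodAutoscaler
  minReplicas: 1                      # lower bound, matches manifests/hpa.yaml
  maxReplicas: 5                      # upper bound, matches manifests/hpa.yaml
  targetCPUUtilizationPercentage: 80  # scale on CPU utilization (memory is not supported)
```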

## Common issues
### Readiness probe failed

@@ -141,7 +197,7 @@ Otherwise, they are overwritten with the contents of `values.yaml`.
For illustration purposes, please find exported Kubernetes manifests in the `manifests` folder.
If you need to export the manifest files from this sample Helm Chart, please use the following command:
```shell
helm template --namespace wsc wsc-app-sample wproofreader \
helm template --namespace wsc wproofreader-app wproofreader \
--set licenseTicketID=qWeRtY123 \
--set useHTTPS=true \
--set certFile=cert.pem \
@@ -154,21 +210,25 @@ helm template --namespace wsc wsc-app-sample wproofreader \

The service might fail to start up properly if misconfigured. For troubleshooting, it can be beneficial to get the full configuration you attempted to deploy. If needed, it can later be shared with the support team for further investigation.

There are several options for how to gather needed details:
There are several ways to gather the necessary details:
1. Get the values (user-configurable options) used by Helm to generate Kubernetes manifests:
```shell
helm get values --all --namespace wsc wsc-app-5-25-0 > wsc-app-5-25-0-values.yaml
helm get values --all --namespace wsc wproofreader-app > wproofreader-app-values.yaml
```
where `wsc` is the namespace and `wsc-app-5-25-0` – the name of your release,
and `wsc-app-5-25-0-values.yaml` – name of the file the data will be written to.
where `wsc` is the namespace, `wproofreader-app` is the name of your release,
and `wproofreader-app-values.yaml` is the file the data will be written to.

2. Extract the full Kubernetes manifest(s) as follows:
```shell
helm get manifest --namespace wsc wsc-app-5-25-0 > manifests.yaml
helm get manifest --namespace wsc wproofreader-app > manifests.yaml
```

If, for any reason, you do not have access to `helm`, same can be accomplished using
`kubectl`. To get manifests for all resources in `wsc` namespace, use:
If you do not have access to `helm`, the same can be accomplished using
`kubectl`. To get manifests for all resources in the `wsc` namespace, run:
```shell
kubectl get all --namespace wsc -o yaml > manifests.yaml
```
3. Retrieve the logs of all `wproofreader-app` pods in the `wsc` namespace:
```shell
kubectl logs -n wsc -l app.kubernetes.io/instance=wproofreader-app
```
80 changes: 80 additions & 0 deletions manifests/deployment_http.yaml
@@ -0,0 +1,80 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wproofreader-app
  labels:
    helm.sh/chart: wproofreader-1.0.0
    app.kubernetes.io/name: wproofreader
    app.kubernetes.io/instance: wproofreader-app
    app.kubernetes.io/version: "5.34.3"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  selector:
    matchLabels:
      app.kubernetes.io/name: wproofreader
      app.kubernetes.io/instance: wproofreader-app
  template:
    metadata:
      annotations:
        checksum/secrets: 3dc935bf0e71c4a7d9f2b3cf2c9743c0ad...
      labels:
        app.kubernetes.io/name: wproofreader
        app.kubernetes.io/instance: wproofreader-app
    spec:
      serviceAccountName: wproofreader-app
      securityContext:
        fsGroup: 2000
      containers:
        - name: wproofreader
          securityContext:
            {}
          image: "webspellchecker/wproofreader:5.34.3"
          imagePullPolicy: IfNotPresent
          ports:
            - name: container-port
              containerPort: 8080
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /wscservice
              port: container-port
              scheme: HTTP
          readinessProbe:
            httpGet:
              path: "/wscservice/api?cmd=status"
              port: container-port
              scheme: HTTP
          resources:
            {}
          volumeMounts:
            - mountPath: /dictionaries
              name: dictionaries-volume
          env:
            - name: PROTOCOL
              value: "2"
            - name: WEB_PORT
              value: "80"
            - name: VIRTUAL_DIR
              value: wscservice
            - name: LICENSE_TICKET_ID
              valueFrom:
                secretKeyRef:
                  name: wproofreader-app-lic
                  key: license
      volumes:
        - name: dictionaries-volume
          persistentVolumeClaim:
            claimName: wproofreader-app-dict
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: wproofreader.domain-name.com/app
                    operator: Exists
30 changes: 16 additions & 14 deletions manifests/deployment.yaml → manifests/deployment_https.yaml
@@ -1,11 +1,13 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wsc-app-sample-wproofreader
  name: wproofreader-app
  labels:
    helm.sh/chart: wproofreader-1.0.0
    app.kubernetes.io/name: wproofreader
    app.kubernetes.io/instance: wsc-app-sample
    app.kubernetes.io/version: "5.25.0"
    app.kubernetes.io/instance: wproofreader-app
    app.kubernetes.io/version: "5.34.3"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  strategy:
@@ -16,23 +18,23 @@ spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: wproofreader
      app.kubernetes.io/instance: wsc-app-sample
      app.kubernetes.io/instance: wproofreader-app
  template:
    metadata:
      annotations:
        checksum/secrets: 3d370016ca764dcc4fbace6c41ec622b592a9cd42d3da47118149693c2b2b5e0
        checksum/secrets: 3d370016ca764dcc4fbace6c41e...
      labels:
        app.kubernetes.io/name: wproofreader
        app.kubernetes.io/instance: wsc-app-sample
        app.kubernetes.io/instance: wproofreader-app
    spec:
      serviceAccountName: wsc-app-sample-wproofreader
      serviceAccountName: wproofreader-app
      securityContext:
        fsGroup: 2000
      containers:
        - name: wproofreader
          securityContext:
            {}
          image: "webspellchecker/wproofreader:5.25.0"
          image: "webspellchecker/wproofreader:5.34.3"
          imagePullPolicy: IfNotPresent
          ports:
            - name: container-port
@@ -45,7 +47,7 @@ spec:
              scheme: HTTPS
          readinessProbe:
            httpGet:
              path: "/wscservice/api?cmd=ver"
              path: "/wscservice/api?cmd=status"
              port: container-port
              scheme: HTTPS
          resources:
@@ -65,19 +67,19 @@ spec:
            - name: LICENSE_TICKET_ID
              valueFrom:
                secretKeyRef:
                  name: wsc-app-sample-wproofreader-lic
                  name: wproofreader-app-lic
                  key: license
      volumes:
        - name: tls-secret-volume
          secret:
            secretName: wsc-app-sample-wproofreader-cert
            secretName: wproofreader-app-cert
        - name: dictionaries-volume
          persistentVolumeClaim:
            claimName: wsc-app-sample-wproofreader-dict
            claimName: wproofreader-app-dict
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: proofreader.company-domain.com/app
                    operator: Exists
                  - key: wproofreader.domain-name.com/app
                    operator: Exists
24 changes: 24 additions & 0 deletions manifests/hpa.yaml
@@ -0,0 +1,24 @@
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: wproofreader-app
  labels:
    helm.sh/chart: wproofreader-1.0.0
    app.kubernetes.io/name: wproofreader
    app.kubernetes.io/instance: wproofreader-app
    app.kubernetes.io/version: "5.34.3"
    app.kubernetes.io/managed-by: Helm
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: wproofreader-app
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80