This Helm chart provides all the basic infrastructure needed to deploy WProofreader Server to a Kubernetes cluster. By default, the image is pulled from the WebSpellChecker Docker Hub; however, many users may need to build their own local images with custom configuration. Please refer to our other repository to get started with building your own Docker image.
Before you begin, make sure you have the required environment:
The Chart can be installed the usual way using all the defaults:

```bash
git clone https://github.com/WebSpellChecker/wproofreader-helm.git
cd wproofreader-helm
helm install --create-namespace --namespace wsc wproofreader-app wproofreader
```
where `wsc` is the namespace where the app should be installed, `wproofreader-app` is the Helm release name, and `wproofreader` is the local Chart directory.
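Once the release is deployed, a quick sanity check with standard `kubectl` (nothing chart-specific is assumed here) confirms that the pods are running:

```bash
kubectl get pods --namespace wsc
```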
API requests should be sent to the Kubernetes Service instance, reachable at

```
http(s)://<service-name>.<namespace>.svc:<service-port>
```

where:

- `http` or `https` depends on the protocol used;
- `<service-name>` is the name of the Service instance, which would be `wproofreader-app` with the above command, unless overridden using the `fullnameOverride` parameter in `values.yaml`;
- `<namespace>` is the namespace where the chart was installed;
- `.svc` can be omitted in most cases, but it is recommended to keep it;
- `<service-port>` is `80` or `443` by default for HTTP and HTTPS, respectively, in which case it can be omitted, unless explicitly overwritten with `service.port` in `values.yaml`.
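For example, with the defaults above, a client pod in the same cluster could reach the service as follows (the exact API endpoint paths depend on your WProofreader Server setup and are not covered here; this only illustrates the address pattern):

```bash
curl http://wproofreader-app.wsc.svc/
```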
There are three ways the service can be activated:

- During `docker build`, by setting the `LICENSE_TICKET_ID` argument in the Dockerfile or CLI (`--build-arg LICENSE_TICKET_ID=${MY_LOCAL_VARIABLE}`).
- Through the `values.yaml` config file (`licenseTicketID` parameter).
- During the chart deployment/upgrade CLI call, using the `--set licenseTicketID=${LICENSE_TICKET_ID}` flag, provided that `LICENSE_TICKET_ID` is set in your environment (see the example below).
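As a sketch of the third option, assuming `LICENSE_TICKET_ID` is exported in your shell:

```bash
helm install --create-namespace --namespace wsc wproofreader-app wproofreader \
  --set licenseTicketID=${LICENSE_TICKET_ID}
```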
Important
If you are building a production environment, it is recommended to use a custom Docker image with WProofreader Server instead of the public one published on Docker Hub. With the custom image, you won't need to activate the license on container start, so you can simply skip this step. Otherwise, you may hit the maximum allowed number of license activation attempts (25 by default), in which case you need to contact support to extend or reset the license activation limit. Nevertheless, using the public image is acceptable for evaluation, testing, and development purposes.
By default, the server is set to communicate via HTTP, which is fine within a closed network. For outbound connections, it is of the utmost importance that clients communicate over TLS.
To do this, the following parameters have to be changed in `values.yaml`:

- `useHTTPS` to `true`.
- `certFile` and `keyFile` to the relative paths of the certificate and key files within the chart directory. Keep in mind that Helm can't reach outside the chart directory.
- `certMountPath` to whatever path was used in the `Dockerfile`. For the Docker Hub image, one should stick to the default value, which is `/certificate`.
Note

`certFile` and `keyFile` filenames, as well as `certMountPath`, have to match the values set in the `Dockerfile` used for building the image. Otherwise, the `nginx` config (`/etc/nginx/conf.d/wscservice.conf`) has to be updated with the new filenames and locations. The defaults for the Docker Hub image are `cert.pem`, `key.pem`, and `/certificate`, respectively.
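Put together, a minimal `values.yaml` fragment for the Docker Hub image might look like this (the certificate file names are examples; the files must sit inside the chart directory):

```yaml
useHTTPS: true
certFile: cert.pem        # relative to the chart directory
keyFile: key.pem          # relative to the chart directory
certMountPath: /certificate
```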
To enable WProofreader Server to use your custom dictionaries, follow these steps:

- Upload the files to a directory on the node where the chart will be deployed. Ensure this node has the `wproofreader.domain-name.com/app` label.
- Set the `dictionaries.localPath` parameter to the absolute path of this directory.
- Optionally, edit the `dictionaries.mountPath` value if a non-default one was used in the `Dockerfile`, as well as other `dictionaries` parameters if needed.
- Install the chart as usual.
The Chart uses `nodeAffinity` for mounting a Persistent Volume of type `local`. This allows the user to specify which node will host WProofreader Server on a cluster, even a single-node one.

To assign this role to a node, you need to attach a label to it. It can be any label you choose, e.g. `wproofreader.domain-name.com/app`:

```bash
kubectl label node <name-of-the-node> wproofreader.domain-name.com/app=
```

Note that `=` is required, but the value after it is not important (empty in this example). Keep in mind that your custom label has to be either updated in `values.yaml` (`nodeAffinityLabel` key, recommended) or passed to `helm` calls using `--set nodeAffinityLabel=wproofreader.domain-name.com/app`.
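To double-check that the label landed on the intended node, you can list the nodes that carry it (standard `kubectl` label selector):

```bash
kubectl get nodes -l wproofreader.domain-name.com/app
```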
To install the Chart with the custom dictionaries feature enabled and the local path set to the directory on the node where the dictionaries are stored:

```bash
helm install --create-namespace --namespace wsc wproofreader-app wproofreader \
  --set nodeAffinityLabel=wproofreader.domain-name.com/app \
  --set dictionaries.enabled=true \
  --set dictionaries.localPath=/dictionaries
```
The dictionary files can be uploaded after the chart installation, but the `dictionaries.localPath` folder must exist on the node beforehand.
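For instance, on the node itself (the path must match your `dictionaries.localPath` value):

```bash
sudo mkdir -p /dictionaries
```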
Dictionaries can be uploaded to the node VM using standard methods (`scp`, `rsync`, FTP, etc.) or the `kubectl cp` command. With `kubectl cp`, you need to use one of the deployment's pods. Once uploaded, the files will automatically appear on all pods and persist even if the pods are restarted. Follow these steps:
- Get the name of one of the pods. For the Helm release named `wproofreader-app` in the `wsc` namespace, use:

  ```bash
  POD=$(kubectl get pods -n wsc -l app.kubernetes.io/instance=wproofreader-app -o jsonpath="{.items[0].metadata.name}")
  ```

- Upload the files to the pod:

  ```bash
  kubectl cp -n wsc <local path to files> $POD:/dictionaries
  ```

  Replace `/dictionaries` with your custom `dictionaries.mountPath` value if applicable.
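To confirm the upload, you can list the mounted directory on the pod (again, substitute your `dictionaries.mountPath` value if you changed it):

```bash
kubectl exec -n wsc $POD -- ls /dictionaries
```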
There is also a way in the Chart to specify an already existing Persistent Volume Claim (PVC) with dictionaries that can be configured to operate on multiple nodes (e.g., NFS). To do this, enable the custom dictionaries feature by setting the `dictionaries.enabled` parameter to `true` and specifying the name of the existing PVC in the `dictionaries.existingClaim` parameter.
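In `values.yaml` terms, this is a minimal sketch (the PVC name is hypothetical and must refer to a claim you have already provisioned):

```yaml
dictionaries:
  enabled: true
  existingClaim: wproofreader-dictionaries-pvc  # hypothetical, pre-provisioned PVC
```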
Tip
Using an existing PVC is the recommended way because it ensures that your data will persist even if the Chart is uninstalled. This approach offers a reliable method to maintain data integrity and availability across deployments.
However, please note that provisioning the Persistent Volume (PV) and PVC for storage backends like NFS is outside the scope of this Chart. You will need to provision the PV and PVC separately according to your storage backend's documentation before using the `dictionaries.existingClaim` parameter.
For production deployments, it is highly recommended to specify resource requests and limits for your Kubernetes pods. This helps ensure that your applications have the necessary resources to run efficiently while preventing them from consuming excessive resources on the cluster, which can impact other applications. This can be configured in the `values.yaml` file under the `resources` section.
Below are the recommended resource requests and limits for deploying WProofreader Server v5.34.x with the English dialects enabled (en_US, en_GB, en_CA, and en_AU) for spelling and grammar checking, using the English AI language model for enhanced and more accurate proofreading. This also covers features such as a style guide, spelling autocorrect, named-entity recognition (NER), and text autocomplete suggestions (text prediction). These values represent the minimum requirements for running WProofreader Server in a production environment.
```yaml
resources:
  requests:
    memory: "4Gi"
    cpu: "1"
  limits:
    memory: "8Gi"
    cpu: "4"
```
Note
Depending on your specific needs and usage patterns, especially when deploying AI language models for enhanced proofreading in other languages, you may need to adjust these values to ensure optimal performance and resource utilization. Alternatively, you can choose the bare-minimum configuration without AI language models. In this case, only algorithmic engines will be used to provide basic spelling and grammar checks.
The Helm chart includes readiness and liveness probes to help Kubernetes manage the lifecycle of the WProofreader Server pods. These probes are used to determine when the pod is ready to accept traffic and when it should be restarted if it becomes unresponsive.
You can adjust the Chart's default values to match your environment's resources and application needs in the `values.yaml` file under the `readinessProbeOptions` and `livenessProbeOptions` sections.
Example:

```yaml
readinessProbeOptions:
  initialDelaySeconds: 10
  periodSeconds: 10
  timeoutSeconds: 5
  successThreshold: 1
  failureThreshold: 3
```
WProofreader Server can be scaled horizontally by changing the number of replicas. This can be done by setting the `replicaCount` parameter in the `values.yaml` file. The default value is `1`. For example, to scale the application to 3 replicas, pass the `--set replicaCount=3` flag when installing the Helm chart, as shown below.
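For instance, reusing the release name and namespace from the earlier examples:

```bash
helm install --create-namespace --namespace wsc wproofreader-app wproofreader --set replicaCount=3
```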
For dynamic scaling based on resource utilization, you can use the Kubernetes Horizontal Pod Autoscaler (HPA). To use the HPA, you need to enable the metrics server in your Kubernetes cluster. The HPA will then automatically adjust the number of pods in the deployment based on CPU usage. The HPA is not enabled by default in the Helm chart. To enable it, set the `autoscaling.enabled` parameter to `true` in the `values.yaml` file.
Important
WProofreader Server can be scaled only based on the CPU usage metric. The `targetMemoryUtilizationPercentage` parameter is not supported.
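A sketch of the relevant `values.yaml` block, assuming the chart follows the common Helm autoscaling conventions (only `autoscaling.enabled` is confirmed by this document; the remaining key names are assumptions to be checked against the chart's `values.yaml`):

```yaml
autoscaling:
  enabled: true
  minReplicas: 1                       # assumed key name
  maxReplicas: 5                       # assumed key name
  targetCPUUtilizationPercentage: 80   # assumed key name; memory-based scaling is not supported
```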
Check the pod logs to see if the license ID has not been provided:

```bash
POD=$(kubectl get pods -n <namespace> -l app.kubernetes.io/instance=<release-name> -o jsonpath="{.items[0].metadata.name}")
kubectl logs -n <namespace> $POD
```

If so, refer to the license section. The existing release can be patched with:

```bash
helm upgrade -n <namespace> <release-name> wproofreader --set licenseTicketID=<license ID>
```

Keep in mind that subsequent `helm upgrade` calls have to carry the `licenseTicketID` flag, so that it is not overwritten with the (empty) value from `values.yaml`.
Please make sure that all values passed as `--set` CLI arguments are duplicated in your latest `helm upgrade` call, or simply use the `--reuse-values` flag. Otherwise, they are overwritten with the contents of `values.yaml`.
For illustration purposes, please find exported Kubernetes manifests in the `manifests` folder. If you need to export the manifest files from this sample Helm Chart, please use the following command:

```bash
helm template --namespace wsc wproofreader-app wproofreader \
  --set licenseTicketID=qWeRtY123 \
  --set useHTTPS=true \
  --set certFile=cert.pem \
  --set keyFile=key.pem \
  --set dictionaries.localPath=/var/local/dictionaries \
  > manifests/manifests.yaml
```
The service might fail to start up properly if misconfigured. For troubleshooting, it can be beneficial to get the full configuration you attempted to deploy. If needed, later it can be shared with the support team for further investigation.
There are several ways to gather the necessary details:
- Get the values (user-configurable options) used by Helm to generate the Kubernetes manifests:

  ```bash
  helm get values --all --namespace wsc wproofreader-app > wproofreader-app-values.yaml
  ```

  where `wsc` is the namespace, `wproofreader-app` is the name of your release, and `wproofreader-app-values.yaml` is the name of the file the data will be written to.
- Extract the full Kubernetes manifest(s) as follows:

  ```bash
  helm get manifest --namespace wsc wproofreader-app > manifests.yaml
  ```

  If you do not have access to `helm`, the same can be accomplished using `kubectl`. To get the manifests for all resources in the `wsc` namespace, run:

  ```bash
  kubectl get all --namespace wsc -o yaml > manifests.yaml
  ```
- Retrieve the logs of all `wproofreader-app` pods in the `wsc` namespace:

  ```bash
  kubectl logs -n wsc -l app.kubernetes.io/instance=wproofreader-app
  ```