Migrate epi 0.2.0 from old repo (#29)
* checkin epi 0.2.0

* rename repo ph-ethadapter to pharmaledger-imi

* helm chart testing

* doc update
tgip-work authored Apr 13, 2022
1 parent 659ba9a commit 6c8c778
Showing 21 changed files with 2,190 additions and 0 deletions.
24 changes: 24 additions & 0 deletions charts/epi/.helmignore
@@ -0,0 +1,24 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
tests
29 changes: 29 additions & 0 deletions charts/epi/Chart.yaml
@@ -0,0 +1,29 @@
apiVersion: v2
name: epi
description: A Helm chart for Pharma Ledger epi (electronic product information) application

# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application

# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.2.0

# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
appVersion: "poc.1.6"

icon: https://avatars.githubusercontent.com/u/60230259?s=200&v=4

maintainers:
- name: tgip-work
url: https://github.com/tgip-work
354 changes: 354 additions & 0 deletions charts/epi/README.md


305 changes: 305 additions & 0 deletions charts/epi/README.md.gotmpl
@@ -0,0 +1,305 @@
{{ template "chart.header" . }}

{{ template "chart.versionBadge" . }}{{ template "chart.typeBadge" . }}{{ template "chart.appVersionBadge" . }}

{{ template "chart.description" . }}

## Requirements

- [helm 3](https://helm.sh/docs/intro/install/)
- These mandatory configuration values:
  - Domain - The Domain, e.g. `epipoc`
  - Sub Domain - The Sub Domain, e.g. `epipoc.my-company`
  - Vault Domain - The Vault Domain, e.g. `vault.my-company`
  - ethadapterUrl - The full URL of the Ethadapter including protocol and port, e.g. `https://ethadapter.my-company.com:3000`
  - bdnsHosts - The centrally managed and provided BDNS Hosts Config

## Usage

- [Here](./README.md#values) is a full list of all configuration values.
- The [values.yaml file](./values.yaml) shows the raw view of all configuration values.

## Changelog

- From 0.1.x to 0.2.x - Technical release with significant changes! Please uninstall old versions first! Upgrading from 0.1.x is not tested and not guaranteed!
  - Uses Helm hooks for Init and Cleanup.
  - Optimized build process: The SeedsBackup will only be created if the underlying container image has changed, e.g. in case of an upgrade!
  - Readiness probe implemented. The application container is considered *ready* after the build process has finished.
  - Value `config.ethadapterUrl` has changed from `https://ethadapter.my-company.com:3000` to `http://ethadapter.ethadapter:3000` in order to reflect changes in [ethadapter](https://github.com/PharmaLedger-IMI/helmchart-ethadapter/tree/epi-improve-build/charts/ethadapter).
  - Value `persistence.storageClassName` has changed from `gp2` to the empty string `""` in order to remove the pre-defined setting for AWS and to be cloud-agnostic by default.
  - Configurable sleep time between the start of apihub and the build process (`config.sleepTime`).
  - Configuration options for the PersistentVolumeClaim.
  - Configuration has been prepared for running as a non-root user (still commented out, see [values.yaml `podSecurityContext` and `securityContext`](./values.yaml)).
  - Minor optimizations of the Kubernetes resources, e.g. set sizeLimit of the temporary shared volume, explicitly set readOnly flags at volumeMounts.

## Helm Lifecycle and Kubernetes Resources Lifetime

This helm chart uses Helm [hooks](https://helm.sh/docs/topics/charts_hooks/) in order to install, upgrade and manage the application and its resources.

```mermaid
sequenceDiagram
participant PIN as pre-install
participant PUP as pre-upgrade
participant I as install
participant U as uninstall
participant PUN as post-uninstall
Note over PIN,PUN: PersistentVolumeClaim
Note over PIN,PUN: ConfigMap SeedsBackup
Note over PIN:Init Job
Note over PIN:ConfigMaps Init
Note over PIN:ServiceAccount Init
Note over PIN:Role Init
Note over PIN:RoleBinding Init
note right of PIN: Note: The Init Job stores <br/>Seeds in Configmap SeedsBackup and <br/> is either executed by a) pre-install hook or<br/>b)pre-upgrade hook
Note over PUP,U:Deployment
Note over PUP,U:ConfigMap build-info
Note over PUP,U:Configmaps for application
Note over PUP,U:Service
Note over PUP,U:Ingress
Note over PUP,U:ServiceAccount
Note over PUP:Init Job<br/>and more<br/>(see pre-install)
Note over PUN:Cleanup Job
Note over PUN:ServiceAccount Cleanup
Note over PUN:Role Cleanup
Note over PUN:RoleBinding Cleanup
note right of PUN: Note: The Cleanup job<br/>1. deletes PersistentVolumeClaim (optional)<br/>2. creates final backup of ConfigMap SeedsBackup<br/>3. deletes ConfigMap SeedsBackup
```

## Init Job

The Init Job is an important step and will be executed on helm [hooks](https://helm.sh/docs/topics/charts_hooks/) `pre-install` and `pre-upgrade`.
Its pod consists of three containers: two init containers and one main container.

```mermaid
flowchart LR
A(Init Container 1:<br/>Check necessity for build process) -->B(Init Container 2:<br/>Run build process if necessary)
B --> C(Main Container:<br/>Write/Update ConfigMap Seedsbackup)
```

### Init Job Details

1. On `helm install` and `helm upgrade`, helm deploys a Kubernetes Job named *job-init* which schedules a pod consisting of two Init Containers and one Main Container.

```mermaid
flowchart LR
A(Helm pre-install/pre-upgrade hook) -->|deploys| B(Init Job)
B -->|schedules| C(Init Pod)
```

2. The first Init Container runs a `kubectl` command to check the existence of ConfigMap `build-info`, which contains information about the latest successful build process.
   1. If ConfigMap `build-info` does not exist or the latest build process does not match the current epi application container image, a *signal* file is written to a volume shared between the containers.
   2. Otherwise, the build process has already been executed for the current application container image.

```mermaid
flowchart LR
D(Init Container<br/>Kubectl) --> E{ConfigMap build-info<br/>exists and<br/>matches current<br/>container image?}
E -->|not exists| F[Write signal file to shared data volume]
F --> G[Exit Init Container<br/>Kubectl]
E -->|exists| G
```
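The decision logic above can be sketched in plain shell. This is a simplified sketch with assumed paths (`/tmp/epi-shared`, a `build-info` file standing in for the ConfigMap) so it runs without a cluster; the real init container reads the ConfigMap via `kubectl`.

```bash
#!/bin/sh
# Sketch of the first init container's check (paths/names are assumptions).
# The real chart reads ConfigMap "build-info" via kubectl; a plain file
# stands in for it here so the sketch runs anywhere.
SHARED_DIR="${SHARED_DIR:-/tmp/epi-shared}"
BUILD_INFO_FILE="${BUILD_INFO_FILE:-$SHARED_DIR/build-info}"
CURRENT_IMAGE="${CURRENT_IMAGE:-epi:poc.1.6}"

mkdir -p "$SHARED_DIR"
RECORDED_IMAGE=""
if [ -f "$BUILD_INFO_FILE" ]; then
  RECORDED_IMAGE="$(cat "$BUILD_INFO_FILE")"
fi

if [ "$RECORDED_IMAGE" != "$CURRENT_IMAGE" ]; then
  # build-info missing or image changed: request a build via the signal file
  touch "$SHARED_DIR/signal"
  echo "signal written: build required"
else
  echo "image unchanged: skipping build"
fi
```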

3. The second Init Container uses the container image of the epi application and checks the existence of the *signal* file from the first Init Container.
4. If the file does not exist, no build process needs to run and the container exits.
5. If the *signal* file exists, the container
   1. starts the apihub server (`npm run server`), waits for a short period of time and then starts the build process (`npm run build-all`), and
   2. after the build process, writes the SeedsBackup file to a temporary volume shared between the init and main containers.

```mermaid
flowchart LR
D(Init Container<br/>application) --> E{Signal file exists?}
E -->|yes, exists| F[start apihub server]
F --> G[sleep short time]
G --> H[build process]
H --> I[write SeedsBackup file to shared data with main container]
I --> J
E -->|no, does not exist| J[Exit Init Container<br/>application]
```
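The same flow as a runnable shell sketch; the `npm` commands are replaced by `echo` stubs so the sketch runs outside the epi image, and directory/file names are assumptions:

```bash
#!/bin/sh
# Sketch of the second init container (names are assumptions; npm commands
# are stubbed with echo so the sketch is runnable outside the epi image).
run_build_if_signaled() {
  shared_dir="$1"
  sleep_time="${2:-1}"                     # config.sleepTime in the chart
  if [ ! -f "$shared_dir/signal" ]; then
    echo "no signal file: build not required"
    return 0
  fi
  echo "npm run server" &                  # stub: start apihub in background
  sleep "$sleep_time"                      # give apihub time to come up
  echo "npm run build-all"                 # stub: run the build
  echo "seed data" > "$shared_dir/seedsBackup"  # hand over to main container
}

# Demo: simulate the "build required" case
mkdir -p /tmp/epi-demo && touch /tmp/epi-demo/signal
run_build_if_signaled /tmp/epi-demo 1
```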

6. The Main Container has kubectl installed and checks whether the SeedsBackup file was handed over by the Init Container.

```mermaid
flowchart LR
L(Main Container) --> M{SeedsBackup file exists?}
M -->|exists| N[Create ConfigMap SeedsBackup for current Image]
N --> O[Update ConfigMap SeedsBackup]
O --> P
M -->|not exists| P[Exit Pod]
```

After completion of the *Init Job* the application container will be deployed/restarted with the current *ConfigMap SeedsBackup*.

## Cleanup Job

On deletion/uninstall of the helm chart, a Kubernetes Cleanup Job will be deployed in order to delete resources that were created by helm hooks at `pre-install` and are therefore not managed by helm.
These resources are:

1. Init Job - The Init Job was created on pre-install/pre-upgrade and will remain after its execution.
2. PersistentVolumeClaim - If the PersistentVolumeClaim shall not be deleted when the helm release is deleted, set `persistence.deletePvcOnUninstall` to `false`.
3. ConfigMap SeedsBackup - Prior to deletion of the ConfigMap, a backup ConfigMap will be created with naming schema `{HELM_RELEASE_NAME}-seedsbackup-{IMAGE_TAG}-final-backup-{EPOCH_IN_SECONDS}`, e.g. `epi-seedsbackup-poc.1.6-final-backup-1646063552`
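The naming schema of the final backup can be reproduced in shell (release name and image tag are illustrative):

```bash
#!/bin/sh
# Reconstruct the final-backup ConfigMap name from the documented schema:
# {HELM_RELEASE_NAME}-seedsbackup-{IMAGE_TAG}-final-backup-{EPOCH_IN_SECONDS}
HELM_RELEASE_NAME="epi"
IMAGE_TAG="poc.1.6"
EPOCH_IN_SECONDS="$(date +%s)"
BACKUP_NAME="${HELM_RELEASE_NAME}-seedsbackup-${IMAGE_TAG}-final-backup-${EPOCH_IN_SECONDS}"
echo "$BACKUP_NAME"
```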

### Quick install with internal service of type ClusterIP

By default, this helm chart installs the epi application at an internal ClusterIP Service listening at port 3000.
This is to prevent exposing the service to the internet by accident!
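To reach the internal service from a workstation without exposing it, a port-forward can be used (the release/service name below is an assumption; the port follows the text above, adjust both to your install):

```bash
# Hypothetical: forward local port 3000 to the internal ClusterIP service
kubectl port-forward --namespace=default svc/my-release-name-epi 3000:3000
# then open http://127.0.0.1:3000
```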

It is recommended to put non-sensitive configuration values in a configuration file and to pass sensitive/secret values via the command line.

1. Create configuration file, e.g. *my-config.yaml*

```yaml
config:
  domain: "domain_value"
  subDomain: "subDomain_value"
  vaultDomain: "vaultDomain_value"
  ethadapterUrl: "https://ethadapter.my-company.com:3000"
  bdnsHosts: |-
    # ... content of the BDNS Hosts file ...
```

2. Install via helm to namespace `default`

```bash
helm upgrade my-release-name pharmaledger-imi/epi --version={{ template "chart.version" . }} \
  --install \
  --values my-config.yaml
```

### Expose Service via Load Balancer

In order to expose the service **directly** by its **own dedicated** Load Balancer, just **add** `service.type` with value `LoadBalancer` to your config file (in order to override the default value `ClusterIP`).

**Please note:** At AWS, using `service.type` = `LoadBalancer` is no longer recommended, as it creates a Classic Load Balancer. Use the [AWS Load Balancer Controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/) with an ingress instead; a full sample is provided later in the docs. Using an Application Load Balancer (managed by the AWS LB Controller) increases security (e.g. by using a Web Application Firewall for your http based traffic) and provides more features like hostname or path based routing and built-in authentication mechanisms via OIDC or AWS Cognito.

Configuration file *my-config.yaml*

```yaml
service:
  type: LoadBalancer

config:
  # ... config section keys and values ...
```

There are more configuration options available like customizing the port and configuring the Load Balancer via annotations (e.g. for configuring SSL Listener).

**Also note:** Annotations are very specific to your environment/cloud provider, see [Kubernetes Service Reference](https://kubernetes.io/docs/concepts/services-networking/service/#ssl-support-on-aws) for more information. For Azure, take a look [here](https://kubernetes-sigs.github.io/cloud-provider-azure/topics/loadbalancer/#loadbalancer-annotations).

Sample for AWS (SSL termination at the Load Balancer; adjust `port` and the `aws-load-balancer-ssl-ports` annotation, e.g. to 1234, to listen on a port other than the default 80):

```yaml
service:
  type: LoadBalancer
  port: 80
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "80"
    # https://docs.aws.amazon.com/de_de/elasticloadbalancing/latest/classic/elb-security-policy-table.html
    service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: "ELBSecurityPolicy-TLS-1-2-2017-01"

  # further config
```

### AWS Load Balancer Controller: Expose Service via Ingress

Note: You need the [AWS Load Balancer Controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/) installed and configured properly.

1. Enable ingress
2. Add *host*, *path* `/*` and *pathType* `ImplementationSpecific`
3. Add annotations for the AWS LB Controller
4. Provide an SSL certificate in AWS Certificate Manager (either for the hostname, here `epi.mydomain.com`, or a wildcard `*.mydomain.com`)

Configuration file *my-config.yaml*

```yaml
ingress:
  enabled: true
  # Let AWS LB Controller handle the ingress (default className is alb)
  # Note: Use className instead of annotation 'kubernetes.io/ingress.class' which is deprecated since 1.18
  # For Kubernetes >= 1.18 it is required to have an existing IngressClass object.
  # See: https://kubernetes.io/docs/concepts/services-networking/ingress/#deprecated-annotation
  className: alb
  hosts:
    - host: epi.mydomain.com
      # Path must be /* for ALB to match all paths
      paths:
        - path: /*
          pathType: ImplementationSpecific
  # For full list of annotations for AWS LB Controller, see https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/guide/ingress/annotations/
  annotations:
    # The ARN of the existing SSL Certificate at AWS Certificate Manager
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:REGION:ACCOUNT_ID:certificate/CERTIFICATE_ID
    # The name of the ALB group, can be used to configure a single ALB by multiple ingress objects
    alb.ingress.kubernetes.io/group.name: default
    # Specifies the HTTP path when performing health check on targets.
    alb.ingress.kubernetes.io/healthcheck-path: /
    # Specifies the port used when performing health check on targets.
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    # Specifies the HTTP status code that should be expected when doing health checks against the specified health check path.
    alb.ingress.kubernetes.io/success-codes: "200"
    # Listen on HTTPS protocol at port 443 at the ALB
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
    # Use internet facing
    alb.ingress.kubernetes.io/scheme: internet-facing
    # Use most current (as of Dec 2021) encryption ciphers
    alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-2-Ext-2018-06
    # Use target type IP which is the case if the service type is ClusterIP
    alb.ingress.kubernetes.io/target-type: ip

config:
  # ... config section keys and values ...
```

### Additional helm options

Run `helm upgrade --help` for a full list of options.

1. Install to other namespace

You can install into a namespace other than `default` by setting the `--namespace` parameter, e.g.

```bash
helm upgrade my-release-name pharmaledger-imi/epi --version={{ template "chart.version" . }} \
  --install \
  --namespace=my-namespace \
  --values my-config.yaml
```

2. Wait until installation has finished successfully and the deployment is up and running.

Provide the `--wait` argument and the time to wait (default is 5 minutes) via `--timeout`.

```bash
helm upgrade my-release-name pharmaledger-imi/epi --version={{ template "chart.version" . }} \
  --install \
  --wait --timeout=600s \
  --values my-config.yaml
```

### Potential issues

1. `Error: admission webhook "vingress.elbv2.k8s.aws" denied the request: invalid ingress class: IngressClass.networking.k8s.io "alb" not found`

**Description:** This error only applies to Kubernetes >= 1.18 and indicates that no matching *IngressClass* object was found.

**Solution:** Either declare an appropriate IngressClass or omit *className* and add the annotation `kubernetes.io/ingress.class` instead.
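A minimal IngressClass for the AWS Load Balancer Controller might look like the sketch below (`ingress.k8s.aws/alb` is the controller identifier used by the AWS LB Controller; verify against your controller version's documentation):

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: alb
spec:
  controller: ingress.k8s.aws/alb
```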

Further information:

- [Kubernetes IngressClass](https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-class)
- [AWS Load Balancer controller documentation](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/guide/ingress/ingress_class/)

## Helm Unittesting

[helm-unittest](https://github.com/quintush/helm-unittest) is used for testing the rendered output of the helm chart.
Tests can be found in [tests](./tests).
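Assuming the helm-unittest plugin is installed, the tests can typically be run from the repository root like this (a sketch; the plugin URL and chart path are assumptions based on this repository's layout):

```bash
# install the plugin once (version pinning omitted for brevity)
helm plugin install https://github.com/quintush/helm-unittest
# run the chart's unit tests
helm unittest charts/epi
```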


{{ template "chart.maintainersSection" . }}

{{ template "chart.requirementsSection" . }}

{{ template "chart.valuesSection" . }}

{{ template "helm-docs.versionFooter" . }}
22 changes: 22 additions & 0 deletions charts/epi/templates/NOTES.txt
@@ -0,0 +1,22 @@
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range $host := .Values.ingress.hosts }}
{{- range .paths }}
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ . }}
{{- end }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "epi.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "epi.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "epi.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "epi.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
export CONTAINER_PORT=$(kubectl get pod --namespace {{ .Release.Namespace }} $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:$CONTAINER_PORT
{{- end }}