add helm postal chart #20

Open · wants to merge 2 commits into base: main

Conversation

@dmitryzykov commented Sep 11, 2024

This PR adds a Kubernetes Helm chart for Postal.

Initialize the database and create an admin user:

kubectl exec -it POSTAL-PODNAME-WEB -- postal initialize
kubectl exec -it POSTAL-PODNAME-WEB -- postal make-user

tested with:

  • postal 3.3.4
  • kubernetes v1.30
  • helm 3.15.4

It exposes the web UI via an Ingress,
and SMTP via port 25 on a LoadBalancer-type Service.

MySQL installation is not part of the chart, so the database should be installed before deploying this chart.

I won't be able to spend much time polishing this chart, but I still think it might be helpful for the community.

@willpower232

Looks good, might be worth adding a HorizontalPodAutoscaler for the web/smtp or maybe a config option for the number of replicas of them?

@dmitryzykov (Author)

> Looks good, might be worth adding a HorizontalPodAutoscaler for the web/smtp or maybe a config option for the number of replicas of them?

I added replica counts to the values file.

@ofekifergan

Can the worker use more than one replica?
Could you specify which Postal components must run with a single replica and which can scale out?
Btw, looks good!

@dragoangel

JFYI, stuff like:

kubectl exec -it POSTAL-PODNAME-WEB -- postal initialize
kubectl exec -it POSTAL-PODNAME-WEB -- postal make-user

These don't have to be manual steps.

Both init and make-user can be wrapped in a check for whether the database has already been initialized and whether the default admin user exists. Honestly this is a question of the entrypoint.

There are many ways to handle it, e.g. reading env vars that contain the initial user credentials, set either directly in the Helm values or by providing an existing secret name and key name.
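A minimal sketch of what that could look like in the chart, using hypothetical value keys (initialUser.*) and hypothetical env var names: the stock image does not read these variables, so the entrypoint (or an init container script) would still need to be extended to run postal initialize and create the admin user from them, and since postal make-user is interactive upstream it would have to be driven non-interactively.

# values.yaml (hypothetical keys, not part of the current chart)
initialUser:
  existingSecret: ""   # name of an existing Secret holding the admin credentials
  emailKey: admin-email
  passwordKey: admin-password

# templates/deployment-web.yaml (excerpt): surface the credentials to the entrypoint
{{- if .Values.initialUser.existingSecret }}
env:
  - name: POSTAL_INITIAL_USER_EMAIL        # hypothetical variable name
    valueFrom:
      secretKeyRef:
        name: {{ .Values.initialUser.existingSecret }}
        key: {{ .Values.initialUser.emailKey }}
  - name: POSTAL_INITIAL_USER_PASSWORD     # hypothetical variable name
    valueFrom:
      secretKeyRef:
        name: {{ .Values.initialUser.existingSecret }}
        key: {{ .Values.initialUser.passwordKey }}
{{- end }}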

@dragoangel commented Oct 7, 2024

Please add to each deployment stuff like:

      {{- with .Values.web.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      affinity:
      {{- with .Values.web.affinity }}
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- if not .Values.web.affinity.podAntiAffinity }}
        {{- if eq .Values.web.podAntiAffinityPreset "hard" }}
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  {{- include "postal.selectorLabels" . | nindent 18 }}
                  app.kubernetes.io/component: web
              topologyKey: kubernetes.io/hostname
        {{- else if eq .Values.web.podAntiAffinityPreset "soft" }}
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    {{- include "postal.selectorLabels" . | nindent 20 }}
                    app.kubernetes.io/component: web
                topologyKey: kubernetes.io/hostname
        {{- end }}
      {{- end }}
      {{- with .Values.web.topologySpreadConstraints }}
      topologySpreadConstraints:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.web.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}

P.S.:

  • A default of podAntiAffinityPreset: "soft" is a good thing: it asks Kubernetes to put pods of the same deployment on different nodes if possible, and if not, to put them wherever it can. Out of the box this means you don't have to worry about the service going down when one node goes on some trip :)
  • app.kubernetes.io/component is a more correct label name than app.
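For reference, the values.yaml entries this template assumes could look something like the following (key names taken from the snippet above; the defaults themselves are only a suggestion):

web:
  nodeSelector: {}
  affinity: {}
  # "soft" = preferred anti-affinity, "hard" = required, anything else = none
  podAntiAffinityPreset: "soft"
  topologySpreadConstraints: []
  tolerations: []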

@dragoangel

> Looks good, might be worth adding a HorizontalPodAutoscaler for the web/smtp or maybe a config option for the number of replicas of them?

{{- if .Values.web.autoscaling.enabled }}
apiVersion: {{ include "postal.hpa.apiVersion" . }}
kind: HorizontalPodAutoscaler
metadata:
  name: {{ include "postal.fullname" . }}-web
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "postal.labels" . | nindent 4 }}
    app.kubernetes.io/component: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ include "postal.fullname" . }}-web
  minReplicas: {{ .Values.web.autoscaling.minReplicas }}
  maxReplicas: {{ .Values.web.autoscaling.maxReplicas }}
  metrics:
    {{- if .Values.web.autoscaling.targetCPU }}
    - type: Resource
      resource:
        name: cpu
        {{- if eq (include "postal.hpa.apiVersion" .) "autoscaling/v2beta1" }}
        targetAverageUtilization: {{ .Values.web.autoscaling.targetCPU }}
        {{- else }}
        target:
          type: Utilization
          averageUtilization: {{ .Values.web.autoscaling.targetCPU }}
        {{- end }}
    {{- end }}
    {{- if .Values.web.autoscaling.targetMemory }}
    - type: Resource
      resource:
        name: memory
        {{- if eq (include "postal.hpa.apiVersion" .) "autoscaling/v2beta1" }}
        targetAverageUtilization: {{ .Values.web.autoscaling.targetMemory }}
        {{- else }}
        target:
          type: Utilization
          averageUtilization: {{ .Values.web.autoscaling.targetMemory }}
        {{- end }}
    {{- end }}
  {{- if .Values.web.autoscaling.behavior }}
  behavior: {{ toYaml .Values.web.autoscaling.behavior | nindent 4 }}
  {{- end }}
{{- end }}
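The template above references a postal.hpa.apiVersion helper that isn't shown here; a minimal sketch for _helpers.tpl, assuming it only needs to pick between autoscaling/v2 and the older beta API, could be:

{{/*
Return the apiVersion to use for HorizontalPodAutoscaler objects,
preferring autoscaling/v2 when the cluster supports it.
*/}}
{{- define "postal.hpa.apiVersion" -}}
{{- if .Capabilities.APIVersions.Has "autoscaling/v2" -}}
autoscaling/v2
{{- else -}}
autoscaling/v2beta1
{{- end -}}
{{- end -}}

And the matching web.autoscaling values this template reads (suggested defaults only):

web:
  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 5
    targetCPU: 80
    targetMemory: ""
    behavior: {}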

@dmitryzykov (Author) commented Oct 7, 2024

@dragoangel thank you for improving this.
Can you please add these changes as a PR to my repo https://github.com/dmitryzykov/postal-install (from which this PR is created)? I will merge it so it gets updated here.

Any other improvements to the Helm chart are also welcome.

@dragoangel

Hi @dmitryzykov, unfortunately I don't have much time for that and I'm not using Postal at the moment, but you can collect my feedback and improve your PR. I copied samples that I had from other places and adjusted them a bit; you can "polish" them a bit more and it will be cool.

@dmitryzykov (Author)

@dragoangel can you just attach your Helm chart folder as a zip archive? I will add it to the PR.

@dragoangel

No, you got me wrong, I don't have Postal at all :) I'm just reviewing your PR.

@dmitryzykov (Author)

@dragoangel got it now :)
I'm not actively working on Postal either. I might add these changes later when I work with the Postal chart again.

Anyway, if someone finds this helpful and has tested it, any changes in PR form are welcome.

@dragoangel commented Oct 8, 2024

Also, we need livenessProbe and readinessProbe settings. @willpower232, does Postal have a /healthy page that returns 200 OK when it's healthy and 5xx when it isn't? Then we can add stuff like:

          livenessProbe:
            httpGet:
              path: /healthy
              port: web
            periodSeconds: 3
            timeoutSeconds: 2
            failureThreshold: 10
          readinessProbe:
            httpGet:
              path: /healthy
              port: web
            periodSeconds: 3
            timeoutSeconds: 2
            successThreshold: 2
            failureThreshold: 2

And for SMTP it can be the same, but with tcpSocket instead of httpGet:

            tcpSocket:
              port: smtp

For the worker I don't know; we could exec a command in the container and check whether the process is running, or something like that (a rough sketch below).
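A minimal sketch of such an exec-based probe, assuming pgrep is available in the image and that the worker process can be matched on "postal worker" (the actual process name would need to be checked against the image):

          livenessProbe:
            exec:
              command:
                - pgrep
                - -f
                - postal worker   # hypothetical process pattern; verify against the image
            periodSeconds: 30
            timeoutSeconds: 5
            failureThreshold: 3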
