
feat(woodpecker): add network-polices #252

Merged 1 commit into woodpecker-ci:main from feat/network-policy on Nov 22, 2024

Conversation

@wrenix (Contributor) commented Nov 19, 2024

No description provided.

This was referenced Nov 19, 2024
@wrenix force-pushed the feat/network-policy branch 3 times, most recently from 6cf5ead to bdf32b7, on November 19, 2024 at 00:57
@wrenix (Contributor, Author) commented Nov 19, 2024

Works for me after filling in:

egress (must be explicitly enabled after networkPolicy is enabled):

  • server.networkPolicy.egress.database
  • agent.networkPolicy.egress.apiserver

ingress:

  • server.networkPolicy.ingress.http (my ingress-controller)
  • server.networkPolicy.ingress.metrics (my prometheus instance)
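
For illustration, a minimal sketch of values that fill these keys could look like the following (all selectors, ports and the CIDR below are placeholders for your own environment, not defaults of this chart):

server:
  networkPolicy:
    enabled: true
    egress:
      enabled: true
      database:
        - podSelector:                 # placeholder: your database pods
            matchLabels:
              app.kubernetes.io/name: postgresql
    ingress:
      http:
        - podSelector:                 # placeholder: your ingress-controller pods
            matchLabels:
              app.kubernetes.io/name: traefik
      metrics:
        - podSelector:                 # placeholder: your Prometheus pods
            matchLabels:
              app.kubernetes.io/name: prometheus

agent:
  networkPolicy:
    enabled: true
    egress:
      enabled: true
      apiserver:
        ports:
          - port: 6443                 # placeholder: your API server port
            protocol: TCP
        to:
          - ipBlock:
              cidr: 203.0.113.10/32    # placeholder: your API server endpoint IP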

@pat-s (Collaborator) left a comment

Would you mind adding some basic tests for the new resources? See the existing tests and https://github.com/helm-unittest/helm-unittest.

Besides, could you add a few words on why/when networkPolicy is needed? I've never had any need to configure it in many years of using k8s, so I'm curious 🙂️
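
As a rough sketch, a helm-unittest suite for a NetworkPolicy template could look like the following (the suite path, template name and value keys are assumptions for illustration, not the chart's actual layout):

# tests/networkpolicy_test.yaml (assumed path)
suite: network policy
templates:
  - templates/networkpolicy.yaml       # assumed template name
tests:
  - it: renders no NetworkPolicy by default
    asserts:
      - hasDocuments:
          count: 0
  - it: renders a NetworkPolicy when enabled
    set:
      networkPolicy:
        enabled: true
    asserts:
      - isKind:
          of: NetworkPolicy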

charts/woodpecker/charts/agent/README.md (review comment: outdated, resolved)
@wrenix (Contributor, Author) commented Nov 20, 2024

I'm not sure how best to explain it. Network Policies are essentially firewall rules for pods. Of course you could run applications without the restrictions of networkPolicies/firewall rules ...
See: https://kubernetes.io/docs/concepts/services-networking/network-policies/

They are for the case that somebody breaks into a container/pod, to shield network access to the other pods. (Or for the case that your pod has a public IPv6 address reachable from the internet.)

Here, for example:

  • the gRPC ports are only allowed between agent and server, so that no other pods in the cluster can connect to the server via gRPC (see the sketch below)
  • all egress traffic is limited, so that anybody who compromises the woodpecker pods cannot access other pods (the woodpecker-server cannot even reach the apiserver)
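
For the first point, a NetworkPolicy that restricts the server's gRPC port to the agent pods looks roughly like this (the label values and the default gRPC port 9000 are assumptions about the rendered output, meant only as an illustration):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: woodpecker-server-grpc          # illustrative name
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: server    # assumed server pod labels
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: agent   # only agent pods may connect
      ports:
        - port: 9000                    # Woodpecker's default gRPC port
          protocol: TCP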

@pat-s added the feature 🚀️ Add new feature label Nov 22, 2024
@pat-s (Collaborator) commented Nov 22, 2024

Thanks for the explanation, very interesting. I've heard a bit about this approach before but have never seen it in action so far.

Makes a lot of sense. Let's see how it goes in practice and whether it causes any issues. I'll also run it on my instance for a while before we release it.

@pat-s merged commit 0a02e93 into woodpecker-ci:main Nov 22, 2024
4 checks passed
@wrenix (Contributor, Author) commented Nov 22, 2024

Be careful with egress; I had to adjust the values there (these are the values from a release running with the name codeberg, alongside another one):

agent:
  replicaCount: 1
  env:
    WOODPECKER_BACKEND_K8S_STORAGE_RWX: false
    WOODPECKER_MAX_WORKFLOWS: 4
    WOODPECKER_BACKEND_K8S_POD_LABELS_ALLOW_FROM_STEP: true
    WOODPECKER_BACKEND_K8S_POD_LABELS: |
      {
        "app.kubernetes.io/instance": "woodpecker",
        "app.kubernetes.io/name": "job"
      }
  networkPolicy:
    enabled: true
    egress:
      enabled: true
      server:
        to:
          - podSelector:
              matchLabels:
                app.kubernetes.io/name: server
                app.kubernetes.io/instance: codeberg
      dns:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      apiserver:
        ports:
          - port: 6443
            protocol: TCP
        to:
          - ipBlock:
              # Use the endpoint of the API server in the default namespace (not the service IP)
              cidr: public.IP.of.API.server/32

server:
  metrics:
    enabled: true
  prometheus:
    podmonitor:
      enabled: true
      labels:
        prometheus: "default"
    rules:
      enabled: true
      labels:
        prometheus: "default"
  grafana:
    dashboards:
      enabled: true
  ingress:
    enabled: true
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-prod
      traefik.ingress.kubernetes.io/router.middlewares: ingress-redirect-https@kubernetescrd
  networkPolicy:
    enabled: true
    ingress:
      http:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress
          podSelector:
            matchLabels:
              app.kubernetes.io/instance: traefik-ingress
              app.kubernetes.io/name: traefik
      metrics:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: monitoring
          podSelector:
            matchLabels:
              prometheus: kube-prometheus-stack-prometheus
      grpc:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: agent
              app.kubernetes.io/instance: codeberg
    egress:
      enabled: true
      dns:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      database:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: shared-postgresql
          podSelector:
            matchLabels:
              app.kubernetes.io/instance: postgresql
              app.kubernetes.io/name: postgresql
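
For context, the agent values above should translate to an egress policy roughly along these lines (a hand-written approximation, not the chart's actual rendered manifest; the resource name is illustrative):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: codeberg-agent                   # illustrative name
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: agent
      app.kubernetes.io/instance: codeberg
  policyTypes:
    - Egress
  egress:
    - to:                                # "server" entry from the values above
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: server
              app.kubernetes.io/instance: codeberg
    - to:                                # "dns" entry from the values above
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
    - to:                                # "apiserver" entry from the values above
        - ipBlock:
            cidr: public.IP.of.API.server/32
      ports:
        - port: 6443
          protocol: TCP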
