diff --git a/docs/2023-cncf-ctf-walkthroughs/build-a-backdoor/README.md b/docs/2023-cncf-ctf-walkthroughs/build-a-backdoor/README.md
index fc11dac5..c4870931 100644
--- a/docs/2023-cncf-ctf-walkthroughs/build-a-backdoor/README.md
+++ b/docs/2023-cncf-ctf-walkthroughs/build-a-backdoor/README.md
@@ -427,7 +427,7 @@ status:
   startTime: "2023-11-22T08:24:38Z"
 ```
 
-The pod spec reveals a few interesting things. We can see that the pod has a label of `app: ii` associated with it which is likely used for the service and network policy. Next we find two ports exposed, `8080` and `5724`. The `8080` port is used for the website and the `5724` port is for `ops-mgmt` or operations management. This is the port we needed to find and expose for Captain H位$魔饾攳群垄k to exploit. We can also see that the port is configured with a specific username and password. We shouldn't worry about this as soon as we expose it, Captain H位$魔饾攳群垄k will do the rest.
+The pod spec reveals a few interesting things. We can see that the pod has a label of `app: ii` associated with it, which is likely used for the service and network policy. Next we find two ports exposed, `8080` and `5724`. The `8080` port is used for the website and the `5724` port is for `ops-mgmt` or operations management. This is the port we needed to find and expose for `Captain H位$魔饾攳群垄k` to exploit. We can also see that the port is configured with a specific username and password. We shouldn't worry about this; as soon as we expose it, `Captain H位$魔饾攳群垄k` will do the rest.
 As reminder, we cannot modify the pod so our focus is on what we can change which is:
@@ -503,7 +503,7 @@ root@jumpbox-terminal:~# kubectl apply -f np.yaml
 networkpolicy.networking.k8s.io/ii-prod-mgmt-np configured
 ```
 
-> Note: For more information about network policies, please see the official documentation [Kubernetes Network Policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/). There is also an excellent resource here [Network Policy Recipes](https://github.com/ahmetb/kubernetes-network-policy-recipes) which has examples of how to configure network policies.
+> Note: For more information about network policies, please see the official documentation [Kubernetes Network Policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/). There is also an excellent resource here: [Network Policy Recipes](https://github.com/ahmetb/kubernetes-network-policy-recipes) which has examples of how to configure network policies.
 
 Next on our list is the service, so let's inspect it and see what needs to be changed.
 
@@ -741,7 +741,7 @@ NAME                     READY   STATUS    RESTARTS   AGE
 ii-management-services   1/1     Running   0          7m51s
 ```
 
-We can see that the `ii-management-services` pod is running, sneaky H位$魔饾攳群垄k it matches the name of the one in the `ii-prod` namespace. Let's inspect it and see what we can find.
+We can see that the `ii-management-services` pod is running, sneaky H位$魔饾攳群垄k! It matches the name of the one in the `ii-prod` namespace. Let's inspect it and see what we can find.
 
 ```bash
 root@jumpbox-terminal:~# kubectl get pods ii-management-services -n ii-pord -oyaml
@@ -782,8 +782,8 @@ Congratulations, you have completed Build a Backdoor!
 
 ## Remediation and Security Considerations
 
-This CTF scenario does not have a remediation plan as it is provide participants hands-on experience of configuring Kubernetes Ingress, Services and Network Policy. But important security considerations are.
+This CTF scenario does not have a remediation plan as it provides participants with hands-on experience of configuring Kubernetes Ingress, Services and Network Policy. However, there are some important security considerations:
 
-- The Introspective Insight application demonstrates issues with "lifting and shifting" old applications to Kubernetes. Cloud native applications no longer require a management port for the configuration of an application and use configuration as code. This allows stricter control over the application configuration and security can review changes before being deployed into production.
-- The scenario demonstrates how network policy can be used to restrict access to pods. The [Kubernetes Security Checklist](https://kubernetes.io/docs/concepts/security/security-checklist/#network-security) has a item to ensure that *"Ingress and egress network policies are applied to all workloads in the cluster."*. Whilst this scenario has covered ingress network policy, egress network policy is just as important to reduce what an adversary with a foothold in the cluster can do.
+- The Introspective Insight application demonstrates issues with "lifting and shifting" old applications to Kubernetes. Cloud native applications should be managed through configuration as code, and so no longer require a management port for configuring the application. This allows stricter control over the application configuration, and security teams can review changes before they are deployed into production.
+- The scenario demonstrates how network policy can be used to restrict access to pods. The [Kubernetes Security Checklist](https://kubernetes.io/docs/concepts/security/security-checklist/#network-security) has an item to ensure that *"Ingress and egress network policies are applied to all workloads in the cluster."* Whilst this scenario has covered ingress network policy, egress network policy is just as important to reduce what an adversary with a foothold in the cluster can do.
 - We encourage you to review the Kyverno policies included with the scenario as demonstrates the power of applying admission control to Kubernetes and how they can be customised to your environment. For more information about Kyverno, please see the official documentation [Kyverno](https://kyverno.io/).
\ No newline at end of file
diff --git a/docs/2023-cncf-ctf-walkthroughs/cease-and-desist/README.md b/docs/2023-cncf-ctf-walkthroughs/cease-and-desist/README.md
index a3d6afb3..9794b766 100644
--- a/docs/2023-cncf-ctf-walkthroughs/cease-and-desist/README.md
+++ b/docs/2023-cncf-ctf-walkthroughs/cease-and-desist/README.md
@@ -83,7 +83,7 @@ rkls-password   Opaque   1      5m17s
 It looks like the password for the reform-kube licensing server is stored in a secret. We can pull the secret with the following command:
 
 ```bash
-root@admin-console:~# kubectl get secrets rkls-password -ojson | jq -Mr '.data.password' | base64 -d
+root@admin-console:~# kubectl get secrets rkls-password -o jsonpath='{.data.password}' | base64 -d
 access-2-reform-kube-server
 ```
 
@@ -194,7 +194,7 @@ root@admin-console:~# kubectl get pods -n production
 No resources found in production namespace.
 ```
 
-As expected based on the challenge description, there are no pods running in the `production` namespace and that is end objective. Let's turn our attention to the `licensing` namespace. We have permissions to `get` and `list` pods as well as `create` for `pods/exec`. This combination of permissions allows us to `exec` into a pod and run commands but before we do that, we also have a permissions to `ciliumnetworkpolicies.cilium.io`. But what is cilium and what is a cilium network policy?
+As expected based on the challenge description, there are no pods running in the `production` namespace. Our objective is to get production up and running again. Let's turn our attention to the `licensing` namespace. We have permissions to `get` and `list` pods as well as `create` for `pods/exec`.
This combination of permissions allows us to `exec` into a pod and run commands, but before we do that, we also have permissions for `ciliumnetworkpolicies.cilium.io`. But what is Cilium and what is a Cilium network policy?
 
 ### Step 2: Reviewing the Cilium Network Policy
 
@@ -407,7 +407,9 @@ Navigate to `https://gist.github.com/` and click on the `+` icon in the top righ
 
 ![Trial Gist](./images/1-trial-gist.png)
 
-Click on the raw value and copy the url of your Gist. We can now use this as our licensing server URL.
+Click on the raw value and copy the URL of your Gist. We can now use this as our licensing server URL.
+
+> Note: Despite the gist being 'secret', it can still be accessed via the URL directly.
 
 ![Trial Gist Raw](./images/2-trial-gist-raw.png)
 
@@ -586,3 +588,4 @@ This CTF scenario does not have a remediation plan as it is to demonstrate how C
 
 - Cilium is a powerful tool which can be used for securing network connectivity within Kubernetes, allowing transparent encryption of network traffic between services, traffic observability and network policy enforcement.
 - A typical layer 7 egress restriction pattern is to run a reverse proxy within a dedicated namespace or node which all workloads are forced to use. With Cilium, this can be achieved with Cilium network policies but with unified layer 3 and layer 7 egress restrictions.
+- Overly permissive egress allowed access to an endpoint serving arbitrary, attacker-controlled content. This undermines much of the security provided by filtering egress in the first place.
\ No newline at end of file
diff --git a/docs/2023-cncf-ctf-walkthroughs/ci-runner-ng-breakout/README.md b/docs/2023-cncf-ctf-walkthroughs/ci-runner-ng-breakout/README.md
index 3865eda4..c53079c7 100644
--- a/docs/2023-cncf-ctf-walkthroughs/ci-runner-ng-breakout/README.md
+++ b/docs/2023-cncf-ctf-walkthroughs/ci-runner-ng-breakout/README.md
@@ -17,9 +17,9 @@ The purpose of CI Runner Next Generation Breakout is to teach participants about
 ## Challenge Description
 
 ```
-During penetration testing of a client kubernetes cluster, a vulnerability in a pod has been noticed.
+During penetration testing of a client Kubernetes cluster, a vulnerability in a pod has been noticed.
 
-The pod is part of the CI/CD build infrastructure and you are concerned that a compromised runner may lead to compromised VMs.
+The pod is part of the CI/CD build infrastructure and you are concerned that a compromised runner may lead to compromised VMs and further compromise of the whole CI/CD system.
 
 Verify the vulnerability by breaking out of the CI runner pod.
 ```
 
@@ -56,7 +56,7 @@ root@jenk-ng-runner-s82n6-7dc596dcd4-nlfrq:~# which kubectl
 ```
 
-There is a service account token mounted at `/var/run/secrets/kubernetes.io/serviceaccount/token` which we can use to authenticate to the API server. But if we download `kubectl`, we soon discover the service account token has no permissions in the normal Kubernetes namespace.
+There is a service account token mounted at `/var/run/secrets/kubernetes.io/serviceaccount/token` which we can use to authenticate to the API server. But if we download `kubectl`, we soon discover the service account token has no permissions in the default Kubernetes namespace.
 
 > Note: the `kubectl` binary can downloaded from the [Kubernetes release page](https://kubernetes.io/releases/)
 
@@ -194,7 +194,7 @@ It looks like we have access to the containerd socket.
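Before reaching for any tooling, a quick way to verify this kind of finding from inside a pod is to probe for runtime sockets. This is a hedged sketch, not part of the original walkthrough: the paths below are common defaults and may differ in your cluster.

```shell
# Probe common container runtime socket paths; any socket reported
# here is a potential container breakout vector.
for s in /run/containerd/containerd.sock /var/run/containerd/containerd.sock /var/run/docker.sock; do
  if [ -S "$s" ]; then
    echo "socket found: $s"
  fi
done
echo "scan complete"
```

If a socket turns up, a client that speaks its API (such as `ctr` for containerd) can be pointed at it.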
 ### Step 2: Discovering ctr and interacting with containerd
 
-So we have access to containerd socket but how do we access and interact with containerd. A quick search on the internet returns the [containerd via CLI](https://github.com/containerd/containerd/blob/main/docs/getting-started.md#interacting-with-containerd-via-cli) which shows us that we can use `ctr` to interact with containerd. Let' see if we have access to it.
+So we have access to the containerd socket, but how do we access and interact with containerd? A quick search on the internet returns the [containerd via CLI](https://github.com/containerd/containerd/blob/main/docs/getting-started.md#interacting-with-containerd-via-cli) guide, which shows us that we can use `ctr` to interact with containerd. Let's see if we have access to it.
 
 ```bash
 root@jenk-ng-runner-s82n6-7dc596dcd4-nlfrq:/var/run/containerd# which ctr
 ```
 
@@ -263,7 +263,7 @@ There are quite a few options but the most interesting are the ability to manage
 There are a couple of ways of running `nsenter` via a container. We could leverage a well known distribution such as `ubuntu:latest` or `alpine:latest` which has `nsenter` pre-installed and then `exec` into the spawned container to run it. We could also set the entrypoint of the container to `nsenter` and then run the container. For this scenario, we will use the former method.
 
-> Note: We recommend that you learn to build your own container to do this. It is tempting to use an image from DockerHub but you have no idea of what else is included in the image. It is far better to build your own image from source to understand what is included in the image. Here is a link to an example repository by Justin Cormack [nsenter-dockerfile](https://github.com/justincormack/nsenter1).
+> Note: We recommend that you learn to build your own container to do this. It is tempting to use an image from Docker Hub but you have no idea of what else is included in the image.
It is far better to build your own image from source to understand what is included in the image. Here is a link to an example repository by Justin Cormack: [nsenter-dockerfile](https://github.com/justincormack/nsenter1).
 
 Let's look at the options for `images`.
 
@@ -310,7 +310,7 @@ unpacking linux/amd64 sha256:2b7412e6465c3c7fc5bb21d3e6f1917c167358449fecac8176c
 done: 1.433770706s
 ```
 
-Excellent we have our ubuntu image, let's see if we can run it.
+Excellent, we have our Ubuntu image. Let's see if we can run it.
 
 ### Step 4: Running a container with access to the host pid namespace
 
@@ -507,4 +507,4 @@ Congratulations, you have completed CI Runner NG Breakout.
 
 ## Remediation and Security Considerations
 
-This CTF scenario has a pretty simple remediation plan, don't give access to the containerd socket for building container images in CI runners. [Kaniko](https://github.com/GoogleContainerTools/kaniko) or [Buildah](https://github.com/containers/buildah) can be used without root privileges to build container images.
+This CTF scenario has a pretty simple remediation plan: don't give CI runners access to the containerd socket for building container images. Tools such as [Kaniko](https://github.com/GoogleContainerTools/kaniko) or [Buildah](https://github.com/containers/buildah) can build container images without root privileges.
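To make the rootless-build recommendation concrete, here is a minimal, hypothetical sketch of a Kaniko build pod; the Git repository, registry destination and secret name are placeholders, not values from this scenario.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build                  # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: kaniko
    # Kaniko executes Dockerfile builds entirely in userspace, so no
    # containerd or Docker socket needs to be mounted from the host.
    image: gcr.io/kaniko-project/executor:latest
    args:
    - "--context=git://github.com/example/app.git"      # placeholder repo
    - "--dockerfile=Dockerfile"
    - "--destination=registry.example.com/app:latest"   # placeholder registry
    volumeMounts:
    - name: registry-creds
      mountPath: /kaniko/.docker
  volumes:
  - name: registry-creds
    secret:
      secretName: regcred                               # placeholder push secret
      items:
      - key: .dockerconfigjson
        path: config.json
```

Unlike the vulnerable runner in this scenario, a pod like this never touches the host's container runtime, so a compromised build job cannot pivot through the containerd socket.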