- Default Deny all Egress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny-egress
spec:
podSelector: {}
policyTypes:
- Egress
- Default Allow all Egress and Ingress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-all-egress
spec:
podSelector: {}
egress:
- {}
ingress:
- {}
policyTypes:
- Egress
- Ingress
- Commands to run
$ k explain NetworkPolicy.spec.egress;
$ k explain NetworkPolicy.spec.ingress;
- Create a tls secret for ingress
$ k create secret tls secure-ingress --cert=cert.pem --key=key.pem
- Secure ingress with tls specifying secret
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: tls-example-ingress
spec:
tls:
- hosts:
- https-example.foo.com
secretName: secure-ingress
rules:
- host: https-example.foo.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: service1
port:
number: 80
- Command to get sha of binary
$ sha512sum ${BINARY_NAME};
- Compare lines in a text file separated by new lines (uniq collapses adjacent identical lines, so matching hashes show up only once)
$ cat ${FILE} | uniq;
- Kubernetes binaries located in
kubernetes/server/bin
- Compare the sha512 of the binaries located in
kubernetes/server/bin
with the binaries inside the Docker images
- Run the following to copy a container's filesystem locally
$ docker cp ${CONTAINER_ID}:/ ${LOCAL_DIR_NAME}
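- A minimal comparison sketch (the location of the binary inside the copied filesystem is an assumption, so locate it with find first)
$ find ${LOCAL_DIR_NAME} -name kube-apiserver | xargs sha512sum
$ sha512sum kubernetes/server/bin/kube-apiserver
$ # the two sums should match if the image binary is genuine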
- Create a role imperatively
$ k create role secret-manager --verb=get --resource=secrets -n red -oyaml --dry-run=client > role.yaml
- Create roleBinding imperatively
k create rolebinding secret-manager --role=secret-manager --user=jane -n red -oyaml --dry-run=client > role_binding.yaml
- A RoleBinding can reference a ClusterRole, but the permissions it grants still only apply in the RoleBinding's own namespace; it cannot bind to a Role defined in another namespace
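- A sketch of a RoleBinding referencing a ClusterRole (secret-reader is a placeholder name); the granted permissions still only apply inside the red namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: secret-reader-binding
  namespace: red
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: secret-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: jane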
- Check if user has role
k -n red auth can-i get secrets --as jane
- Create cluster role
k create clusterrole ${NAME} --verb=delete --resource=deployments
- A ClusterRoleBinding cannot reference a Role, but a ClusterRole can be referenced by a RoleBinding
- Users in Kubernetes must have a certificate signed by the Kubernetes CA, with the cert's CN equal to the user's Kubernetes username.
- There is no way to revoke a Kubernetes client certificate; you would have to do one of three things
- Remove all access via RBAC
- Not reuse the username until the cert expires
- Create a new CA and reissue all certs
- Certificate Signing request for kube user
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
name: john
spec:
groups:
- system:authenticated
request: ${BASE64_ENCODED_CSR}
signerName: kubernetes.io/kube-apiserver-client
usages:
- client auth
- Approve a certificate with the following command
k certificate approve ${CERTIFICATE_NAME}
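- After approval, a sketch of retrieving the signed cert and wiring it into a kubeconfig (john.key, john.crt and the cluster name kubernetes are assumptions)
k get csr john -o jsonpath='{.status.certificate}' | base64 -d > john.crt
k config set-credentials john --client-key=john.key --client-certificate=john.crt --embed-certs
k config set-context john --cluster=kubernetes --user=john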
- Test auth for service accounts
k auth can-i delete secrets --as system:serviceaccount:default:accessor
- Turn off automounting of the service account token with pod.spec.automountServiceAccountToken or the ServiceAccount's top-level automountServiceAccountToken field
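- A minimal Pod sketch disabling token automount (name and image are placeholders)
apiVersion: v1
kind: Pod
metadata:
  name: no-token
spec:
  automountServiceAccountToken: false # no service account token mounted into the containers
  containers:
  - name: app
    image: nginx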
- To configure anonymous access to the kube-apiserver, set the following argument in its static pod manifest (anonymous auth is needed for the API server's liveness probe)
--anonymous-auth=true|false
- Anonymous user is known as system:anonymous
- Kube API server
--insecure-port=8080
(deprecated in 1.20)
- NodeRestriction Admission Controller plugin
- prevents kubelets from setting protected labels on Node objects
- a kubelet cannot set node label keys under
node-restriction.kubernetes.io/
- Enable setting in Kube Api Server
...
containers:
- command:
- kube-apiserver
- --enable-admission-plugins=NodeRestriction
...
TBC
- Hack secrets in Docker by running the following command then checking for env vars
docker inspect ${CONTAINER_ID}
- Hack etcd to get unencrypted secrets (ETCD in Kubernetes stores data under
/registry/{type}/{namespace}/{name}
)
ETCDCTL_API=3 etcdctl --cert /etc/kubernetes/pki/apiserver-etcd-client.crt --key /etc/kubernetes/pki/apiserver-etcd-client.key --cacert /etc/kubernetes/pki/etcd/ca.crt get /registry/secrets/default/secret2
- Command to rewrite all secrets so they are re-encrypted with the current provider
k get secrets -A -o json | k replace -f -
- For the Kubernetes EncryptionConfiguration resource, the first provider under resources[].providers is the algorithm used to encrypt new secrets. The remaining providers are only used to read/decrypt existing data.
- If the provider that encrypted an existing secret is not in the EncryptionConfiguration's list of providers, that secret cannot be read.
- Unencrypted secrets are handled by the
identity: {}
provider
- Example EncryptionConfiguration resource
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
- secrets
providers:
- identity: {}
- aescbc:
keys:
- name: key1
secret: <BASE 64 ENCODED SECRET>
- To enable the EncryptionConfiguration resource in the Kube API Server, add the following config
...
containers:
- command:
- kube-apiserver
- --encryption-provider-config=/etc/kubernetes/etcd/ec.yaml
...
- Make sure to also mount the EncryptionConfiguration yaml into the API Server from the master node with the following manifest snippets
...
volumeMounts:
- mountPath: /etc/kubernetes/etcd
name: etcd
readOnly: true
...
volumes:
- hostPath:
path: /etc/kubernetes/etcd
type: DirectoryOrCreate
name: etcd
...
- Best practice is to encrypt using aescbc.
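- A new 32-byte aescbc key can be generated, for example, with
$ head -c 32 /dev/urandom | base64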
- Linux Commands
- The following command prints out the syscalls made by a Linux command
$ strace ${LINUX_COMMAND}
- Runtime Class Resources
apiVersion: node.k8s.io/v1 # RuntimeClass is defined in the node.k8s.io API group
kind: RuntimeClass
metadata:
name: myclass # The name the RuntimeClass will be referenced by
# RuntimeClass is a non-namespaced resource
handler: myconfiguration # The name of the corresponding CRI configuration ex. runsc for gvisor
- Specify Runtime Class in a Pod manifests as follows
...
spec:
runtimeClassName: myclass
...
- Check uid, gid, and groups of current user
$ id
- disable running as root in a container (the container will fail to start if its image needs to run as root)
...
spec:
containers:
- securityContext:
runAsNonRoot: true
...
- Privileged Container: container user 0 (root) is mapped to host user 0 (root)
- run kubernetes pods container as privileged
...
spec:
containers:
- securityContext:
privileged: true
...
- PrivilegeEscalation: a process can gain more privileges than its parent process
- Disable privilege escalation in a pod's container
...
spec:
containers:
- securityContext:
allowPrivilegeEscalation: false
...
- Enable the PodSecurityPolicy admission plugin in the Kube API Server
...
containers:
- command:
- kube-apiserver
- --enable-admission-plugins=NodeRestriction,PodSecurityPolicy
...
- Example PodSecurityPolicy
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: example
spec:
allowPrivilegeEscalation: false
privileged: false # Don't allow privileged pods!
# The rest fills in some required fields.
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
runAsUser:
rule: RunAsAny
fsGroup:
rule: RunAsAny
volumes:
- '*'
- PodSecurityPolicy does nothing by default. To take effect, the target pod's service account (or the user creating the pod) must have RBAC permission to use the PodSecurityPolicy resource
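- A sketch of the RBAC needed (role and binding names are placeholders; the PSP referenced is the example above)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-example-user
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["example"] # the PSP defined above
  verbs: ["use"]
- Then bind it to the pod's service account, e.g.
k create rolebinding psp-example --clusterrole=psp-example-user --serviceaccount=default:default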
- You must configure iptables to forward the app container's traffic to the sidecar proxy container so it can handle TLS
- Manipulating iptables requires the NET_ADMIN capability in the container's securityContext
...
spec:
containers:
- securityContext:
capabilities:
add: ["NET_ADMIN"]
...
- OPA Gatekeeper uses ConstraintTemplate K8S custom resources to create k8s Constraint resources.
- When enabling OPA Gatekeeper, the only API server admission plugin that needs to be enabled is NodeRestriction.
- Constraints won't remove existing violating resources; they are only reported as violations when you describe the Constraint.
- Example of a ConstraintTemplate custom resource. The policy requires a set of labels on resources.
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
name: k8srequiredlabels
spec:
crd:
spec:
names:
kind: K8sRequiredLabels
validation:
# Schema for the `parameters` field
openAPIV3Schema:
properties:
labels:
type: array
items:
type: string
targets:
- target: admission.k8s.gatekeeper.sh
rego: |
package k8srequiredlabels
violation[{"msg": msg, "details": {"missing_labels": missing}}] {
provided := {label | input.review.object.metadata.labels[label]}
required := {label | label := input.parameters.labels[_]}
missing := required - provided
count(missing) > 0
msg := sprintf("you must provide labels: %v", [missing])
}
- Example of a Constraint created from the ConstraintTemplate above. It validates that Namespaces have a cks label.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
name: ns-must-have-cks
spec:
match:
kinds:
- apiGroups: [""]
kinds: ["Namespace"]
parameters:
labels: ["cks"]
- Notice that the ConstraintTemplate's openAPIV3Schema properties define the parameters you must supply in the Constraint
- To reference a previous, unaliased stage in a Dockerfile, use its index as follows (example using COPY)
FROM ubuntu
...
FROM alpine
COPY --from=0 /app .
...
- Install and use specific (pinned) versions of dependencies and base images.
- Do not run as root in a Dockerfile; here is an example of avoiding this.
RUN addgroup -S appgroup && adduser -S appuser -G appgroup -h /home/appuser
COPY --from=0 /app /home/appuser/
USER appuser
- Make filesystem as readonly as possible
RUN chmod a-w /etc
- Remove shell access
RUN rm -rf /bin/*
- Security checks can run at various points in the CI/CD pipeline.
- Git webhook before code commit
- Right before build
- Right before testing
- Live using things like OPA or PSP
- Kubesec: static analysis for K8S
- Can run as the following
- Binary
- Docker Container
- Kubectl plugin
- Admission Controller(kubesec-webhook)
- Example of using Kubesec through docker
docker run -i kubesec/kubesec:512c5e0 scan /dev/stdin < pod.yaml
- OPA conftest for static analysis of K8S manifest and docker files.
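- Example conftest run via Docker (a sketch; the image tag and policy layout may differ, and policies are read from ./policy by default)
docker run --rm -v $(pwd):/project openpolicyagent/conftest test deploy.yaml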
- Clair: Performs static analysis of vulnerabilities in app containers
- Provides API (not one command run)
- Ingest vulnerability metadata from configured set of sources
- Trivy: simple vulnerability scanner for containers and other artifacts, suitable in CI
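- Example Trivy scan (assuming the trivy binary is installed; nginx:1.19 is just an example image)
trivy image nginx:1.19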
- Use Docker image digest as image identity
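- A sketch of pinning a container image by digest in a pod spec (the digest value is a placeholder)
...
spec:
  containers:
  - name: app
    image: nginx@sha256:${DIGEST} # resolved from the registry; immutable, unlike a tag
...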
- The ImagePolicyWebhook admission plugin sends an ImageReview object to an external service, which decides whether the images in a pod come from valid registries.
- To enable Image Policy Webhook add to Kube API Server manifest
...
containers:
- command:
- kube-apiserver
- --enable-admission-plugins=ImagePolicyWebhook
- --admission-control-config-file=/etc/kubernetes/admission/admission_config.yaml
...
- The configuration file for the ImagePolicyWebhook admission plugin (make sure the certificate paths in the kubeconf file are absolute). Make sure to mount this directory into the API Server.
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: ImagePolicyWebhook
configuration:
imagePolicy:
kubeConfigFile: /etc/kubernetes/admission/kubeconf
allowTTL: 50
denyTTL: 50
retryBackoff: 500
defaultAllow: false
kubeconf file
apiVersion: v1
kind: Config
# clusters refers to the remote service.
clusters:
- cluster:
certificate-authority: /etc/kubernetes/admission/external-cert.pem # CA for verifying the remote service.
server: https://external-service:1234/check-image # URL of remote service to query. Must use 'https'.
name: image-checker
contexts:
- context:
cluster: image-checker
user: api-server
name: image-checker
current-context: image-checker
preferences: {}
# users refers to the API server's webhook configuration.
users:
- name: api-server
user:
client-certificate: /etc/kubernetes/admission/apiserver-client-cert.pem # cert for the webhook admission controller to use
client-key: /etc/kubernetes/admission/apiserver-client-key.pem # key matching the cert
- /proc is a virtual filesystem in Linux with info on processes
- info about and connections to processes and the kernel
- used for configuration and administrative tasks
- contains virtual files that don't exist on disk but can still be read
- find etcd process with
ps aux | grep etcd
- Attach strace to a running process with
strace -p ${PROCESS_ID}
(-f to follow forks as well)
- Use /proc to inspect a process: its info lives under /proc/${PROCESS_ID} (cd fd, then ls -lh to list open file descriptors)
- Run
tail -f /proc/${PROCESS_ID}/fd/${FD}
to follow an open file descriptor, and find a secret using
cat /proc/${PROCESS_ID}/fd/${FD} | grep ${SECRET}
- /proc/${PROCESS_ID}/environ contains the process's environment variables
- Find the configuration for Falco in the
/etc/falco
directory
- Default log output goes to
/var/log/syslog
- Rules in falco_rules.local.yaml override the node's general falco_rules
- Rules files contain two main properties: a list of rules,
rule
, and a list of macros,
macro
- a macro can be called in a rule condition
- Run
grep -r "rule describe" .
in the /etc/falco directory to quickly find the location of a rule
- Startup probes run only once at container start and can be used to modify container state (e.g. removing a shell binary)
- Use securityContext to set readOnlyRootFilesystem (emptyDir volumes remain writable)
...
containers:
- securityContext:
readOnlyRootFilesystem: true
...
- Different stages in Kubernetes audit logging (referenced by the omitStages property):
- RequestReceived
- ResponseStarted
- ResponseComplete
- Panic
- Levels of data in each logging stage:
- None
- Metadata
- Request
- RequestResponse
- The audit policy rules define which Kubernetes resources are audited and how much event content is recorded.
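- A minimal example audit policy (a sketch; the resource and level choices are only illustrative)
apiVersion: audit.k8s.io/v1
kind: Policy
omitStages:
- "RequestReceived"
rules:
- level: RequestResponse # full request and response bodies for secrets
  resources:
  - group: ""
    resources: ["secrets"]
- level: Metadata # only metadata for everything else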
- Enable auditing in Kube API Server (Remember to mount /etc/kubernetes/audit as host path volume)
spec:
containers:
- command:
- kube-apiserver
- --audit-policy-file=/etc/kubernetes/audit/policy.yaml # add
- --audit-log-path=/etc/kubernetes/audit/logs/audit.log # add
- --audit-log-maxsize=500 # add
- --audit-log-maxbackup=5 # add
- Steps to change audit policy in Kubernetes Cluster
- Change policy yaml configuration file
- Disable auditing in the Kube API Server (comment out the
audit-policy-file
config)
- Re-enable auditing in the Kube API Server (if it doesn't restart, check the logs in /var/log/pods/kube-system_kube-apiserver)
- Test changes
- AppArmor is a tool that monitors and restricts kernel calls made by processes, via profiles
- Types of Profiles
- Unconfined: Process can escape the restrictions
- Complain: Process can escape but is logged
- Enforce: Process cannot escape
- App Armor commands:
- Show all profiles:
aa-status
- Generate new Profile:
aa-genprof ${SHELL_COMMAND}
- Put a profile in Complain mode:
aa-complain
- Put profile in Enforce mode:
aa-enforce
- Update Profile based on application needs:
aa-logprof
- Profiles are located in
/etc/apparmor.d
- Run
aa-logprof
after generating a profile and running the command
- Run
apparmor_parser
on a new profile file added to
/etc/apparmor.d
- Specify apparmor with docker
docker run --security-opt apparmor=docker-default nginx
- How to run AppArmor on K8S
- the container runtime must support AppArmor
- AppArmor needs to be installed on every node
- the profile must be available on every node
- profiles are specified per container using annotations (see the sketch below)
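- A sketch of the per-container annotation (pod name, container name and my-profile are placeholders)
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
  annotations:
    # localhost/<profile> references a profile already loaded on the node
    container.apparmor.security.beta.kubernetes.io/app: localhost/my-profile
spec:
  containers:
  - name: app
    image: nginx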
- Seccomp restricts the use of specific syscalls and will SIGKILL the process if it attempts to use a blocked one.
- Specify seccomp with docker
docker run --security-opt seccomp=default.json nginx
- Setting seccomp profile location for kubelet
--seccomp-profile-root=DIR
- Seccomp profiles can be specified via annotations (older clusters) or in securityContext, as sketched below
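- A sketch using securityContext (the profile path is a placeholder, relative to the kubelet's seccomp profile root)
...
spec:
  securityContext:
    seccompProfile:
      type: Localhost
      localhostProfile: profiles/audit.json
...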
- Find a service running
systemctl list-units --type=service --state=running | grep SERVICE
- Find the ports and interfaces a service is listening on
netstat -plnt | grep SERVICE
- Check the current Linux user
whoami
- switch users with
su USER