
Add GKE 1.6 CIS benchmark for GCP environment #1672

Merged
14 commits merged into aquasecurity:main on Oct 11, 2024

Conversation

ttousai
Contributor

@ttousai ttousai commented Sep 2, 2024

Implements #1662

@CLAassistant

CLAassistant commented Sep 2, 2024

CLA assistant check
All committers have signed the CLA.

@deven0t
Contributor

deven0t commented Sep 2, 2024

Hi @ttousai
We will need to update the CIS version selection based on the k8s version here: https://github.com/aquasecurity/kube-bench/blob/main/cmd/util.go#L497
Can you check and update it, so the new benchmark version is selected according to the k8s version?
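For context, kube-bench picks a benchmark by mapping the detected Kubernetes version to a benchmark version in its configuration, so a new GKE benchmark needs an entry there as well. A rough sketch of the kind of mapping involved in cfg/config.yaml (the keys and the exact k8s-version cutoff here are illustrative assumptions, not the merged change):

```yaml
# Illustrative sketch only; the real selection logic also lives in cmd/util.go
# and the exact entries in the merged PR may differ.
version_mapping:
  "gke-1.6.0": "gke-1.6.0"

target_mapping:
  "gke-1.6.0":
    - "master"
    - "controlplane"
    - "node"
    - "etcd"
    - "policies"
    - "managedservices"
```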

@ttousai
Contributor Author

ttousai commented Sep 3, 2024

Hello @deven0t, I have added the selection based on the k8s version and also updated various documents to mention gke-1.6.0 support.

@guyjerby guyjerby left a comment

Hi @ttousai
there are different errors while running the benchmark - the tests are not completing successfully - can you make the changes and verify again on a GKE cluster that the tests complete successfully?

- flag: "--anonymous-auth"
  path: '{.authentication.anonymous.enabled}'
  compare:
    op: eq


@ttousai
.authentication.anonymous.enabled should not appear in kubelet-config.yaml as enabled

guy_jerby@gke-gke-test-cluster-bas-default-pool-ba74cdf0-qbln /etc/kubernetes $ cat kubelet-config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  webhook:
    enabled: false
authorization:
  mode: AlwaysAllow
enableServer: false
podCIDR: 10.42.0.0/24
staticPodPath: /etc/kubernetes/manifests
staticPodURL: http://metadata.google.internal/computeMetadata/v1/instance/attributes/google-container-manifest
staticPodURLHeader:
  Metadata-Flavor: [Google]
cgroupDriver: systemd

- flag: --streaming-connection-idle-timeout
  path: '{.streamingConnectionIdleTimeout}'
  set: false
  bin_op: or


@ttousai , the result that kube-bench shows is not logical => '{.streamingConnectionIdleTimeout}' is present OR '{.streamingConnectionIdleTimeout}' is not present

Contributor Author

@ttousai ttousai Sep 23, 2024

This check should PASS:

  1. if --streaming-connection-idle-timeout is set to any value not equal to 0 on the command line or,
  2. if streamingConnectionIdleTimeout is set to any value not equal to 0 in the config file or,
  3. if --streaming-connection-idle-timeout is not set on the command line or,
  4. if streamingConnectionIdleTimeout is not set in the config file.

In our case it should pass because --streaming-connection-idle-timeout is not set on the command line and it is also not set in the config file (the correct config file is /home/kubernetes/kubelet-config.yaml).
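For reference, those four pass conditions map onto kube-bench's test grammar as two test_items joined with bin_op: or. A sketch of such a check (illustrative, not necessarily the exact merged test):

```yaml
tests:
  test_items:
    # Passes when the timeout is set (flag or config file) to a non-zero value...
    - flag: --streaming-connection-idle-timeout
      path: '{.streamingConnectionIdleTimeout}'
      compare:
        op: noteq
        value: 0
    # ...or when it is not set at all (the kubelet's default of 4h is non-zero).
    - flag: --streaming-connection-idle-timeout
      path: '{.streamingConnectionIdleTimeout}'
      set: false
      bin_op: or
```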

tests:
  test_items:
    - flag: --make-iptables-util-chains
      path: '{.makeIPTablesUtilChains}'


@ttousai , here is the result:
makeIPTablesUtilChains exists in the kubelet config, but the test fails and the expected result is empty; probably a typo in the test?

{
  "test_number": "3.2.6",
  "test_desc": "Ensure that the --make-iptables-util-chains argument is set to true (Automated)",
  "audit": "/bin/ps -fC kubelet",
  "AuditEnv": "",
  "AuditConfig": "/bin/cat /etc/kubernetes/kubelet-config.yaml",
  "type": "",
  "remediation": "Remediation Method 1:\nIf modifying the Kubelet config file, edit the kubelet-config.json file\n/etc/kubernetes/kubelet/kubelet-config.json and set the below parameter to\ntrue\n\n \"makeIPTablesUtilChains\": true\n\nEnsure that /etc/systemd/system/kubelet.service.d/10-kubelet-args.conf\ndoes not set the --make-iptables-util-chains argument because that would\noverride your Kubelet config file.\n\nRemediation Method 2:\nIf using executable arguments, edit the kubelet service file\n/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf on each\nworker node and add the below parameter at the end of the KUBELET_ARGS variable\nstring.\n\n --make-iptables-util-chains:true\n\nRemediation Method 3:\nIf using the api configz endpoint consider searching for the status of\n\"makeIPTablesUtilChains\": true by extracting the live configuration from the nodes\nrunning kubelet.\n\nSee detailed step-by-step configmap procedures in Reconfigure a Node's Kubelet in a\nLive Cluster (https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/),\nand then rerun the curl statement from audit process to check for kubelet\nconfiguration changes\n\n kubectl proxy --port=8001 &\n export HOSTNAME_PORT=localhost:8001 (example host and port number)\n export NODE_NAME=gke-cluster-1-pool1-5e572947-r2hg (example node name from\n \"kubectl get nodes\")\n curl -sSL \"http://${HOSTNAME_PORT}/api/v1/nodes/${NODE_NAME}/proxy/configz\"\n\nFor all three remediations:\nBased on your system, restart the kubelet service and check status\n\n systemctl daemon-reload\n systemctl restart kubelet.service\n systemctl status kubelet -l\n",
  "test_info": ["(same remediation text as above)"],
  "status": "FAIL",
  "actual_value": "apiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\nauthentication:\n  webhook:\n    enabled: false\nauthorization:\n  mode: AlwaysAllow\nenableServer: false\nmakeIPTablesUtilChains:true\npodCIDR: 10.42.0.0/24\nstaticPodPath: /etc/kubernetes/manifests\nstaticPodURL: http://metadata.google.internal/computeMetadata/v1/instance/attributes/google-container-manifest\nstaticPodURLHeader:\n  Metadata-Flavor: [Google]\ncgroupDriver: systemd",
  "scored": true,
  "IsMultiple": false,
  "expected_result": ""
}

Contributor Author

No, it was a bad test; the test should PASS if makeIPTablesUtilChains is not set on either the command line or in the config file, which is our case, so it should pass.

I have added a fix for the test.
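One way the fixed check could be expressed, passing when the flag is either explicitly true or not set at all, is sketched below (illustrative; the merged fix may differ):

```yaml
tests:
  test_items:
    # Passes when the option is explicitly set to true...
    - flag: --make-iptables-util-chains
      path: '{.makeIPTablesUtilChains}'
      compare:
        op: eq
        value: true
    # ...or when it is absent from both the command line and the config file.
    - flag: --make-iptables-util-chains
      path: '{.makeIPTablesUtilChains}'
      set: false
      bin_op: or
```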

@afdesk
Collaborator

afdesk commented Oct 7, 2024

Hi guys!
Is this PR ready for review and merge? thanks!

@guyjerby

guyjerby commented Oct 8, 2024

Hi @afdesk - we are still validating the benchmark and fixing the last remaining issues. Soon we will be able to merge it, and I will notify you. Will you be able to merge it and also build a new kube-bench release and image?

systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: false

@ttousai - scored should be true for all automated tests

@afdesk
Collaborator

afdesk commented Oct 9, 2024

> Hi @afdesk - we are still validating the benchmark and fixing the last remaining issues. Soon we will be able to merge it, and I will notify you. Will you be able to merge it and also build a new kube-bench release and image?

hi @guyjerby!
yes, I'll try to review and merge it ASAP after your approval.

thanks for the answer.

@guyjerby

guyjerby commented Oct 9, 2024

@afdesk , @ttousai - the validation has been completed; we can merge the PR and build a new kube-bench release.

Collaborator

@afdesk afdesk left a comment

hi guys! I left a few comments.

as for me, only the note about 3.1.3 is critical.

thanks for understanding.

Comment on lines +11 to +13
- id: 2.1.1
  text: "Client certificate authentication should not be used for users (Manual)"
  type: "manual"
Collaborator

this check is marked as automated.
If there is no way to automate it, we should add a command to test in the remediation section:

$ kubectl get secrets --namespace kube-system

# Look for secrets with names starting with gke-. These secrets contain the client certificates.

text: "Worker Node Configuration Files"
checks:
  - id: 3.1.1
    text: "Ensure that the proxy kubeconfig file permissions are set to 644 or more restrictive (Manual)"
Collaborator

this check is marked as automated.

scored: true

- id: 3.1.2
  text: "Ensure that the proxy kubeconfig file ownership is set to root:root (Manual)"
Collaborator

this check is marked as automated and there is an audit command here

scored: true

- id: 3.1.3
  text: "Ensure that the kubelet configuration file has permissions set to 600 (Manual)"
Collaborator

this check is marked as automated and there is an audit command

scored: true

- id: 3.1.4
  text: "Ensure that the kubelet configuration file ownership is set to root:root (Manual)"
Collaborator

this check is marked as automated and there is an audit command

- flag: "permissions"
  compare:
    op: bitmask
    value: "644"
Collaborator

it was changed to 600


@ttousai - can you change to 600?

Contributor Author

done

Comment on lines +241 to +250
test_items:
  - flag: "--read-only-port"
    path: '{.readOnlyPort}'
    set: false
  - flag: "--read-only-port"
    path: '{.readOnlyPort}'
    compare:
      op: eq
      value: 0
    bin_op: or
Collaborator

if --read-only-port isn't set, the check will pass, right?
Is that the correct behavior? Just making sure.

Verify that the --read-only-port argument exists and is set to 0.

Contributor Author

@afdesk yes, this is the correct behavior. According to the kubelet config documentation, if readOnlyPort is not set it defaults to 0, which disables the read-only port.
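Concretely, either of these kubelet configuration states would satisfy the check above (illustrative fragments):

```yaml
# State 1: the key is omitted entirely; the kubelet defaults readOnlyPort to 0 (disabled).
# (no readOnlyPort entry in the config file)

# State 2: the read-only port is explicitly disabled.
readOnlyPort: 0
```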

- id: 4.1.1
  text: "Ensure that the cluster-admin role is only used where required (Automated)"
  type: "manual"
  remediation: |
Collaborator

does it make sense to add an audit tip into the remediation block? wdyt?

Audit:
Obtain a list of the principals who have access to the cluster-admin role by reviewing the clusterrolebinding output for each role binding that has access to the cluster-admin role.

kubectl get clusterrolebindings -o=custom-columns=NAME:.metadata.name,ROLE:.roleRef.name,SUBJECT:.subjects[*].name

Review each principal listed and ensure that cluster-admin privilege is required for it.

Contributor Author

@afdesk I think the idea makes sense, I'm not sure if we want to add it to the remediation block though.
@guyjerby wdyt?

@guyjerby

Thanks @afdesk for your review - regarding the automated tests that cannot be added: this is because we never ran kubectl commands from the kube-bench pod in previous releases, and yes - if we convert a check to manual, the remediation must cover it.

In any case, I am going to test the kubectl functionality from the kube-bench pod - if it works well, we will create another PR to automate them.

@ttousai - can you make the relevant fixes for the remediations, if any are missing, and then we will finalize it?

Thanks!

@ttousai
Contributor Author

ttousai commented Oct 10, 2024

@guyjerby @afdesk I am done with the changes we can make at the moment.

@afdesk afdesk self-requested a review October 11, 2024 04:48
@afdesk afdesk merged commit a15e8ac into aquasecurity:main Oct 11, 2024
5 checks passed
@afdesk
Collaborator

afdesk commented Oct 11, 2024

@guyjerby @ttousai thanks for your efforts.
merged.

I'll take a look at another PR and try to cut a new release today

@guyjerby

guyjerby commented Oct 11, 2024

@afdesk , @ttousai Thank you very much for assisting with GKE 1.6 and getting the new release ready!

@guyjerby

@afdesk , how can I contact you? Can you share your email address or send it to me? [email protected] ?

deebhatia pushed a commit to VoerEirAB/kube-bench that referenced this pull request Oct 14, 2024
* Add config entries for GKE 1.6 controls

* Add gke1.6 control plane recommendations

* Add gke-1.6.0 worker node recommendations

* Add gke-1.6.0 policy recommendations

* Add managed services and policy recommendation

* Add master recommendations

* Fix formatting across gke-1.6.0 files

* Add gke-1.6.0 benchmark selection based on k8s version

* Workaround: hardcode kubelet config path for gke-1.6.0

* Fix tests for makeIPTablesUtilChaings

* Change scored field for all node tests to true

* Fix kubelet file permission to check for

---------

Co-authored-by: afdesk <[email protected]>