Cluster Nodes & Masters Failing kube-bench Security Checks - Question #6150
Comments
I was curious about the same thing: is this a difference of opinion, or a gap in kops? I'd be glad to contribute to implementing some of these security improvements.
Related: #4688. Been trying to get this issue opened, as there are more discussions there. :)
Thanks for the list - it is much easier to come up with a checklist than it is to actually implement the items in the real world when people have running clusters :-) We have to think about whether each one of these options breaks people. That said, there's some good stuff here; I do think we should use this list to drive a goal of being the most secure configuration available - I have it as a discussion item under "roadmap" for today's kops office hours.

As a taster of the complexities though, I spotted

Let's tag for 1.13 as a target list (we can pull things into earlier releases as well). (And I think some of these are actually getting fixed in kops 1.11)
Opened upstream issue for AlwaysPullImages: kubernetes/kubernetes#71850

/milestone 1.12
Thanks for following up @justinsb. I just updated my original post with the full list of failing tests after performing a kops upgrade to Kubernetes 1.10.11 (as a result of CVE-2018-1002105). There is a full list of remediation steps that comes with that checklist if you want me to post it, although it's several pages long. Let me know if I can help at all 👍
Hi. While it should be possible to paint most of the API Server and Controller Manager checks green after #4799 gets merged, I am wondering what needs to be done to fix those in the '1.4 Configuration Files' group. Are there any reasons the actual permissions are broader than prescribed?
I've just discovered that the file paths for component configuration files created by kops are different from the kubeadm defaults used by kube-bench in https://github.com/aquasecurity/kube-bench/blob/master/cfg/1.11/config.yaml. Fixing the path (.manifest instead of .yaml) has turned checks 1.4.1-1.4.6 green.
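For anyone wanting to reproduce that, a sketch of such a path override is below. The structure is assumed from kube-bench's cfg/<version>/config.yaml layout and the kops .manifest file names, so verify both against the kube-bench release and the nodes you are actually running:

```yaml
# Assumed layout: mirrors kube-bench's config.yaml; the component keys and the
# `confs` field should be checked against the schema shipped with your kube-bench version.
master:
  apiserver:
    confs:
      - /etc/kubernetes/manifests/kube-apiserver.manifest
  controllermanager:
    confs:
      - /etc/kubernetes/manifests/kube-controller-manager.manifest
  scheduler:
    confs:
      - /etc/kubernetes/manifests/kube-scheduler.manifest
  etcd:
    confs:
      - /etc/kubernetes/manifests/etcd.manifest
```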
FWIW, if you're looking for things to fix out of this list as default settings, I'd thoroughly recommend looking at these two on the kubelet

as a priority. This combination allows anyone who can hit the kubelet port (10250/TCP) at the network level to easily get root access on the underlying node. Also worth noting that there are newer versions of the benchmark out now which drop some of the recommendations that could cause issues with cluster operation (e.g. the
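To make that concrete, here is a minimal sketch of hardening the kubelet via the cluster spec (`kops edit cluster`). It assumes the kubelet checks in question are the anonymous-auth and authorization-mode ones and that the kops version in use supports these fields, so treat it as illustrative rather than a drop-in fix:

```yaml
# Sketch only: assumes these kubelet fields are supported by the kops version in use.
spec:
  kubelet:
    anonymousAuth: false             # reject unauthenticated requests to the kubelet API
    authenticationTokenWebhook: true # verify bearer tokens against the API server
    authorizationMode: Webhook       # authorize kubelet API requests via SubjectAccessReview
```

A rolling update is needed for the change to reach existing nodes, and anything that previously relied on anonymous kubelet access (metrics scrapers, for example) should be checked first.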
I did a default kops install today, and there are a couple of other ones you might want to look at changing.

Using basic and token auth is generally considered a bad idea, as they both store credentials in clear on disk.

etcd without authentication: although this is only bound to localhost, from a security standpoint unauthenticated etcd is dangerous, as it contains all the secrets from the cluster.

A more minor point, but one worth considering: all the keys stored in /srv/kubernetes are valid for 10 years. An attacker who compromises one of those keys gets persistent access to the targeted cluster, as they don't expire for a long time.
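A rough way to audit those points on a master node is sketched below; the flag names and the /srv/kubernetes layout are assumptions based on the discussion above, so adapt the globs and paths to what is actually on your hosts:

```sh
# Was the API server started with basic or token auth files? (credentials stored in clear on disk)
ps -ef | grep '[k]ube-apiserver' | grep -oE -- '--(basic|token)-auth-file=[^ ]*'

# How long are the certificates under /srv/kubernetes valid for?
for crt in /srv/kubernetes/*.crt /srv/kubernetes/*.cert; do
  [ -f "$crt" ] && echo "== $crt" && openssl x509 -in "$crt" -noout -enddate
done
```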
@raesene so are you saying to avoid using
@llcranmer from a security point of view, the correct configuration, I believe, is to configure

When you have the cluster running, the best way to check whether it's enabled or not is to check for listening ports with something like

What I can't tell you is what kops uses the insecure port for (if anything). If it's making use of the insecure port, then obviously disabling it could have consequences for the cluster.
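For illustration, a check along these lines on a master node shows whether the relevant ports are listening; the port numbers assume the upstream defaults (8080 for the API server's insecure port, 10250/10255 for the kubelet):

```sh
# Which of the default control-plane/kubelet ports are bound locally?
ss -tlnp | grep -E ':(8080|10250|10255)[[:space:]]'

# If the insecure port is enabled, an unauthenticated request succeeds:
curl -s http://127.0.0.1:8080/api >/dev/null && echo "insecure port (8080) is open"
```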
@raesene Thank you for clarifying. I'll report back if setting insecure-port=0 causes any problems.
@raesene I've done some digging and it looks like it defaults to
FWIW, I was looking around for this today and saw kubernetes/kubernetes#43784. So it sounds like the reason it's not disabled in kops is so that

Personally, I'd say that allowing a healthz endpoint to be available unauthenticated is less of a risk than enabling the use of the insecure port, even on localhost. There are a number of scenarios where an attacker might be able to make a request to the API server on localhost, and the insecure port always allows
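If someone wants to experiment with that trade-off, a sketch of turning the insecure port off via the cluster spec is below. The `insecurePort` field name is assumed from the kops KubeAPIServerConfig, and given the healthz discussion above, this should only be rolled out after confirming nothing on the masters still depends on port 8080:

```yaml
# Assumed field name; check `kops edit cluster` / the kops API docs for your version,
# and verify local health checks before disabling the insecure listener.
spec:
  kubeAPIServer:
    insecurePort: 0
```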
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
I ran kube-bench 0.3.1 against a cluster built from kops' master branch, and these are the FAIL and WARN results. Some of them are outside the responsibilities of kops, and some are false positives (aquasecurity/kube-bench#667). The file permission and ownership failures should be straightforward fixes. The arguments being set on the control plane components might be a bit more involved, since that requires changing default behaviors. We can audit the remaining issues and try to address them as necessary.
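For the permission and ownership items specifically, the remediation is of the kind kube-bench itself prints. A sketch for a kops master (where the static pod manifests use the .manifest extension) might look like this, with the exact files taken from whatever the checks flagged on your nodes:

```sh
# Audit current mode and ownership of the static pod manifests:
stat -c '%a %U:%G %n' /etc/kubernetes/manifests/*

# Tighten an individual manifest (644 or stricter; repeat for each file the benchmark flagged):
chmod 644 /etc/kubernetes/manifests/kube-apiserver.manifest
chown root:root /etc/kubernetes/manifests/kube-apiserver.manifest
```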
4.2.10/2.1.11 fixed in #10022, thanks @olemarkus
Most of 1.4 fixed in PR to kube-bench
#10022 should also enable fixing 1.2.6 / 1.1.21: it's not set by default by k8s and doesn't seem to be set by kOps by default either, but could be set now due to #10022. I think one can work around this in user-land by setting it and checking the mount:

❯ kubectl -n kube-system get pods kube-apiserver-ip-<ip-address>.ec2.internal -o yaml | grep " volumes:" -A 50 | grep "/srv/kubernetes" -B 1 -A 4
  - hostPath:
      path: /srv/kubernetes

EDIT: the above didn't work for us and broke the cert chain
I just ran kube-bench (https://github.com/aquasecurity/kube-bench) across my kops 1.10 provisioned clusters (Kubernetes 1.10.11 running kope.io/k8s-1.10-debian-jessie-amd64-hvm-ebs-2018-08-17) and I was quite shocked to see a large number of tests fail for both nodes and masters. I expected a few, but not quite this many.

Is there any reason why there seem to be so many security issues with both node and master machines provisioned with kops?
Some output
Workers
Masters
UPDATED WITH FULL LIST AFTER VERSION 1.10.11 UPGRADE