
Unable to set all setting needed for CIS benchmark level 1 hardening #4688

Closed
craigjol opened this issue Mar 16, 2018 · 15 comments

Comments

@craigjol

Thanks for submitting an issue! Please fill in as much of the template below as
you can.

------------- BUG REPORT TEMPLATE --------------------

  1. What kops version are you running? The command kops version, will display
    this information.
    Version 1.8.1

  2. What Kubernetes version are you running? kubectl version will print the
    version if a cluster is running or provide the Kubernetes version specified as
    a kops flag.
    Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T10:09:24Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}

Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.7", GitCommit:"b30876a5539f09684ff9fde266fda10b37738c9c", GitTreeState:"clean", BuildDate:"2018-01-16T21:52:38Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

  3. What cloud provider are you using?
    AWS

  4. What commands did you run? What is the simplest way to reproduce this issue?
    Created cluster on AWS using - kops create cluster .....

Then, to edit the cluster, used - kops edit cluster .....
Unable to set the following:

kubeAPIServer:
  basicAuthFile - cannot remove
  insecureallowAnyToken - cannot set a value
  kubeletHttps - cannot set a value
  insecureBindAddress - cannot set a value
  profiling - cannot set a value
  repairMalformedUpdates - cannot set a value
  serviceAccountLookup - cannot set a value
Scheduler:
  profiling - cannot set a value
kubeControllerManager:
  profiling - cannot set a value
etcdClusters:
  clientCertAuth - cannot set a value
  autoTls - cannot set a value
  peerClientCertAuth - cannot set a value
  peerAutoTls - cannot set a value
  maxWals - cannot set a value
kubelet:
  authorizationMode - cannot set a value
  streamingConnectionIdleTimeout - cannot set a value
  protectKernelDefaults - cannot set a value
  hostnameOverride - cannot set a value
  eventGps - cannot set a value
  cadvisorPort - cannot set a value
kubeAPIServer:
  tokenAuthFile - cannot remove
  insecurePort - cannot set a value to 0
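For reference, the kind of spec edit being attempted looks roughly like this in the cluster manifest. This is only a sketch: the field names are the ones listed above, but the values are illustrative CIS-style hardening values, not taken from the report, and kops 1.8 rejected most of these fields as unrecognized.

```yaml
# Sketch of the attempted `kops edit cluster` changes (values illustrative,
# per CIS level 1 recommendations; rejected by kops 1.8 as unrecognized).
spec:
  kubeAPIServer:
    insecureBindAddress: 127.0.0.1
    insecurePort: 0
    profiling: false
    repairMalformedUpdates: false
    serviceAccountLookup: true
  kubeScheduler:
    profiling: false
  kubeControllerManager:
    profiling: false
  kubelet:
    authorizationMode: Webhook
    protectKernelDefaults: true
    streamingConnectionIdleTimeout: 5m
```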

  5. What happened after the commands executed?

After saving the file, it reported back with "Found fields that are not recognized" and listed all my entries
under kubeAPIServer: , kubeControllerManager: , kubelet: , and etcdClusters: .

I have looked at https://godoc.org/k8s.io/kops to see if the options I need are listed, and it doesn't seem that they are currently available in kops.

  6. What did you expect to happen?
    The configuration would be saved, and the next time a node or master was built, the new configuration would be loaded.

  7. Please provide your cluster manifest. Execute
    kops get --name my.example.com -oyaml to display your cluster manifest.
    You may want to remove your cluster name and other sensitive information.

apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: 2018-03-16T08:48:21Z
  name: .k8s.local
spec:
  api:
    loadBalancer:
      type: Public
  authorization:
    alwaysAllow: {}
  channel: stable
  cloudProvider: aws
  configBase: s3:///*******.k8s.local
  etcdClusters:
  - etcdMembers:
    - instanceGroup: master-eu-west-1a
      name: a
    name: main
  - etcdMembers:
    - instanceGroup: master-eu-west-1a
      name: a
    name: events
  iam:
    allowContainerRegistry: true
    legacy: false
  kubernetesApiAccess:
  - 0.0.0.0/0
  kubernetesVersion: 1.8.7
  masterPublicName: api.*******.k8s.local
  networkCIDR: x.x.0.0/16
  networking:
    kubenet: {}
  nonMasqueradeCIDR: x.x.0.0/10
  sshAccess:
  - 0.0.0.0/0
  subnets:
  - cidr: x.x.32.0/19
    name: eu-west-1a
    type: Public
    zone: eu-west-1a
  - cidr: x.x.64.0/19
    name: eu-west-1b
    type: Public
    zone: eu-west-1b
  - cidr: x.x.96.0/19
    name: eu-west-1c
    type: Public
    zone: eu-west-1c
  topology:
    dns:
      type: Public
    masters: public
    nodes: public

  8. Please run the commands with the most verbose logging by adding the -v 10 flag.
    Paste the logs into this report, or in a gist and provide the gist link here.
    N/A
  9. Anything else do we need to know?
    The CIS guide advises modifying the YAML file for each service; when deploying with kops, this doesn't seem to be an option. Some settings we were able to change, but these were overwritten over time.
@ivans3

ivans3 commented Apr 11, 2018

Hi @craigjol, are you still working on this? If so, have you been making progress? To change apiserver flags that aren't referenced in ./pkg/apis/kops/componentconfig.go, I believe it would be necessary to build a custom nodeup binary from the kops source and push it to an S3 bucket.
This is what I am trying now:
https://github.com/kubernetes/kops/blob/master/docs/development/adding_a_feature.md
(see the 'Testing' section...)
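For readers following the same route, the custom-nodeup workflow from that doc looks roughly like the sketch below. This is a hedged outline only: the exact make targets and upload variables vary between kops versions, and the bucket name is a placeholder, so check the doc for your checkout.

```shell
# Rough sketch of the custom-build workflow from adding_a_feature.md.
# Placeholder bucket; make target names vary by kops version.
export S3_BUCKET=my-kops-dev-bucket   # hypothetical bucket name

make                                  # build kops, nodeup, protokube from source
make upload S3_BUCKET=s3://${S3_BUCKET} VERSION=dev   # push the dev build

# Point kops at the custom build before updating the cluster:
export KOPS_BASE_URL=https://${S3_BUCKET}.s3.amazonaws.com/kops/dev/
kops update cluster --yes
```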

@ameena007

Any updates on this? I am facing the same issue.

@craigjol

No update as yet.

@craigjol craigjol reopened this Apr 26, 2018
@ameena007

Why is this closed? Anyone who wants to make a k8s cluster CIS compliant needs all of this mechanism in place.

@ivans3

ivans3 commented Apr 26, 2018

Hi @ameena007, I am still working on this. Results so far are promising, but customizing nodeup/protokube was necessary, and I encountered these major issues:

  • kops 1.9 and k8s 1.9.x were needed for: etcd TLS, Calico etcd TLS integration, as well as the required PodSecurityPolicy
  • a health-check user with no RBAC roles bound needs to be created and distributed to the master nodes somehow (due to the requirement to remove --insecure-port); however, the kubelet health checker doesn't support client cert authentication (as of k8s 1.9.x, it may be there in 1.10)
  • there is a requirement for a separate service account signing keypair, but kops doesn't support that
  • the kubelet certificates cannot be self-signed anymore if you set the required --kubelet-certificate-authority on the apiserver, so a way to distribute & sign those with the appropriate SAN on the cert is needed...

Some of the CIS items I don't think we will be able to comply with (for example, 2.1.1 - Ensure that the --allow-privileged argument is set to false [on the kubelet] -- I'm not sure it would be possible for a CNI network plugin to work with this set...)

Let me know if you have any questions!
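For anyone landing here later: some of these items did eventually grow first-class fields in the kops cluster spec. A hedged sketch of the relevant bits, assuming a kops release that recognizes them (roughly 1.10+; verify with kops edit cluster on your version):

```yaml
# Sketch only: these fields were added to the kops API in later releases;
# they were not recognized by the kops 1.8/1.9 versions discussed above.
spec:
  kubeAPIServer:
    anonymousAuth: false
    insecurePort: 0          # removes the insecure listener, per CIS
  kubelet:
    anonymousAuth: false
    authorizationMode: Webhook
```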

@bzuelke

bzuelke commented Jul 2, 2018

Hi, so I came across this thread and #4799, which is super helpful for trying to get the cluster to a CIS standard. I was seeing if there was any progress on the etcdClusters configuration for auto TLS, peer auto TLS and so forth, as the other PR doesn't really account for that. I wasn't sure if that was being looked at here. Thanks a ton :)

@tralexa

tralexa commented Jul 23, 2018

Also adding:
kubeAPIServer: request-timeout - cannot set a value

kops version: 1.9.1
Kubernetes version: 1.9
cloud provider: AWS
reproduce: kops create cluster

@tralexa

tralexa commented Jul 25, 2018

also missing for kubelet:

protect-kernel-defaults
make-iptables-util-chains
keep-terminated-pod-volumes
allow-privileged
cadvisor-port

kops version: 1.9.1
Kubernetes version: 1.9
cloud provider: AWS
reproduce: kops create cluster
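If and when these kubelet flags gain kops support, they would follow kops's camelCase spec convention. A hedged sketch of what that mapping would look like (field names are my guesses from the convention, not confirmed against any kops release; check kops edit cluster on your version):

```yaml
# Hypothetical mapping of the missing kubelet flags into the kops spec.
# Whether a given kops version recognizes these must be verified.
spec:
  kubelet:
    protectKernelDefaults: true
    makeIptablesUtilChains: true   # also the kubelet default in later releases
    allowPrivileged: false
```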

@tralexa

tralexa commented Jul 25, 2018

also missing for kubelet:

rotate-certificates
rotate-server-certificates

kops version: 1.9.1
Kubernetes version: 1.9
cloud provider: AWS
reproduce: kops create cluster

@tralexa

tralexa commented Jul 30, 2018

streaming-connection-idle-timeout doesn't seem to have any effect.
kops version: 1.9.1
Kubernetes version: 1.9
cloud provider: AWS
reproduce: kops create cluster
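For context, in kops versions that do support it, this flag is expressed as a duration field on the kubelet spec. A sketch, assuming a kops release that recognizes the field:

```yaml
# Sketch: streaming-connection-idle-timeout as a kops kubelet spec field
# (a duration string; unsupported in the kops 1.9.1 discussed here).
spec:
  kubelet:
    streamingConnectionIdleTimeout: 5m0s
```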

@ivans3

ivans3 commented Jul 30, 2018

make-iptables-util-chains: this is now the default; you can check with kubelet --help

How should I support getting #4799 merged in? It covers some of these...

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 28, 2018
@faheem-nadeem

faheem-nadeem commented Oct 29, 2018

/remove-lifecycle stale
/open

Can we please keep this open to track changes for CIS level 1 compliance? Or are the listed flags already supported, or are we tracking this somewhere else? @craigjol

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 29, 2018
@markine

markine commented Nov 26, 2018

Please reopen to track progress.

@javierlga

Any update on this?
