Convert default rke2.yaml over to non system:admin account for security auditing #4051
Can I ask which specific other operations you're seeing occur as the system:admin user? A better ask might be to have dedicated users for all the embedded controllers. We should already have this for most things, but it's possible we missed something.
I don't believe that we have plans to create a different default admin account or RBAC, but we can leave this open for future consideration. I will note that the file is rewritten on startup to ensure that the credentials are valid, so even if you change its contents, those changes will be reverted. It might be better to simply have an organization policy against using the default admin kubeconfig for cluster management.
Kubernetes does not actually support certificate revocation (it does not check CRLs), so your only option is to create unique roles and clean up the associated RBAC in order to revoke access. This is why using service account tokens is a better idea; the tokens are bound to an SA that can be deleted, and the token is invalidated when the SA goes away.
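A minimal sketch of that service-account approach; the names (node-admin, the kube-system namespace, the cluster-admin role) are illustrative, not anything RKE2 ships:

```bash
# Create a dedicated service account and grant it access via RBAC.
kubectl create serviceaccount node-admin -n kube-system
kubectl create clusterrolebinding node-admin-binding \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:node-admin

# Issue a short-lived token bound to that service account (kubectl >= 1.24).
kubectl create token node-admin -n kube-system --duration=1h

# Revocation: deleting the SA invalidates every token bound to it,
# unlike a client certificate, which Kubernetes cannot revoke.
kubectl delete serviceaccount node-admin -n kube-system
```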
We see a lot of activity all over the place; here are a few examples. You can see similar audit logs in any environment.
Here I am running commands locally on a node:
In our case, our organization policy requires STIG and auditing compliance, so the most effective route would be to allow this account/file to be disabled, either after a given period or outright. The least disruptive option would be to ensure that use of this file is auditable back to a source/user.
This is exactly what I mean by this request. The rke2.yaml should be configured to use a service account per node so that access can be revoked and audited.
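One hypothetical way to assemble such a per-node kubeconfig, assuming a service account named for the node already exists (as in the earlier sketch); the file paths, the account naming scheme, and the `default` user entry name from the stock rke2.yaml are assumptions:

```bash
# Start from the server/CA details of the default kubeconfig, then swap
# the shared client certificate for a node-specific service account token.
NODE=$(hostname)
TOKEN=$(kubectl create token "node-cli-${NODE}" -n kube-system --duration=24h)
KCFG="/etc/rancher/rke2/rke2-${NODE}.yaml"

kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml config view --raw --minify > "$KCFG"
kubectl --kubeconfig "$KCFG" config set-credentials default --token="$TOKEN"
kubectl --kubeconfig "$KCFG" config unset users.default.client-certificate-data
kubectl --kubeconfig "$KCFG" config unset users.default.client-key-data
```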
We would probably need to extend this thinking to the manifests directory as well, so that a CR yaml dropped into that folder can be audited back to the node it came from.
The Rancher Federal folks regularly assist in deploying RKE2 into STIG hardened environments, and this has never been raised as an issue. There needs to be some way to access the cluster for break-the-glass access locally on server nodes; you're welcome to treat the admin kubeconfig as that if you like, but I am not aware of any plans to eliminate it. We can take a pass at auditing any controllers using the admin creds; there should not be any, but it seems some have perhaps slipped in.
That is already possible, as seen in the audit logs you posted; note the node name in the user agent. The created resources also carry the same value in their managedFields info.
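For reference, a quick way to inspect that attribution on an existing object (the deployment name here is hypothetical):

```bash
# managedFields is hidden by default; --show-managed-fields reveals the
# manager recorded for each field, which carries the node-identifying
# user agent noted above.
kubectl get deployment my-app -o yaml --show-managed-fields
```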
Closing this out as we are not going to do away with the default admin user, but I have opened k3s-io/k3s#7212 to track auditing for controllers that use the admin RBAC.
Is your feature request related to a problem? Please describe.
In the audit logs for RKE2, actions performed from the host system using /etc/rancher/rke2/rke2.yaml are attributed to the same user as the internal Kubernetes cluster orchestration (system:admin).
Describe the solution you'd like
It would improve security and auditing if this default rke2.yaml on each node were registered with a separate RBAC account, to better trace host-run commands and changes back to their source.
Describe alternatives you've considered
This could be done via RBAC and the manifests directory, then updating the rke2.yaml on each host, but we are unsure what the consequences of manipulating this file on each node would be.
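A sketch of that alternative: RKE2 auto-applies manifests dropped into its server manifests directory, so an RBAC binding could be seeded that way. The group and binding names here are illustrative:

```bash
# Writing an RBAC manifest into the RKE2 server manifests directory causes
# it to be applied automatically on startup.
cat <<'EOF' > /var/lib/rancher/rke2/server/manifests/operator-rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: rke2-operators-admin
subjects:
- kind: Group
  name: rke2-operators
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
EOF
```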
Additional context
This would also address the issue of someone obtaining the rke2.yaml and using it remotely, since the key or token could then be revoked/rotated for that user.