
Extend the authenticator to enable reading data from the configmap #34

Closed
mattlandis opened this issue Dec 12, 2017 · 8 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature.

Comments

@mattlandis
Contributor

Currently the config is only loaded on startup, requiring the authenticator to be restarted each time the config changes. There is already an issue (#7) to reload the config when it changes. Although this is an improvement, it still requires modifying the file on disk wherever the authenticator is running.

This proposal is to separate the server configuration from the mapping configurations, and to load the mapping configurations from a configmap. This will enable a user to update the user/role mappings using kubectl without needing access to the config file where the authenticator is running. The source of this data can be expanded in the future to allow for additional sources such as dynamodb or s3, which will make it easier for organizations with large user bases to build automated tooling for creating the mappings. It's worth noting that the specific use case we are thinking of involves running the authenticator on the master outside of kubernetes, talking to the apiserver via localhost on the unsecured port. This bypasses authentication and avoids a circular dependency. Using an x509 cert or another (non-bearer token) authentication mechanism would also work.

This proposed design would:

  • Extend the server config to include a loadMappingsFromConfigMap bool, an apiServerURL, and a kubeconfigPath
  • Extract the logic that converts an ARN to a username/groups pair
  • When looking for mappings, check all configured sources (config file and configmap)
  • The configmap mapping source would read the configmap on startup, then watch for changes and update its mappings when necessary
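The multi-source lookup described in the bullets above could be sketched roughly like this; all type and function names here are illustrative, not the authenticator's actual API:

```go
package main

import (
	"fmt"
	"strings"
)

// IdentityMapping is the username/groups a source resolves an ARN to.
type IdentityMapping struct {
	Username string
	Groups   []string
}

// MappingSource is one place mappings can come from
// (config file, configmap, and later perhaps s3/dynamodb).
type MappingSource interface {
	Lookup(arn string) (IdentityMapping, bool)
}

// staticSource is a toy in-memory source standing in for a parsed config file.
type staticSource map[string]IdentityMapping

func (s staticSource) Lookup(arn string) (IdentityMapping, bool) {
	m, ok := s[strings.ToLower(arn)]
	return m, ok
}

// resolve checks every configured source in order and returns the first match.
func resolve(arn string, sources []MappingSource) (IdentityMapping, bool) {
	for _, src := range sources {
		if m, ok := src.Lookup(arn); ok {
			return m, true
		}
	}
	return IdentityMapping{}, false
}

func main() {
	fileSource := staticSource{
		"arn:aws:iam::123456789012:role/kubeadmin": {Username: "kubeadmin", Groups: []string{"system:masters"}},
	}
	configMapSource := staticSource{
		"arn:aws:iam::123456789012:role/dev": {Username: "dev", Groups: []string{"developers"}},
	}
	m, ok := resolve("arn:aws:iam::123456789012:role/dev",
		[]MappingSource{fileSource, configMapSource})
	fmt.Println(ok, m.Username, m.Groups[0]) // → true dev developers
}
```

The configmap-backed source would satisfy the same interface, keeping an in-memory copy that its watch loop swaps out on change.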
@mattmoyer
Contributor

+1 on this. Some thoughts:

  • Rather than a special case for just the ARN mappings, I think it might make sense to move all the configuration into a dynamic container object that we pass around everywhere. We're using spf13/viper which is kind of designed for this already, but we're not using it correctly to get this benefit at the moment.

  • We might be able to implement a Kubernetes ConfigMap source upstream in spf13/viper, but if that ends up being hard I don't see any problem doing it here instead.

  • apiServerURL would normally be part of the kubeconfig, so it probably doesn't need to be split out separately.

  • kubeconfigPath should probably be optional, with the default being the InClusterConfig() that loads from a well known path.

@mattmoyer mattmoyer added the kind/feature Categorizes issue or PR as related to a new feature. label Jan 12, 2018
@mattlandis
Contributor Author

Sounds good. I have something in progress. I'll send out a PR when it is ready for review.

@mumoshu
Contributor

mumoshu commented Mar 19, 2018

@mattlandis @mattmoyer Sorry to say this after coming late to the party but - is this really what you would want in scope of this project?

A big concern (to me) with live-reloading in this way is that you can't easily roll back your configuration, or stop a rollout in case of a configuration error.

I'd rather just use helm + configmap checksums + readiness probes to stop a rollout on configuration errors, plus get easy rollbacks.
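For reference, the helm approach mentioned here usually works by embedding a checksum of the rendered configmap in the pod template annotations, so any config change triggers a new rollout that readiness probes can gate and that can be rolled back. A minimal sketch (the template path is illustrative):

```yaml
# deployment.yaml (Helm template excerpt)
spec:
  template:
    metadata:
      annotations:
        # Any change to the rendered configmap changes this hash,
        # which forces a rolling update of the pods.
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
```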

So I'd appreciate it if you could confirm your original intention.
I feel like doing it natively in the authenticator is a bit of reinventing the wheel.
Was your intention to support some level of automation even without helm? In that case, your direction makes a lot of sense to me.

But I couldn't figure it out just by reading this thread. Thx!

@mattlandis
Contributor Author

My intention is to provide a mechanism for a cluster administrator who does not have host access to a managed cluster to be able to update the user and role mappings.

Since RBAC already happens through Kubernetes mechanisms, it seems like a good place to put the user/group data, instead of an external API that would then update the config file and restart the service.

We could also potentially look at a custom resource if people think that makes more sense but the configmap is what we have been exploring.
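For illustration, a configmap carrying the role mappings might look roughly like this; the name, namespace, and data shape are hypothetical here, not a settled format:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth            # hypothetical name
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::123456789012:role/KubernetesAdmin
      username: kubernetes-admin
      groups:
        - system:masters
```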

@mumoshu
Contributor

mumoshu commented Mar 20, 2018

@mattlandis Thank you very much for the response!

Excuse me if I'm still missing your point - So, our intention here would be to:

  1. Allow an operator with fairly restrictive RBAC permissions (w/ configmap update permission but w/o pod CRUD access) to update mappings
  2. (In the future) Allow an operator with no K8S API access at all to update mappings

And item 2 above is especially relevant to your use case:

authenticator on the master outside of kubernetes, talking to the apiserver via localhost on the unsecured port. This bypasses authentication and avoids a circular dependency

because item 2 allows an authenticator outside of K8S (no serviceaccount, and no TLS key/cert for ClientAuth) to authenticate against the apiserver with a K8S user/group derived from an IAM session/role, in order to read the configmap for mappings?

@mattlandis
Contributor Author

My intention is to enable someone who has API access (either system:masters or something more restrictive, scoped to the configmap for aws auth) to update the configuration without having access to the instances running the API server or the authenticator itself. It is for a managed Kubernetes experience. This is the primary use case we are focusing on right now.

The use case for non-k8s operators updating the mappings would come when we look at reading them from an external source (possibly s3 and/or dynamodb), which would allow someone to build an API and tooling around updating them that would be easier to fit into existing IT infrastructure that doesn't currently talk k8s.

@nckturner
Contributor

I am a little confused by this issue, but I'd like to summarize a conversation I had with @mattlandis -- I think the authenticator should use config in a file, as it is implemented today, with the one difference that it should reload the file from disk when it changes. This supports the use cases of running the authenticator both in a pod and outside of kubernetes. I think there should be one config file, and some parameters can be passed on the command line, which solves our use case.

@mattlandis
Contributor Author

I think the CRD is a better solution.

joanayma pushed a commit to joanayma/aws-iam-authenticator that referenced this issue Aug 11, 2021
kubernetes-sigs#34 - asg size changes should be ignored - desired_capacity