[AWS Fargate] Fetching in-cluster config fails from Fargate container. #19

Open
sanketsudake opened this issue May 3, 2018 · 6 comments

@sanketsudake

Environment summary

Provider - Fargate

Version - Latest 18adde2aca4ebe72aee1c0320e9affad218e1933

K8s Master Info - Cluster created with Kops on AWS

Install Method - Manual. Followed the steps in https://aws.amazon.com/blogs/opensource/aws-fargate-virtual-kubelet/

Issue Details

I am trying to run a pod that uses the in-cluster configuration to create a Kubernetes Go client. All of the details are specified in the pod spec, but it looks like the AWS Fargate provider completely ignores the service account referenced in the pod; it only parses pod spec.containers.

I get the following error in the pod log:

2018/05/03 01:50:11 Error making fetcher: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined

Comes from https://github.com/kubernetes-client/go/blob/78199cc914eead8a64d1eb11061bf4a031b63a1e/kubernetes/config/incluster_config.go#L45

Repro Steps

Try to run the Kubernetes client-go in-cluster example:
https://github.com/kubernetes/client-go/tree/master/examples/in-cluster-client-configuration
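
For reference, a minimal sketch of the code path that fails, mirroring the linked example and written against a recent client-go. On the Fargate provider the required environment variables and token mount are missing, so `rest.InClusterConfig` returns the error above:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// rest.InClusterConfig reads KUBERNETES_SERVICE_HOST/KUBERNETES_SERVICE_PORT
	// and the mounted service account files; on the Fargate provider neither is
	// present, so this call is where the error in the pod log originates.
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("There are %d pods in the cluster\n", len(pods.Items))
}
```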

ofiliz assigned d-nishi and unassigned ofiliz May 16, 2018
d-nishi commented May 18, 2018

@ssudake21 will respond as soon as I have a clear path forward here!

@sanketsudake

Sure, @d-nishi. Thanks for the acknowledgement.

mumoshu commented Jan 9, 2019

@ssudake21 @d-nishi This is likely due to a current limitation of ECS and Fargate. I believe we either need to implement sidecar support in the Fargate provider and exploit that, or wait until Fargate/ECS implements something like "pre-populated volume mounts", or until EFS volume mounts are supported on Fargate as well.

The in-cluster configuration, AFAIK, works by mounting the service account token at a well-known path, where client-go discovers it.
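
Concretely, this is roughly what `rest.InClusterConfig` depends on being present inside the container (a small illustrative check, not provider code; the kubelet normally injects the env vars and mounts the token, and the Fargate provider currently does neither):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Environment variables normally injected by the kubelet.
	for _, env := range []string{"KUBERNETES_SERVICE_HOST", "KUBERNETES_SERVICE_PORT"} {
		fmt.Printf("%s=%q\n", env, os.Getenv(env))
	}
	// Files normally mounted from the service account token secret.
	for _, f := range []string{
		"/var/run/secrets/kubernetes.io/serviceaccount/token",
		"/var/run/secrets/kubernetes.io/serviceaccount/ca.crt",
	} {
		_, err := os.Stat(f)
		fmt.Printf("%s present: %v\n", f, err == nil)
	}
}
```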

Neither Fargate nor ECS has a notion of pre-populated volumes, nor the relevant Kubernetes features like ConfigMaps and Secrets, nor the ability to mount them as container volumes. Therefore the Fargate provider has no straightforward way to mount service account tokens into containers.

Also see: https://github.com/virtual-kubelet/virtual-kubelet/blob/d8736e23f59ffbad04654670ec5370ac9510d11a/providers/aws/fargate/pod.go#L60

@d-nishi Do you think a feature along the lines of "pre-populated volumes", or ConfigMap/Secret support with volume mounts, is on the ECS/Fargate roadmap? If so, we had better wait for that to land.

Otherwise, my best bet is to add sidecar support, like @lizrice recently attempted in virtual-kubelet/virtual-kubelet#484.

I believe Fargate supports local volume sharing across containers within a task. With sidecar support, we could implement a ConfigMap/Secret-writer sidecar that watches Kubernetes ConfigMaps/Secrets and writes them to files on the shared local volume (a rough sketch follows). Applying this to the service account token secrets would solve this specific issue, and would bring all the other benefits of ConfigMap/Secret support to the Fargate provider.
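
A rough sketch of such a secret-writer sidecar, under stated assumptions: the namespace, secret name and target directory are illustrative, and the kubeconfig path hand-waves the authentication problem discussed next:

```go
package main

import (
	"context"
	"log"
	"os"
	"path/filepath"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// How the sidecar authenticates is the open question below; for the sketch,
	// assume a kubeconfig is made available to it out of band.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/sidecar/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	const (
		namespace  = "default"         // assumption
		secretName = "my-sa-token"     // assumption: the service account token secret
		targetDir  = "/shared/secrets" // task-local volume shared with the main container
	)

	// Simple polling loop; a watch/informer would also work.
	for {
		sec, err := client.CoreV1().Secrets(namespace).Get(context.TODO(), secretName, metav1.GetOptions{})
		if err != nil {
			log.Printf("fetch secret: %v", err)
		} else {
			for key, val := range sec.Data {
				if werr := os.WriteFile(filepath.Join(targetDir, key), val, 0o600); werr != nil {
					log.Printf("write %s: %v", key, werr)
				}
			}
		}
		time.Sleep(30 * time.Second)
	}
}
```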

One thing to consider would be how to allow the sidecar to authenticate against the K8s API, given that it can't rely on the service account token (chicken-and-egg!).

For that, I guess we can just add support for the iam.amazonaws.com/role pod annotation, translated to the ECS task's taskRoleArn (sketched below). The annotation is widely used by kube2iam and kiam to identify which IAM role the pod wants to assume.
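
Purely illustrative (the annotation and the ECS field exist, but this helper is hypothetical, not current provider code):

```go
package fargate // hypothetical placement inside the provider's fargate package

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ecs"
	corev1 "k8s.io/api/core/v1"
)

// taskRoleFromAnnotation copies the kube2iam/kiam-style pod annotation onto the
// ECS task definition's task role when the provider registers the task.
func taskRoleFromAnnotation(pod *corev1.Pod, input *ecs.RegisterTaskDefinitionInput) {
	if role, ok := pod.Annotations["iam.amazonaws.com/role"]; ok && role != "" {
		input.TaskRoleArn = aws.String(role)
	}
}
```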

lizrice commented Jan 9, 2019

I did get virtual-kubelet/virtual-kubelet#484 working in my own fork, at least for what I was trying to achieve, but that PR was inadvertent, as it seems pretty hacky to me and I didn't really think anyone else would want it! But let me know if it would be useful for me to push any of it (or part of it) here.

@johanneswuerbach

One thing to consider would be how to allow the sidecar to authenticate against the K8s API, given that it can't rely on the service account token (chicken-and-egg!).

Wouldn't it be possible to make this push-based instead of pull-based? E.g. have the VK itself write the Secret/ConfigMap content to shared storage (say S3) and then have a Fargate sidecar poll it into a local volume? This would at least avoid a per-task API client and might be more resilient against k8s master hiccups.
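
A rough sketch of what the polling sidecar half of that could look like (bucket name, key prefix and target directory are assumptions; written against aws-sdk-go v1):

```go
package main

import (
	"log"
	"os"
	"path/filepath"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func main() {
	const (
		bucket    = "my-vk-sync-bucket" // assumption: bucket the VK writes into
		keyPrefix = "default/my-pod/"   // assumption: one key prefix per pod
		targetDir = "/shared/secrets"   // task-local volume shared with the main container
	)

	sess := session.Must(session.NewSession())
	client := s3.New(sess)
	downloader := s3manager.NewDownloaderWithClient(client)

	for {
		// List everything the VK pushed for this pod and mirror it to local files.
		out, err := client.ListObjectsV2(&s3.ListObjectsV2Input{
			Bucket: aws.String(bucket),
			Prefix: aws.String(keyPrefix),
		})
		if err != nil {
			log.Printf("list: %v", err)
		} else {
			for _, obj := range out.Contents {
				name := filepath.Base(aws.StringValue(obj.Key))
				f, err := os.Create(filepath.Join(targetDir, name))
				if err != nil {
					log.Printf("create %s: %v", name, err)
					continue
				}
				if _, err := downloader.Download(f, &s3.GetObjectInput{
					Bucket: aws.String(bucket),
					Key:    obj.Key,
				}); err != nil {
					log.Printf("download %s: %v", name, err)
				}
				f.Close()
			}
		}
		time.Sleep(30 * time.Second)
	}
}
```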

mumoshu commented Jan 9, 2019

Wouldn't it be possible to make this push-based instead of pull-based? E.g. have the VK itself write the Secret/ConfigMap content to shared storage (say S3) and then have a Fargate sidecar poll it into a local volume?

That sounds like a great idea! The only challenge would be how we can safely open up S3 bucket access to the sidecar only. Altering the task role won't be an option.

Perhaps we can teach the Fargate provider to automatically update the bucket policy to grant the pod access on start and revoke it on pod stop?
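
One possible shape of that, purely as a sketch (the statement layout and naming are assumptions; a real implementation would merge and remove statements in the existing policy rather than overwrite it):

```go
package fargate // hypothetical placement inside the provider's fargate package

import (
	"encoding/json"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
)

// allowPodPrefix grants the pod's task role read access to its own key prefix.
// Called on pod start; a matching revoke would run on pod stop.
func allowPodPrefix(client *s3.S3, bucket, taskRoleArn, prefix string) error {
	policy := map[string]interface{}{
		"Version": "2012-10-17",
		"Statement": []map[string]interface{}{{
			"Effect":    "Allow",
			"Principal": map[string]string{"AWS": taskRoleArn},
			"Action":    []string{"s3:GetObject", "s3:ListBucket"},
			"Resource": []string{
				"arn:aws:s3:::" + bucket,
				"arn:aws:s3:::" + bucket + "/" + prefix + "*",
			},
		}},
	}
	doc, err := json.Marshal(policy)
	if err != nil {
		return err
	}
	_, err = client.PutBucketPolicy(&s3.PutBucketPolicyInput{
		Bucket: aws.String(bucket),
		Policy: aws.String(string(doc)),
	})
	return err
}
```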

pires transferred this issue from virtual-kubelet/virtual-kubelet Jan 28, 2021