
ExternalDNS + kube2iam integration problem with default role #292

Closed

placydo opened this issue Jul 27, 2017 · 5 comments

Comments

placydo commented Jul 27, 2017

Hi,
First of all, thanks for your amazing work!
We would like to run ExternalDNS with kube2iam with --default-role enabled. It works perfectly while the default role is disabled, but we run into an issue when we enable it. For some reason the wrong role gets assigned to the pod - the default role instead of the dedicated one. It sometimes resolves to the correct role after 20+ minutes. (A sketch of the setup is included at the end of this comment.)

Log:

time="2017-07-27T07:05:50Z" level=info msg="Connected to cluster at https://10.3.0.1:443"
time="2017-07-27T07:06:25Z" level=error msg="AccessDenied: User: arn:aws:sts::VERY_SECRET_ID:assumed-role/VERY_SECRET_ROLE-assume-def/185b3bd4-VERY_SECRET_ROLE-assume-def is not authorized to perform: route53:ListHostedZones
status code: 403, request id: 1adf6f00-729a-11e7-9cdb-9195f1e9a104"

There is a corresponding issue on the kube2iam GitHub repo: jtblin/kube2iam#80

Tried updating kube2iam to the newest version, 0.6.4 - it did not resolve the issue.
Currently using registry.opensource.zalan.do/teapot/external-dns:v0.4.0
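
For context, a minimal sketch of the setup described above, assuming the usual kube2iam conventions (an iam.amazonaws.com/role pod annotation plus a --default-role fallback on the kube2iam daemon). The role name and manifest details are illustrative placeholders, not taken from this issue:

# Illustrative only - role name, apiVersion and args are placeholders/typical values.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: external-dns
spec:
  template:
    metadata:
      annotations:
        # kube2iam should hand this role to the pod; the role needs Route 53
        # permissions (at least route53:ListHostedZones, per the error above).
        iam.amazonaws.com/role: external-dns-route53   # dedicated role (placeholder name)
    spec:
      containers:
      - name: external-dns
        image: registry.opensource.zalan.do/teapot/external-dns:v0.4.0
        args:
        - --provider=aws
        - --source=service
        - --source=ingress

The symptom is that, with kube2iam started with something like --default-role=<fallback-role>, the pod intermittently receives the fallback role instead of the annotated one.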

mikkeloscar (Contributor) commented:

This seems to be purely a kube2iam issue, or am I misunderstanding something?

Doesn't sound like it can be resolved in external-dns if the pod is getting the wrong IAM role.

linki (Member) commented Jul 27, 2017

@mikkeloscar I told @placydo to open an issue here so that we can keep track of it. It does look like a kube2iam problem, though.

jrnt30 (Contributor) commented Aug 19, 2017

@mikkeloscar We had a series of issues with kube2iam caused by a caching problem that can rear its head more readily (for us at least) when using cron jobs.

The next time it happens, run kubectl get pods -a and check whether the IP of a "Completed" pod is the same as that of the external-dns pod (or whatever is failing). I have a PR, jtblin/kube2iam#92, to address this if you want to try running it; we have been using it for a few weeks and have not seen this issue again.
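
A minimal sketch of that check (pod names and namespaces are placeholders, not from this issue):

# List all pods, including Completed ones, together with their IPs.
kubectl get pods -a -o wide --all-namespaces

# Note the IP of the external-dns pod:
kubectl get pod external-dns-1234567890-abcde -o wide

# If a pod in "Completed" state shows the same IP, kube2iam's cache may still
# map that IP to the old pod's role, which matches the symptom reported above.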

placydo (Author) commented Sep 11, 2017

@linki Thanks to @jrnt30 we no longer see this issue. It was purely a kube2iam problem and it is now solved. Thanks for your awesome work!

placydo closed this as completed Sep 11, 2017
linki (Member) commented Sep 11, 2017

Thanks @placydo for reporting the issue and thank you @jrnt30 for fixing kube2iam.
