Add "Secure Mode" to the complete example #112
Conversation
/test all
/test all
lg2m
/test all
Getting an error on the secure mode test, but it passes locally. At first I thought it might be due to the differences between managed and self-managed node groups (currently testing that in #116). But since it passes locally, I'm now wondering whether it's down to the type of auth being used. Locally I'm using my own AWS creds, while the pipeline uses an OIDC provider wired up with GitHub, so it isn't a "user" applying the resources but an assumed role. Implementing #105 and automatically adding the role that is currently in use might fix the issue (rough sketch below). What's weird is that the issue doesn't show up in the "insecure mode" of the complete example, presumably because insecure mode uses managed node groups, which means we don't create the aws-auth ConfigMap ourselves; Amazon does.
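If #105 turns out to be the fix, the idea would be to map the pipeline's role into the aws-auth ConfigMap ourselves. A minimal sketch, assuming a `ci_role_arn` variable is passed in (the variable name, username, and group are placeholders, not what the module actually exposes):

```hcl
# Sketch only: grant the CI's assumed role access to the cluster by mapping
# its underlying IAM role into aws-auth. var.ci_role_arn is a hypothetical input.
resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapRoles = yamlencode([
      {
        rolearn  = var.ci_role_arn        # IAM role ARN, not the STS assumed-role ARN
        username = "ci"
        groups   = ["system:masters"]
      }
    ])
  }
}
```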
My test in #116 passed, so now I'm thinking the issue here doesn't lie in the aws-auth ConfigMap stuff. What is super weird is that it works locally but not in the pipeline. Perhaps something with the sshuttle stuff.
Another difference that I want to check out: locally I'm only running one of the tests, but in the pipeline both tests run in parallel.
/test all
/test all
/test all
/test all
/test all
/test all
Issue has most likely been narrowed down to a race condition with the EKS cluster: Terraform was trying to use the Kubernetes / Helm / kubectl providers to deploy things to the cluster before it was ready. To resolve it I switched from 2 […], and I also added a 30-second wait between step 2 and step 3. Doing this gives the cluster enough time to be ready to accept deployments when Terraform tries to make them.
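For reference, the wait between steps can be expressed directly in Terraform with the hashicorp/time provider. A minimal sketch (resource and module names are illustrative, not the exact ones in this repo):

```hcl
terraform {
  required_providers {
    time = { source = "hashicorp/time" }
    helm = { source = "hashicorp/helm" }
  }
}

# Pause after the cluster is created so the control plane has time to become
# reachable before anything talks to the Kubernetes API.
resource "time_sleep" "wait_for_cluster" {
  create_duration = "30s"

  depends_on = [module.eks]   # illustrative: whatever creates the EKS cluster
}

# Anything deployed via the Kubernetes/Helm/kubectl providers then depends on
# the sleep instead of racing the cluster.
resource "helm_release" "example" {
  name       = "example"
  repository = "https://charts.example.com"   # placeholder repository
  chart      = "example"

  depends_on = [time_sleep.wait_for_cluster]
}
```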
/test all
/test all
/test all
/test all
/test all
/test all
/test all
/test all
/test all
LGTM
Lgtm
Closes #101