add missing automount_service_account_token to the pod spec #261
Conversation
Thanks. FYI, you can also push commits to the same PR. No need to re-open. Let me run the acceptance tests on this.
Also, I think this new attribute should at least be added to one of the ACC test configurations, to be exercised by the tests.
@alexsomesan I can't push to the original PR opener's fork/branch, so it had to be a new PR...
@alexsomesan I'm a provider n00b. I see a …
Force-pushed from b2a338f to 9203ad7
@alexsomesan After banging my head against the wall figuring out how to run the acc tests without the framework trying to start up > 1K pods all at once (a testacc README blurb would be greatly appreciated), I've taken a stab at adding a simple acceptance test... which has turned up two obvious problems:

The volume will probably require heuristically predicting the name. The latter I don't know how to fix and would appreciate any guidance you might have.
Force-pushed from 9203ad7 to 1e1fcdc
I've removed the …
We actually fell on our face because of the (unchangeable) default on this. Would very much like to see this merged. :)
How are we looking on this? I need the attribute to be optional to release 0.1.1 of my shared module that deploys the K8s dashboard. https://github.com/MarvelOrange/terraform-kubernetes-dashboard
I reckon that's what Radek meant in his comments when he said he wanted to avoid spurious diffs:
Adding a workaround for those that are (also) stubbing their toes on this...

```hcl
resource "kubernetes_service_account" "workaround" {
  metadata {
    name = "workaround"
  }

  automount_service_account_token = true
}
```

Provided you provision the service account with … See the service account docs compared to the pod docs.
Indeed, that's what I used yesterday as a workaround too! Worth noting: updating that attribute on an existing service account does not yet work, but will in the next release, cf. #377 (comment)
@StephenWithPH I'm not quite sure I understand. The documentation says that the flag can be overridden at the Pod level. The problem is that Pods are hard-coded to false. So even if the ServiceAccount is associated with the Pod, wouldn't the hard-coded false value override it? Regardless, I'm super excited for this to get merged in.
I don't believe this PR is in a mergeable state, as the provider tries to remove the secret volume that gets mounted.
Force-pushed from f2404cb to 2eb3ea8
To add a bit more detail, the provider needs to be taught to ignore the volume that is auto-magically mounted at …
@jhoblitt am I right in thinking that the ignoring is only required for … Also, if I'm not mistaken, …
@jhoblitt ... here is some information that may be useful. I happened to get bitten by this in a very different context. It sounds like you may see the same thing. If so, it's really a larger question for …

https://github.com/kubernetes/kubernetes/pull/44121/files#diff-9ce7ea8441086bf1902b4f936f4601d0R40097 and https://github.com/kubernetes/kubernetes/pull/44121/files#diff-9ce7ea8441086bf1902b4f936f4601d0R37920 show annotations on the k8s OpenAPI spec (this is the easiest place I could find to link to) which indicate that volumes and volume mounts need to be patch-merged. I believe this is why the automagically-mounted secret volume is tripping you up.

I'm not savvy enough with Terraform plugins to quickly grok whether this plugin already tries to handle "upsert-ish" behavior... especially because that seems like it might conflict with Terraform's declarative design goals.
Since I spent several hours reading issues like this one, I want to share a workaround that works today. Instead of trusting the pod to mount the secret automatically, you just need to declare the mount explicitly in the pod's spec.
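A minimal sketch of what that explicit mount can look like (the resource names and image are hypothetical, and Terraform 0.12 syntax is assumed; `default_secret_name` is the attribute the `kubernetes_service_account` resource exports for the token secret):

```hcl
resource "kubernetes_service_account" "workaround" {
  metadata {
    name = "workaround"
  }

  automount_service_account_token = true
}

resource "kubernetes_pod" "example" {
  metadata {
    name = "example" # hypothetical pod name
  }

  spec {
    service_account_name = kubernetes_service_account.workaround.metadata[0].name

    container {
      name  = "example"
      image = "nginx" # hypothetical image

      # Mount the token explicitly, since the provider currently
      # hard-codes automount_service_account_token to false on pods.
      volume_mount {
        name       = "sa-token"
        mount_path = "/var/run/secrets/kubernetes.io/serviceaccount"
        read_only  = true
      }
    }

    volume {
      name = "sa-token"

      secret {
        secret_name = kubernetes_service_account.workaround.default_secret_name
      }
    }
  }
}
```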
Why has this not been merged yet?
I'm happy to do whatever work is needed to get this merged, but I need guidance...
@jhoblitt I've done the rebase, but haven't pushed yet. I'm changing the tests slightly so they don't run into race conditions with the volume mounting. Once the tests are stable we can merge. It's faster if I do it because I need to try out a few different approaches. I'm aiming to include this in the next release.
So, solving the problem with the volume showing up in the Pod's diff isn't quite trivial without making some unsubstantiated assumptions about the names of the volume and service account.

While we find a way to reliably filter out the automounted token volume from Pods, I propose we move forward with the fix only for Deployments, DaemonSets and the rest of the managed workload resources, while we leave Pods as they are for now. I expect this to have less disruptive impact while solving this issue for the majority of folks. Please +1 the comment if you think this will work well for your use case.
Force-pushed from 2eb3ea8 to 4a9b79c
This change propagates to the replication_controller, services and deployments. See https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#pod-v1-core
`automountServiceAccountToken` does not appear to have a default value, per:

- https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#pod-v1-core
- https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.13/#pod-v1-core
- manual inspection of running pods on a k8s 1.11.5 cluster
…late spec. Fix style on expansion of 'automount_service_account_token'.
Force-pushed from cdf0ec8 to 2ccb673
Looks good IMO. CI passes.
I cannot get this to work with:

```hcl
resource "kubernetes_service_account" "operator" {
  metadata {
    name      = "zalando-postgres-operator"
    namespace = "default"
  }

  # workaround for https://github.com/terraform-providers/terraform-provider-kubernetes/pull/261#issuecomment-481872856
  automount_service_account_token = true
}
```

When I spawn an operator that needs the service account it still fails. It is still spec'd with …

I have to manually add the volume/volume mount to make it work.

This isn't working b/c the … Should I be using the …?

EDIT -- ok, it does work when the operator spawns new pods -- those pods are now auto-mounting the service token. The operator itself, however, still appears to need to manually mount the volume in the deployment spec.
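For reference, a hedged sketch of what that manual mount in deployment form might look like (the name, labels, replica count, and image are all hypothetical; Terraform 0.12 and provider 1.x block syntax are assumed):

```hcl
resource "kubernetes_deployment" "operator" {
  metadata {
    name = "postgres-operator" # hypothetical name
  }

  spec {
    replicas = 1

    selector {
      match_labels = {
        app = "postgres-operator"
      }
    }

    template {
      metadata {
        labels = {
          app = "postgres-operator"
        }
      }

      spec {
        service_account_name = kubernetes_service_account.operator.metadata[0].name

        container {
          name  = "operator"
          image = "postgres-operator:latest" # hypothetical image

          # Explicit mount of the token secret into the operator pod.
          volume_mount {
            name       = "sa-token"
            mount_path = "/var/run/secrets/kubernetes.io/serviceaccount"
            read_only  = true
          }
        }

        volume {
          name = "sa-token"

          secret {
            secret_name = kubernetes_service_account.operator.default_secret_name
          }
        }
      }
    }
  }
}
```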
```go
// Setting this default to false prevents a perpetual diff caused by volume_mounts
// being mutated on the server side as Kubernetes automatically adds a mount
// for the service account token
podSpecFields["automount_service_account_token"].Default = false
```
FWIW - I resolved this issue in my fork of the provider:
sl1pm4t@e0e953c#diff-7424630baa3c87d787d88925b5f81cac
The same as #251 with the code formatting issue fixed.
closes #251