Community Note
Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
Please do not leave "+1" or "me too" comments; they generate extra noise for issue followers and do not help prioritize the request.
If you are interested in working on this issue or have submitted a pull request, please leave a comment.
If an issue is assigned to the "modular-magician" user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to "hashibot", a community member has claimed the issue already.
Terraform Version
Affected Resource(s)
Terraform Configuration Files
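The full configuration is not reproduced here, but a minimal sketch of the kind of setup described (a regional cluster named my-cluster in us-east1 with a separately managed node pool of 4-vCPU nodes) would look roughly like the following. The project ID, pool name, machine type, and node count are illustrative assumptions rather than the exact values used.

```hcl
# Minimal sketch, not the actual configuration from this report:
# a regional GKE cluster with a separately managed node pool of 4-vCPU nodes.
provider "google" {
  project = "my-project" # placeholder project ID
  region  = "us-east1"
}

resource "google_container_cluster" "my_cluster" {
  name     = "my-cluster"
  location = "us-east1"

  # Use a separately managed node pool instead of the default one.
  remove_default_node_pool = true
  initial_node_count       = 1
}

resource "google_container_node_pool" "primary" {
  name       = "primary-pool" # assumed pool name
  location   = "us-east1"
  cluster    = google_container_cluster.my_cluster.name
  node_count = 4 # per zone; illustrative

  node_config {
    machine_type = "n1-standard-4" # assumed 4-vCPU machine type
  }
}
```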
Debug Output
https://gist.github.com/nathanwilk7/cffc7658cd96a2e319476d1e10e326f4
Expected Behavior
After creating a Kubernetes cluster and attaching a node pool whose nodes have 4 vCPUs each, a pod requesting 4 vCPUs should be scheduled onto that node pool and run.
Actual Behavior
After creating a Kubernetes cluster and attaching a node pool whose nodes have 4 vCPUs each, a pod requesting 4 vCPUs fails to schedule with this error:
FailedScheduling 1s (x5 over 8s) default-scheduler 0/12 nodes are available: 12 Insufficient cpu.
This suggests to me that there is an issue with how the node pool was associated with the cluster. The node pool does exist and appears to have been created without issue. It's possible this is a Kubernetes issue rather than a Terraform issue, but we were able to create the same setup successfully via the Google Cloud web UI. Therefore, I believe this is either a mistake in my configuration or a bug.
Steps to Reproduce
terraform init
terraform apply
Create a file called pod.yaml with the content below (see the example sketched after these steps)
kubectl apply -f pod.yaml
kubectl describe po/nginx
gcloud container node-pools list --cluster my-cluster --region us-east1
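The exact contents of pod.yaml are not reproduced here; a minimal manifest consistent with the description (a pod named nginx requesting 4 CPUs) would look roughly like this, with the container image being an illustrative assumption:

```yaml
# Hypothetical pod.yaml: an nginx pod requesting 4 CPUs,
# not the exact file used in this report.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx # assumed image
      resources:
        requests:
          cpu: "4"
```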
Important Factoids
References
I've also tried this config but had similar results.