Description
While using Managed Node Groups with create_launch_template: true and passing bootstrap_env, DNS_CLUSTER_IP does not appear to be honored by the bootstrap.sh script. I can see the values being set in the user data in the AWS console.
Reproduction
Steps to reproduce the behavior:
1. Create a cluster with a cluster service CIDR of 192.168.0.0/24.
2. Check /etc/kubernetes/kubelet/kubelet-config.json on the host, or /etc/resolv.conf in a pod. The expected DNS entry is 192.168.0.10, but it will be 10.100.0.10.
Code Snippet to Reproduce
Set the following on the managed node group:
bootstrap_env = {
  DNS_CLUSTER_IP = "192.168.0.10"
}
Create the cluster with cluster_service_ipv4_cidr = "192.168.0.0/24". A fuller sketch of the configuration is shown below.
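For context, here is a minimal sketch of the configuration I am describing, assuming the 17.x input names used above (node_groups, create_launch_template, bootstrap_env, cluster_service_ipv4_cidr); the cluster name, cluster version, and the var.* references are placeholders, not my real values:

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "17.24.0"

  cluster_name              = "example"   # placeholder
  cluster_version           = "1.21"      # placeholder
  cluster_service_ipv4_cidr = "192.168.0.0/24"
  vpc_id                    = var.vpc_id  # placeholder
  subnets                   = var.subnets # placeholder

  node_groups = {
    default = {
      desired_capacity       = 2
      create_launch_template = true

      # Expected to surface as DNS_CLUSTER_IP in the rendered user data
      # and be picked up by bootstrap.sh
      bootstrap_env = {
        DNS_CLUSTER_IP = "192.168.0.10"
      }
    }
  }
}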
Expected behavior
The bootstrap.sh script should use DNS_CLUSTER_IP = 192.168.0.10 when populating /etc/kubernetes/kubelet/kubelet-config.json.
Actual behavior
Instead, the bootstrap.sh script sets the default DNS_CLUSTER_IP of 10.100.0.10 in kubelet-config.json, which is one of the default values in bootstrap.sh:
"clusterDNS": [
  "10.100.0.10"
],
Additional context
Terraform version: 1.0.8
Module version: 17.24.0
Maybe I have a similar problem: when I tried to change the container_runtime of a custom_ami, bootstrap_env wasn't honored.
I just needed to add export so the variables become global in scope, and now bootstrap.sh can read them, as sketched below.
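To illustrate that fix, here is a rough sketch of a node group (inside the same module block) that exports the values itself before bootstrap.sh runs; the pre_userdata hook, the ami_id placeholder, and the idea that bootstrap.sh falls back to these environment variables are assumptions based on the comment above, not something verified against the module:

node_groups = {
  custom = {
    ami_id                 = var.custom_ami_id # placeholder custom AMI
    create_launch_template = true

    # Hypothetical workaround: export the variables explicitly so the child
    # bootstrap.sh process inherits them, rather than relying on bootstrap_env
    # rendering them as shell-local assignments.
    pre_userdata = <<-EOT
      export DNS_CLUSTER_IP=192.168.0.10
      export CONTAINER_RUNTIME=containerd
    EOT
  }
}

If pre_userdata is not available on the node group, the same export lines could be placed in a fully custom user data template instead.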