
container: fix updates for node_config.gcfs_config and make optional #11717

Merged
13 changes: 7 additions & 6 deletions mmv1/third_party/terraform/services/container/node_config.go.erb
@@ -101,12 +101,13 @@ func schemaLoggingVariant() *schema.Schema {
}

func schemaGcfsConfig() *schema.Schema {
	return &schema.Schema{
		Type: schema.TypeList,
Member commented:
I'm inclined to make the block optional + computed while keeping the subfield enabled required, to prevent users from sending empty gcfs_config {} blocks. WDYT?
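For illustration, a hypothetical config (not from this PR) showing the kind of empty block this schema shape would reject:

```hcl
resource "google_container_cluster" "example" {
  name               = "example-cluster"
  location           = "us-central1-f"
  initial_node_count = 1

  node_config {
    # With the nested "enabled" field required, this empty block fails
    # plan-time validation instead of being sent to the API.
    gcfs_config {}
  }
}
```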

@wyardley (Contributor, Author) commented on Sep 17, 2024:

I can give it a shot and see how the tests (and functional testing) look; I'm Ok with it as long as everything still works.

That said, from what I understand, empty blocks are legal Terraform, and we've seen cases where they occur in practice (for example in terraform-google-modules/terraform-google-kubernetes-engine, where heavy templating and dynamic blocks sometimes produce them). See, for example, the sample code in hashicorp/terraform-provider-google#19428 with an empty node_pool_defaults.node_config_defaults block. I think requiring at least one field in a nested block used to be common practice, but there are now better safeguards. I doubt other fields will be added to this block, but #11572 is an example of a field that was made required more or less arbitrarily at one point in the past and ended up causing issues down the line.

FWIW, in the functional testing I did, I'm pretty sure I tested the behavior with that block empty and other scenarios, and didn't see any issues with it, though of course it's possible I missed something.

You would probably know better, but what effect does making a top level field Computed do exactly, since it doesn't have any value of its own? Does it just get inherited by everything underneath?
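As a hypothetical sketch of the templating pattern mentioned above (variable and attribute names are illustrative, not from this PR), a dynamic block can emit a gcfs_config block whose content is conditionally empty:

```hcl
# Hypothetical module-style pattern: when the variable carries no
# attributes, the generated block ends up effectively empty.
dynamic "gcfs_config" {
  for_each = var.gcfs_config != null ? [var.gcfs_config] : []
  content {
    enabled = lookup(gcfs_config.value, "enabled", null)
  }
}
```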

@wyardley (Contributor, Author) commented on Sep 17, 2024:

Basically like this, is what you're thinking?

diff --git a/mmv1/third_party/terraform/services/container/node_config.go.erb b/mmv1/third_party/terraform/services/container/node_config.go.erb
index 94e1e04c0..df45e8d50 100644
--- a/mmv1/third_party/terraform/services/container/node_config.go.erb
+++ b/mmv1/third_party/terraform/services/container/node_config.go.erb
@@ -104,14 +104,14 @@ func schemaGcfsConfig() *schema.Schema {
 	return &schema.Schema{
 		Type:        schema.TypeList,
 		Optional:    true,
+		Computed:    true,
 		MaxItems:    1,
 		Description: `GCFS configuration for this node.`,
 		Elem:        &schema.Resource{
 			Schema: map[string]*schema.Schema{
 				"enabled": {
 					Type:        schema.TypeBool,
-					Optional:    true,
-					Computed:    true,
+					Required:    true,
 					Description: `Whether or not GCFS is enabled`,
 				},
 			},
diff --git a/mmv1/third_party/terraform/website/docs/r/container_cluster.html.markdown b/mmv1/third_party/terraform/website/docs/r/container_cluster.html.markdown
index bf4ab7772..df2af254e 100644
--- a/mmv1/third_party/terraform/website/docs/r/container_cluster.html.markdown
+++ b/mmv1/third_party/terraform/website/docs/r/container_cluster.html.markdown
@@ -1038,7 +1038,7 @@ sole_tenant_config {
 
 <a name="nested_gcfs_config"></a>The `gcfs_config` block supports:
 
-* `enabled` (Optional) - Whether or not the Google Container Filesystem (GCFS) is enabled
+* `enabled` (Required) - Whether or not the Google Container Filesystem (GCFS) is enabled
 
 <a name="nested_gvnic"></a>The `gvnic` block supports:

@wyardley (Contributor, Author) commented on Sep 17, 2024:

I can validate that, at a basic level, it has the effect of preventing enabled from being skipped. I can also create the cluster without it set, add it later, and even remove the block after applying (in which case it's a no-op; Terraform will keep the existing computed value but won't try to set it).

I'll finish running the tests locally and push this change up as a new commit if it passes, though I'd appreciate it if you're able to expedite any additional tests / review on your end if we go this way (especially since I've got another change in the same area of the codebase that may end up conflicting when one or the other goes in).

│ Error: Missing required argument
│ 
│   on cluster.tf line 7, in resource "google_container_cluster" "test_gcfs_config":
│    7:     gcfs_config {
│ 
│ The argument "enabled" is required, but no definition was found.

Tests pass, at least for TestAccContainerCluster_withNodeConfigGcfsConfig and TestAccContainerNodePool_gcfsConfig:

--- PASS: TestAccContainerCluster_withNodeConfigGcfsConfig (931.54s)
PASS
ok  	github.com/hashicorp/terraform-provider-google/google/services/container	932.504s
--- PASS: TestAccContainerNodePool_gcfsConfig (1003.20s)
PASS
ok  	github.com/hashicorp/terraform-provider-google/google/services/container	1004.177s
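The add-then-remove lifecycle described above can be sketched as follows (a hypothetical walkthrough, not config from this PR):

```hcl
# Step 1: cluster created with no gcfs_config block; with Optional +
# Computed on the list, the provider stores the API's value in state.

# Step 2: the block is added explicitly and applied.
node_config {
  gcfs_config {
    enabled = true
  }
}

# Step 3: the block is removed again; because the list is Computed,
# Terraform keeps the prior value in state and plans no change (a no-op).
```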

@wyardley (Contributor, Author) commented:

Ok - I pushed that update; it wouldn't be too hard to adjust later if it turns out it does need to be optional after all.

We can revert 3b6d99f if it turns out the other behavior is preferred after all.

		Optional:    true,
		Computed:    true,
		MaxItems:    1,
		Description: `GCFS configuration for this node.`,
		Elem: &schema.Resource{
			Schema: map[string]*schema.Schema{
				"enabled": {
					Type: schema.TypeBool,
@@ -115,7 +116,7 @@ func schemaGcfsConfig() *schema.Schema {
				},
			},
		},
	}
}

func schemaNodeConfig() *schema.Schema {

@@ -3883,6 +3883,55 @@ func resourceContainerClusterUpdate(d *schema.ResourceData, meta interface{}) error {
log.Printf("[INFO] GKE cluster %s: default-pool setting for insecure_kubelet_readonly_port_enabled updated to %s", d.Id(), it)
}
}

if d.HasChange("node_config.0.gcfs_config") {

defaultPool := "default-pool"

timeout := d.Timeout(schema.TimeoutCreate)

nodePoolInfo, err := extractNodePoolInformationFromCluster(d, config, clusterName)
if err != nil {
return err
}

// Acquire write-lock on nodepool.
npLockKey := nodePoolInfo.nodePoolLockKey(defaultPool)

gcfsEnabled := d.Get("node_config.0.gcfs_config.0.enabled").(bool)

// While we read the value from the cluster-level
// node_config.gcfs_config field, the actual setting that needs to be
// updated is on the default node pool.
req := &container.UpdateNodePoolRequest{
Name: defaultPool,
GcfsConfig: &container.GcfsConfig{
Enabled: gcfsEnabled,
},
}

updateF := func() error {
clusterNodePoolsUpdateCall := config.NewContainerClient(userAgent).Projects.Locations.Clusters.NodePools.Update(nodePoolInfo.fullyQualifiedName(defaultPool), req)
if config.UserProjectOverride {
clusterNodePoolsUpdateCall.Header().Add("X-Goog-User-Project", nodePoolInfo.project)
}
op, err := clusterNodePoolsUpdateCall.Do()
if err != nil {
return err
}

// Wait until it's updated
return ContainerOperationWait(config, op, nodePoolInfo.project, nodePoolInfo.location,
"updating GKE node pool gcfs_config", userAgent, timeout)
}

if err := retryWhileIncompatibleOperation(timeout, npLockKey, updateF); err != nil {
return err
}

log.Printf("[INFO] GKE cluster %s: %s setting for gcfs_config updated to %t", d.Id(), defaultPool, gcfsEnabled)
}

}

if d.HasChange("notification_config") {

@@ -1536,6 +1536,49 @@ func TestAccContainerCluster_withNodeConfig(t *testing.T) {
})
}

func TestAccContainerCluster_withNodeConfigGcfsConfig(t *testing.T) {
t.Parallel()
clusterName := fmt.Sprintf("tf-test-cluster-%s", acctest.RandString(t, 10))
networkName := acctest.BootstrapSharedTestNetwork(t, "gke-cluster")
subnetworkName := acctest.BootstrapSubnet(t, "gke-cluster", networkName)

acctest.VcrTest(t, resource.TestCase{
PreCheck: func() { acctest.AccTestPreCheck(t) },
ProtoV5ProviderFactories: acctest.ProtoV5ProviderFactories(t),
CheckDestroy: testAccCheckContainerClusterDestroyProducer(t),
Steps: []resource.TestStep{
{
Config: testAccContainerCluster_withNodeConfigGcfsConfig(clusterName, networkName, subnetworkName, false),
ConfigPlanChecks: resource.ConfigPlanChecks{
PreApply: []plancheck.PlanCheck{
acctest.ExpectNoDelete(),
},
},
},
{
ResourceName: "google_container_cluster.with_node_config_gcfs_config",
ImportState: true,
ImportStateVerify: true,
ImportStateVerifyIgnore: []string{"deletion_protection"},
},
{
Config: testAccContainerCluster_withNodeConfigGcfsConfig(clusterName, networkName, subnetworkName, true),
ConfigPlanChecks: resource.ConfigPlanChecks{
PreApply: []plancheck.PlanCheck{
acctest.ExpectNoDelete(),
},
},
},
{
ResourceName: "google_container_cluster.with_node_config_gcfs_config",
ImportState: true,
ImportStateVerify: true,
ImportStateVerifyIgnore: []string{"deletion_protection"},
},
},
})
}

// Note: Updates for these are currently known to be broken (b/361634104), and
// so are not tested here.
// They can probably be made similar to, or consolidated with,
@@ -6693,6 +6736,26 @@ resource "google_container_cluster" "with_node_config" {
`, clusterName, networkName, subnetworkName)
}

func testAccContainerCluster_withNodeConfigGcfsConfig(clusterName, networkName, subnetworkName string, enabled bool) string {
return fmt.Sprintf(`
resource "google_container_cluster" "with_node_config_gcfs_config" {
name = "%s"
location = "us-central1-f"
initial_node_count = 1

node_config {
gcfs_config {
enabled = %t
}
}

deletion_protection = false
network = "%s"
subnetwork = "%s"
}
`, clusterName, enabled, networkName, subnetworkName)
}

func testAccContainerCluster_withNodeConfigKubeletConfigSettings(clusterName, networkName, subnetworkName string) string {
return fmt.Sprintf(`
resource "google_container_cluster" "with_node_config_kubelet_config_settings" {

@@ -1787,6 +1787,39 @@ func nodePoolUpdate(d *schema.ResourceData, meta interface{}, nodePoolInfo *Node
log.Printf("[INFO] Updated workload_metadata_config for node pool %s", name)
}

if d.HasChange(prefix + "node_config.0.gcfs_config") {
gcfsEnabled := d.Get(prefix + "node_config.0.gcfs_config.0.enabled").(bool)
req := &container.UpdateNodePoolRequest{
NodePoolId: name,
GcfsConfig: &container.GcfsConfig{
Enabled: gcfsEnabled,
},
}
updateF := func() error {
clusterNodePoolsUpdateCall := config.NewContainerClient(userAgent).Projects.Locations.Clusters.NodePools.Update(nodePoolInfo.fullyQualifiedName(name), req)
if config.UserProjectOverride {
clusterNodePoolsUpdateCall.Header().Add("X-Goog-User-Project", nodePoolInfo.project)
}
op, err := clusterNodePoolsUpdateCall.Do()
if err != nil {
return err
}

// Wait until it's updated
return ContainerOperationWait(config, op,
nodePoolInfo.project,
nodePoolInfo.location,
"updating GKE node pool gcfs_config", userAgent,
timeout)
}

if err := retryWhileIncompatibleOperation(timeout, npLockKey, updateF); err != nil {
return err
}

log.Printf("[INFO] Updated gcfs_config for node pool %s", name)
}

if d.HasChange(prefix + "node_config.0.kubelet_config") {
req := &container.UpdateNodePoolRequest{
NodePoolId: name,

@@ -1675,9 +1675,9 @@ resource "google_container_node_pool" "np" {
node_config {
machine_type = "n1-standard-8"
image_type = "COS_CONTAINERD"
gcfs_config {
  enabled = true
}
secondary_boot_disks {
disk_image = ""
mode = "CONTAINER_IMAGE_CACHE"
Expand All @@ -1694,9 +1694,9 @@ resource "google_container_node_pool" "np-no-mode" {
node_config {
machine_type = "n1-standard-8"
image_type = "COS_CONTAINERD"
gcfs_config {
  enabled = true
}
secondary_boot_disks {
disk_image = ""
}
@@ -1720,10 +1720,14 @@ func TestAccContainerNodePool_gcfsConfig(t *testing.T) {
Steps: []resource.TestStep{
{
Config: testAccContainerNodePool_gcfsConfig(cluster, np, networkName, subnetworkName, true),
-				Check: resource.ComposeTestCheckFunc(
-					resource.TestCheckResourceAttr("google_container_node_pool.np",
-						"node_config.0.gcfs_config.0.enabled", "true"),
-				),
},
{
ResourceName: "google_container_node_pool.np",
ImportState: true,
ImportStateVerify: true,
},
{
Config: testAccContainerNodePool_gcfsConfig(cluster, np, networkName, subnetworkName, false),
},
{
ResourceName: "google_container_node_pool.np",