Added field graceful_decommission_timeout to resource dataproc_cluster (#4078) (#7485)

* Added field graceful_decommission_timeout to resource dataproc_cluster

* formatting

* fixed formatting concerns and removed api.yaml edits since the resource is handwritten

Signed-off-by: Modular Magician <[email protected]>
modular-magician authored Oct 9, 2020
1 parent 3358ea2 commit 91c0de1
Showing 4 changed files with 32 additions and 8 deletions.
3 changes: 3 additions & 0 deletions .changelog/4078.txt
@@ -0,0 +1,3 @@
```release-note:enhancement
dataproc: Added `graceful_decommission_timeout` field to `google_dataproc_cluster` resource
```
13 changes: 12 additions & 1 deletion google/resource_dataproc_cluster.go
@@ -113,6 +113,13 @@ func resourceDataprocCluster() *schema.Resource {
			Description: `The region in which the cluster and associated nodes will be created in. Defaults to global.`,
		},

		"graceful_decommission_timeout": {
			Type:        schema.TypeString,
			Optional:    true,
			Default:     "0s",
			Description: `The timeout duration which allows graceful decommissioning when you change the number of worker nodes directly through a terraform apply.`,
		},

		"labels": {
			Type:     schema.TypeMap,
			Optional: true,
@@ -1125,9 +1132,13 @@ func resourceDataprocClusterUpdate(d *schema.ResourceData, meta interface{}) err
	}

	if len(updMask) > 0 {
		gracefulDecommissionTimeout := d.Get("graceful_decommission_timeout").(string)

		patch := config.NewDataprocBetaClient(userAgent).Projects.Regions.Clusters.Patch(
			project, region, clusterName, cluster)
		op, err := patch.UpdateMask(strings.Join(updMask, ",")).Do()
		patch.GracefulDecommissionTimeout(gracefulDecommissionTimeout)
		patch.UpdateMask(strings.Join(updMask, ","))
		op, err := patch.Do()
		if err != nil {
			return err
		}
5 changes: 3 additions & 2 deletions google/resource_dataproc_cluster_test.go
@@ -1139,6 +1139,7 @@ func testAccDataprocCluster_updatable(rnd string, w, p int) string {
resource "google_dataproc_cluster" "updatable" {
name = "tf-test-dproc-%s"
region = "us-central1"
graceful_decommission_timeout = "0.2s"
cluster_config {
master_config {
@@ -1462,7 +1463,7 @@ resource "google_dataproc_cluster" "basic" {
}
}
}
resource "google_dataproc_autoscaling_policy" "asp" {
policy_id = "tf-test-dataproc-policy-%s"
location = "us-central1"
@@ -1494,7 +1495,7 @@ resource "google_dataproc_cluster" "basic" {
}
}
}
resource "google_dataproc_autoscaling_policy" "asp" {
policy_id = "tf-test-dataproc-policy-%s"
location = "us-central1"
19 changes: 14 additions & 5 deletions website/docs/r/dataproc_cluster.html.markdown
@@ -32,6 +32,7 @@ resource "google_dataproc_cluster" "simplecluster" {
resource "google_dataproc_cluster" "mycluster" {
name = "mycluster"
region = "us-central1"
graceful_decommission_timeout = "120s"
labels = {
foo = "bar"
}
@@ -131,6 +132,14 @@ resource "google_dataproc_cluster" "accelerated_cluster" {
* `cluster_config` - (Optional) Allows you to configure various aspects of the cluster.
Structure defined below.

* `graceful_decommission_timeout` - (Optional) Allows graceful decommissioning when you change the number of worker nodes directly through a `terraform apply`.
Does not affect decommissioning performed by an autoscaling policy.
Graceful decommissioning allows removing nodes from the cluster without interrupting jobs in progress.
The timeout specifies how long to wait for jobs in progress to finish before forcefully removing nodes (and potentially interrupting jobs).
The default timeout is 0 (forceful decommission), and the maximum allowed timeout is 1 day (see the JSON representation of
[Duration](https://developers.google.com/protocol-buffers/docs/proto3#json)).
Only supported on Dataproc image versions 1.2 and higher.
For more context, see the [docs](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/patch#query-parameters).
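As a minimal usage sketch (the resource label, cluster name, and worker count below are illustrative placeholders, not part of this change), the timeout pairs naturally with a worker count so that a later resize decommissions nodes gracefully:

```hcl
resource "google_dataproc_cluster" "graceful_example" {
  name   = "graceful-example-cluster"
  region = "us-central1"

  # Wait up to two minutes for in-flight jobs to finish before
  # forcefully removing workers when the cluster is resized.
  graceful_decommission_timeout = "120s"

  cluster_config {
    worker_config {
      # Lowering this value in a later `terraform apply` triggers graceful
      # decommissioning, bounded by the timeout above.
      num_instances = 2
    }
  }
}
```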
- - -

The `cluster_config` block supports:
@@ -240,10 +249,10 @@ The `cluster_config.gce_cluster_config` block supports:
* `tags` - (Optional) The list of instance tags applied to instances in the cluster.
Tags are used to identify valid sources or targets for network firewalls.

* `internal_ip_only` - (Optional) By default, clusters are not restricted to internal IP addresses,
and will have ephemeral external IP addresses assigned to each instance. If set to true, all
instances in the cluster will only have internal IP addresses. Note: Private Google Access
(also known as `privateIpGoogleAccess`) must be enabled on the subnetwork that the cluster
will be launched in.

* `metadata` - (Optional) A map of the Compute Engine metadata entries to add to all instances
@@ -436,7 +445,7 @@ cluster_config {
a cluster. For a list of valid properties please see
[Cluster properties](https://cloud.google.com/dataproc/docs/concepts/cluster-properties)

* `optional_components` - (Optional) The set of optional components to activate on the cluster.
Accepted values are:
* ANACONDA
* DRUID
