From feee345c9f50c1af3ff919d2635fd3fcca1b7bbf Mon Sep 17 00:00:00 2001
From: The Magician
Date: Fri, 11 Dec 2020 10:00:54 -0800
Subject: [PATCH] add tests and docs for using custom service account in
 dataflow flex template job (#4260) (#7999)

Signed-off-by: Modular Magician
---
 .changelog/4260.txt                                 |  3 +++
 .../docs/r/dataflow_flex_template_job.html.markdown | 13 ++++++++-----
 2 files changed, 11 insertions(+), 5 deletions(-)
 create mode 100644 .changelog/4260.txt

diff --git a/.changelog/4260.txt b/.changelog/4260.txt
new file mode 100644
index 00000000000..8f09d7da871
--- /dev/null
+++ b/.changelog/4260.txt
@@ -0,0 +1,3 @@
+```release-note:enhancement
+dataflow: added documentation about using `parameters` for custom service account and other pipeline options to `google_dataflow_flex_template_job`
+```
diff --git a/website/docs/r/dataflow_flex_template_job.html.markdown b/website/docs/r/dataflow_flex_template_job.html.markdown
index f209114c3ec..598dda663d3 100644
--- a/website/docs/r/dataflow_flex_template_job.html.markdown
+++ b/website/docs/r/dataflow_flex_template_job.html.markdown
@@ -29,14 +29,14 @@ resource "google_dataflow_flex_template_job" "big_data_job" {
 ## Note on "destroy" / "apply"
 
 There are many types of Dataflow jobs. Some Dataflow jobs run constantly,
-getting new data from (e.g.) a GCS bucket, and outputting data continuously. 
+getting new data from (e.g.) a GCS bucket, and outputting data continuously.
 Some jobs process a set amount of data then terminate. All jobs can fail while
 running due to programming errors or other issues. In this way, Dataflow jobs
 are different from most other Terraform / Google resources.
 
 The Dataflow resource is considered 'existing' while it is in a nonterminal
 state. If it reaches a terminal state (e.g. 'FAILED', 'COMPLETE',
-'CANCELLED'), it will be recreated on the next 'apply'. This is as expected for 
+'CANCELLED'), it will be recreated on the next 'apply'. This is as expected for
 jobs which run continuously, but may surprise users who use this resource for
 other kinds of Dataflow jobs.
 
@@ -60,15 +60,16 @@ Template.
 - - -
 
 * `parameters` - (Optional) Key/Value pairs to be passed to the Dataflow job (as
-used in the template).
+used in the template). Additional [pipeline options](https://cloud.google.com/dataflow/docs/guides/specifying-exec-params#setting-other-cloud-dataflow-pipeline-options)
+such as `serviceAccount`, `workerMachineType`, etc can be specified here.
 
 * `labels` - (Optional) User labels to be specified for the job. Keys and values
 should follow the restrictions specified in the [labeling restrictions](https://cloud.google.com/compute/docs/labeling-resources#restrictions)
 page. **Note**: This field is marked as deprecated in Terraform as the API does not currently
-support adding labels. 
+support adding labels.
 **NOTE**: Google-provided Dataflow templates often provide default labels that
 begin with `goog-dataflow-provided`. Unless explicitly set in config, these
-labels will be ignored to prevent diffs on re-apply. 
+labels will be ignored to prevent diffs on re-apply.
 
 * `on_delete` - (Optional) One of "drain" or "cancel". Specifies behavior of
 deletion during `terraform destroy`. See above note.
@@ -76,6 +77,8 @@ deletion during `terraform destroy`. See above note.
 * `project` - (Optional) The project in which the resource belongs. If it is not
 provided, the provider project is used.
 
+* `region` - (Optional) The region in which the created job should run.
+
 ## Attributes Reference
 In addition to the arguments listed above, the following computed attributes are
 exported:
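
For context, here is a minimal sketch of the usage this patch documents: routing a custom service account and other pipeline options through the `parameters` map of `google_dataflow_flex_template_job`. The project, bucket, template path, machine type, and service account email below are hypothetical placeholders, not values taken from this patch:

```hcl
# Sketch of the documented behavior: pipeline options such as serviceAccount
# and workerMachineType are passed through the opaque `parameters` map.
# All concrete values (bucket, template path, SA email) are hypothetical.
resource "google_dataflow_flex_template_job" "big_data_job" {
  provider                = google-beta # resource lives in the google-beta provider
  name                    = "dataflow-flextemplates-job"
  region                  = "us-central1" # optional `region` argument documented in this patch
  on_delete               = "drain"       # "drain" or "cancel" on `terraform destroy`
  container_spec_gcs_path = "gs://my-bucket/templates/template.json"

  parameters = {
    # Template-defined parameters:
    inputSubscription = "messages"
    # Additional pipeline options, per the docs added above:
    serviceAccount    = "my-worker-sa@my-project.iam.gserviceaccount.com"
    workerMachineType = "n1-standard-2"
  }
}
```

Because Terraform treats `parameters` as an opaque key/value map passed to the job launch, pipeline options can ride along with the template's own parameters, which is what makes the custom service account setup above possible without a dedicated resource argument.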