
Fix broken documentation links
juliocc committed Oct 15, 2021
1 parent ccf9119 commit 44b946d
Showing 6 changed files with 13 additions and 12 deletions.
@@ -49,7 +49,7 @@ exported:
* `runtime` - The runtime in which the function is running.
* `entry_point` - Name of a JavaScript function that will be executed when the Google Cloud Function is triggered.
* `trigger_http` - If function is triggered by HTTP, this boolean is set.
-* `event_trigger` - A source that fires events in response to a condition in another service. Structure is [documented below](#nested_trigger_http).
+* `event_trigger` - A source that fires events in response to a condition in another service. Structure is [documented below](#nested_event_trigger).
* `https_trigger_url` - If function is triggered by HTTP, trigger URL is set here.
* `ingress_settings` - Controls what traffic can reach the function.
* `labels` - A map of labels applied to this function.
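
For context, these attributes surface through the Cloud Functions data source. A minimal sketch, assuming the surrounding file documents `google_cloudfunctions_function` (the function name and region below are placeholders):

```hcl
# Look up an existing function and surface two of the attributes
# listed above; "my-function" and the region are placeholders.
data "google_cloudfunctions_function" "example" {
  name   = "my-function"
  region = "us-central1"
}

output "entry_point" {
  value = data.google_cloudfunctions_function.example.entry_point
}

output "https_trigger_url" {
  value = data.google_cloudfunctions_function.example.https_trigger_url
}
```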
@@ -101,7 +101,7 @@ The following arguments are supported:
* `scheduling` - The scheduling strategy to use. More details about
this configuration option are detailed below.

-* `service_account` - Service account to attach to the instance. Structure is [documented below](#nested_scheduling).
+* `service_account` - Service account to attach to the instance. Structure is [documented below](#nested_service_account).

* `tags` - Tags to attach to the instance.

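As a sketch of how the `scheduling`, `service_account`, and `tags` arguments fit together, assuming the surrounding file documents `google_compute_instance` (every value below is a placeholder):

```hcl
# Instance with explicit scheduling and service_account blocks;
# all names, the image, and the email address are placeholders.
resource "google_compute_instance" "example" {
  name         = "example-instance"
  machine_type = "e2-medium"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-10"
    }
  }

  network_interface {
    network = "default"
  }

  scheduling {
    # Preemptible instances cannot auto-restart, so these two go together.
    preemptible       = true
    automatic_restart = false
  }

  service_account {
    email  = "runner@my-project.iam.gserviceaccount.com"
    scopes = ["cloud-platform"]
  }

  tags = ["web"]
}
```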
@@ -293,6 +293,7 @@ in Terraform state, a `terraform destroy` or `terraform apply` that would delete
* `use_legacy_sql` - (Optional) Specifies whether to use BigQuery's legacy SQL for this view.
The default value is true. If set to false, the view will use BigQuery's standard SQL.

+The `materialized_view` block supports:

* `query` - (Required) A query whose result is persisted.

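A minimal sketch of the block on a `google_bigquery_table` resource (dataset, table, and query identifiers are placeholders):

```hcl
# Materialized view persisting an aggregate query; every identifier
# in the query string is a placeholder.
resource "google_bigquery_table" "example_mv" {
  dataset_id = "example_dataset"
  table_id   = "example_mv"

  materialized_view {
    query = "SELECT category, COUNT(*) AS n FROM `my-project.example_dataset.base` GROUP BY category"
  }
}
```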
@@ -154,7 +154,7 @@ The following arguments are supported:
this configuration option are [detailed below](#nested_scheduling).

* `scratch_disk` - (Optional) Scratch disks to attach to the instance. This can be
-specified multiple times for multiple scratch disks. Structure is [documented below](#nested_scheduling).
+specified multiple times for multiple scratch disks. Structure is [documented below](#nested_scratch_disk).

* `service_account` - (Optional) Service account to attach to the instance.
Structure is [documented below](#nested_service_account).
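
Because `scratch_disk` is repeatable, a sketch attaching two local SSDs; the `interface` field and its value come from the provider's scratch_disk schema rather than this hunk, and all names are placeholders:

```hcl
# Two scratch disks attached by repeating the block.
resource "google_compute_instance" "scratch_example" {
  name         = "scratch-example"
  machine_type = "n1-standard-4"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-10"
    }
  }

  network_interface {
    network = "default"
  }

  scratch_disk {
    interface = "NVME"
  }

  scratch_disk {
    interface = "NVME"
  }
}
```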
@@ -109,7 +109,7 @@ output "pyspark_status" {

* `scheduling.max_failures_total` - (Required) Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed.

<a name="pyspark_config"></a>The `pyspark_config` block supports:
<a name="nested_pyspark_config"></a>The `pyspark_config` block supports:

Submitting a pyspark job to the cluster. Below is an example configuration:

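The referenced example is elided from this hunk; as a stand-in, a minimal sketch with placeholder cluster name and script URI:

```hcl
# PySpark job with the per-package driver log levels described below;
# cluster name and the GCS script URI are placeholders.
resource "google_dataproc_job" "pyspark" {
  region = "us-central1"

  placement {
    cluster_name = "my-cluster"
  }

  pyspark_config {
    main_python_file_uri = "gs://my-bucket/jobs/hello.py"

    logging_config {
      driver_log_levels = {
        "root" = "INFO"
      }
    }
  }
}
```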
@@ -149,7 +149,7 @@ are generally applicable:

* `logging_config.driver_log_levels`- (Required) The per-package log levels for the driver. This may include 'root' package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'

<a name="spark_config"></a>The `spark_config` block supports:
<a name="nested_spark_config"></a>The `spark_config` block supports:

```hcl
# Submit a spark job to the cluster
@@ -192,7 +192,7 @@ resource "google_dataproc_job" "spark" {
* `logging_config.driver_log_levels`- (Required) The per-package log levels for the driver. This may include 'root' package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'


<a name="hadoop_config"></a>The `hadoop_config` block supports:
<a name="nested_hadoop_config"></a>The `hadoop_config` block supports:

```hcl
# Submit a hadoop job to the cluster
@@ -225,7 +225,7 @@ resource "google_dataproc_job" "hadoop" {

* `logging_config.driver_log_levels`- (Required) The per-package log levels for the driver. This may include 'root' package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'

<a name="hive_config"></a>The `hive_config` block supports:
<a name="nested_hive_config"></a>The `hive_config` block supports:

```hcl
# Submit a hive job to the cluster
@@ -255,7 +255,7 @@ resource "google_dataproc_job" "hive" {

* `jar_file_uris` - (Optional) HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.

<a name="pig_config"></a>The `pig_config` block supports:
<a name="nested_pig_config"></a>The `pig_config` block supports:

```hcl
# Submit a pig job to the cluster
@@ -290,7 +290,7 @@ resource "google_dataproc_job" "pig" {
* `logging_config.driver_log_levels`- (Required) The per-package log levels for the driver. This may include 'root' package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'


<a name="sparksql_config"></a>The `sparksql_config` block supports:
<a name="nested_sparksql_config"></a>The `sparksql_config` block supports:

```hcl
# Submit a spark SQL job to the cluster
@@ -125,9 +125,9 @@ The following arguments are supported:

<a name="nested_schedule"></a>The `schedule` block supports:

-* `schedule_start_date` - (Required) The first day the recurring transfer is scheduled to run. If `schedule_start_date` is in the past, the transfer will run for the first time on the following day. Structure [documented below](#nested_schedule_start_date).
+* `schedule_start_date` - (Required) The first day the recurring transfer is scheduled to run. If `schedule_start_date` is in the past, the transfer will run for the first time on the following day. Structure [documented below](#nested_schedule_start_end_date).

-* `schedule_end_date` - (Optional) The last day the recurring transfer will be run. If `schedule_end_date` is the same as `schedule_start_date`, the transfer will be executed only once. Structure [documented below](#nested_schedule_end_date).
+* `schedule_end_date` - (Optional) The last day the recurring transfer will be run. If `schedule_end_date` is the same as `schedule_start_date`, the transfer will be executed only once. Structure [documented below](#nested_schedule_start_end_date).

* `start_time_of_day` - (Optional) The time in UTC at which the transfer will be scheduled to start in a day. Transfers may start later than this time. If not specified, recurring and one-time transfers that are scheduled to run today will run immediately; recurring transfers that are scheduled to run on a future date will start at approximately midnight UTC on that date. Note that when configuring a transfer with the Cloud Platform Console, the transfer's start time in a day is specified in your local timezone. Structure [documented below](#nested_start_time_of_day).

@@ -193,7 +193,7 @@ The `azure_credentials` block supports:

* `sas_token` - (Required) Azure shared access signature. See [Grant limited access to Azure Storage resources using shared access signatures (SAS)](https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview).

<a name="nested_schedule_start_date"></a>The `schedule_start_date` and `schedule_end_date` blocks support:
<a name="nested_schedule_start_end_date"></a>The `schedule_start_date` and `schedule_end_date` blocks support:

* `year` - (Required) Year of date. Must be from 1 to 9999.

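To show where these blocks sit, a sketch of a transfer job using both date blocks (bucket names and dates are placeholders; `month` and `day` mirror the `year` field's Date structure):

```hcl
# Recurring transfer bounded by start and end dates; every value
# below is a placeholder.
resource "google_storage_transfer_job" "example" {
  description = "Nightly copy between buckets"

  transfer_spec {
    gcs_data_source {
      bucket_name = "source-bucket"
    }
    gcs_data_sink {
      bucket_name = "sink-bucket"
    }
  }

  schedule {
    schedule_start_date {
      year  = 2021
      month = 10
      day   = 16
    }

    schedule_end_date {
      year  = 2021
      month = 12
      day   = 31
    }

    # Start each run at 02:00 UTC.
    start_time_of_day {
      hours   = 2
      minutes = 0
      seconds = 0
      nanos   = 0
    }
  }
}
```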
