diff --git a/website/docs/r/compute_autoscaler.html.markdown b/website/docs/r/compute_autoscaler.html.markdown
index 97b24158318..94addda87f5 100644
--- a/website/docs/r/compute_autoscaler.html.markdown
+++ b/website/docs/r/compute_autoscaler.html.markdown
@@ -15,7 +15,7 @@ when the need for resources is lower. You just define the autoscaling policy and
 the autoscaler performs automatic scaling based on the measured load. For more
 information, see [the official
 documentation](https://cloud.google.com/compute/docs/autoscaler/) and
-[API](https://cloud.google.com/compute/docs/autoscaler/v1beta2/autoscalers)
+[API](https://cloud.google.com/compute/docs/reference/latest/autoscalers)
 
 ## Example Usage
diff --git a/website/docs/r/compute_region_autoscaler.html.markdown b/website/docs/r/compute_region_autoscaler.html.markdown
new file mode 100644
index 00000000000..91979f73608
--- /dev/null
+++ b/website/docs/r/compute_region_autoscaler.html.markdown
@@ -0,0 +1,155 @@
+---
+layout: "google"
+page_title: "Google: google_compute_region_autoscaler"
+sidebar_current: "docs-google-compute-region-autoscaler"
+description: |-
+  Manages a Regional Autoscaler within GCE.
+---
+
+# google\_compute\_region\_autoscaler
+
+A Compute Engine Regional Autoscaler automatically adds or removes virtual machines from
+a managed instance group based on increases or decreases in load. This allows
+your applications to gracefully handle increases in traffic and reduces cost
+when the need for resources is lower. You just define the autoscaling policy and
+the autoscaler performs automatic scaling based on the measured load. For more
+information, see [the official
+documentation](https://cloud.google.com/compute/docs/autoscaler/) and
+[API](https://cloud.google.com/compute/docs/reference/latest/regionAutoscalers)
+
+
+## Example Usage
+
+```hcl
+resource "google_compute_instance_template" "foobar" {
+  name           = "foobar"
+  machine_type   = "n1-standard-1"
+  can_ip_forward = false
+
+  tags = ["foo", "bar"]
+
+  disk {
+    source_image = "debian-cloud/debian-8"
+  }
+
+  network_interface {
+    network = "default"
+  }
+
+  metadata {
+    foo = "bar"
+  }
+
+  service_account {
+    scopes = ["userinfo-email", "compute-ro", "storage-ro"]
+  }
+}
+
+resource "google_compute_target_pool" "foobar" {
+  name = "foobar"
+}
+
+resource "google_compute_region_instance_group_manager" "foobar" {
+  name   = "foobar"
+  region = "us-central1"
+
+  instance_template  = "${google_compute_instance_template.foobar.self_link}"
+  target_pools       = ["${google_compute_target_pool.foobar.self_link}"]
+  base_instance_name = "foobar"
+}
+
+resource "google_compute_region_autoscaler" "foobar" {
+  name   = "scaler"
+  region = "us-central1"
+  target = "${google_compute_region_instance_group_manager.foobar.self_link}"
+
+  autoscaling_policy = {
+    max_replicas    = 5
+    min_replicas    = 1
+    cooldown_period = 60
+
+    cpu_utilization {
+      target = 0.5
+    }
+  }
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `name` - (Required) The name of the autoscaler.
+
+* `target` - (Required) The full URL of the instance group manager whose size the
+  autoscaler controls.
+
+* `region` - (Required) The region of the target.
+
+* `autoscaling_policy` - (Required) The parameters of the autoscaling
+  algorithm. Structure is documented below.
+
+- - -
+
+* `description` - (Optional) A textual description of the
+  autoscaler.
+
+* `project` - (Optional) The project in which the resource belongs. If it
+  is not provided, the provider project is used.
+
+The `autoscaling_policy` block contains:
+
+* `max_replicas` - (Required) The group will never be larger than this.
+
+* `min_replicas` - (Required) The group will never be smaller than this.
+
+* `cooldown_period` - (Optional) Period to wait between changes. This should be
+  at least double the time your instances take to start up.
+
+* `cpu_utilization` - (Optional) A policy that scales when the group's average
+  CPU is above or below a given threshold. Structure is documented below.
+
+* `metric` - (Optional) A policy that scales according to Google Cloud
+  Monitoring metrics. Structure is documented below.
+
+* `load_balancing_utilization` - (Optional) A policy that scales when the load
+  reaches a proportion of a limit defined in the HTTP load balancer. Structure
+  is documented below.
+
+The `cpu_utilization` block contains:
+
+* `target` - The floating point threshold where CPU utilization should be. E.g.
+  for a target of 50%, specify 0.5.
+
+The `metric` block contains (more documentation
+[here](https://cloud.google.com/monitoring/api/metrics)):
+
+* `name` - The name of the Google Cloud Monitoring metric to follow, e.g.
+  `compute.googleapis.com/instance/network/received_bytes_count`.
+
+* `type` - Either "cumulative", "delta", or "gauge".
+
+* `target` - The desired metric value per instance. Must be a positive value.
+
+The `load_balancing_utilization` block contains:
+
+* `target` - The floating point threshold where load balancing utilization
+  should be. E.g. if the load balancer's `maxRatePerInstance` is 10 requests
+  per second (RPS), then setting this to 0.5 would cause the group to be scaled
+  such that each instance receives 5 RPS.
+
+
+## Attributes Reference
+
+In addition to the arguments listed above, the following computed attributes are
+exported:
+
+* `self_link` - The URL of the created resource.
+
+## Import
+
+Autoscalers can be imported using the `name`, e.g.
+
+```
+$ terraform import google_compute_region_autoscaler.foobar scaler
+```
diff --git a/website/google.erb b/website/google.erb
index f6fc6c5b28d..9bb5035a5be 100644
--- a/website/google.erb
+++ b/website/google.erb
@@ -77,6 +77,9 @@
     google_folder_iam_policy
+    google_organization_policy
     google_project
@@ -185,6 +188,10 @@
     google_compute_project_metadata_item
+    google_compute_region_autoscaler
     google_compute_region_backend_service
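The `metric` policy is documented above but not exercised by the page's example. As a minimal sketch, a metric-based regional autoscaler could look like the block below; it assumes the `foobar` region instance group manager from the Example Usage section, and the resource name, replica counts, and target value are illustrative only:

```hcl
# Sketch: scale on a Cloud Monitoring metric instead of CPU utilization.
# The metric name is the one cited in the argument reference; "delta" is used
# because received_bytes_count is a delta metric. The target (bytes per second
# per instance) is an arbitrary illustrative value.
resource "google_compute_region_autoscaler" "metric_scaler" {
  name   = "metric-scaler"
  region = "us-central1"
  target = "${google_compute_region_instance_group_manager.foobar.self_link}"

  autoscaling_policy = {
    max_replicas    = 10
    min_replicas    = 2
    cooldown_period = 60

    metric {
      name   = "compute.googleapis.com/instance/network/received_bytes_count"
      type   = "delta"
      target = 1000000
    }
  }
}
```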
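A similar sketch for `load_balancing_utilization`. This assumes the managed instance group is serving as a backend of an HTTP(S) load balancer with `maxRatePerInstance` configured, which the page's example (built around a target pool) does not show; names and values are illustrative:

```hcl
# Sketch: keep each instance at roughly half of the load balancer's configured
# maxRatePerInstance. Requires the group to be an HTTP(S) load balancer backend.
resource "google_compute_region_autoscaler" "lb_scaler" {
  name   = "lb-scaler"
  region = "us-central1"
  target = "${google_compute_region_instance_group_manager.foobar.self_link}"

  autoscaling_policy = {
    max_replicas    = 5
    min_replicas    = 1
    cooldown_period = 60

    load_balancing_utilization {
      target = 0.5
    }
  }
}
```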