MM updates (#280)
* fix typo (#3369) (#413)

Signed-off-by: Modular Magician <[email protected]>

* Fix check for serial port disabled (#3695) (#415)

* Fix check for serial port disabled

* fix check for block-project-ssh-keys

Signed-off-by: Modular Magician <[email protected]>

* Add mode enum and scale down controls for Compute AutoScaler (#3693) (#416)

* Add mode enum and scale down controls for Compute AutoScaler

* Add mode enum for Compute AutoScaler in the correct API block

* Add defaults for mode and default_from_api for scale down controls

* Add tests for scale_down_controls and set at_least_one_of for it

Signed-off-by: Modular Magician <[email protected]>

* added support for shielded nodes in container (#3639) (#417)

Signed-off-by: Modular Magician <[email protected]>

* fix memcache_parameters (#3733) (#418)

* change memcacheParameters to parameters, add parameters to basic test

* make switching memcacheParameters to parameters not a breaking-change

Signed-off-by: Modular Magician <[email protected]>

* add tiers and nfs_export_options (#3766) (#419)

* add tiers and nfs_export_options

* update docs, make full test beta

Signed-off-by: Modular Magician <[email protected]>

* Add skip enum value generation (#3767) (#420)

* Add skip enum value generation

* Fix default

* Fix reader

* Fix line spacing on enum values

Signed-off-by: Modular Magician <[email protected]>

* Backend service support for internet NEG backend (#3782) (#421)

* Add ability to set global network endpoint group as backend for backend service. Make health_checks optional

* PR fixes

* Add encoder to remove max_utilization when neg backend

* Check for global NEG in group to remove max_utilization

* Add another nil check

* Spacing

* Docs fix

Signed-off-by: Modular Magician <[email protected]>

* add firewall logging controls (#3780) (#422)

* add firewall logging controls

* make backward compatible

* check enable_logging in expand

* update docs

* update expand logic to fix failing test

Signed-off-by: Modular Magician <[email protected]>

* Fix colon in doc notes (#3796) (#423)

Signed-off-by: Modular Magician <[email protected]>

* Add persistence_iam_identity to Redis Instance (#3805) (#424)

Signed-off-by: Modular Magician <[email protected]>

* Org Security Policies (Hierarchical Firewalls) (#3626) (#425)

Co-authored-by: Dana Hoffman <[email protected]>
Signed-off-by: Modular Magician <[email protected]>

Co-authored-by: Dana Hoffman <[email protected]>

* Adding Missing Cloud Build Attributes (#3627) (#426)

Signed-off-by: Modular Magician <[email protected]>

* Add additional fields to Memcached Instance (#3821) (#427)

Signed-off-by: Modular Magician <[email protected]>

* Convert inboundServices to an enum. (#3820) (#428)

Signed-off-by: Modular Magician <[email protected]>

* add source_image and source_snapshot to google_compute_image (#3799) (#429)

* add source_image to google_compute_image

* add source_snapshot to google_compute_image

* PR comment changes

Signed-off-by: Modular Magician <[email protected]>

* Collection fixes for release (#3831) (#430)

Signed-off-by: Modular Magician <[email protected]>

* Add new field filter to pubsub. (#3759) (#431)

* Add new field filter to pubsub.

Fixes: hashicorp/terraform-provider-google#6727

* Fixed filter name, it was improperly set.

* add filter key to pubsub subscription unit test

* spaces not tabs!

* hardcode filter value in test

* revert remove escaped quotes

Co-authored-by: Tim O'Connell <[email protected]>
Signed-off-by: Modular Magician <[email protected]>

Co-authored-by: Tim O'Connell <[email protected]>

* Add archive class to gcs (#3867) (#432)

Signed-off-by: Modular Magician <[email protected]>

* Add support for gRPC healthchecks (#3825) (#433)

Signed-off-by: Modular Magician <[email protected]>

* Add enableMessageOrdering to Pub/Sub Subscription (#3872) (#434)

Add enableMessageOrdering to Pub/Sub Subscription

Signed-off-by: Modular Magician <[email protected]>

* Specify possible values for arg only once (#3874) (#435)

Signed-off-by: ialidzhikov <[email protected]>
Signed-off-by: Modular Magician <[email protected]>

* use {product}.googleapis.com endpoints (#3755) (#436)

* use {product}.googleapis.com endpoints

* use actual correct urls

* fix zone data source test

* fix network peering tests

* possibly fix deleting default network

Signed-off-by: Modular Magician <[email protected]>

* Add vpcAccessConnector property on google_app_engine_standard_app_version terraform resource (#3789) (#437)

* add vpc access connector property in standard app version resource

* add Gemfile.lock and .ruby-version

* modify .ruby-version

* Update Gemfile.lock

* Update Gemfile.lock

* Update Gemfile.lock

* add test for vpcAccessConnector field

* change casing of test field

* add comma

* format app engine connector test

* make vpc_access_connector an object

* add vpc access connector resource to test

* pass connector id output property to app engine resource instead of hardcoding connector id

Signed-off-by: Modular Magician <[email protected]>

* retrypolicy attribute added (#3843) (#438)

* retrypolicy attribute added

* test case updated

Signed-off-by: Modular Magician <[email protected]>

* add discovery endpoint (#3891) (#439)

Signed-off-by: Modular Magician <[email protected]>

* Advanced logging config options in google_compute_subnetwork (#3603) (#440)

Co-authored-by: Dana Hoffman <[email protected]>
Signed-off-by: Modular Magician <[email protected]>

Co-authored-by: Dana Hoffman <[email protected]>

* Add Erase Windows VSS support to compute disk (#3898) (#441)

Co-authored-by: Cameron Thornton <[email protected]>
Signed-off-by: Modular Magician <[email protected]>

Co-authored-by: Cameron Thornton <[email protected]>

* Add Snapshot location to compute snapshot (#3896) (#442)

* added storage locations

* add storage locations to field

* tweak cmek logic

* fix the decoder logic and cleanup whitespaces

* remove duplicate entry

Signed-off-by: Modular Magician <[email protected]>

* Added missing 'all' option for protocol firewall rule (#3962) (#443)

Signed-off-by: Modular Magician <[email protected]>

* Revert `eraseWindowsVssSignature` field and test (#3959) (#444)

Signed-off-by: Modular Magician <[email protected]>

* Added support GRPC for google_compute_(region)_backend_service.protocol (#3973) (#445)

Co-authored-by: Edward Sun <[email protected]>
Signed-off-by: Modular Magician <[email protected]>

Co-authored-by: Edward Sun <[email protected]>

* Added properties of options & artifacts on google_cloudbuild_trigger (#3944) (#446)

* added options & artifacts to cloudbuild trigger

* updated with minor changes and added more options in test

* a test adding update behavior for multiple optional fields

Co-authored-by: Edward Sun <[email protected]>
Signed-off-by: Modular Magician <[email protected]>

Co-authored-by: Edward Sun <[email protected]>

* products/container: Add datapath provider field (#3956) (#447)

Signed-off-by: Modular Magician <[email protected]>

* Add SEV_CAPABLE option to google_compute_image (#3994) (#448)

Signed-off-by: Modular Magician <[email protected]>

* Add network peerings for inspec (#4002) (#449)

Signed-off-by: Modular Magician <[email protected]>

* Update docs for pubsub targets in cloud scheduler (#4008) (#450)

Signed-off-by: Modular Magician <[email protected]>

Co-authored-by: The Magician <[email protected]>
Co-authored-by: Dana Hoffman <[email protected]>
Co-authored-by: Tim O'Connell <[email protected]>
Co-authored-by: Cameron Thornton <[email protected]>
Co-authored-by: Edward Sun <[email protected]>
Co-authored-by: Stuart Paterson <[email protected]>
7 people authored Sep 29, 2020
1 parent 961f084 commit 4b16e6b
Showing 178 changed files with 1,574 additions and 227 deletions.
10 changes: 7 additions & 3 deletions docs/resources/google_appengine_standard_app_version.md
@@ -22,13 +22,17 @@ Properties that can be accessed from the `google_appengine_standard_app_version`

* `name`: Full path to the Version resource in the API. Example, "v1".

* `version_id`: Relative name of the version within the service. For example, `v1`. Version names can contain only lowercase letters, numbers, or hyphens. Reserved names: "default", "latest", and any name with the prefix "ah-".

* `runtime`: Desired runtime. Example python27.

* `threadsafe`: Whether multiple requests can be dispatched to this version at once.

* `inbound_services`: Before an application can receive email or XMPP messages, the application must be configured to enable the service.
* `vpc_access_connector`: Enables VPC connectivity for standard apps.

* `name`: Full Serverless VPC Access Connector name e.g. /projects/my-project/locations/us-central1/connectors/c1.

* `inbound_services`: A list of the types of messages that this application is able to receive.

* `instance_class`: Instance class that is used to run this version. Valid values are AutomaticScaling: F1, F2, F4, F4_1G; BasicScaling or ManualScaling: B1, B2, B4, B4_1G, B8. Defaults to F1 for AutomaticScaling and B2 for ManualScaling and BasicScaling. If no scaling is specified, AutomaticScaling is chosen.

@@ -62,7 +66,7 @@ Properties that can be accessed from the `google_appengine_standard_app_version`

* `manual_scaling`: A service with manual scaling runs continuously, allowing you to perform complex initialization and rely on the state of its memory over time.

* `instances`: Number of instances to assign to the service at the start. **Note:** When managing the number of instances at runtime through the App Engine Admin API or the (now deprecated) Python 2 Modules API set_num_instances() you must use `lifecycle.ignore_changes = ["manual_scaling"[0].instances]` to prevent drift detection.
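
As a rough illustration, a control exercising the new `vpc_access_connector` property above might look like the following sketch (the project, service, and version identifiers are hypothetical, and the constructor parameter names are assumed from the generated resource):

```ruby
# Hypothetical identifiers; constructor parameter names are assumed.
describe google_appengine_standard_app_version(project: 'my-project', location: 'us-central1',
                                               service: 'default', version_id: 'v1') do
  it { should exist }
  # The connector is reported by its full Serverless VPC Access Connector path.
  its('vpc_access_connector.name') { should match %r{/connectors/} }
end
```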


## GCP Permissions
1 change: 1 addition & 0 deletions docs/resources/google_appengine_standard_app_versions.md
@@ -22,6 +22,7 @@ See [google_appengine_standard_app_version.md](google_appengine_standard_app_ver
* `version_ids`: an array of `google_appengine_standard_app_version` version_id
* `runtimes`: an array of `google_appengine_standard_app_version` runtime
* `threadsaves`: an array of `google_appengine_standard_app_version` threadsafe
* `vpc_access_connectors`: an array of `google_appengine_standard_app_version` vpc_access_connector
* `inbound_services`: an array of `google_appengine_standard_app_version` inbound_services
* `instance_classes`: an array of `google_appengine_standard_app_version` instance_class
* `automatic_scalings`: an array of `google_appengine_standard_app_version` automatic_scaling
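
The plural resource surfaces the same data as arrays, so the new `vpc_access_connectors` field could be checked along these lines (identifiers hypothetical, constructor parameter names assumed):

```ruby
# Hypothetical identifiers; constructor parameter names are assumed.
describe google_appengine_standard_app_versions(project: 'my-project', location: 'us-central1',
                                                service: 'default') do
  its('version_ids') { should include 'v1' }
  # One nested connector object per version that has VPC connectivity enabled.
  its('vpc_access_connectors') { should_not be_empty }
end
```
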
4 changes: 2 additions & 2 deletions docs/resources/google_cloud_scheduler_job.md
@@ -31,7 +31,7 @@ Properties that can be accessed from the `google_cloud_scheduler_job` resource:

* `time_zone`: Specifies the time zone to be used in interpreting schedule. The value of this field must be a time zone name from the tz database.

* `attempt_deadline`: The deadline for job attempts. If the request handler does not respond by this deadline then the request is cancelled and the attempt is marked as a DEADLINE_EXCEEDED failure. The failed attempt can be viewed in execution logs. Cloud Scheduler will retry the job according to the RetryConfig. The allowed duration for this deadline is: * For HTTP targets, between 15 seconds and 30 minutes. * For App Engine HTTP targets, between 15 seconds and 24 hours. A duration in seconds with up to nine fractional digits, terminated by 's'. Example: "3.5s"
* `attempt_deadline`: The deadline for job attempts. If the request handler does not respond by this deadline then the request is cancelled and the attempt is marked as a DEADLINE_EXCEEDED failure. The failed attempt can be viewed in execution logs. Cloud Scheduler will retry the job according to the RetryConfig. The allowed duration for this deadline is: * For HTTP targets, between 15 seconds and 30 minutes. * For App Engine HTTP targets, between 15 seconds and 24 hours. * **Note**: For PubSub targets, this field is ignored - setting it will introduce an unresolvable diff. A duration in seconds with up to nine fractional digits, terminated by 's'. Example: "3.5s"

* `retry_config`: By default, if a job does not complete successfully, meaning that an acknowledgement is not received from the handler, then it will be retried with exponential backoff according to the settings

@@ -47,7 +47,7 @@ Properties that can be accessed from the `google_cloud_scheduler_job` resource:

* `pubsub_target`: Pub/Sub target. If the job provides a Pub/Sub target, the cron will publish a message to the provided topic.

* `topic_name`: The full resource name for the Cloud Pub/Sub topic to which messages will be published when a job is delivered. ~>**NOTE**: The topic name must be in the same format as required by PubSub's PublishRequest.name, e.g. `projects/my-project/topics/my-topic`.
* `topic_name`: The full resource name for the Cloud Pub/Sub topic to which messages will be published when a job is delivered. ~>**NOTE:** The topic name must be in the same format as required by PubSub's PublishRequest.name, e.g. `projects/my-project/topics/my-topic`.

* `data`: The message payload for PubsubMessage. Pubsub message must contain either non-empty data, or at least one attribute.

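A hedged sketch of how these fields surface in a control (the project, region, job, and topic names below are hypothetical):

```ruby
# Hypothetical identifiers and expected values.
describe google_cloud_scheduler_job(project: 'my-project', region: 'us-central1', name: 'my-job') do
  it { should exist }
  its('time_zone') { should cmp 'Etc/UTC' }
  # For Pub/Sub targets attempt_deadline is ignored by the API,
  # so the check focuses on the target itself.
  its('pubsub_target.topic_name') { should cmp 'projects/my-project/topics/my-topic' }
end
```
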
108 changes: 108 additions & 0 deletions docs/resources/google_cloudbuild_trigger.md
@@ -32,6 +32,8 @@ Properties that can be accessed from the `google_cloudbuild_trigger` resource:

* `description`: Human-readable description of the trigger.

* `tags`: Tags for annotation of a BuildTrigger

* `disabled`: Whether the trigger is disabled or not. If true, the trigger will never result in a build.

* `create_time`: Time when the trigger was created.
@@ -87,12 +89,52 @@ Properties that can be accessed from the `google_cloudbuild_trigger` resource:

* `build`: Contents of the build template. Either a filename or build template must be provided.

* `source`: The location of the source files to build.

* `storage_source`: Location of the source in an archive file in Google Cloud Storage.

* `bucket`: Google Cloud Storage bucket containing the source.

* `object`: Google Cloud Storage object containing the source. This object must be a gzipped archive file (.tar.gz) containing source to build.

* `generation`: Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used

* `repo_source`: Location of the source in a Google Cloud Source Repository.

* `project_id`: ID of the project that owns the Cloud Source Repository. If omitted, the project ID requesting the build is assumed.

* `repo_name`: Name of the Cloud Source Repository.

* `dir`: Directory, relative to the source root, in which to run the build. This must be a relative path. If a step's dir is specified and is an absolute path, this value is ignored for that step's execution.

* `invert_regex`: Only trigger a build if the revision regex does NOT match the revision.

* `substitutions`: Substitutions to use in a triggered build. Should only be used with triggers.run

* `branch_name`: Regex matching branches to build. Exactly one of branch name, tag, or commit SHA must be provided. The syntax of the regular expressions accepted is the syntax accepted by RE2 and described at https://github.com/google/re2/wiki/Syntax

* `tag_name`: Regex matching tags to build. Exactly one of branch name, tag, or commit SHA must be provided. The syntax of the regular expressions accepted is the syntax accepted by RE2 and described at https://github.com/google/re2/wiki/Syntax

* `commit_sha`: Explicit commit SHA to build. Exactly one of branch name, tag, or commit SHA must be provided.

* `tags`: Tags for annotation of a Build. These are not docker tags.

* `images`: A list of images to be pushed upon the successful completion of all build steps. The images are pushed using the builder service account's credentials. The digests of the pushed images will be stored in the Build resource's results field. If any of the images fail to be pushed, the build status is marked FAILURE.

* `substitutions`: Substitutions data for Build resource.

* `queue_ttl`: TTL in queue for this build. If provided and the build is enqueued longer than this value, the build will expire and the build status will be EXPIRED. The TTL starts ticking from createTime. A duration in seconds with up to nine fractional digits, terminated by 's'. Example: "3.5s".

* `logs_bucket`: Google Cloud Storage bucket where logs should be written. Logs file names will be of the format ${logsBucket}/log-${build_id}.txt.

* `timeout`: Amount of time that this build should be allowed to run, to second granularity. If this amount of time elapses, work on the build will cease and the build status will be TIMEOUT. This timeout must be equal to or greater than the sum of the timeouts for build steps within the build. The expected format is the number of seconds followed by s. Default time is ten minutes (600s).

* `secrets`: Secrets to decrypt using Cloud Key Management Service.

* `kms_key_name`: Cloud KMS key name to use to decrypt these envs.

* `secret_env`: Map of environment variable name to its encrypted value. Secret environment variables must be unique across all of a build's secrets, and must be used by at least one build step. Values can be at most 64 KB in size. There can be at most 100 secret values across all of a build's secrets.

* `steps`: The operations to be performed on the workspace.

* `name`: The name of the container image that will run this particular build step. If the image is available in the host's Docker daemon's cache, it will be run directly. If not, the host will attempt to pull the image first, using the builder service account's credentials if necessary. The Docker daemon's cache will already have the latest versions of all of the officially supported build steps (https://github.com/GoogleCloudPlatform/cloud-builders). The Docker daemon will also have cached many of the layers for some popular images, like "ubuntu", "debian", but they will be refreshed at the time you attempt to use them. If you built an image in a previous build step, it will be stored in the host's Docker daemon's cache and is available to use as the name for a later build step.
@@ -121,6 +163,72 @@ Properties that can be accessed from the `google_cloudbuild_trigger` resource:

* `wait_for`: The ID(s) of the step(s) that this build step depends on. This build step will not start until all the build steps in `wait_for` have completed successfully. If `wait_for` is empty, this build step will start when all previous build steps in the `Build.Steps` list have completed successfully.

* `artifacts`: Artifacts produced by the build that should be uploaded upon successful completion of all build steps.

* `images`: A list of images to be pushed upon the successful completion of all build steps. The images will be pushed using the builder service account's credentials. The digests of the pushed images will be stored in the Build resource's results field. If any of the images fail to be pushed, the build is marked FAILURE.

* `objects`: A list of objects to be uploaded to Cloud Storage upon successful completion of all build steps. Files in the workspace matching specified paths globs will be uploaded to the Cloud Storage location using the builder service account's credentials. The location and generation of the uploaded objects will be stored in the Build resource's results field. If any objects fail to be pushed, the build is marked FAILURE.

* `location`: Cloud Storage bucket and optional object path, in the form "gs://bucket/path/to/somewhere/". Files in the workspace matching any path pattern will be uploaded to Cloud Storage with this location as a prefix.

* `paths`: Path globs used to match files in the build's workspace.

* `timing`: Output only. Stores timing information for pushing all artifact objects.

* `start_time`: Start of time span. A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".

* `end_time`: End of time span. A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".

* `options`: Special options for this build.

* `source_provenance_hash`: Requested hash for SourceProvenance.

* `requested_verify_option`: Requested verifiability options.
Possible values:
* NOT_VERIFIED
* VERIFIED

* `machine_type`: Compute Engine machine type on which to run the build.
Possible values:
* UNSPECIFIED
* N1_HIGHCPU_8
* N1_HIGHCPU_32

* `disk_size_gb`: Requested disk size for the VM that runs the build. Note that this is NOT "disk free"; some of the space will be used by the operating system and build utilities. Also note that this is the minimum disk size that will be allocated for the build -- the build may run with a larger disk than requested. At present, the maximum disk size is 1000GB; builds that request more than the maximum are rejected with an error.

* `substitution_option`: Option to specify behavior when there is an error in the substitution checks. NOTE this is always set to ALLOW_LOOSE for triggered builds and cannot be overridden in the build configuration file.
Possible values:
* MUST_MATCH
* ALLOW_LOOSE

* `dynamic_substitutions`: Option to specify whether or not to apply bash style string operations to the substitutions. NOTE this is always enabled for triggered builds and cannot be overridden in the build configuration file.

* `log_streaming_option`: Option to define build log streaming behavior to Google Cloud Storage.
Possible values:
* STREAM_DEFAULT
* STREAM_ON
* STREAM_OFF

* `worker_pool`: Option to specify a WorkerPool for the build. Format: projects/{project}/workerPools/{workerPool}. This field is experimental.

* `logging`: Option to specify the logging mode, which determines if and where build logs are stored.
Possible values:
* LOGGING_UNSPECIFIED
* LEGACY
* GCS_ONLY
* STACKDRIVER_ONLY
* NONE

* `env`: A list of global environment variable definitions that will exist for all build steps in this build. If a variable is defined both globally and in a build step, the variable will use the build step value. The elements are of the form "KEY=VALUE" for the environment variable "KEY" being given the value "VALUE".

* `secret_env`: A list of global environment variables, which are encrypted using a Cloud Key Management Service crypto key. These values must be specified in the build's Secret. These variables will be available to all build steps in this build.

* `volumes`: Global list of volumes to mount for ALL build steps. Each volume is created as an empty volume prior to starting the build process. Upon completion of the build, volumes and their contents are discarded. Global volume names and paths cannot conflict with the volumes defined in a build step. Using a global volume in a build with only one step is not valid as it is indicative of a build request with an incorrect configuration.

* `name`: Name of the volume to mount. Volume names must be unique per build step and must be valid names for Docker volumes. Each named volume must be used by at least two build steps.

* `path`: Path at which to mount the volume. Paths must be absolute and cannot conflict with other volume paths on the same build step or with certain reserved volume paths.
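
A sketch of asserting some of the newly documented trigger fields (the trigger ID and expected values are hypothetical, and the `id` parameter name is assumed from the generated resource):

```ruby
# Hypothetical trigger ID and expected values; the `id` parameter name is assumed.
describe google_cloudbuild_trigger(project: 'my-project', id: '00000000-0000-0000-0000-000000000000') do
  it { should exist }
  its('tags') { should include 'inspec' }
  # Nested build options documented above.
  its('build.options.machine_type') { should cmp 'N1_HIGHCPU_8' }
  its('build.options.logging') { should cmp 'GCS_ONLY' }
  its('build.queue_ttl') { should cmp '3600s' }
end
```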


## GCP Permissions

1 change: 1 addition & 0 deletions docs/resources/google_cloudbuild_triggers.md
@@ -29,6 +29,7 @@ See [google_cloudbuild_trigger.md](google_cloudbuild_trigger.md) for more detail
* `ids`: an array of `google_cloudbuild_trigger` id
* `names`: an array of `google_cloudbuild_trigger` name
* `descriptions`: an array of `google_cloudbuild_trigger` description
* `tags`: an array of `google_cloudbuild_trigger` tags
* `disableds`: an array of `google_cloudbuild_trigger` disabled
* `create_times`: an array of `google_cloudbuild_trigger` create_time
* `substitutions`: an array of `google_cloudbuild_trigger` substitutions
16 changes: 16 additions & 0 deletions docs/resources/google_compute_autoscaler.md
@@ -47,6 +47,22 @@ Properties that can be accessed from the `google_compute_autoscaler` resource:

* `cool_down_period_sec`: The number of seconds that the autoscaler should wait before it starts collecting information from a new instance. This prevents the autoscaler from collecting information when the instance is initializing, during which the collected usage would not be reliable. The default time autoscaler waits is 60 seconds. Virtual machine initialization times might vary because of numerous factors. We recommend that you test how long an instance may take to initialize. To do this, create an instance and time the startup process.

* `mode`: Defines operating mode for this policy.
Possible values:
* OFF
* ONLY_UP
* ON

* `scale_down_control`: (Beta only) Defines scale down controls to reduce the risk of response latency and outages due to abrupt scale-in events

* `max_scaled_down_replicas`: A nested object resource

* `fixed`: Specifies a fixed number of VM instances. This must be a positive integer.

* `percent`: Specifies a percentage of instances between 0 and 100%, inclusive. For example, specify 80 for 80%.

* `time_window_sec`: How long back autoscaling should look when computing recommendations to include directives regarding slower scale down, as described above.

* `cpu_utilization`: Defines the CPU utilization policy that allows the autoscaler to scale based on the average CPU utilization of a managed instance group.

* `utilization_target`: The target CPU utilization that the autoscaler should maintain. Must be a float value in the range (0, 1]. If not specified, the default is 0.6. If the CPU level is below the target utilization, the autoscaler scales down the number of instances until it reaches the minimum number of instances you specified or until the average CPU of your instances reaches the target utilization. If the average CPU is above the target utilization, the autoscaler scales up until it reaches the maximum number of instances you specified or until the average utilization reaches the target utilization.
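
Assuming these fields sit under the resource's `autoscaling_policy` block, as in the rest of the generated documentation, a control might read as follows (project, zone, autoscaler name, and expected values are hypothetical; `scale_down_control` is only populated through the beta API):

```ruby
# Hypothetical identifiers and expected values; scale_down_control is beta-only.
describe google_compute_autoscaler(project: 'my-project', zone: 'us-central1-a', name: 'my-autoscaler') do
  it { should exist }
  its('autoscaling_policy.mode') { should cmp 'ON' }
  its('autoscaling_policy.scale_down_control.time_window_sec') { should cmp 600 }
end
```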
