From eb927958bd416606027601af49684315a4432e5b Mon Sep 17 00:00:00 2001 From: Paul Tyng Date: Sat, 10 Apr 2021 15:52:23 -0400 Subject: [PATCH 001/644] Add null to tobool docs --- website/docs/language/functions/tobool.html.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/website/docs/language/functions/tobool.html.md b/website/docs/language/functions/tobool.html.md index 4ff160929ba9..88dbb19f322d 100644 --- a/website/docs/language/functions/tobool.html.md +++ b/website/docs/language/functions/tobool.html.md @@ -14,7 +14,7 @@ Explicit type conversions are rarely necessary in Terraform because it will convert types automatically where required. Use the explicit type conversion functions only to normalize types returned in module outputs. -Only boolean values and the exact strings `"true"` and `"false"` can be +Only boolean values, `null`, and the exact strings `"true"` and `"false"` can be converted to boolean. All other values will produce an error. ## Examples @@ -24,6 +24,8 @@ converted to boolean. All other values will produce an error. true > tobool("true") true +> tobool(null) +null > tobool("no") Error: Invalid function argument From b3fe8713e819f8b8a3212c5086d91f077cd67e43 Mon Sep 17 00:00:00 2001 From: Paul Tyng Date: Sat, 10 Apr 2021 15:54:51 -0400 Subject: [PATCH 002/644] Add null to tonumber docs --- website/docs/language/functions/tonumber.html.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/website/docs/language/functions/tonumber.html.md b/website/docs/language/functions/tonumber.html.md index 1b7e0236b50a..3d5801255869 100644 --- a/website/docs/language/functions/tonumber.html.md +++ b/website/docs/language/functions/tonumber.html.md @@ -14,7 +14,7 @@ Explicit type conversions are rarely necessary in Terraform because it will convert types automatically where required. Use the explicit type conversion functions only to normalize types returned in module outputs. -Only numbers and strings containing decimal representations of numbers can be +Only numbers, `null`, and strings containing decimal representations of numbers can be converted to number. All other values will produce an error. ## Examples @@ -24,6 +24,8 @@ converted to number. All other values will produce an error. 1 > tonumber("1") 1 +> tonumber(null) +null > tonumber("no") Error: Invalid function argument From b8a0929a5d607f1dc8ebc5ca8ed4a29450e4bcee Mon Sep 17 00:00:00 2001 From: Paul Tyng Date: Sat, 10 Apr 2021 15:55:29 -0400 Subject: [PATCH 003/644] Add null to tostring docs --- website/docs/language/functions/tostring.html.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/website/docs/language/functions/tostring.html.md b/website/docs/language/functions/tostring.html.md index 667b3619a063..bbe11acfdf7f 100644 --- a/website/docs/language/functions/tostring.html.md +++ b/website/docs/language/functions/tostring.html.md @@ -14,7 +14,7 @@ Explicit type conversions are rarely necessary in Terraform because it will convert types automatically where required. Use the explicit type conversion functions only to normalize types returned in module outputs. -Only the primitive types (string, number, and bool) can be converted to string. +Only the primitive types (string, number, and bool) and `null` can be converted to string. All other values will produce an error. 
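As a brief illustration of the output-normalization use case these pages describe (a sketch only, not part of the patch; the module name and output used here are hypothetical), `tostring` now passes `null` through rather than raising an error:

```hcl
output "vm_id" {
  # tostring(null) yields null instead of an error, matching the
  # documentation change above; non-null primitive values are
  # converted to their string representation.
  value = tostring(module.example.vm_id)
}
```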
## Examples @@ -26,6 +26,8 @@ hello 1 > tostring(true) true +> tostring(null) +null > tostring([]) Error: Invalid function argument From c6de87354ae664879237cf11411a22393ac0cef6 Mon Sep 17 00:00:00 2001 From: Yves Peter Date: Tue, 13 Apr 2021 10:26:32 +0200 Subject: [PATCH 004/644] document that destroy provisioners don't run with create_before_destroy --- website/docs/language/meta-arguments/lifecycle.html.md | 4 ++++ website/docs/language/resources/provisioners/syntax.html.md | 5 +++++ 2 files changed, 9 insertions(+) diff --git a/website/docs/language/meta-arguments/lifecycle.html.md b/website/docs/language/meta-arguments/lifecycle.html.md index 6671ff3b31e5..afa726cee081 100644 --- a/website/docs/language/meta-arguments/lifecycle.html.md +++ b/website/docs/language/meta-arguments/lifecycle.html.md @@ -45,6 +45,10 @@ The following arguments can be used within a `lifecycle` block: such features, so you must understand the constraints for each resource type before using `create_before_destroy` with it. + Destroy provisioners of this resource will not run if `create_before_destroy` + is used. This limitation may be addressed in the future, see + [GitHub issue](https://github.com/hashicorp/terraform/issues/13549) for details. + * `prevent_destroy` (bool) - This meta-argument, when set to `true`, will cause Terraform to reject with an error any plan that would destroy the infrastructure object associated with the resource, as long as the argument diff --git a/website/docs/language/resources/provisioners/syntax.html.md b/website/docs/language/resources/provisioners/syntax.html.md index 3a8eb95eb290..171f3fe13cdb 100644 --- a/website/docs/language/resources/provisioners/syntax.html.md +++ b/website/docs/language/resources/provisioners/syntax.html.md @@ -237,6 +237,11 @@ fail, Terraform will error and rerun the provisioners again on the next `terraform apply`. Due to this behavior, care should be taken for destroy provisioners to be safe to run multiple times. +Destroy provisioners will not run if the lifecycle Meta-Argument +[`create_before_destroy`](/docs/language/meta-arguments/lifecycle.html) is used +in the resource. This limitation may be addressed in the future, see +[GitHub issue](https://github.com/hashicorp/terraform/issues/13549) for details. + Destroy-time provisioners can only run if they remain in the configuration at the time a resource is destroyed. If a resource block with a destroy-time provisioner is removed entirely from the configuration, its provisioner From a303a03f2f07a8e4937e876139ca61c1b433cc06 Mon Sep 17 00:00:00 2001 From: Chad Bailey Date: Sun, 2 May 2021 17:14:10 -0500 Subject: [PATCH 005/644] Added clarity: remote-exec connection requirement This is a secondary change to PR #28578 Details: According to the [Provisioner Connection](https://www.terraform.io/docs/language/resources/provisioners/connection.html) page, provisioners require the connection block. This change of behavior is shown prominently within a note on the [Provisioner Connection](https://www.terraform.io/docs/language/resources/provisioners/connection.html) page: > Note: In Terraform 0.11 and earlier, providers could set default values for some connection settings, so that connection blocks could sometimes be omitted. This feature was removed in 0.12 in order to make Terraform's behavior more predictable. 
However, this behavioral change is omitted from the [remote-exec provisioner](https://www.terraform.io/docs/language/resources/provisioners/remote-exec.html) page which is where a user will be if they are attempting to follow the prior behavior when this was permissible in versions prior to 0.12. This change prompts the user of that change and directs to the connection page. --- .../resources/provisioners/remote-exec.html.md | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-) diff --git a/website/docs/language/resources/provisioners/remote-exec.html.md b/website/docs/language/resources/provisioners/remote-exec.html.md index 312728300bf5..19e771989de8 100644 --- a/website/docs/language/resources/provisioners/remote-exec.html.md +++ b/website/docs/language/resources/provisioners/remote-exec.html.md @@ -12,7 +12,8 @@ The `remote-exec` provisioner invokes a script on a remote resource after it is created. This can be used to run a configuration management tool, bootstrap into a cluster, etc. To invoke a local process, see the `local-exec` [provisioner](/docs/language/resources/provisioners/local-exec.html) instead. The `remote-exec` -provisioner supports both `ssh` and `winrm` type [connections](/docs/language/resources/provisioners/connection.html). +provisioner requires a [connection](/docs/language/resources/provisioners/connection.html) +and supports both `ssh` and `winrm`. -> **Note:** Provisioners should only be used as a last resort. For most common situations there are better alternatives. For more information, see @@ -24,6 +25,15 @@ common situations there are better alternatives. For more information, see resource "aws_instance" "web" { # ... + # Establishes connection to be used by all + # generic remote provisioners (i.e. file/remote-exec) + connection { + type = "ssh" + user = "root" + password = var.root_password + host = self.public_ip + } + provisioner "remote-exec" { inline = [ "puppet apply", From d29cdccb5bc0e34665092c5dc3a832f0ca9226bf Mon Sep 17 00:00:00 2001 From: Ben Moskovitz Date: Wed, 30 Jun 2021 16:39:31 +1200 Subject: [PATCH 006/644] Add a note to the docs on the S3 backend around permissions needed for encrypted state storage --- website/docs/language/settings/backends/s3.html.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/language/settings/backends/s3.html.md b/website/docs/language/settings/backends/s3.html.md index 678cb24b7b43..c6dac3b993c2 100644 --- a/website/docs/language/settings/backends/s3.html.md +++ b/website/docs/language/settings/backends/s3.html.md @@ -190,7 +190,7 @@ The following configuration is optional: * `encrypt` - (Optional) Enable [server side encryption](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html) of the state file. * `endpoint` - (Optional) Custom endpoint for the AWS S3 API. This can also be sourced from the `AWS_S3_ENDPOINT` environment variable. * `force_path_style` - (Optional) Enable path-style S3 URLs (`https:///` instead of `https://.`). -* `kms_key_id` - (Optional) Amazon Resource Name (ARN) of a Key Management Service (KMS) Key to use for encrypting the state. +* `kms_key_id` - (Optional) Amazon Resource Name (ARN) of a Key Management Service (KMS) Key to use for encrypting the state. Note that if this value is specified, Terraform will need `kms:Encrypt`, `kms:Decrypt` and `kms:GenerateDataKey` permissions on this KMS key. 
* `sse_customer_key` - (Optional) The key to use for encrypting state with [Server-Side Encryption with Customer-Provided Keys (SSE-C)](https://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html). This is the base64-encoded value of the key, which must decode to 256 bits. This can also be sourced from the `AWS_SSE_CUSTOMER_KEY` environment variable, which is recommended due to the sensitivity of the value. Setting it inside a terraform file will cause it to be persisted to disk in `terraform.tfstate`. * `workspace_key_prefix` - (Optional) Prefix applied to the state path inside the bucket. This is only relevant when using a non-default workspace. Defaults to `env:`. From 21228e19df4a0ea6da669f2813b7aa887d5714ba Mon Sep 17 00:00:00 2001 From: Ben Moskovitz Date: Wed, 30 Jun 2021 16:55:00 +1200 Subject: [PATCH 007/644] Fix broken links as pointed out by CI See: https://app.circleci.com/pipelines/github/hashicorp/terraform/6853/workflows/cc37def1-6bf5-4f6c-89b1-10dfcc65ea5e/jobs/44887 --- website/docs/language/settings/backends/s3.html.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/website/docs/language/settings/backends/s3.html.md b/website/docs/language/settings/backends/s3.html.md index c6dac3b993c2..e2b93beddc74 100644 --- a/website/docs/language/settings/backends/s3.html.md +++ b/website/docs/language/settings/backends/s3.html.md @@ -18,7 +18,7 @@ the `dynamodb_table` field to an existing DynamoDB table name. A single DynamoDB table can be used to lock multiple remote state files. Terraform generates key names that include the values of the `bucket` and `key` variables. ~> **Warning!** It is highly recommended that you enable -[Bucket Versioning](http://docs.aws.amazon.com/AmazonS3/latest/UG/enable-bucket-versioning.html) +[Bucket Versioning](https://docs.aws.amazon.com/AmazonS3/latest/userguide/manage-versioning-examples.html) on the S3 bucket to allow for state recovery in the case of accidental deletions and human error. ## Example Configuration @@ -73,7 +73,7 @@ attached to users/groups/roles (like the example above) or resource policies attached to bucket objects (which look similar but also require a `Principal` to indicate which entity has those permissions). For more details, see Amazon's documentation about -[S3 access control](https://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html). +[S3 access control](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-access-control.html). ### DynamoDB Table Permissions @@ -186,12 +186,12 @@ The following configuration is required: The following configuration is optional: -* `acl` - (Optional) [Canned ACL](https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl) to be applied to the state file. -* `encrypt` - (Optional) Enable [server side encryption](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html) of the state file. +* `acl` - (Optional) [Canned ACL](https://docs.aws.amazon.com/AmazonS3/latest/userguide/acl-overview.html#canned-acl) to be applied to the state file. +* `encrypt` - (Optional) Enable [server side encryption](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingServerSideEncryption.html) of the state file. * `endpoint` - (Optional) Custom endpoint for the AWS S3 API. This can also be sourced from the `AWS_S3_ENDPOINT` environment variable. * `force_path_style` - (Optional) Enable path-style S3 URLs (`https:///` instead of `https://.`). 
* `kms_key_id` - (Optional) Amazon Resource Name (ARN) of a Key Management Service (KMS) Key to use for encrypting the state. Note that if this value is specified, Terraform will need `kms:Encrypt`, `kms:Decrypt` and `kms:GenerateDataKey` permissions on this KMS key. -* `sse_customer_key` - (Optional) The key to use for encrypting state with [Server-Side Encryption with Customer-Provided Keys (SSE-C)](https://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html). This is the base64-encoded value of the key, which must decode to 256 bits. This can also be sourced from the `AWS_SSE_CUSTOMER_KEY` environment variable, which is recommended due to the sensitivity of the value. Setting it inside a terraform file will cause it to be persisted to disk in `terraform.tfstate`. +* `sse_customer_key` - (Optional) The key to use for encrypting state with [Server-Side Encryption with Customer-Provided Keys (SSE-C)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ServerSideEncryptionCustomerKeys.html). This is the base64-encoded value of the key, which must decode to 256 bits. This can also be sourced from the `AWS_SSE_CUSTOMER_KEY` environment variable, which is recommended due to the sensitivity of the value. Setting it inside a terraform file will cause it to be persisted to disk in `terraform.tfstate`. * `workspace_key_prefix` - (Optional) Prefix applied to the state path inside the bucket. This is only relevant when using a non-default workspace. Defaults to `env:`. ### DynamoDB State Locking @@ -245,7 +245,7 @@ Your administrative AWS account will contain at least the following items: * Optionally, one or more [IAM groups](http://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups.html) to differentiate between different groups of users that have different levels of access to the other AWS accounts. -* An [S3 bucket](http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html) +* An [S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingBucket.html) that will contain the Terraform state files for each workspace. * A [DynamoDB table](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.CoreComponents.html#HowItWorks.CoreComponents.TablesItemsAttributes) that will be used for locking to prevent concurrent operations on a single From dea645797ae26d08032f03611b9c055dd3800a41 Mon Sep 17 00:00:00 2001 From: magodo Date: Sat, 24 Jul 2021 11:34:58 +0800 Subject: [PATCH 008/644] `terraform add -out` append to existing config --- internal/command/views/add.go | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) diff --git a/internal/command/views/add.go b/internal/command/views/add.go index 4b1c96d700ff..d181e19090ea 100644 --- a/internal/command/views/add.go +++ b/internal/command/views/add.go @@ -74,7 +74,14 @@ func (v *addHuman) Resource(addr addrs.AbsResourceInstance, schema *configschema } else { // The Println call above adds this final newline automatically; we add it manually here. 
formatted = append(formatted, '\n') - return os.WriteFile(v.outPath, formatted, 0600) + + f, err := os.OpenFile(v.outPath, os.O_APPEND|os.O_WRONLY|os.O_CREATE, 0600) + if err != nil { + return err + } + defer f.Close() + _, err = f.Write(formatted) + return err } } From 67afee693b226bedd583608f56f03d4fd5bfc87a Mon Sep 17 00:00:00 2001 From: magodo Date: Sat, 24 Jul 2021 12:03:32 +0800 Subject: [PATCH 009/644] target resource in module check only when `-out` points to a module --- internal/command/add.go | 53 +++++++++++++++++++++++++++++------------ 1 file changed, 38 insertions(+), 15 deletions(-) diff --git a/internal/command/add.go b/internal/command/add.go index bb989e31e5a0..343851426f65 100644 --- a/internal/command/add.go +++ b/internal/command/add.go @@ -3,6 +3,7 @@ package command import ( "fmt" "os" + "path/filepath" "strings" "github.com/hashicorp/hcl/v2" @@ -33,6 +34,43 @@ func (c *AddCommand) Run(rawArgs []string) int { return 1 } + // In case the output configuration path is specified, we should ensure the + // target resource address doesn't exist in the module tree indicated by + // the existing configuration files. + if args.OutPath != "" { + // Ensure the directory to the path exists and is accessible. + outDir := filepath.Dir(args.OutPath) + if _, err := os.Stat(outDir); os.IsNotExist(err) { + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Error, + "The out path doesn't exist or is not accessible", + err.Error(), + )) + view.Diagnostics(diags) + return 1 + } + + config, loadDiags := c.loadConfig(outDir) + diags = diags.Append(loadDiags) + if diags.HasErrors() { + view.Diagnostics(diags) + return 1 + } + + if config != nil && config.Module != nil { + if rs, ok := config.Module.ManagedResources[args.Addr.ContainingResource().Config().String()]; ok { + diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: "Resource already in configuration", + Detail: fmt.Sprintf("The resource %s is already in this configuration at %s. Resource names must be unique per type in each module.", args.Addr, rs.DeclRange), + Subject: &rs.DeclRange, + }) + c.View.Diagnostics(diags) + return 1 + } + } + } + // Check for user-supplied plugin path var err error if c.pluginPath, err = c.loadPluginPath(); err != nil { @@ -119,21 +157,6 @@ func (c *AddCommand) Run(rawArgs []string) int { } } - if module == nil { - // It's fine if the module doesn't actually exist; we don't need to check if the resource exists. - } else { - if rs, ok := module.ManagedResources[args.Addr.ContainingResource().Config().String()]; ok { - diags = diags.Append(&hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Resource already in configuration", - Detail: fmt.Sprintf("The resource %s is already in this configuration at %s. 
Resource names must be unique per type in each module.", args.Addr, rs.DeclRange), - Subject: &rs.DeclRange, - }) - c.View.Diagnostics(diags) - return 1 - } - } - // Get the schemas from the context schemas := ctx.Schemas() From 90e6a3dffb922b4b6f195ad6304c9fe387b4dcc0 Mon Sep 17 00:00:00 2001 From: magodo Date: Mon, 26 Jul 2021 11:25:47 +0800 Subject: [PATCH 010/644] modify test case --- internal/command/add_test.go | 67 +++++++++++++++++++++++++++++++++++- 1 file changed, 66 insertions(+), 1 deletion(-) diff --git a/internal/command/add_test.go b/internal/command/add_test.go index 18661d1cc46a..96b0d349be92 100644 --- a/internal/command/add_test.go +++ b/internal/command/add_test.go @@ -103,6 +103,45 @@ func TestAdd_basic(t *testing.T) { } }) + t.Run("basic to existing file", func(t *testing.T) { + view, done := testView(t) + c := &AddCommand{ + Meta: Meta{ + testingOverrides: overrides, + View: view, + }, + } + outPath := "add.tf" + args := []string{fmt.Sprintf("-out=%s", outPath), "test_instance.new"} + c.Run(args) + args = []string{fmt.Sprintf("-out=%s", outPath), "test_instance.new2"} + code := c.Run(args) + output := done(t) + if code != 0 { + fmt.Println(output.Stderr()) + t.Fatalf("wrong exit status. Got %d, want 0", code) + } + expected := `resource "test_instance" "new" { + value = null # REQUIRED string +} +resource "test_instance" "new2" { + value = null # REQUIRED string +} +` + result, err := os.ReadFile(outPath) + if err != nil { + t.Fatalf("error reading result file %s: %s", outPath, err.Error()) + } + // While the entire directory will get removed once the whole test suite + // is done, we remove this lest it gets in the way of another (not yet + // written) test. + os.Remove(outPath) + + if !cmp.Equal(expected, string(result)) { + t.Fatalf("wrong output:\n%s", cmp.Diff(expected, string(result))) + } + }) + t.Run("optionals", func(t *testing.T) { view, done := testView(t) c := &AddCommand{ @@ -164,7 +203,8 @@ func TestAdd_basic(t *testing.T) { View: view, }, } - args := []string{"test_instance.exists"} + outPath := "add.tf" + args := []string{fmt.Sprintf("-out=%s", outPath), "test_instance.exists"} code := c.Run(args) if code != 1 { t.Fatalf("wrong exit status. Got %d, want 0", code) @@ -176,6 +216,31 @@ func TestAdd_basic(t *testing.T) { } }) + t.Run("output existing resource to stdout", func(t *testing.T) { + view, done := testView(t) + c := &AddCommand{ + Meta: Meta{ + testingOverrides: overrides, + View: view, + }, + } + args := []string{"test_instance.exists"} + code := c.Run(args) + output := done(t) + if code != 0 { + fmt.Println(output.Stderr()) + t.Fatalf("wrong exit status. 
Got %d, want 0", code) + } + expected := `resource "test_instance" "exists" { + value = null # REQUIRED string +} +` + + if !cmp.Equal(output.Stdout(), expected) { + t.Fatalf("wrong output:\n%s", cmp.Diff(expected, output.Stdout())) + } + }) + t.Run("provider not in configuration", func(t *testing.T) { view, done := testView(t) c := &AddCommand{ From 9d5f1752c851cc15da18098a8dc2eff0dc42cf5a Mon Sep 17 00:00:00 2001 From: Alex Khaerov Date: Tue, 3 Aug 2021 14:21:48 +0800 Subject: [PATCH 011/644] oss backend: flattern assume_role block --- internal/backend/remote-state/oss/backend.go | 123 +++++++----------- .../language/settings/backends/oss.html.md | 15 +-- 2 files changed, 54 insertions(+), 84 deletions(-) diff --git a/internal/backend/remote-state/oss/backend.go b/internal/backend/remote-state/oss/backend.go index de08af37d68f..5a2b2880ce61 100644 --- a/internal/backend/remote-state/oss/backend.go +++ b/internal/backend/remote-state/oss/backend.go @@ -146,8 +146,6 @@ func New() backend.Backend { return nil, nil }, }, - - "assume_role": assumeRoleSchema(), "shared_credentials_file": { Type: schema.TypeString, Optional: true, @@ -160,60 +158,48 @@ func New() backend.Backend { Description: "This is the Alibaba Cloud profile name as set in the shared credentials file. It can also be sourced from the `ALICLOUD_PROFILE` environment variable.", DefaultFunc: schema.EnvDefaultFunc("ALICLOUD_PROFILE", ""), }, - }, - } - - result := &Backend{Backend: s} - result.Backend.ConfigureFunc = result.configure - return result -} - -func assumeRoleSchema() *schema.Schema { - return &schema.Schema{ - Type: schema.TypeSet, - Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "role_arn": { - Type: schema.TypeString, - Required: true, - Description: "The ARN of a RAM role to assume prior to making API calls.", - DefaultFunc: schema.EnvDefaultFunc("ALICLOUD_ASSUME_ROLE_ARN", ""), - }, - "session_name": { - Type: schema.TypeString, - Optional: true, - Description: "The session name to use when assuming the role.", - DefaultFunc: schema.EnvDefaultFunc("ALICLOUD_ASSUME_ROLE_SESSION_NAME", ""), - }, - "policy": { - Type: schema.TypeString, - Optional: true, - Description: "The permissions applied when assuming a role. You cannot use this policy to grant permissions which exceed those of the role that is being assumed.", - }, - "session_expiration": { - Type: schema.TypeInt, - Optional: true, - Description: "The time after which the established session for assuming role expires.", - ValidateFunc: func(v interface{}, k string) ([]string, []error) { - min := 900 - max := 3600 - value, ok := v.(int) - if !ok { - return nil, []error{fmt.Errorf("expected type of %s to be int", k)} - } + "assume_role_role_arn": { + Type: schema.TypeString, + Required: true, + Description: "The ARN of a RAM role to assume prior to making API calls.", + DefaultFunc: schema.EnvDefaultFunc("ALICLOUD_ASSUME_ROLE_ARN", ""), + }, + "assume_role_session_name": { + Type: schema.TypeString, + Optional: true, + Description: "The session name to use when assuming the role.", + DefaultFunc: schema.EnvDefaultFunc("ALICLOUD_ASSUME_ROLE_SESSION_NAME", ""), + }, + "assume_role_policy": { + Type: schema.TypeString, + Optional: true, + Description: "The permissions applied when assuming a role. 
You cannot use this policy to grant permissions which exceed those of the role that is being assumed.", + }, + "assume_role_session_expiration": { + Type: schema.TypeInt, + Optional: true, + Description: "The time after which the established session for assuming role expires.", + ValidateFunc: func(v interface{}, k string) ([]string, []error) { + min := 900 + max := 3600 + value, ok := v.(int) + if !ok { + return nil, []error{fmt.Errorf("expected type of %s to be int", k)} + } - if value < min || value > max { - return nil, []error{fmt.Errorf("expected %s to be in the range (%d - %d), got %d", k, min, max, v)} - } + if value < min || value > max { + return nil, []error{fmt.Errorf("expected %s to be in the range (%d - %d), got %d", k, min, max, v)} + } - return nil, nil - }, + return nil, nil }, }, }, } + + result := &Backend{Backend: s} + result.Backend.ConfigureFunc = result.configure + return result } type Backend struct { @@ -274,31 +260,22 @@ func (b *Backend) configure(ctx context.Context) error { sessionExpiration = (int)(expiredSeconds.(float64)) } - if v, ok := d.GetOk("assume_role"); ok { - for _, v := range v.(*schema.Set).List() { - assumeRole := v.(map[string]interface{}) - if assumeRole["role_arn"].(string) != "" { - roleArn = assumeRole["role_arn"].(string) - } - if assumeRole["session_name"].(string) != "" { - sessionName = assumeRole["session_name"].(string) - } - if sessionName == "" { - sessionName = "terraform" - } - policy = assumeRole["policy"].(string) - sessionExpiration = assumeRole["session_expiration"].(int) - if sessionExpiration == 0 { - if v := os.Getenv("ALICLOUD_ASSUME_ROLE_SESSION_EXPIRATION"); v != "" { - if expiredSeconds, err := strconv.Atoi(v); err == nil { - sessionExpiration = expiredSeconds - } - } - if sessionExpiration == 0 { - sessionExpiration = 3600 - } + roleArn = d.Get("assume_role_role_arn").(string) + sessionName = d.Get("assume_role_session_name").(string) + if sessionName == "" { + sessionName = "terraform" + } + policy = d.Get("assume_role_policy").(string) + sessionExpiration = d.Get("assume_role_session_expiration").(int) + if sessionExpiration == 0 { + if v := os.Getenv("ALICLOUD_ASSUME_ROLE_SESSION_EXPIRATION"); v != "" { + if expiredSeconds, err := strconv.Atoi(v); err == nil { + sessionExpiration = expiredSeconds } } + if sessionExpiration == 0 { + sessionExpiration = 3600 + } } if accessKey == "" { diff --git a/website/docs/language/settings/backends/oss.html.md b/website/docs/language/settings/backends/oss.html.md index 6acc16af1c82..515095ceb533 100644 --- a/website/docs/language/settings/backends/oss.html.md +++ b/website/docs/language/settings/backends/oss.html.md @@ -95,18 +95,11 @@ The following configuration options or environment variables are supported: to be applied to the state file. * `shared_credentials_file` - (Optional, Available in 0.12.8+) This is the path to the shared credentials file. It can also be sourced from the `ALICLOUD_SHARED_CREDENTIALS_FILE` environment variable. If this is not set and a profile is specified, `~/.aliyun/config.json` will be used. * `profile` - (Optional, Available in 0.12.8+) This is the Alibaba Cloud profile name as set in the shared credentials file. It can also be sourced from the `ALICLOUD_PROFILE` environment variable. - * `assume_role` - (Optional, Available in 0.12.6+) If provided with a role ARN, will attempt to assume this role using the supplied credentials. - -The nested `assume_role` block supports the following: - -* `role_arn` - (Required) The ARN of the role to assume. 
If ARN is set to an empty string, it does not perform role switching. It supports environment variable `ALICLOUD_ASSUME_ROLE_ARN`. +* `assume_role_role_arn` - (Optional, Available in 0.12.6+) The ARN of the role to assume. If ARN is set to an empty string, it does not perform role switching. It supports environment variable `ALICLOUD_ASSUME_ROLE_ARN`. Terraform executes configuration on account with provided credentials. - -* `policy` - (Optional) A more restrictive policy to apply to the temporary credentials. This gives you a way to further restrict the permissions for the resulting temporary +* `assume_role_policy` - (Optional, Available in 0.12.6+) A more restrictive policy to apply to the temporary credentials. This gives you a way to further restrict the permissions for the resulting temporary security credentials. You cannot use this policy to grant permissions which exceed those of the role that is being assumed. - -* `session_name` - (Optional) The session name to use when assuming the role. If omitted, 'terraform' is passed to the AssumeRole call as session name. It supports environment variable `ALICLOUD_ASSUME_ROLE_SESSION_NAME`. - -* `session_expiration` - (Optional) The time after which the established session for assuming role expires. Valid value range: [900-3600] seconds. Default to 3600 (in this case Alibaba Cloud use own default value). It supports environment variable `ALICLOUD_ASSUME_ROLE_SESSION_EXPIRATION`. +* `assume_role_session_name` - (Optional, Available in 0.12.6+) The session name to use when assuming the role. If omitted, 'terraform' is passed to the AssumeRole call as session name. It supports environment variable `ALICLOUD_ASSUME_ROLE_SESSION_NAME`. +* `assume_role_session_expiration` - (Optional, Available in 0.12.6+) The time after which the established session for assuming role expires. Valid value range: [900-3600] seconds. Default to 3600 (in this case Alibaba Cloud use own default value). It supports environment variable `ALICLOUD_ASSUME_ROLE_SESSION_EXPIRATION`. 
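For illustration, a minimal `oss` backend block using the flattened assume-role arguments described above (a sketch only; the bucket, prefix, region, and ARN values are placeholders, not taken from this patch):

```hcl
terraform {
  backend "oss" {
    bucket = "example-terraform-state" # placeholder bucket name
    prefix = "path/mystate"
    region = "cn-beijing"

    # Flattened replacements for the former nested assume_role block
    assume_role_role_arn           = "acs:ram::1234567890:role/terraform" # placeholder ARN
    assume_role_session_name       = "terraform"
    assume_role_session_expiration = 3600
  }
}
```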
-> **Note:** If you want to store state in the custom OSS endpoint, you can specify a environment variable `OSS_ENDPOINT`, like "oss-cn-beijing-internal.aliyuncs.com" From 3f876d14d8fa3e85ff6c7155a5381d5983028c44 Mon Sep 17 00:00:00 2001 From: Li Kexian Date: Mon, 23 Aug 2021 09:46:35 +0800 Subject: [PATCH 012/644] fixed tencentcloud tag --- go.mod | 9 +++++---- go.sum | 18 ++++++++++++------ 2 files changed, 17 insertions(+), 10 deletions(-) diff --git a/go.mod b/go.mod index 889cea2d5b9e..89e274d7f7c3 100644 --- a/go.mod +++ b/go.mod @@ -56,7 +56,7 @@ require ( github.com/golang/mock v1.5.0 github.com/golang/protobuf v1.4.3 github.com/google/go-cmp v0.5.5 - github.com/google/go-querystring v1.0.0 // indirect + github.com/google/go-querystring v1.1.0 // indirect github.com/google/gofuzz v1.0.0 // indirect github.com/google/uuid v1.2.0 github.com/googleapis/gax-go/v2 v2.0.5 // indirect @@ -123,7 +123,7 @@ require ( github.com/mitchellh/reflectwalk v1.0.1 github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect github.com/modern-go/reflect2 v1.0.1 // indirect - github.com/mozillazg/go-httpheader v0.2.1 // indirect + github.com/mozillazg/go-httpheader v0.3.0 // indirect github.com/nu7hatch/gouuid v0.0.0-20131221200532-179d4d0c4d8d // indirect github.com/oklog/run v1.0.0 // indirect github.com/packer-community/winrmcp v0.0.0-20180921211025-c76d91c1e7db @@ -135,8 +135,9 @@ require ( github.com/smartystreets/goconvey v0.0.0-20180222194500-ef6db91d284a // indirect github.com/spf13/afero v1.2.2 github.com/spf13/pflag v1.0.3 // indirect - github.com/tencentcloud/tencentcloud-sdk-go v0.0.0-20190816164403-f8fa457a3c72 - github.com/tencentyun/cos-go-sdk-v5 v0.0.0-20190808065407-f07404cefc8c + github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common v1.0.232 + github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/tag v1.0.233 + github.com/tencentyun/cos-go-sdk-v5 v0.7.29 github.com/tombuildsstuff/giovanni v0.15.1 github.com/ulikunitz/xz v0.5.8 // indirect github.com/vmihailenco/msgpack/v4 v4.3.12 // indirect diff --git a/go.sum b/go.sum index a1c2fd9b4d2f..9b2e43bafe4c 100644 --- a/go.sum +++ b/go.sum @@ -273,8 +273,9 @@ github.com/google/go-cmp v0.5.3/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/ github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/go-cmp v0.5.5 h1:Khx7svrCpmxxtHBq5j2mp/xVjsi8hQMfNLvJFAlrGgU= github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= -github.com/google/go-querystring v1.0.0 h1:Xkwi/a1rcvNg1PPYe5vI8GbeBY/jrVuDX5ASuANWTrk= github.com/google/go-querystring v1.0.0/go.mod h1:odCYkC5MyYFN7vkCjXpyrEuKhc/BUO6wN/zVPAxq5ck= +github.com/google/go-querystring v1.1.0 h1:AnCroh3fv4ZBgVIf1Iwtovgjaw/GiKJo8M8yD/fhyJ8= +github.com/google/go-querystring v1.1.0/go.mod h1:Kcdr2DB4koayq7X8pmAG4sNG59So17icRSOU623lUBU= github.com/google/gofuzz v0.0.0-20161122191042-44d81051d367/go.mod h1:HP5RmnzzSNb993RKQDq4+1A4ia9nllfqcQFTQJedwGI= github.com/google/gofuzz v0.0.0-20170612174753-24818f796faf/go.mod h1:HP5RmnzzSNb993RKQDq4+1A4ia9nllfqcQFTQJedwGI= github.com/google/gofuzz v1.0.0 h1:A8PeW59pxE9IoFRqBp37U+mSNaQoZ46F1f0f863XSXw= @@ -528,8 +529,9 @@ github.com/modern-go/reflect2 v0.0.0-20180320133207-05fbef0ca5da/go.mod h1:bx2lN github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0= github.com/modern-go/reflect2 v1.0.1 h1:9f412s+6RmYXLWZSEzVVgPGK7C2PphHj5RJrvfx9AWI= github.com/modern-go/reflect2 v1.0.1/go.mod 
h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0= -github.com/mozillazg/go-httpheader v0.2.1 h1:geV7TrjbL8KXSyvghnFm+NyTux/hxwueTSrwhe88TQQ= github.com/mozillazg/go-httpheader v0.2.1/go.mod h1:jJ8xECTlalr6ValeXYdOF8fFUISeBAdw6E61aqQma60= +github.com/mozillazg/go-httpheader v0.3.0 h1:3brX5z8HTH+0RrNA1362Rc3HsaxyWEKtGY45YrhuINM= +github.com/mozillazg/go-httpheader v0.3.0/go.mod h1:PuT8h0pw6efvp8ZeUec1Rs7dwjK08bt6gKSReGMqtdA= github.com/munnerz/goautoneg v0.0.0-20120707110453-a547fc61f48d/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ= github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U= github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f/go.mod h1:ZdcZmHo+o7JKHSa8/e818NopupXU1YMK5fe1lsApnBw= @@ -609,10 +611,14 @@ github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81P github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA= github.com/stretchr/testify v1.6.1 h1:hDPOHmpOpP40lSULcqw7IrRb/u7w6RpDC9399XyoNd0= github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= -github.com/tencentcloud/tencentcloud-sdk-go v0.0.0-20190816164403-f8fa457a3c72 h1:mvzGHRiR9bQd5L9eS24nD6CmzSiHbQAQJx66UwMO9LQ= -github.com/tencentcloud/tencentcloud-sdk-go v0.0.0-20190816164403-f8fa457a3c72/go.mod h1:0PfYow01SHPMhKY31xa+EFz2RStxIqj6JFAJS+IkCi4= -github.com/tencentyun/cos-go-sdk-v5 v0.0.0-20190808065407-f07404cefc8c h1:iRD1CqtWUjgEVEmjwTMbP1DMzz1HRytOsgx/rlw/vNs= -github.com/tencentyun/cos-go-sdk-v5 v0.0.0-20190808065407-f07404cefc8c/go.mod h1:wk2XFUg6egk4tSDNZtXeKfe2G6690UVyt163PuUxBZk= +github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common v1.0.194/go.mod h1:7sCQWVkxcsR38nffDW057DRGk8mUjK1Ing/EFOK8s8Y= +github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common v1.0.232 h1:kwsWbh4rEw42ZDe9/812ebhbwNZxlQyZ2sTmxBOKhN4= +github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common v1.0.232/go.mod h1:7sCQWVkxcsR38nffDW057DRGk8mUjK1Ing/EFOK8s8Y= +github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/kms v1.0.194/go.mod h1:yrBKWhChnDqNz1xuXdSbWXG56XawEq0G5j1lg4VwBD4= +github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/tag v1.0.233 h1:5Tbi+jyZ2MojC6GK8V6hchwtnkP2IuENUTqSisbYOlA= +github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/tag v1.0.233/go.mod h1:sX14+NSvMjOhNFaMtP2aDy6Bss8PyFXij21gpY6+DAs= +github.com/tencentyun/cos-go-sdk-v5 v0.7.29 h1:uwRBzc70Wgtc5iQQCowqecfRT0OpCXUOZzodZHOOEDs= +github.com/tencentyun/cos-go-sdk-v5 v0.7.29/go.mod h1:4E4+bQ2gBVJcgEC9Cufwylio4mXOct2iu05WjgEBx1o= github.com/tmc/grpc-websocket-proxy v0.0.0-20200427203606-3cfed13b9966 h1:j6JEOq5QWFker+d7mFQYOhjTZonQ7YkLTHm56dbn+yM= github.com/tmc/grpc-websocket-proxy v0.0.0-20200427203606-3cfed13b9966/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U= github.com/tombuildsstuff/giovanni v0.15.1 h1:CVRaLOJ7C/eercCrKIsarfJ4SZoGMdBL9Q2deFDUXco= From 49b31d005a7e068bf67a03d14ec6cde69ee5ad15 Mon Sep 17 00:00:00 2001 From: Krzysztof Madej Date: Tue, 24 Aug 2021 02:05:43 +0200 Subject: [PATCH 013/644] Added required paramter `resource_group_name` for MSI MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Without `resource_group_name` I had > │ Error: Either an Access Key / SAS Token or the Resource Group for the Storage Account must be specified - or Azure AD Authentication must be enabled --- website/docs/language/settings/backends/azurerm.html.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git 
a/website/docs/language/settings/backends/azurerm.html.md b/website/docs/language/settings/backends/azurerm.html.md index d5796a348474..36e95a2f8162 100644 --- a/website/docs/language/settings/backends/azurerm.html.md +++ b/website/docs/language/settings/backends/azurerm.html.md @@ -35,6 +35,7 @@ When authenticating using Managed Service Identity (MSI): ```hcl terraform { backend "azurerm" { + resource_group_name = "StorageAccount-ResourceGroup" storage_account_name = "abcd1234" container_name = "tfstate" key = "prod.terraform.tfstate" @@ -125,6 +126,7 @@ When authenticating using Managed Service Identity (MSI): data "terraform_remote_state" "foo" { backend = "azurerm" config = { + resource_group_name = "StorageAccount-ResourceGroup" storage_account_name = "terraform123abc" container_name = "terraform-state" key = "prod.terraform.tfstate" @@ -215,6 +217,8 @@ The following configuration options are supported: When authenticating using the Managed Service Identity (MSI) - the following fields are also supported: +* `resource_group_name` - (Required) The Name of the Resource Group in which the Storage Account exists. + * `subscription_id` - (Optional) The Subscription ID in which the Storage Account exists. This can also be sourced from the `ARM_SUBSCRIPTION_ID` environment variable. * `tenant_id` - (Optional) The Tenant ID in which the Subscription exists. This can also be sourced from the `ARM_TENANT_ID` environment variable. From 6562466c32a8750d7a71a6cc6232e6b5a28fe13a Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Fri, 27 Aug 2021 10:43:51 -0700 Subject: [PATCH 014/644] Be explicit that community PR review is currently paused --- .github/CONTRIBUTING.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md index 76f33c66de34..3080a1d3bea8 100644 --- a/.github/CONTRIBUTING.md +++ b/.github/CONTRIBUTING.md @@ -4,6 +4,10 @@ This repository contains only Terraform core, which includes the command line in --- +**Note:** Due to current low staffing on the Terraform Core team at HashiCorp, **we are not routinely reviewing and merging community-submitted pull requests**. We do hope to begin processing them again soon once we're back up to full staffing again, but for the moment we need to ask for patience. Thanks! + +--- + **All communication on GitHub, the community forum, and other HashiCorp-provided communication channels is subject to [the HashiCorp community guidelines](https://www.hashicorp.com/community-guidelines).** This document provides guidance on Terraform contribution recommended practices. It covers what we're looking for in order to help set some expectations and help you get the most out of participation in this project. From b1d56076a4880455cbd90a3b4460eecb97f75316 Mon Sep 17 00:00:00 2001 From: Topher Ayrhart Date: Mon, 30 Aug 2021 08:59:50 -0500 Subject: [PATCH 015/644] generated -> generate MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit In the last paragraph, the word "generated" is in the wrong tense for the sentence. The correct word is "generate" (unless I misunderstand the sentence 🙂). 
--- website/docs/language/expressions/for.html.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/language/expressions/for.html.md b/website/docs/language/expressions/for.html.md index ec17059d5572..b106c3f4be64 100644 --- a/website/docs/language/expressions/for.html.md +++ b/website/docs/language/expressions/for.html.md @@ -205,6 +205,6 @@ individual resource arguments that expect complex values. Some resource types also define _nested block types_, which typically represent separate objects that belong to the containing resource in some way. You can't -dynamically generated nested blocks using `for` expressions, but you _can_ +dynamically generate nested blocks using `for` expressions, but you _can_ generate nested blocks for a resource dynamically using [`dynamic` blocks](dynamic-blocks.html). From 7ca6be82854bae12b4bece168ac44dfb2d745e2d Mon Sep 17 00:00:00 2001 From: James Bardin Date: Mon, 30 Aug 2021 13:35:47 -0400 Subject: [PATCH 016/644] correctly verify planned nested object values The validation for nested object types with computed attributes was using the incorrect function call. --- internal/plans/objchange/plan_valid.go | 2 +- internal/plans/objchange/plan_valid_test.go | 213 ++++++++++++++++++++ 2 files changed, 214 insertions(+), 1 deletion(-) diff --git a/internal/plans/objchange/plan_valid.go b/internal/plans/objchange/plan_valid.go index 3555e02ed590..e26979e84b9b 100644 --- a/internal/plans/objchange/plan_valid.go +++ b/internal/plans/objchange/plan_valid.go @@ -424,7 +424,7 @@ func assertPlannedObjectValid(schema *configschema.Object, prior, config, planne if !prior.IsNull() && prior.HasIndex(idx).True() { priorEV = prior.Index(idx) } - moreErrs := assertPlannedObjectValid(schema, priorEV, configEV, plannedEV, path) + moreErrs := assertPlannedAttrsValid(schema.Attributes, priorEV, configEV, plannedEV, path) errs = append(errs, moreErrs...) } for it := config.ElementIterator(); it.Next(); { diff --git a/internal/plans/objchange/plan_valid_test.go b/internal/plans/objchange/plan_valid_test.go index a7166fe7b550..783e7e15fca8 100644 --- a/internal/plans/objchange/plan_valid_test.go +++ b/internal/plans/objchange/plan_valid_test.go @@ -1222,8 +1222,221 @@ func TestAssertPlanValid(t *testing.T) { }), []string{`.bloop: planned value cty.ListVal([]cty.Value{cty.ObjectVal(map[string]cty.Value{"blop":cty.StringVal("ok")})}) for a non-computed attribute`}, }, + "computed in nested objects": { + &configschema.Block{ + Attributes: map[string]*configschema.Attribute{ + "map": { + NestedType: &configschema.Object{ + Nesting: configschema.NestingMap, + Attributes: map[string]*configschema.Attribute{ + "name": { + Type: cty.String, + Computed: true, + }, + }, + }, + }, + // When an object has dynamic attrs, the map may be + // handled as an object. 
+ "map_as_obj": { + NestedType: &configschema.Object{ + Nesting: configschema.NestingMap, + Attributes: map[string]*configschema.Attribute{ + "name": { + Type: cty.String, + Computed: true, + }, + }, + }, + }, + "list": { + NestedType: &configschema.Object{ + Nesting: configschema.NestingList, + Attributes: map[string]*configschema.Attribute{ + "name": { + Type: cty.String, + Computed: true, + }, + }, + }, + }, + "set": { + NestedType: &configschema.Object{ + Nesting: configschema.NestingSet, + Attributes: map[string]*configschema.Attribute{ + "name": { + Type: cty.String, + Computed: true, + }, + }, + }, + }, + "single": { + NestedType: &configschema.Object{ + Nesting: configschema.NestingSingle, + Attributes: map[string]*configschema.Attribute{ + "name": { + Type: cty.DynamicPseudoType, + Computed: true, + }, + }, + }, + }, + }, + }, + cty.NullVal(cty.Object(map[string]cty.Type{ + "map": cty.Map(cty.Object(map[string]cty.Type{ + "name": cty.String, + })), + "map_as_obj": cty.Map(cty.Object(map[string]cty.Type{ + "name": cty.DynamicPseudoType, + })), + "list": cty.List(cty.Object(map[string]cty.Type{ + "name": cty.String, + })), + "set": cty.Set(cty.Object(map[string]cty.Type{ + "name": cty.String, + })), + "single": cty.Object(map[string]cty.Type{ + "name": cty.String, + }), + })), + cty.ObjectVal(map[string]cty.Value{ + "map": cty.MapVal(map[string]cty.Value{ + "one": cty.ObjectVal(map[string]cty.Value{ + "name": cty.NullVal(cty.String), + }), + }), + "map_as_obj": cty.ObjectVal(map[string]cty.Value{ + "one": cty.ObjectVal(map[string]cty.Value{ + "name": cty.NullVal(cty.DynamicPseudoType), + }), + }), + "list": cty.ListVal([]cty.Value{ + cty.ObjectVal(map[string]cty.Value{ + "name": cty.NullVal(cty.String), + }), + }), + "set": cty.SetVal([]cty.Value{ + cty.ObjectVal(map[string]cty.Value{ + "name": cty.NullVal(cty.String), + }), + }), + "single": cty.ObjectVal(map[string]cty.Value{ + "name": cty.NullVal(cty.String), + }), + }), + cty.ObjectVal(map[string]cty.Value{ + "map": cty.MapVal(map[string]cty.Value{ + "one": cty.ObjectVal(map[string]cty.Value{ + "name": cty.NullVal(cty.String), + }), + }), + "map_as_obj": cty.ObjectVal(map[string]cty.Value{ + "one": cty.ObjectVal(map[string]cty.Value{ + "name": cty.StringVal("computed"), + }), + }), + "list": cty.ListVal([]cty.Value{ + cty.ObjectVal(map[string]cty.Value{ + "name": cty.NullVal(cty.String), + }), + }), + "set": cty.SetVal([]cty.Value{ + cty.ObjectVal(map[string]cty.Value{ + "name": cty.NullVal(cty.String), + }), + }), + "single": cty.ObjectVal(map[string]cty.Value{ + "name": cty.NullVal(cty.String), + }), + }), + nil, + }, + } + + for name, test := range tests { + t.Run(name, func(t *testing.T) { + errs := AssertPlanValid(test.Schema, test.Prior, test.Config, test.Planned) + + wantErrs := make(map[string]struct{}) + gotErrs := make(map[string]struct{}) + for _, err := range errs { + gotErrs[tfdiags.FormatError(err)] = struct{}{} + } + for _, msg := range test.WantErrs { + wantErrs[msg] = struct{}{} + } + + t.Logf( + "\nprior: %sconfig: %splanned: %s", + dump.Value(test.Planned), + dump.Value(test.Config), + dump.Value(test.Planned), + ) + for msg := range wantErrs { + if _, ok := gotErrs[msg]; !ok { + t.Errorf("missing expected error: %s", msg) + } + } + for msg := range gotErrs { + if _, ok := wantErrs[msg]; !ok { + t.Errorf("unexpected extra error: %s", msg) + } + } + }) } +} +func TestAssertPlanValidTEST(t *testing.T) { + tests := map[string]struct { + Schema *configschema.Block + Prior cty.Value + Config cty.Value + Planned 
cty.Value + WantErrs []string + }{ + "computed in map": { + &configschema.Block{ + Attributes: map[string]*configschema.Attribute{ + "items": { + NestedType: &configschema.Object{ + Nesting: configschema.NestingMap, + Attributes: map[string]*configschema.Attribute{ + "name": { + Type: cty.String, + Computed: true, + Optional: true, + }, + }, + }, + Required: true, + }, + }, + }, + cty.NullVal(cty.Object(map[string]cty.Type{ + "items": cty.Map(cty.Object(map[string]cty.Type{ + "name": cty.String, + })), + })), + cty.ObjectVal(map[string]cty.Value{ + "items": cty.MapVal(map[string]cty.Value{ + "one": cty.ObjectVal(map[string]cty.Value{ + "name": cty.NullVal(cty.String), + //"name": cty.StringVal("computed"), + }), + }), + }), + cty.ObjectVal(map[string]cty.Value{ + "items": cty.MapVal(map[string]cty.Value{ + "one": cty.ObjectVal(map[string]cty.Value{ + "name": cty.StringVal("computed"), + }), + }), + }), + nil, + }, + } for name, test := range tests { t.Run(name, func(t *testing.T) { errs := AssertPlanValid(test.Schema, test.Prior, test.Config, test.Planned) From 903797084a361680f4c00f4b9439e7b7454f220a Mon Sep 17 00:00:00 2001 From: James Bardin Date: Mon, 30 Aug 2021 13:53:49 -0400 Subject: [PATCH 017/644] objects vs maps in nested object types When using NestedMap objects, unify the codepath for both maps and objects as they may be interchangeable. --- internal/plans/objchange/plan_valid.go | 97 ++++++++------------- internal/plans/objchange/plan_valid_test.go | 2 +- 2 files changed, 37 insertions(+), 62 deletions(-) diff --git a/internal/plans/objchange/plan_valid.go b/internal/plans/objchange/plan_valid.go index e26979e84b9b..5bcfe8dd580d 100644 --- a/internal/plans/objchange/plan_valid.go +++ b/internal/plans/objchange/plan_valid.go @@ -368,71 +368,46 @@ func assertPlannedObjectValid(schema *configschema.Object, prior, config, planne case configschema.NestingMap: // A NestingMap might either be a map or an object, depending on - // whether there are dynamically-typed attributes inside, but - // that's decided statically and so all values will have the same - // kind. - if planned.Type().IsObjectType() { - plannedAtys := planned.Type().AttributeTypes() - configAtys := config.Type().AttributeTypes() - for k := range plannedAtys { - if _, ok := configAtys[k]; !ok { - errs = append(errs, path.NewErrorf("block key %q from plan is not present in config", k)) - continue - } - path := append(path, cty.GetAttrStep{Name: k}) + // whether there are dynamically-typed attributes inside, so we will + // break these down to maps to handle them both in the same manner. + plannedVals := map[string]cty.Value{} + configVals := map[string]cty.Value{} + priorVals := map[string]cty.Value{} + + if !planned.IsNull() { + plannedVals = planned.AsValueMap() + } + if !config.IsNull() { + configVals = config.AsValueMap() + } + if !prior.IsNull() { + priorVals = prior.AsValueMap() + } - plannedEV := planned.GetAttr(k) - if !plannedEV.IsKnown() { - errs = append(errs, path.NewErrorf("element representing nested block must not be unknown itself; set nested attribute values to unknown instead")) - continue - } - configEV := config.GetAttr(k) - priorEV := cty.NullVal(schema.ImpliedType()) - if !prior.IsNull() && prior.Type().HasAttribute(k) { - priorEV = prior.GetAttr(k) - } - moreErrs := assertPlannedAttrsValid(schema.Attributes, priorEV, configEV, plannedEV, path) - errs = append(errs, moreErrs...) 
- } - for k := range configAtys { - if _, ok := plannedAtys[k]; !ok { - errs = append(errs, path.NewErrorf("block key %q from config is not present in plan", k)) - continue - } + for k, plannedEV := range plannedVals { + configEV, ok := configVals[k] + if !ok { + errs = append(errs, path.NewErrorf("block key %q from plan is not present in config", k)) + continue } - } else { - plannedL := planned.LengthInt() - configL := config.LengthInt() - if plannedL != configL { - errs = append(errs, path.NewErrorf("block count in plan (%d) disagrees with count in config (%d)", plannedL, configL)) - return errs + path := append(path, cty.GetAttrStep{Name: k}) + + if !plannedEV.IsKnown() { + errs = append(errs, path.NewErrorf("element representing nested block must not be unknown itself; set nested attribute values to unknown instead")) + continue } - for it := planned.ElementIterator(); it.Next(); { - idx, plannedEV := it.Element() - path := append(path, cty.IndexStep{Key: idx}) - if !plannedEV.IsKnown() { - errs = append(errs, path.NewErrorf("element representing nested block must not be unknown itself; set nested attribute values to unknown instead")) - continue - } - k := idx.AsString() - if !config.HasIndex(idx).True() { - errs = append(errs, path.NewErrorf("block key %q from plan is not present in config", k)) - continue - } - configEV := config.Index(idx) - priorEV := cty.NullVal(schema.ImpliedType()) - if !prior.IsNull() && prior.HasIndex(idx).True() { - priorEV = prior.Index(idx) - } - moreErrs := assertPlannedAttrsValid(schema.Attributes, priorEV, configEV, plannedEV, path) - errs = append(errs, moreErrs...) + + priorEV, ok := priorVals[k] + if !ok { + priorEV = cty.NullVal(schema.ImpliedType()) } - for it := config.ElementIterator(); it.Next(); { - idx, _ := it.Element() - if !planned.HasIndex(idx).True() { - errs = append(errs, path.NewErrorf("block key %q from config is not present in plan", idx.AsString())) - continue - } + moreErrs := assertPlannedAttrsValid(schema.Attributes, priorEV, configEV, plannedEV, path) + errs = append(errs, moreErrs...) + } + for k := range configVals { + if _, ok := plannedVals[k]; !ok { + errs = append(errs, path.NewErrorf("block key %q from config is not present in plan", k)) + continue } } diff --git a/internal/plans/objchange/plan_valid_test.go b/internal/plans/objchange/plan_valid_test.go index 783e7e15fca8..28150002b8c7 100644 --- a/internal/plans/objchange/plan_valid_test.go +++ b/internal/plans/objchange/plan_valid_test.go @@ -1307,7 +1307,7 @@ func TestAssertPlanValid(t *testing.T) { "name": cty.NullVal(cty.String), }), }), - "map_as_obj": cty.ObjectVal(map[string]cty.Value{ + "map_as_obj": cty.MapVal(map[string]cty.Value{ "one": cty.ObjectVal(map[string]cty.Value{ "name": cty.NullVal(cty.DynamicPseudoType), }), From ea68d79ea20b23e0bd3627f1e8609733093e4155 Mon Sep 17 00:00:00 2001 From: James Bardin Date: Mon, 30 Aug 2021 14:12:47 -0400 Subject: [PATCH 018/644] nested object values can be computed While blocks were not allowed to be computed by the provider, nested objects can be. Remove the errors regarding blocks and verify unknown values are valid. 
--- internal/plans/objchange/plan_valid.go | 51 ++++------ internal/plans/objchange/plan_valid_test.go | 101 +++++++++++++++++++- 2 files changed, 119 insertions(+), 33 deletions(-) diff --git a/internal/plans/objchange/plan_valid.go b/internal/plans/objchange/plan_valid.go index 5bcfe8dd580d..2706a6c52e60 100644 --- a/internal/plans/objchange/plan_valid.go +++ b/internal/plans/objchange/plan_valid.go @@ -39,11 +39,11 @@ func AssertPlanValid(schema *configschema.Block, priorState, config, plannedStat func assertPlanValid(schema *configschema.Block, priorState, config, plannedState cty.Value, path cty.Path) []error { var errs []error if plannedState.IsNull() && !config.IsNull() { - errs = append(errs, path.NewErrorf("planned for absense but config wants existence")) + errs = append(errs, path.NewErrorf("planned for absence but config wants existence")) return errs } if config.IsNull() && !plannedState.IsNull() { - errs = append(errs, path.NewErrorf("planned for existence but config wants absense")) + errs = append(errs, path.NewErrorf("planned for existence but config wants absence")) return errs } if plannedState.IsNull() { @@ -286,6 +286,11 @@ func assertPlannedValueValid(attrS *configschema.Attribute, priorV, configV, pla } return errs } + } else { + if attrS.Computed { + errs = append(errs, path.NewErrorf("configuration present for computed attribute")) + return errs + } } // If this attribute has a NestedType, validate the nested object @@ -317,11 +322,11 @@ func assertPlannedObjectValid(schema *configschema.Object, prior, config, planne var errs []error if planned.IsNull() && !config.IsNull() { - errs = append(errs, path.NewErrorf("planned for absense but config wants existence")) + errs = append(errs, path.NewErrorf("planned for absence but config wants existence")) return errs } if config.IsNull() && !planned.IsNull() { - errs = append(errs, path.NewErrorf("planned for existence but config wants absense")) + errs = append(errs, path.NewErrorf("planned for existence but config wants absence")) return errs } if planned.IsNull() { @@ -349,10 +354,6 @@ func assertPlannedObjectValid(schema *configschema.Object, prior, config, planne for it := planned.ElementIterator(); it.Next(); { idx, plannedEV := it.Element() path := append(path, cty.IndexStep{Key: idx}) - if !plannedEV.IsKnown() { - errs = append(errs, path.NewErrorf("element representing nested block must not be unknown itself; set nested attribute values to unknown instead")) - continue - } if !config.HasIndex(idx).True() { continue // should never happen since we checked the lengths above } @@ -387,16 +388,11 @@ func assertPlannedObjectValid(schema *configschema.Object, prior, config, planne for k, plannedEV := range plannedVals { configEV, ok := configVals[k] if !ok { - errs = append(errs, path.NewErrorf("block key %q from plan is not present in config", k)) + errs = append(errs, path.NewErrorf("map key %q from plan is not present in config", k)) continue } path := append(path, cty.GetAttrStep{Name: k}) - if !plannedEV.IsKnown() { - errs = append(errs, path.NewErrorf("element representing nested block must not be unknown itself; set nested attribute values to unknown instead")) - continue - } - priorEV, ok := priorVals[k] if !ok { priorEV = cty.NullVal(schema.ImpliedType()) @@ -406,31 +402,22 @@ func assertPlannedObjectValid(schema *configschema.Object, prior, config, planne } for k := range configVals { if _, ok := plannedVals[k]; !ok { - errs = append(errs, path.NewErrorf("block key %q from config is not present in plan", k)) + 
errs = append(errs, path.NewErrorf("map key %q from config is not present in plan", k)) continue } } case configschema.NestingSet: + plannedL := planned.LengthInt() + configL := config.LengthInt() + if plannedL != configL { + errs = append(errs, path.NewErrorf("count in plan (%d) disagrees with count in config (%d)", plannedL, configL)) + return errs + } // Because set elements have no identifier with which to correlate - // them, we can't robustly validate the plan for a nested block + // them, we can't robustly validate the plan for a nested object // backed by a set, and so unfortunately we need to just trust the - // provider to do the right thing. :( - // - // (In principle we could correlate elements by matching the - // subset of attributes explicitly set in config, except for the - // special diff suppression rule which allows for there to be a - // planned value that is constructed by mixing part of a prior - // value with part of a config value, creating an entirely new - // element that is not present in either prior nor config.) - for it := planned.ElementIterator(); it.Next(); { - idx, plannedEV := it.Element() - path := append(path, cty.IndexStep{Key: idx}) - if !plannedEV.IsKnown() { - errs = append(errs, path.NewErrorf("element representing nested block must not be unknown itself; set nested attribute values to unknown instead")) - continue - } - } + // provider to do the right thing. } return errs diff --git a/internal/plans/objchange/plan_valid_test.go b/internal/plans/objchange/plan_valid_test.go index 28150002b8c7..dedb3958b010 100644 --- a/internal/plans/objchange/plan_valid_test.go +++ b/internal/plans/objchange/plan_valid_test.go @@ -1222,7 +1222,7 @@ func TestAssertPlanValid(t *testing.T) { }), []string{`.bloop: planned value cty.ListVal([]cty.Value{cty.ObjectVal(map[string]cty.Value{"blop":cty.StringVal("ok")})}) for a non-computed attribute`}, }, - "computed in nested objects": { + "computed within nested objects": { &configschema.Block{ Attributes: map[string]*configschema.Attribute{ "map": { @@ -1353,6 +1353,105 @@ func TestAssertPlanValid(t *testing.T) { }), nil, }, + "computed nested objects": { + &configschema.Block{ + Attributes: map[string]*configschema.Attribute{ + "map": { + NestedType: &configschema.Object{ + Nesting: configschema.NestingMap, + Attributes: map[string]*configschema.Attribute{ + "name": { + Type: cty.String, + }, + }, + }, + Computed: true, + }, + "list": { + NestedType: &configschema.Object{ + Nesting: configschema.NestingList, + Attributes: map[string]*configschema.Attribute{ + "name": { + Type: cty.String, + }, + }, + }, + Computed: true, + }, + "set": { + NestedType: &configschema.Object{ + Nesting: configschema.NestingSet, + Attributes: map[string]*configschema.Attribute{ + "name": { + Type: cty.String, + }, + }, + }, + Computed: true, + }, + "single": { + NestedType: &configschema.Object{ + Nesting: configschema.NestingSingle, + Attributes: map[string]*configschema.Attribute{ + "name": { + Type: cty.DynamicPseudoType, + }, + }, + }, + Computed: true, + }, + }, + }, + cty.NullVal(cty.Object(map[string]cty.Type{ + "map": cty.Map(cty.Object(map[string]cty.Type{ + "name": cty.String, + })), + "list": cty.List(cty.Object(map[string]cty.Type{ + "name": cty.String, + })), + "set": cty.Set(cty.Object(map[string]cty.Type{ + "name": cty.String, + })), + "single": cty.Object(map[string]cty.Type{ + "name": cty.String, + }), + })), + cty.ObjectVal(map[string]cty.Value{ + "map": cty.NullVal(cty.Map(cty.Object(map[string]cty.Type{ + "name": 
cty.String,
+				}))),
+				"list": cty.NullVal(cty.List(cty.Object(map[string]cty.Type{
+					"name": cty.String,
+				}))),
+				"set": cty.NullVal(cty.Set(cty.Object(map[string]cty.Type{
+					"name": cty.String,
+				}))),
+				"single": cty.NullVal(cty.Object(map[string]cty.Type{
+					"name": cty.String,
+				})),
+			}),
+			cty.ObjectVal(map[string]cty.Value{
+				"map": cty.MapVal(map[string]cty.Value{
+					"one": cty.UnknownVal(cty.Object(map[string]cty.Type{
+						"name": cty.String,
+					})),
+				}),
+				"list": cty.ListVal([]cty.Value{
+					cty.UnknownVal(cty.Object(map[string]cty.Type{
+						"name": cty.String,
+					})),
+				}),
+				"set": cty.SetVal([]cty.Value{
+					cty.UnknownVal(cty.Object(map[string]cty.Type{
+						"name": cty.String,
+					})),
+				}),
+				"single": cty.UnknownVal(cty.Object(map[string]cty.Type{
+					"name": cty.String,
+				})),
+			}),
+			nil,
+		},
 	}

 	for name, test := range tests {

From 22b36d1f4c6c6695ca54af33cda2bc602d714db8 Mon Sep 17 00:00:00 2001
From: Martin Atkins
Date: Fri, 20 Aug 2021 16:08:12 -0700
Subject: [PATCH 019/644] Field for the previous address of each resource instance in the plan

In order to expose the effect of any relevant "moved" statements we dealt
with prior to creating the plan, we'll record with each
ResourceInstanceChange both its current address and the address it was
tracked at for the previous run.

To save consumers of these objects from having to special-case the
situation where there _was_ no previous run (e.g. because this is a
Create change), we'll just pretend the previous run address was the same
as the current address in that case, the same as for an update without
any renaming in effect.

This includes a breaking change to the plan file format, but one that
doesn't require a version number increment because there is no ambiguity
between the two formats and so mismatched parsers will already fail with
an error message.

As of this commit we've just added the new field but not yet populated it
with any useful information: it always just matches Addr. A future commit
will wire this up to the result of applying the moves so that we can
populate it correctly. We also don't yet expose this new information
anywhere in the UI layer.
---
 internal/plans/changes.go | 20 +
 internal/plans/changes_src.go | 20 +
 .../plans/internal/planproto/planfile.pb.go | 362 ++++++------
 .../plans/internal/planproto/planfile.proto | 44 +--
 internal/plans/planfile/tfplan.go | 98 ++---
 internal/plans/planfile/tfplan_test.go | 17 +-
 .../node_resource_abstract_instance.go | 24 +-
 7 files changed, 241 insertions(+), 344 deletions(-)

diff --git a/internal/plans/changes.go b/internal/plans/changes.go
index bcab96ed6dbe..ba06244cb92c 100644
--- a/internal/plans/changes.go
+++ b/internal/plans/changes.go
@@ -147,6 +147,19 @@ type ResourceInstanceChange struct {
 	// will apply to.
 	Addr addrs.AbsResourceInstance

+	// PrevRunAddr is the absolute address that this resource instance had at
+	// the conclusion of a previous run.
+	//
+	// This will typically be the same as Addr, but can be different if the
+	// previous resource instance was subject to a "moved" block that we
+	// handled in the process of creating this plan.
+	//
+	// For the initial creation of a resource instance there isn't really any
+	// meaningful "previous run address", but PrevRunAddr will still be set
+	// equal to Addr in that case in order to simplify logic elsewhere which
+	// aims to detect and react to the movement of instances between addresses.
+ PrevRunAddr addrs.AbsResourceInstance + // DeposedKey is the identifier for a deposed object associated with the // given instance, or states.NotDeposed if this change applies to the // current object. @@ -203,8 +216,15 @@ func (rc *ResourceInstanceChange) Encode(ty cty.Type) (*ResourceInstanceChangeSr if err != nil { return nil, err } + prevRunAddr := rc.PrevRunAddr + if prevRunAddr.Resource.Resource.Type == "" { + // Suggests an old caller that hasn't been properly updated to + // populate this yet. + prevRunAddr = rc.Addr + } return &ResourceInstanceChangeSrc{ Addr: rc.Addr, + PrevRunAddr: prevRunAddr, DeposedKey: rc.DeposedKey, ProviderAddr: rc.ProviderAddr, ChangeSrc: *cs, diff --git a/internal/plans/changes_src.go b/internal/plans/changes_src.go index b254f176002a..69330a21d897 100644 --- a/internal/plans/changes_src.go +++ b/internal/plans/changes_src.go @@ -16,6 +16,19 @@ type ResourceInstanceChangeSrc struct { // will apply to. Addr addrs.AbsResourceInstance + // PrevRunAddr is the absolute address that this resource instance had at + // the conclusion of a previous run. + // + // This will typically be the same as Addr, but can be different if the + // previous resource instance was subject to a "moved" block that we + // handled in the process of creating this plan. + // + // For the initial creation of a resource instance there isn't really any + // meaningful "previous run address", but PrevRunAddr will still be set + // equal to Addr in that case in order to simplify logic elsewhere which + // aims to detect and react to the movement of instances between addresses. + PrevRunAddr addrs.AbsResourceInstance + // DeposedKey is the identifier for a deposed object associated with the // given instance, or states.NotDeposed if this change applies to the // current object. @@ -66,8 +79,15 @@ func (rcs *ResourceInstanceChangeSrc) Decode(ty cty.Type) (*ResourceInstanceChan if err != nil { return nil, err } + prevRunAddr := rcs.PrevRunAddr + if prevRunAddr.Resource.Resource.Type == "" { + // Suggests an old caller that hasn't been properly updated to + // populate this yet. + prevRunAddr = rcs.Addr + } return &ResourceInstanceChange{ Addr: rcs.Addr, + PrevRunAddr: prevRunAddr, DeposedKey: rcs.DeposedKey, ProviderAddr: rcs.ProviderAddr, Change: *change, diff --git a/internal/plans/internal/planproto/planfile.pb.go b/internal/plans/internal/planproto/planfile.pb.go index e5e6dc7aca48..9f541946c6d5 100644 --- a/internal/plans/internal/planproto/planfile.pb.go +++ b/internal/plans/internal/planproto/planfile.pb.go @@ -194,52 +194,6 @@ func (ResourceInstanceActionReason) EnumDescriptor() ([]byte, []int) { return file_planfile_proto_rawDescGZIP(), []int{2} } -type ResourceInstanceChange_ResourceMode int32 - -const ( - ResourceInstanceChange_managed ResourceInstanceChange_ResourceMode = 0 // for "resource" blocks in configuration - ResourceInstanceChange_data ResourceInstanceChange_ResourceMode = 1 // for "data" blocks in configuration -) - -// Enum value maps for ResourceInstanceChange_ResourceMode. 
-var ( - ResourceInstanceChange_ResourceMode_name = map[int32]string{ - 0: "managed", - 1: "data", - } - ResourceInstanceChange_ResourceMode_value = map[string]int32{ - "managed": 0, - "data": 1, - } -) - -func (x ResourceInstanceChange_ResourceMode) Enum() *ResourceInstanceChange_ResourceMode { - p := new(ResourceInstanceChange_ResourceMode) - *p = x - return p -} - -func (x ResourceInstanceChange_ResourceMode) String() string { - return protoimpl.X.EnumStringOf(x.Descriptor(), protoreflect.EnumNumber(x)) -} - -func (ResourceInstanceChange_ResourceMode) Descriptor() protoreflect.EnumDescriptor { - return file_planfile_proto_enumTypes[3].Descriptor() -} - -func (ResourceInstanceChange_ResourceMode) Type() protoreflect.EnumType { - return &file_planfile_proto_enumTypes[3] -} - -func (x ResourceInstanceChange_ResourceMode) Number() protoreflect.EnumNumber { - return protoreflect.EnumNumber(x) -} - -// Deprecated: Use ResourceInstanceChange_ResourceMode.Descriptor instead. -func (ResourceInstanceChange_ResourceMode) EnumDescriptor() ([]byte, []int) { - return file_planfile_proto_rawDescGZIP(), []int{3, 0} -} - // Plan is the root message type for the tfplan file type Plan struct { state protoimpl.MessageState @@ -553,27 +507,16 @@ type ResourceInstanceChange struct { sizeCache protoimpl.SizeCache unknownFields protoimpl.UnknownFields - // module_path is an address to the module that defined this resource. - // module_path is omitted for resources in the root module. For descendent modules - // it is a string like module.foo.module.bar as would be seen at the beginning of a - // resource address. The format of this string is not yet frozen and so external - // callers should treat it as an opaque key for filtering purposes. - ModulePath string `protobuf:"bytes,1,opt,name=module_path,json=modulePath,proto3" json:"module_path,omitempty"` - // mode is the resource mode. - Mode ResourceInstanceChange_ResourceMode `protobuf:"varint,2,opt,name=mode,proto3,enum=tfplan.ResourceInstanceChange_ResourceMode" json:"mode,omitempty"` - // type is the resource type name, like "aws_instance". - Type string `protobuf:"bytes,3,opt,name=type,proto3" json:"type,omitempty"` - // name is the logical name of the resource as defined in configuration. - // For example, in aws_instance.foo this would be "foo". - Name string `protobuf:"bytes,4,opt,name=name,proto3" json:"name,omitempty"` - // instance_key is either an integer index or a string key, depending on which iteration - // attributes ("count" or "for_each") are being used for this resource. If none - // are in use, this field is omitted. + // addr is a string representation of the resource instance address that + // this change will apply to. + Addr string `protobuf:"bytes,13,opt,name=addr,proto3" json:"addr,omitempty"` + // prev_run_addr is a string representation of the address at which + // this resource instance was tracked during the previous apply operation. // - // Types that are assignable to InstanceKey: - // *ResourceInstanceChange_Str - // *ResourceInstanceChange_Int - InstanceKey isResourceInstanceChange_InstanceKey `protobuf_oneof:"instance_key"` + // This is populated only if it would be different from addr due to + // Terraform having reacted to refactoring annotations in the configuration. + // If empty, the previous run address is the same as the current address. 
+ PrevRunAddr string `protobuf:"bytes,14,opt,name=prev_run_addr,json=prevRunAddr,proto3" json:"prev_run_addr,omitempty"` // deposed_key, if set, indicates that this change applies to a deposed // object for the indicated instance with the given deposed key. If not // set, the change applies to the instance's current object. @@ -583,8 +526,7 @@ type ResourceInstanceChange struct { // apply it. Provider string `protobuf:"bytes,8,opt,name=provider,proto3" json:"provider,omitempty"` // Description of the proposed change. May use "create", "read", "update", - // "replace" and "delete" actions. "no-op" changes are not currently used here - // but consumers must accept and discard them to allow for future expansion. + // "replace", "delete" and "no-op" actions. Change *Change `protobuf:"bytes,9,opt,name=change,proto3" json:"change,omitempty"` // raw blob value provided by the provider as additional context for the // change. Must be considered an opaque value for any consumer other than @@ -633,55 +575,20 @@ func (*ResourceInstanceChange) Descriptor() ([]byte, []int) { return file_planfile_proto_rawDescGZIP(), []int{3} } -func (x *ResourceInstanceChange) GetModulePath() string { - if x != nil { - return x.ModulePath - } - return "" -} - -func (x *ResourceInstanceChange) GetMode() ResourceInstanceChange_ResourceMode { - if x != nil { - return x.Mode - } - return ResourceInstanceChange_managed -} - -func (x *ResourceInstanceChange) GetType() string { +func (x *ResourceInstanceChange) GetAddr() string { if x != nil { - return x.Type + return x.Addr } return "" } -func (x *ResourceInstanceChange) GetName() string { +func (x *ResourceInstanceChange) GetPrevRunAddr() string { if x != nil { - return x.Name + return x.PrevRunAddr } return "" } -func (m *ResourceInstanceChange) GetInstanceKey() isResourceInstanceChange_InstanceKey { - if m != nil { - return m.InstanceKey - } - return nil -} - -func (x *ResourceInstanceChange) GetStr() string { - if x, ok := x.GetInstanceKey().(*ResourceInstanceChange_Str); ok { - return x.Str - } - return "" -} - -func (x *ResourceInstanceChange) GetInt() int64 { - if x, ok := x.GetInstanceKey().(*ResourceInstanceChange_Int); ok { - return x.Int - } - return 0 -} - func (x *ResourceInstanceChange) GetDeposedKey() string { if x != nil { return x.DeposedKey @@ -724,22 +631,6 @@ func (x *ResourceInstanceChange) GetActionReason() ResourceInstanceActionReason return ResourceInstanceActionReason_NONE } -type isResourceInstanceChange_InstanceKey interface { - isResourceInstanceChange_InstanceKey() -} - -type ResourceInstanceChange_Str struct { - Str string `protobuf:"bytes,5,opt,name=str,proto3,oneof"` -} - -type ResourceInstanceChange_Int struct { - Int int64 `protobuf:"varint,6,opt,name=int,proto3,oneof"` -} - -func (*ResourceInstanceChange_Str) isResourceInstanceChange_InstanceKey() {} - -func (*ResourceInstanceChange_Int) isResourceInstanceChange_InstanceKey() {} - type OutputChange struct { state protoimpl.MessageState sizeCache protoimpl.SizeCache @@ -1125,84 +1016,73 @@ var file_planfile_proto_rawDesc = []byte{ 0x69, 0x76, 0x65, 0x5f, 0x70, 0x61, 0x74, 0x68, 0x73, 0x18, 0x04, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x0c, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x50, 0x61, 0x74, 0x68, 0x52, 0x13, 0x61, 0x66, 0x74, 0x65, 0x72, 0x53, 0x65, 0x6e, 0x73, 0x69, 0x74, 0x69, 0x76, 0x65, 0x50, 0x61, 0x74, - 0x68, 0x73, 0x22, 0x84, 0x04, 0x0a, 0x16, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x49, - 0x6e, 0x73, 0x74, 0x61, 0x6e, 0x63, 0x65, 0x43, 0x68, 0x61, 0x6e, 0x67, 0x65, 
0x12, 0x1f, 0x0a, - 0x0b, 0x6d, 0x6f, 0x64, 0x75, 0x6c, 0x65, 0x5f, 0x70, 0x61, 0x74, 0x68, 0x18, 0x01, 0x20, 0x01, - 0x28, 0x09, 0x52, 0x0a, 0x6d, 0x6f, 0x64, 0x75, 0x6c, 0x65, 0x50, 0x61, 0x74, 0x68, 0x12, 0x3f, - 0x0a, 0x04, 0x6d, 0x6f, 0x64, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x2b, 0x2e, 0x74, - 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x49, 0x6e, - 0x73, 0x74, 0x61, 0x6e, 0x63, 0x65, 0x43, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x2e, 0x52, 0x65, 0x73, - 0x6f, 0x75, 0x72, 0x63, 0x65, 0x4d, 0x6f, 0x64, 0x65, 0x52, 0x04, 0x6d, 0x6f, 0x64, 0x65, 0x12, - 0x12, 0x0a, 0x04, 0x74, 0x79, 0x70, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x74, - 0x79, 0x70, 0x65, 0x12, 0x12, 0x0a, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x04, 0x20, 0x01, 0x28, - 0x09, 0x52, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x12, 0x12, 0x0a, 0x03, 0x73, 0x74, 0x72, 0x18, 0x05, - 0x20, 0x01, 0x28, 0x09, 0x48, 0x00, 0x52, 0x03, 0x73, 0x74, 0x72, 0x12, 0x12, 0x0a, 0x03, 0x69, - 0x6e, 0x74, 0x18, 0x06, 0x20, 0x01, 0x28, 0x03, 0x48, 0x00, 0x52, 0x03, 0x69, 0x6e, 0x74, 0x12, - 0x1f, 0x0a, 0x0b, 0x64, 0x65, 0x70, 0x6f, 0x73, 0x65, 0x64, 0x5f, 0x6b, 0x65, 0x79, 0x18, 0x07, - 0x20, 0x01, 0x28, 0x09, 0x52, 0x0a, 0x64, 0x65, 0x70, 0x6f, 0x73, 0x65, 0x64, 0x4b, 0x65, 0x79, - 0x12, 0x1a, 0x0a, 0x08, 0x70, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x18, 0x08, 0x20, 0x01, - 0x28, 0x09, 0x52, 0x08, 0x70, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x12, 0x26, 0x0a, 0x06, - 0x63, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x18, 0x09, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x0e, 0x2e, 0x74, + 0x68, 0x73, 0x22, 0xd3, 0x02, 0x0a, 0x16, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x49, + 0x6e, 0x73, 0x74, 0x61, 0x6e, 0x63, 0x65, 0x43, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x12, 0x12, 0x0a, + 0x04, 0x61, 0x64, 0x64, 0x72, 0x18, 0x0d, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x61, 0x64, 0x64, + 0x72, 0x12, 0x22, 0x0a, 0x0d, 0x70, 0x72, 0x65, 0x76, 0x5f, 0x72, 0x75, 0x6e, 0x5f, 0x61, 0x64, + 0x64, 0x72, 0x18, 0x0e, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b, 0x70, 0x72, 0x65, 0x76, 0x52, 0x75, + 0x6e, 0x41, 0x64, 0x64, 0x72, 0x12, 0x1f, 0x0a, 0x0b, 0x64, 0x65, 0x70, 0x6f, 0x73, 0x65, 0x64, + 0x5f, 0x6b, 0x65, 0x79, 0x18, 0x07, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0a, 0x64, 0x65, 0x70, 0x6f, + 0x73, 0x65, 0x64, 0x4b, 0x65, 0x79, 0x12, 0x1a, 0x0a, 0x08, 0x70, 0x72, 0x6f, 0x76, 0x69, 0x64, + 0x65, 0x72, 0x18, 0x08, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x70, 0x72, 0x6f, 0x76, 0x69, 0x64, + 0x65, 0x72, 0x12, 0x26, 0x0a, 0x06, 0x63, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x18, 0x09, 0x20, 0x01, + 0x28, 0x0b, 0x32, 0x0e, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x43, 0x68, 0x61, 0x6e, + 0x67, 0x65, 0x52, 0x06, 0x63, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x12, 0x18, 0x0a, 0x07, 0x70, 0x72, + 0x69, 0x76, 0x61, 0x74, 0x65, 0x18, 0x0a, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x07, 0x70, 0x72, 0x69, + 0x76, 0x61, 0x74, 0x65, 0x12, 0x37, 0x0a, 0x10, 0x72, 0x65, 0x71, 0x75, 0x69, 0x72, 0x65, 0x64, + 0x5f, 0x72, 0x65, 0x70, 0x6c, 0x61, 0x63, 0x65, 0x18, 0x0b, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x0c, + 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x50, 0x61, 0x74, 0x68, 0x52, 0x0f, 0x72, 0x65, + 0x71, 0x75, 0x69, 0x72, 0x65, 0x64, 0x52, 0x65, 0x70, 0x6c, 0x61, 0x63, 0x65, 0x12, 0x49, 0x0a, + 0x0d, 0x61, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x5f, 0x72, 0x65, 0x61, 0x73, 0x6f, 0x6e, 0x18, 0x0c, + 0x20, 0x01, 0x28, 0x0e, 0x32, 0x24, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x52, 0x65, + 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x49, 0x6e, 0x73, 0x74, 0x61, 0x6e, 0x63, 0x65, 0x41, 0x63, + 0x74, 
0x69, 0x6f, 0x6e, 0x52, 0x65, 0x61, 0x73, 0x6f, 0x6e, 0x52, 0x0c, 0x61, 0x63, 0x74, 0x69, + 0x6f, 0x6e, 0x52, 0x65, 0x61, 0x73, 0x6f, 0x6e, 0x22, 0x68, 0x0a, 0x0c, 0x4f, 0x75, 0x74, 0x70, + 0x75, 0x74, 0x43, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x12, 0x12, 0x0a, 0x04, 0x6e, 0x61, 0x6d, 0x65, + 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x12, 0x26, 0x0a, 0x06, + 0x63, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x0e, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x43, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x52, 0x06, 0x63, 0x68, - 0x61, 0x6e, 0x67, 0x65, 0x12, 0x18, 0x0a, 0x07, 0x70, 0x72, 0x69, 0x76, 0x61, 0x74, 0x65, 0x18, - 0x0a, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x07, 0x70, 0x72, 0x69, 0x76, 0x61, 0x74, 0x65, 0x12, 0x37, - 0x0a, 0x10, 0x72, 0x65, 0x71, 0x75, 0x69, 0x72, 0x65, 0x64, 0x5f, 0x72, 0x65, 0x70, 0x6c, 0x61, - 0x63, 0x65, 0x18, 0x0b, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x0c, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, - 0x6e, 0x2e, 0x50, 0x61, 0x74, 0x68, 0x52, 0x0f, 0x72, 0x65, 0x71, 0x75, 0x69, 0x72, 0x65, 0x64, - 0x52, 0x65, 0x70, 0x6c, 0x61, 0x63, 0x65, 0x12, 0x49, 0x0a, 0x0d, 0x61, 0x63, 0x74, 0x69, 0x6f, - 0x6e, 0x5f, 0x72, 0x65, 0x61, 0x73, 0x6f, 0x6e, 0x18, 0x0c, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x24, - 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, - 0x49, 0x6e, 0x73, 0x74, 0x61, 0x6e, 0x63, 0x65, 0x41, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x65, - 0x61, 0x73, 0x6f, 0x6e, 0x52, 0x0c, 0x61, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x65, 0x61, 0x73, - 0x6f, 0x6e, 0x22, 0x25, 0x0a, 0x0c, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x4d, 0x6f, - 0x64, 0x65, 0x12, 0x0b, 0x0a, 0x07, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65, 0x64, 0x10, 0x00, 0x12, - 0x08, 0x0a, 0x04, 0x64, 0x61, 0x74, 0x61, 0x10, 0x01, 0x42, 0x0e, 0x0a, 0x0c, 0x69, 0x6e, 0x73, - 0x74, 0x61, 0x6e, 0x63, 0x65, 0x5f, 0x6b, 0x65, 0x79, 0x22, 0x68, 0x0a, 0x0c, 0x4f, 0x75, 0x74, - 0x70, 0x75, 0x74, 0x43, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x12, 0x12, 0x0a, 0x04, 0x6e, 0x61, 0x6d, - 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x12, 0x26, 0x0a, - 0x06, 0x63, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x0e, 0x2e, - 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x43, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x52, 0x06, 0x63, - 0x68, 0x61, 0x6e, 0x67, 0x65, 0x12, 0x1c, 0x0a, 0x09, 0x73, 0x65, 0x6e, 0x73, 0x69, 0x74, 0x69, - 0x76, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x08, 0x52, 0x09, 0x73, 0x65, 0x6e, 0x73, 0x69, 0x74, - 0x69, 0x76, 0x65, 0x22, 0x28, 0x0a, 0x0c, 0x44, 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, - 0x6c, 0x75, 0x65, 0x12, 0x18, 0x0a, 0x07, 0x6d, 0x73, 0x67, 0x70, 0x61, 0x63, 0x6b, 0x18, 0x01, - 0x20, 0x01, 0x28, 0x0c, 0x52, 0x07, 0x6d, 0x73, 0x67, 0x70, 0x61, 0x63, 0x6b, 0x22, 0x1e, 0x0a, - 0x04, 0x48, 0x61, 0x73, 0x68, 0x12, 0x16, 0x0a, 0x06, 0x73, 0x68, 0x61, 0x32, 0x35, 0x36, 0x18, - 0x01, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x06, 0x73, 0x68, 0x61, 0x32, 0x35, 0x36, 0x22, 0xa5, 0x01, - 0x0a, 0x04, 0x50, 0x61, 0x74, 0x68, 0x12, 0x27, 0x0a, 0x05, 0x73, 0x74, 0x65, 0x70, 0x73, 0x18, - 0x01, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x11, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x50, - 0x61, 0x74, 0x68, 0x2e, 0x53, 0x74, 0x65, 0x70, 0x52, 0x05, 0x73, 0x74, 0x65, 0x70, 0x73, 0x1a, - 0x74, 0x0a, 0x04, 0x53, 0x74, 0x65, 0x70, 0x12, 0x27, 0x0a, 0x0e, 0x61, 0x74, 0x74, 0x72, 0x69, - 0x62, 0x75, 0x74, 0x65, 0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x48, - 0x00, 0x52, 0x0d, 0x61, 0x74, 
0x74, 0x72, 0x69, 0x62, 0x75, 0x74, 0x65, 0x4e, 0x61, 0x6d, 0x65, - 0x12, 0x37, 0x0a, 0x0b, 0x65, 0x6c, 0x65, 0x6d, 0x65, 0x6e, 0x74, 0x5f, 0x6b, 0x65, 0x79, 0x18, - 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x14, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x44, - 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x48, 0x00, 0x52, 0x0a, 0x65, - 0x6c, 0x65, 0x6d, 0x65, 0x6e, 0x74, 0x4b, 0x65, 0x79, 0x42, 0x0a, 0x0a, 0x08, 0x73, 0x65, 0x6c, - 0x65, 0x63, 0x74, 0x6f, 0x72, 0x2a, 0x31, 0x0a, 0x04, 0x4d, 0x6f, 0x64, 0x65, 0x12, 0x0a, 0x0a, - 0x06, 0x4e, 0x4f, 0x52, 0x4d, 0x41, 0x4c, 0x10, 0x00, 0x12, 0x0b, 0x0a, 0x07, 0x44, 0x45, 0x53, - 0x54, 0x52, 0x4f, 0x59, 0x10, 0x01, 0x12, 0x10, 0x0a, 0x0c, 0x52, 0x45, 0x46, 0x52, 0x45, 0x53, - 0x48, 0x5f, 0x4f, 0x4e, 0x4c, 0x59, 0x10, 0x02, 0x2a, 0x70, 0x0a, 0x06, 0x41, 0x63, 0x74, 0x69, - 0x6f, 0x6e, 0x12, 0x08, 0x0a, 0x04, 0x4e, 0x4f, 0x4f, 0x50, 0x10, 0x00, 0x12, 0x0a, 0x0a, 0x06, - 0x43, 0x52, 0x45, 0x41, 0x54, 0x45, 0x10, 0x01, 0x12, 0x08, 0x0a, 0x04, 0x52, 0x45, 0x41, 0x44, - 0x10, 0x02, 0x12, 0x0a, 0x0a, 0x06, 0x55, 0x50, 0x44, 0x41, 0x54, 0x45, 0x10, 0x03, 0x12, 0x0a, - 0x0a, 0x06, 0x44, 0x45, 0x4c, 0x45, 0x54, 0x45, 0x10, 0x05, 0x12, 0x16, 0x0a, 0x12, 0x44, 0x45, - 0x4c, 0x45, 0x54, 0x45, 0x5f, 0x54, 0x48, 0x45, 0x4e, 0x5f, 0x43, 0x52, 0x45, 0x41, 0x54, 0x45, - 0x10, 0x06, 0x12, 0x16, 0x0a, 0x12, 0x43, 0x52, 0x45, 0x41, 0x54, 0x45, 0x5f, 0x54, 0x48, 0x45, - 0x4e, 0x5f, 0x44, 0x45, 0x4c, 0x45, 0x54, 0x45, 0x10, 0x07, 0x2a, 0x80, 0x01, 0x0a, 0x1c, 0x52, - 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x49, 0x6e, 0x73, 0x74, 0x61, 0x6e, 0x63, 0x65, 0x41, - 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x65, 0x61, 0x73, 0x6f, 0x6e, 0x12, 0x08, 0x0a, 0x04, 0x4e, - 0x4f, 0x4e, 0x45, 0x10, 0x00, 0x12, 0x1b, 0x0a, 0x17, 0x52, 0x45, 0x50, 0x4c, 0x41, 0x43, 0x45, - 0x5f, 0x42, 0x45, 0x43, 0x41, 0x55, 0x53, 0x45, 0x5f, 0x54, 0x41, 0x49, 0x4e, 0x54, 0x45, 0x44, - 0x10, 0x01, 0x12, 0x16, 0x0a, 0x12, 0x52, 0x45, 0x50, 0x4c, 0x41, 0x43, 0x45, 0x5f, 0x42, 0x59, - 0x5f, 0x52, 0x45, 0x51, 0x55, 0x45, 0x53, 0x54, 0x10, 0x02, 0x12, 0x21, 0x0a, 0x1d, 0x52, 0x45, - 0x50, 0x4c, 0x41, 0x43, 0x45, 0x5f, 0x42, 0x45, 0x43, 0x41, 0x55, 0x53, 0x45, 0x5f, 0x43, 0x41, - 0x4e, 0x4e, 0x4f, 0x54, 0x5f, 0x55, 0x50, 0x44, 0x41, 0x54, 0x45, 0x10, 0x03, 0x42, 0x42, 0x5a, - 0x40, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x68, 0x61, 0x73, 0x68, - 0x69, 0x63, 0x6f, 0x72, 0x70, 0x2f, 0x74, 0x65, 0x72, 0x72, 0x61, 0x66, 0x6f, 0x72, 0x6d, 0x2f, - 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x2f, 0x70, 0x6c, 0x61, 0x6e, 0x73, 0x2f, 0x69, - 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x2f, 0x70, 0x6c, 0x61, 0x6e, 0x70, 0x72, 0x6f, 0x74, - 0x6f, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33, + 0x61, 0x6e, 0x67, 0x65, 0x12, 0x1c, 0x0a, 0x09, 0x73, 0x65, 0x6e, 0x73, 0x69, 0x74, 0x69, 0x76, + 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x08, 0x52, 0x09, 0x73, 0x65, 0x6e, 0x73, 0x69, 0x74, 0x69, + 0x76, 0x65, 0x22, 0x28, 0x0a, 0x0c, 0x44, 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, + 0x75, 0x65, 0x12, 0x18, 0x0a, 0x07, 0x6d, 0x73, 0x67, 0x70, 0x61, 0x63, 0x6b, 0x18, 0x01, 0x20, + 0x01, 0x28, 0x0c, 0x52, 0x07, 0x6d, 0x73, 0x67, 0x70, 0x61, 0x63, 0x6b, 0x22, 0x1e, 0x0a, 0x04, + 0x48, 0x61, 0x73, 0x68, 0x12, 0x16, 0x0a, 0x06, 0x73, 0x68, 0x61, 0x32, 0x35, 0x36, 0x18, 0x01, + 0x20, 0x01, 0x28, 0x0c, 0x52, 0x06, 0x73, 0x68, 0x61, 0x32, 0x35, 0x36, 0x22, 0xa5, 0x01, 0x0a, + 0x04, 0x50, 0x61, 0x74, 0x68, 0x12, 0x27, 0x0a, 0x05, 0x73, 0x74, 0x65, 0x70, 0x73, 0x18, 0x01, + 
0x20, 0x03, 0x28, 0x0b, 0x32, 0x11, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x50, 0x61, + 0x74, 0x68, 0x2e, 0x53, 0x74, 0x65, 0x70, 0x52, 0x05, 0x73, 0x74, 0x65, 0x70, 0x73, 0x1a, 0x74, + 0x0a, 0x04, 0x53, 0x74, 0x65, 0x70, 0x12, 0x27, 0x0a, 0x0e, 0x61, 0x74, 0x74, 0x72, 0x69, 0x62, + 0x75, 0x74, 0x65, 0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x48, 0x00, + 0x52, 0x0d, 0x61, 0x74, 0x74, 0x72, 0x69, 0x62, 0x75, 0x74, 0x65, 0x4e, 0x61, 0x6d, 0x65, 0x12, + 0x37, 0x0a, 0x0b, 0x65, 0x6c, 0x65, 0x6d, 0x65, 0x6e, 0x74, 0x5f, 0x6b, 0x65, 0x79, 0x18, 0x02, + 0x20, 0x01, 0x28, 0x0b, 0x32, 0x14, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x44, 0x79, + 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x48, 0x00, 0x52, 0x0a, 0x65, 0x6c, + 0x65, 0x6d, 0x65, 0x6e, 0x74, 0x4b, 0x65, 0x79, 0x42, 0x0a, 0x0a, 0x08, 0x73, 0x65, 0x6c, 0x65, + 0x63, 0x74, 0x6f, 0x72, 0x2a, 0x31, 0x0a, 0x04, 0x4d, 0x6f, 0x64, 0x65, 0x12, 0x0a, 0x0a, 0x06, + 0x4e, 0x4f, 0x52, 0x4d, 0x41, 0x4c, 0x10, 0x00, 0x12, 0x0b, 0x0a, 0x07, 0x44, 0x45, 0x53, 0x54, + 0x52, 0x4f, 0x59, 0x10, 0x01, 0x12, 0x10, 0x0a, 0x0c, 0x52, 0x45, 0x46, 0x52, 0x45, 0x53, 0x48, + 0x5f, 0x4f, 0x4e, 0x4c, 0x59, 0x10, 0x02, 0x2a, 0x70, 0x0a, 0x06, 0x41, 0x63, 0x74, 0x69, 0x6f, + 0x6e, 0x12, 0x08, 0x0a, 0x04, 0x4e, 0x4f, 0x4f, 0x50, 0x10, 0x00, 0x12, 0x0a, 0x0a, 0x06, 0x43, + 0x52, 0x45, 0x41, 0x54, 0x45, 0x10, 0x01, 0x12, 0x08, 0x0a, 0x04, 0x52, 0x45, 0x41, 0x44, 0x10, + 0x02, 0x12, 0x0a, 0x0a, 0x06, 0x55, 0x50, 0x44, 0x41, 0x54, 0x45, 0x10, 0x03, 0x12, 0x0a, 0x0a, + 0x06, 0x44, 0x45, 0x4c, 0x45, 0x54, 0x45, 0x10, 0x05, 0x12, 0x16, 0x0a, 0x12, 0x44, 0x45, 0x4c, + 0x45, 0x54, 0x45, 0x5f, 0x54, 0x48, 0x45, 0x4e, 0x5f, 0x43, 0x52, 0x45, 0x41, 0x54, 0x45, 0x10, + 0x06, 0x12, 0x16, 0x0a, 0x12, 0x43, 0x52, 0x45, 0x41, 0x54, 0x45, 0x5f, 0x54, 0x48, 0x45, 0x4e, + 0x5f, 0x44, 0x45, 0x4c, 0x45, 0x54, 0x45, 0x10, 0x07, 0x2a, 0x80, 0x01, 0x0a, 0x1c, 0x52, 0x65, + 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x49, 0x6e, 0x73, 0x74, 0x61, 0x6e, 0x63, 0x65, 0x41, 0x63, + 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x65, 0x61, 0x73, 0x6f, 0x6e, 0x12, 0x08, 0x0a, 0x04, 0x4e, 0x4f, + 0x4e, 0x45, 0x10, 0x00, 0x12, 0x1b, 0x0a, 0x17, 0x52, 0x45, 0x50, 0x4c, 0x41, 0x43, 0x45, 0x5f, + 0x42, 0x45, 0x43, 0x41, 0x55, 0x53, 0x45, 0x5f, 0x54, 0x41, 0x49, 0x4e, 0x54, 0x45, 0x44, 0x10, + 0x01, 0x12, 0x16, 0x0a, 0x12, 0x52, 0x45, 0x50, 0x4c, 0x41, 0x43, 0x45, 0x5f, 0x42, 0x59, 0x5f, + 0x52, 0x45, 0x51, 0x55, 0x45, 0x53, 0x54, 0x10, 0x02, 0x12, 0x21, 0x0a, 0x1d, 0x52, 0x45, 0x50, + 0x4c, 0x41, 0x43, 0x45, 0x5f, 0x42, 0x45, 0x43, 0x41, 0x55, 0x53, 0x45, 0x5f, 0x43, 0x41, 0x4e, + 0x4e, 0x4f, 0x54, 0x5f, 0x55, 0x50, 0x44, 0x41, 0x54, 0x45, 0x10, 0x03, 0x42, 0x42, 0x5a, 0x40, + 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x68, 0x61, 0x73, 0x68, 0x69, + 0x63, 0x6f, 0x72, 0x70, 0x2f, 0x74, 0x65, 0x72, 0x72, 0x61, 0x66, 0x6f, 0x72, 0x6d, 0x2f, 0x69, + 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x2f, 0x70, 0x6c, 0x61, 0x6e, 0x73, 0x2f, 0x69, 0x6e, + 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x2f, 0x70, 0x6c, 0x61, 0x6e, 0x70, 0x72, 0x6f, 0x74, 0x6f, + 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33, } var ( @@ -1217,51 +1097,49 @@ func file_planfile_proto_rawDescGZIP() []byte { return file_planfile_proto_rawDescData } -var file_planfile_proto_enumTypes = make([]protoimpl.EnumInfo, 4) +var file_planfile_proto_enumTypes = make([]protoimpl.EnumInfo, 3) var file_planfile_proto_msgTypes = make([]protoimpl.MessageInfo, 11) var file_planfile_proto_goTypes = 
[]interface{}{ - (Mode)(0), // 0: tfplan.Mode - (Action)(0), // 1: tfplan.Action - (ResourceInstanceActionReason)(0), // 2: tfplan.ResourceInstanceActionReason - (ResourceInstanceChange_ResourceMode)(0), // 3: tfplan.ResourceInstanceChange.ResourceMode - (*Plan)(nil), // 4: tfplan.Plan - (*Backend)(nil), // 5: tfplan.Backend - (*Change)(nil), // 6: tfplan.Change - (*ResourceInstanceChange)(nil), // 7: tfplan.ResourceInstanceChange - (*OutputChange)(nil), // 8: tfplan.OutputChange - (*DynamicValue)(nil), // 9: tfplan.DynamicValue - (*Hash)(nil), // 10: tfplan.Hash - (*Path)(nil), // 11: tfplan.Path - nil, // 12: tfplan.Plan.VariablesEntry - nil, // 13: tfplan.Plan.ProviderHashesEntry - (*Path_Step)(nil), // 14: tfplan.Path.Step + (Mode)(0), // 0: tfplan.Mode + (Action)(0), // 1: tfplan.Action + (ResourceInstanceActionReason)(0), // 2: tfplan.ResourceInstanceActionReason + (*Plan)(nil), // 3: tfplan.Plan + (*Backend)(nil), // 4: tfplan.Backend + (*Change)(nil), // 5: tfplan.Change + (*ResourceInstanceChange)(nil), // 6: tfplan.ResourceInstanceChange + (*OutputChange)(nil), // 7: tfplan.OutputChange + (*DynamicValue)(nil), // 8: tfplan.DynamicValue + (*Hash)(nil), // 9: tfplan.Hash + (*Path)(nil), // 10: tfplan.Path + nil, // 11: tfplan.Plan.VariablesEntry + nil, // 12: tfplan.Plan.ProviderHashesEntry + (*Path_Step)(nil), // 13: tfplan.Path.Step } var file_planfile_proto_depIdxs = []int32{ 0, // 0: tfplan.Plan.ui_mode:type_name -> tfplan.Mode - 12, // 1: tfplan.Plan.variables:type_name -> tfplan.Plan.VariablesEntry - 7, // 2: tfplan.Plan.resource_changes:type_name -> tfplan.ResourceInstanceChange - 8, // 3: tfplan.Plan.output_changes:type_name -> tfplan.OutputChange - 13, // 4: tfplan.Plan.provider_hashes:type_name -> tfplan.Plan.ProviderHashesEntry - 5, // 5: tfplan.Plan.backend:type_name -> tfplan.Backend - 9, // 6: tfplan.Backend.config:type_name -> tfplan.DynamicValue + 11, // 1: tfplan.Plan.variables:type_name -> tfplan.Plan.VariablesEntry + 6, // 2: tfplan.Plan.resource_changes:type_name -> tfplan.ResourceInstanceChange + 7, // 3: tfplan.Plan.output_changes:type_name -> tfplan.OutputChange + 12, // 4: tfplan.Plan.provider_hashes:type_name -> tfplan.Plan.ProviderHashesEntry + 4, // 5: tfplan.Plan.backend:type_name -> tfplan.Backend + 8, // 6: tfplan.Backend.config:type_name -> tfplan.DynamicValue 1, // 7: tfplan.Change.action:type_name -> tfplan.Action - 9, // 8: tfplan.Change.values:type_name -> tfplan.DynamicValue - 11, // 9: tfplan.Change.before_sensitive_paths:type_name -> tfplan.Path - 11, // 10: tfplan.Change.after_sensitive_paths:type_name -> tfplan.Path - 3, // 11: tfplan.ResourceInstanceChange.mode:type_name -> tfplan.ResourceInstanceChange.ResourceMode - 6, // 12: tfplan.ResourceInstanceChange.change:type_name -> tfplan.Change - 11, // 13: tfplan.ResourceInstanceChange.required_replace:type_name -> tfplan.Path - 2, // 14: tfplan.ResourceInstanceChange.action_reason:type_name -> tfplan.ResourceInstanceActionReason - 6, // 15: tfplan.OutputChange.change:type_name -> tfplan.Change - 14, // 16: tfplan.Path.steps:type_name -> tfplan.Path.Step - 9, // 17: tfplan.Plan.VariablesEntry.value:type_name -> tfplan.DynamicValue - 10, // 18: tfplan.Plan.ProviderHashesEntry.value:type_name -> tfplan.Hash - 9, // 19: tfplan.Path.Step.element_key:type_name -> tfplan.DynamicValue - 20, // [20:20] is the sub-list for method output_type - 20, // [20:20] is the sub-list for method input_type - 20, // [20:20] is the sub-list for extension type_name - 20, // [20:20] is the sub-list for extension 
extendee - 0, // [0:20] is the sub-list for field type_name + 8, // 8: tfplan.Change.values:type_name -> tfplan.DynamicValue + 10, // 9: tfplan.Change.before_sensitive_paths:type_name -> tfplan.Path + 10, // 10: tfplan.Change.after_sensitive_paths:type_name -> tfplan.Path + 5, // 11: tfplan.ResourceInstanceChange.change:type_name -> tfplan.Change + 10, // 12: tfplan.ResourceInstanceChange.required_replace:type_name -> tfplan.Path + 2, // 13: tfplan.ResourceInstanceChange.action_reason:type_name -> tfplan.ResourceInstanceActionReason + 5, // 14: tfplan.OutputChange.change:type_name -> tfplan.Change + 13, // 15: tfplan.Path.steps:type_name -> tfplan.Path.Step + 8, // 16: tfplan.Plan.VariablesEntry.value:type_name -> tfplan.DynamicValue + 9, // 17: tfplan.Plan.ProviderHashesEntry.value:type_name -> tfplan.Hash + 8, // 18: tfplan.Path.Step.element_key:type_name -> tfplan.DynamicValue + 19, // [19:19] is the sub-list for method output_type + 19, // [19:19] is the sub-list for method input_type + 19, // [19:19] is the sub-list for extension type_name + 19, // [19:19] is the sub-list for extension extendee + 0, // [0:19] is the sub-list for field type_name } func init() { file_planfile_proto_init() } @@ -1379,10 +1257,6 @@ func file_planfile_proto_init() { } } } - file_planfile_proto_msgTypes[3].OneofWrappers = []interface{}{ - (*ResourceInstanceChange_Str)(nil), - (*ResourceInstanceChange_Int)(nil), - } file_planfile_proto_msgTypes[10].OneofWrappers = []interface{}{ (*Path_Step_AttributeName)(nil), (*Path_Step_ElementKey)(nil), @@ -1392,7 +1266,7 @@ func file_planfile_proto_init() { File: protoimpl.DescBuilder{ GoPackagePath: reflect.TypeOf(x{}).PkgPath(), RawDescriptor: file_planfile_proto_rawDesc, - NumEnums: 4, + NumEnums: 3, NumMessages: 11, NumExtensions: 0, NumServices: 0, diff --git a/internal/plans/internal/planproto/planfile.proto b/internal/plans/internal/planproto/planfile.proto index bd673e50da32..6ec4ae402441 100644 --- a/internal/plans/internal/planproto/planfile.proto +++ b/internal/plans/internal/planproto/planfile.proto @@ -129,34 +129,23 @@ enum ResourceInstanceActionReason { } message ResourceInstanceChange { - // module_path is an address to the module that defined this resource. - // module_path is omitted for resources in the root module. For descendent modules - // it is a string like module.foo.module.bar as would be seen at the beginning of a - // resource address. The format of this string is not yet frozen and so external - // callers should treat it as an opaque key for filtering purposes. - string module_path = 1; - - // mode is the resource mode. - ResourceMode mode = 2; - enum ResourceMode { - managed = 0; // for "resource" blocks in configuration - data = 1; // for "data" blocks in configuration - } - - // type is the resource type name, like "aws_instance". - string type = 3; + // addr is a string representation of the resource instance address that + // this change will apply to. + string addr = 13; - // name is the logical name of the resource as defined in configuration. - // For example, in aws_instance.foo this would be "foo". - string name = 4; + // prev_run_addr is a string representation of the address at which + // this resource instance was tracked during the previous apply operation. + // + // This is populated only if it would be different from addr due to + // Terraform having reacted to refactoring annotations in the configuration. + // If empty, the previous run address is the same as the current address. 
+  string prev_run_addr = 14;

-  // instance_key is either an integer index or a string key, depending on which iteration
-  // attributes ("count" or "for_each") are being used for this resource. If none
-  // are in use, this field is omitted.
-  oneof instance_key {
-    string str = 5;
-    int64 int = 6;
-  };
+  // NOTE: Earlier versions of this format had fields 1 through 6 describing
+  // various individual parts of "addr". We're now using our standard compact
+  // string representation to capture the same information. We don't support
+  // preserving plan files from one Terraform version to the next, so we
+  // no longer declare nor accept those fields.

   // deposed_key, if set, indicates that this change applies to a deposed
   // object for the indicated instance with the given deposed key. If not
@@ -169,8 +158,7 @@ message ResourceInstanceChange {
   string provider = 8;

   // Description of the proposed change. May use "create", "read", "update",
-  // "replace" and "delete" actions. "no-op" changes are not currently used here
-  // but consumers must accept and discard them to allow for future expansion.
+  // "replace", "delete" and "no-op" actions.
   Change change = 9;

   // raw blob value provided by the provider as additional context for the
diff --git a/internal/plans/planfile/tfplan.go b/internal/plans/planfile/tfplan.go
index 735a33ae12c0..7572020b6c41 100644
--- a/internal/plans/planfile/tfplan.go
+++ b/internal/plans/planfile/tfplan.go
@@ -12,7 +12,6 @@ import (
 	"github.com/hashicorp/terraform/internal/plans"
 	"github.com/hashicorp/terraform/internal/plans/internal/planproto"
 	"github.com/hashicorp/terraform/internal/states"
-	"github.com/hashicorp/terraform/internal/tfdiags"
 	"github.com/hashicorp/terraform/version"
 	"github.com/zclconf/go-cty/cty"
 )
@@ -157,12 +156,23 @@ func resourceChangeFromTfplan(rawChange *planproto.ResourceInstanceChange) (*pla

 	ret := &plans.ResourceInstanceChangeSrc{}

-	moduleAddr := addrs.RootModuleInstance
-	if rawChange.ModulePath != "" {
-		var diags tfdiags.Diagnostics
-		moduleAddr, diags = addrs.ParseModuleInstanceStr(rawChange.ModulePath)
+	if rawChange.Addr == "" {
+		// If "Addr" isn't populated then it seems likely that this is a plan
+		// file created by an earlier version of Terraform, which had the
+		// same information spread over various other fields:
+		// ModulePath, Mode, Name, Type, and InstanceKey.
+ return nil, fmt.Errorf("no instance address for resource instance change; perhaps this plan was created by a different version of Terraform?") + } + + instAddr, diags := addrs.ParseAbsResourceInstanceStr(rawChange.Addr) + if diags.HasErrors() { + return nil, fmt.Errorf("invalid resource instance address %q: %w", rawChange.Addr, diags.Err()) + } + prevRunAddr := instAddr + if rawChange.PrevRunAddr != "" { + prevRunAddr, diags = addrs.ParseAbsResourceInstanceStr(rawChange.PrevRunAddr) if diags.HasErrors() { - return nil, diags.Err() + return nil, fmt.Errorf("invalid resource instance previous run address %q: %w", rawChange.PrevRunAddr, diags.Err()) } } @@ -172,37 +182,8 @@ func resourceChangeFromTfplan(rawChange *planproto.ResourceInstanceChange) (*pla } ret.ProviderAddr = providerAddr - var mode addrs.ResourceMode - switch rawChange.Mode { - case planproto.ResourceInstanceChange_managed: - mode = addrs.ManagedResourceMode - case planproto.ResourceInstanceChange_data: - mode = addrs.DataResourceMode - default: - return nil, fmt.Errorf("resource has invalid mode %s", rawChange.Mode) - } - - typeName := rawChange.Type - name := rawChange.Name - - resAddr := addrs.Resource{ - Mode: mode, - Type: typeName, - Name: name, - } - - var instKey addrs.InstanceKey - switch rawTk := rawChange.InstanceKey.(type) { - case nil: - case *planproto.ResourceInstanceChange_Int: - instKey = addrs.IntKey(rawTk.Int) - case *planproto.ResourceInstanceChange_Str: - instKey = addrs.StringKey(rawTk.Str) - default: - return nil, fmt.Errorf("instance of %s has invalid key type %T", resAddr.Absolute(moduleAddr), rawChange.InstanceKey) - } - - ret.Addr = resAddr.Instance(instKey).Absolute(moduleAddr) + ret.Addr = instAddr + ret.PrevRunAddr = prevRunAddr if rawChange.DeposedKey != "" { if len(rawChange.DeposedKey) != 8 { @@ -454,35 +435,20 @@ func writeTfplan(plan *plans.Plan, w io.Writer) error { func resourceChangeToTfplan(change *plans.ResourceInstanceChangeSrc) (*planproto.ResourceInstanceChange, error) { ret := &planproto.ResourceInstanceChange{} - ret.ModulePath = change.Addr.Module.String() - - relAddr := change.Addr.Resource - - switch relAddr.Resource.Mode { - case addrs.ManagedResourceMode: - ret.Mode = planproto.ResourceInstanceChange_managed - case addrs.DataResourceMode: - ret.Mode = planproto.ResourceInstanceChange_data - default: - return nil, fmt.Errorf("resource %s has unsupported mode %s", relAddr, relAddr.Resource.Mode) + if change.PrevRunAddr.Resource.Resource.Type == "" { + // Suggests that an old caller wasn't yet updated to populate this + // properly. All code that generates plans should populate this field, + // even if it's just to write in the same value as in change.Addr. + change.PrevRunAddr = change.Addr } - ret.Type = relAddr.Resource.Type - ret.Name = relAddr.Resource.Name - - switch tk := relAddr.Key.(type) { - case nil: - // Nothing to do, then. - case addrs.IntKey: - ret.InstanceKey = &planproto.ResourceInstanceChange_Int{ - Int: int64(tk), - } - case addrs.StringKey: - ret.InstanceKey = &planproto.ResourceInstanceChange_Str{ - Str: string(tk), - } - default: - return nil, fmt.Errorf("resource %s has unsupported instance key type %T", relAddr, relAddr.Key) + ret.Addr = change.Addr.String() + ret.PrevRunAddr = change.PrevRunAddr.String() + if ret.PrevRunAddr == ret.Addr { + // In the on-disk format we leave PrevRunAddr unpopulated in the common + // case where it's the same as Addr, and then fill it back in again on + // read. 
+ ret.PrevRunAddr = "" } ret.DeposedKey = string(change.DeposedKey) @@ -500,7 +466,7 @@ func resourceChangeToTfplan(change *plans.ResourceInstanceChangeSrc) (*planproto valChange, err := changeToTfplan(&change.ChangeSrc) if err != nil { - return nil, fmt.Errorf("failed to serialize resource %s change: %s", relAddr, err) + return nil, fmt.Errorf("failed to serialize resource %s change: %s", change.Addr, err) } ret.Change = valChange @@ -514,7 +480,7 @@ func resourceChangeToTfplan(change *plans.ResourceInstanceChangeSrc) (*planproto case plans.ResourceInstanceReplaceByRequest: ret.ActionReason = planproto.ResourceInstanceActionReason_REPLACE_BY_REQUEST default: - return nil, fmt.Errorf("resource %s has unsupported action reason %s", relAddr, change.ActionReason) + return nil, fmt.Errorf("resource %s has unsupported action reason %s", change.Addr, change.ActionReason) } if len(change.Private) > 0 { diff --git a/internal/plans/planfile/tfplan_test.go b/internal/plans/planfile/tfplan_test.go index cc90874f0729..b6c69657e4d3 100644 --- a/internal/plans/planfile/tfplan_test.go +++ b/internal/plans/planfile/tfplan_test.go @@ -57,6 +57,11 @@ func TestTFPlanRoundTrip(t *testing.T) { Type: "test_thing", Name: "woot", }.Instance(addrs.IntKey(0)).Absolute(addrs.RootModuleInstance), + PrevRunAddr: addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "test_thing", + Name: "woot", + }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), ProviderAddr: addrs.AbsProviderConfig{ Provider: addrs.NewDefaultProvider("test"), Module: addrs.RootModule, @@ -93,7 +98,12 @@ func TestTFPlanRoundTrip(t *testing.T) { Mode: addrs.ManagedResourceMode, Type: "test_thing", Name: "woot", - }.Instance(addrs.IntKey(0)).Absolute(addrs.RootModuleInstance), + }.Instance(addrs.IntKey(1)).Absolute(addrs.RootModuleInstance), + PrevRunAddr: addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "test_thing", + Name: "woot", + }.Instance(addrs.IntKey(1)).Absolute(addrs.RootModuleInstance), DeposedKey: "foodface", ProviderAddr: addrs.AbsProviderConfig{ Provider: addrs.NewDefaultProvider("test"), @@ -214,6 +224,11 @@ func TestTFPlanRoundTripDestroy(t *testing.T) { Type: "test_thing", Name: "woot", }.Instance(addrs.IntKey(0)).Absolute(addrs.RootModuleInstance), + PrevRunAddr: addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "test_thing", + Name: "woot", + }.Instance(addrs.IntKey(0)).Absolute(addrs.RootModuleInstance), ProviderAddr: addrs.AbsProviderConfig{ Provider: addrs.NewDefaultProvider("test"), Module: addrs.RootModule, diff --git a/internal/terraform/node_resource_abstract_instance.go b/internal/terraform/node_resource_abstract_instance.go index d2279f88f3c7..7735897ce46a 100644 --- a/internal/terraform/node_resource_abstract_instance.go +++ b/internal/terraform/node_resource_abstract_instance.go @@ -392,8 +392,9 @@ func (n *NodeAbstractResourceInstance) planDestroy(ctx EvalContext, currentState // that we checked something and concluded no changes were needed // vs. that something being entirely excluded e.g. due to -target. noop := &plans.ResourceInstanceChange{ - Addr: absAddr, - DeposedKey: deposedKey, + Addr: absAddr, + PrevRunAddr: absAddr, // TODO-PrevRunAddr: If this instance was moved/renamed in this run, record its old address + DeposedKey: deposedKey, Change: plans.Change{ Action: plans.NoOp, Before: cty.NullVal(cty.DynamicPseudoType), @@ -419,8 +420,9 @@ func (n *NodeAbstractResourceInstance) planDestroy(ctx EvalContext, currentState // Plan is always the same for a destroy. 
We don't need the provider's
 	// help for this one.
 	plan := &plans.ResourceInstanceChange{
-		Addr:       absAddr,
-		DeposedKey: deposedKey,
+		Addr:        absAddr,
+		PrevRunAddr: absAddr, // TODO-PrevRunAddr: If this instance was moved/renamed in this run, record its old address
+		DeposedKey:  deposedKey,
 		Change: plans.Change{
 			Action: plans.Delete,
 			Before: currentState.Value,
@@ -444,7 +446,7 @@ func (n *NodeAbstractResourceInstance) planDestroy(ctx EvalContext, currentState
 	return plan, diags
 }

-// writeChange saves a planned change for an instance object into the set of
+// writeChange saves a planned change for an instance object into the set of
 // global planned changes.
 func (n *NodeAbstractResourceInstance) writeChange(ctx EvalContext, change *plans.ResourceInstanceChange, deposedKey states.DeposedKey) error {
 	changes := ctx.Changes()
@@ -469,6 +471,16 @@ func (n *NodeAbstractResourceInstance) writeChange(ctx EvalContext, change *plan
 		// Should never happen, and indicates a bug in the caller.
 		panic("inconsistent address and/or deposed key in writeChange")
 	}
+	if change.PrevRunAddr.Resource.Resource.Type == "" {
+		// Should never happen, and indicates a bug in the caller.
+		// (The change.Encode function actually has its own fixup to just
+		// quietly make this match change.Addr in the incorrect case, but we
+		// intentionally panic here in order to catch incorrect callers where
+		// the stack trace will hopefully be actually useful. The tolerance
+		// at the next layer down is mainly to accommodate sloppy input in
+		// older tests.)
+		panic("unpopulated ResourceInstanceChange.PrevRunAddr in writeChange")
+	}

 	ri := n.Addr.Resource
 	schema, _ := providerSchema.SchemaForResourceAddr(ri.Resource)
@@ -1054,6 +1066,7 @@ func (n *NodeAbstractResourceInstance) plan(
 	// Update our return plan
 	plan = &plans.ResourceInstanceChange{
 		Addr:         n.Addr,
+		PrevRunAddr:  n.Addr, // TODO-PrevRunAddr: If this instance was moved/renamed in this run, record its old address
 		Private:      plannedPrivate,
 		ProviderAddr: n.ResolvedProvider,
 		Change: plans.Change{
@@ -1515,6 +1528,7 @@ func (n *NodeAbstractResourceInstance) planDataSource(ctx EvalContext, currentSt
 	// value containing unknowns from PlanDataResourceObject.
 	plannedChange := &plans.ResourceInstanceChange{
 		Addr:         n.Addr,
+		PrevRunAddr:  n.Addr, // data resources are not refactorable
 		ProviderAddr: n.ResolvedProvider,
 		Change: plans.Change{
 			Action: plans.Read,

From 4faac6ee43872d9f7c413a176152f1b76a77eec6 Mon Sep 17 00:00:00 2001
From: Martin Atkins
Date: Mon, 23 Aug 2021 17:43:41 -0700
Subject: [PATCH 020/644] core: Record move result information in the plan

Here we wire the "move results" into the graph walk data structures so
that all of the nodes which produce plans.ResourceInstanceChange values
can capture the "PrevRunAddr" for each resource instance.

This doesn't quite work yet, because the logic in Context.Plan isn't
correct and so the updated state from refactoring.ApplyMoves isn't yet
visible as the "previous run state". For that reason, the context test
in this commit is currently skipped, with the intent of re-enabling it
once the updated state is properly propagating into the plan graph walk
and we can actually react to the result of the move while choosing
actions for those addresses.
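To illustrate the intended use (a sketch only, not code from this
change): given the map[addrs.UniqueKey]refactoring.MoveResult table that
ApplyMoves produces, a planning node could resolve the address an
instance had on the previous run roughly as follows. The package name
and the prevRunAddrFor helper are invented for the example.

    // Illustrative sketch only; assumes the move results table built by
    // refactoring.ApplyMoves in this patch series.
    package sketch

    import (
    	"github.com/hashicorp/terraform/internal/addrs"
    	"github.com/hashicorp/terraform/internal/refactoring"
    )

    // prevRunAddrFor returns the address this instance was tracked at in
    // the previous run, falling back to the current address when no
    // "moved" statement applied to it.
    func prevRunAddrFor(addr addrs.AbsResourceInstance, moves map[addrs.UniqueKey]refactoring.MoveResult) addrs.AbsResourceInstance {
    	if result, ok := moves[addr.UniqueKey()]; ok {
    		// A "moved" block relocated this instance, so it was tracked
    		// under result.From at the end of the previous run.
    		return result.From
    	}
    	// No move affected this instance, so the previous run address is
    	// the same as the current address.
    	return addr
    }

Because ApplyMoves records each result under both the old and the new
address's unique key, looking up by the current address is sufficient in
a sketch like this one.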
--- internal/refactoring/move_execute.go | 12 +++ internal/terraform/context.go | 55 ++++++++++---- internal/terraform/context_import.go | 2 +- internal/terraform/context_plan2_test.go | 74 +++++++++++++++++++ internal/terraform/context_validate_test.go | 2 +- internal/terraform/eval_context.go | 11 +++ internal/terraform/eval_context_builtin.go | 6 ++ internal/terraform/eval_context_mock.go | 9 +++ internal/terraform/graph_walk_context.go | 13 ++-- .../node_resource_abstract_instance.go | 25 ++++++- 10 files changed, 185 insertions(+), 24 deletions(-) diff --git a/internal/refactoring/move_execute.go b/internal/refactoring/move_execute.go index 800810981db7..178a336af0b7 100644 --- a/internal/refactoring/move_execute.go +++ b/internal/refactoring/move_execute.go @@ -2,9 +2,11 @@ package refactoring import ( "fmt" + "log" "github.com/hashicorp/terraform/internal/addrs" "github.com/hashicorp/terraform/internal/dag" + "github.com/hashicorp/terraform/internal/logging" "github.com/hashicorp/terraform/internal/states" ) @@ -53,6 +55,13 @@ func ApplyMoves(stmts []MoveStatement, state *states.State) map[addrs.UniqueKey] } } + if startNodes.Len() == 0 { + log.Println("[TRACE] refactoring.ApplyMoves: No 'moved' statements to consider in this configuration") + return results + } + + log.Printf("[TRACE] refactoring.ApplyMoves: Processing 'moved' statements in the configuration\n%s", logging.Indent(g.String())) + g.ReverseDepthFirstWalk(startNodes, func(v dag.Vertex, depth int) error { stmt := v.(*MoveStatement) @@ -69,6 +78,7 @@ func ApplyMoves(stmts []MoveStatement, state *states.State) map[addrs.UniqueKey] // For a module endpoint we just try the module address // directly. if newAddr, matches := modAddr.MoveDestination(stmt.From, stmt.To); matches { + log.Printf("[TRACE] refactoring.ApplyMoves: %s has moved to %s", modAddr, newAddr) // We need to visit all of the resource instances in the // module and record them individually as results. for _, rs := range ms.Resources { @@ -94,6 +104,7 @@ func ApplyMoves(stmts []MoveStatement, state *states.State) map[addrs.UniqueKey] for _, rs := range ms.Resources { rAddr := rs.Addr if newAddr, matches := rAddr.MoveDestination(stmt.From, stmt.To); matches { + log.Printf("[TRACE] refactoring.ApplyMoves: resource %s has moved to %s", rAddr, newAddr) for key := range rs.Instances { oldInst := rAddr.Instance(key) newInst := newAddr.Instance(key) @@ -110,6 +121,7 @@ func ApplyMoves(stmts []MoveStatement, state *states.State) map[addrs.UniqueKey] for key := range rs.Instances { iAddr := rAddr.Instance(key) if newAddr, matches := iAddr.MoveDestination(stmt.From, stmt.To); matches { + log.Printf("[TRACE] refactoring.ApplyMoves: resource instance %s has moved to %s", iAddr, newAddr) result := MoveResult{From: iAddr, To: newAddr} results[iAddr.UniqueKey()] = result results[newAddr.UniqueKey()] = result diff --git a/internal/terraform/context.go b/internal/terraform/context.go index 0a092c33b01c..4e96fd25e750 100644 --- a/internal/terraform/context.go +++ b/internal/terraform/context.go @@ -129,6 +129,20 @@ type Context struct { refreshState *states.State prevRunState *states.State + // NOTE: If you're considering adding something new here, consider first + // whether it'd work to add it to type graphWalkOpts instead, possibly by + // adding new arguments to one of the exported operation methods, to scope + // it only to a particular operation rather than having it survive from one + // operation to the next as global mutable state. 
+ // + // Historically we used fields here as a bit of a dumping ground for + // data that needed to ambiently pass between methods of Context, but + // that has tended to cause surprising misbehavior when data from one + // walk inadvertently bleeds into another walk against the same context. + // Perhaps one day we'll move changes, state, refreshState, and prevRunState + // to graphWalkOpts too. Ideally there shouldn't be anything in here which + // changes after NewContext returns. + hooks []Hook components contextComponentFactory schemas *Schemas @@ -491,7 +505,7 @@ func (c *Context) Eval(path addrs.ModuleInstance) (*lang.Scope, tfdiags.Diagnost diags = diags.Append(graphDiags) if !diags.HasErrors() { var walkDiags tfdiags.Diagnostics - walker, walkDiags = c.walk(graph, walkEval) + walker, walkDiags = c.walk(graph, walkEval, &graphWalkOpts{}) diags = diags.Append(walker.NonFatalDiagnostics) diags = diags.Append(walkDiags) } @@ -500,7 +514,7 @@ func (c *Context) Eval(path addrs.ModuleInstance) (*lang.Scope, tfdiags.Diagnost // If we skipped walking the graph (due to errors) then we'll just // use a placeholder graph walker here, which'll refer to the // unmodified state. - walker = c.graphWalker(walkEval) + walker = c.graphWalker(walkEval, &graphWalkOpts{}) } // This is a bit weird since we don't normally evaluate outside of @@ -545,7 +559,7 @@ func (c *Context) Apply() (*states.State, tfdiags.Diagnostics) { } // Walk the graph - walker, walkDiags := c.walk(graph, operation) + walker, walkDiags := c.walk(graph, operation, &graphWalkOpts{}) diags = diags.Append(walker.NonFatalDiagnostics) diags = diags.Append(walkDiags) @@ -656,7 +670,7 @@ The -target option is not for routine use, and is provided only for exceptional func (c *Context) plan() (*plans.Plan, tfdiags.Diagnostics) { var diags tfdiags.Diagnostics - moveStmts, _ := c.prePlanFindAndApplyMoves() + moveStmts, moveResults := c.prePlanFindAndApplyMoves() graph, graphDiags := c.Graph(GraphTypePlan, nil) diags = diags.Append(graphDiags) @@ -665,7 +679,9 @@ func (c *Context) plan() (*plans.Plan, tfdiags.Diagnostics) { } // Do the walk - walker, walkDiags := c.walk(graph, walkPlan) + walker, walkDiags := c.walk(graph, walkPlan, &graphWalkOpts{ + MoveResults: moveResults, + }) diags = diags.Append(walker.NonFatalDiagnostics) diags = diags.Append(walkDiags) if walkDiags.HasErrors() { @@ -700,7 +716,7 @@ func (c *Context) destroyPlan() (*plans.Plan, tfdiags.Diagnostics) { } c.changes = plans.NewChanges() - moveStmts, _ := c.prePlanFindAndApplyMoves() + moveStmts, moveResults := c.prePlanFindAndApplyMoves() // A destroy plan starts by running Refresh to read any pending data // sources, and remove missing managed resources. 
This is required because @@ -734,7 +750,9 @@ func (c *Context) destroyPlan() (*plans.Plan, tfdiags.Diagnostics) { } // Do the walk - walker, walkDiags := c.walk(graph, walkPlanDestroy) + walker, walkDiags := c.walk(graph, walkPlanDestroy, &graphWalkOpts{ + MoveResults: moveResults, + }) diags = diags.Append(walker.NonFatalDiagnostics) diags = diags.Append(walkDiags) if walkDiags.HasErrors() { @@ -764,7 +782,7 @@ func (c *Context) destroyPlan() (*plans.Plan, tfdiags.Diagnostics) { func (c *Context) refreshOnlyPlan() (*plans.Plan, tfdiags.Diagnostics) { var diags tfdiags.Diagnostics - moveStmts, _ := c.prePlanFindAndApplyMoves() + moveStmts, moveResults := c.prePlanFindAndApplyMoves() graph, graphDiags := c.Graph(GraphTypePlanRefreshOnly, nil) diags = diags.Append(graphDiags) @@ -773,7 +791,9 @@ func (c *Context) refreshOnlyPlan() (*plans.Plan, tfdiags.Diagnostics) { } // Do the walk - walker, walkDiags := c.walk(graph, walkPlan) + walker, walkDiags := c.walk(graph, walkPlan, &graphWalkOpts{ + MoveResults: moveResults, + }) diags = diags.Append(walker.NonFatalDiagnostics) diags = diags.Append(walkDiags) if walkDiags.HasErrors() { @@ -918,7 +938,7 @@ func (c *Context) Validate() tfdiags.Diagnostics { } // Walk - walker, walkDiags := c.walk(graph, walkValidate) + walker, walkDiags := c.walk(graph, walkValidate, &graphWalkOpts{}) diags = diags.Append(walker.NonFatalDiagnostics) diags = diags.Append(walkDiags) if walkDiags.HasErrors() { @@ -991,10 +1011,18 @@ func (c *Context) releaseRun() { c.runContext = nil } -func (c *Context) walk(graph *Graph, operation walkOperation) (*ContextGraphWalker, tfdiags.Diagnostics) { +// graphWalkOpts is an assortment of options and inputs we need when +// constructing a graph walker. +type graphWalkOpts struct { + // MoveResults is a table of the results of applying move statements prior + // to a plan walk. Irrelevant and totally ignored for non-plan walks. + MoveResults map[addrs.UniqueKey]refactoring.MoveResult +} + +func (c *Context) walk(graph *Graph, operation walkOperation, opts *graphWalkOpts) (*ContextGraphWalker, tfdiags.Diagnostics) { log.Printf("[DEBUG] Starting graph walk: %s", operation.String()) - walker := c.graphWalker(operation) + walker := c.graphWalker(operation, opts) // Watch for a stop so we can call the provider Stop() API. 
watchStop, watchWait := c.watchStop(walker) @@ -1009,7 +1037,7 @@ func (c *Context) walk(graph *Graph, operation walkOperation) (*ContextGraphWalk return walker, diags } -func (c *Context) graphWalker(operation walkOperation) *ContextGraphWalker { +func (c *Context) graphWalker(operation walkOperation, opts *graphWalkOpts) *ContextGraphWalker { var state *states.SyncState var refreshState *states.SyncState var prevRunState *states.SyncState @@ -1040,6 +1068,7 @@ func (c *Context) graphWalker(operation walkOperation) *ContextGraphWalker { PrevRunState: prevRunState, Changes: c.changes.SyncWrapper(), InstanceExpander: instances.NewExpander(), + MoveResults: opts.MoveResults, Operation: operation, StopContext: c.runContext, RootVariableValues: c.variables, diff --git a/internal/terraform/context_import.go b/internal/terraform/context_import.go index 32ac6e0d36b1..ccee059d7799 100644 --- a/internal/terraform/context_import.go +++ b/internal/terraform/context_import.go @@ -60,7 +60,7 @@ func (c *Context) Import(opts *ImportOpts) (*states.State, tfdiags.Diagnostics) } // Walk it - _, walkDiags := c.walk(graph, walkImport) + _, walkDiags := c.walk(graph, walkImport, &graphWalkOpts{}) diags = diags.Append(walkDiags) if walkDiags.HasErrors() { return c.state, diags diff --git a/internal/terraform/context_plan2_test.go b/internal/terraform/context_plan2_test.go index 048495cad645..a5242ad89967 100644 --- a/internal/terraform/context_plan2_test.go +++ b/internal/terraform/context_plan2_test.go @@ -730,6 +730,80 @@ provider "test" { } } +func TestContext2Plan_movedResourceBasic(t *testing.T) { + t.Skip("Context.Plan doesn't properly propagate moves into the prior state yet") + + addrA := mustResourceInstanceAddr("test_object.a") + addrB := mustResourceInstanceAddr("test_object.b") + m := testModuleInline(t, map[string]string{ + "main.tf": ` + resource "test_object" "b" { + } + + moved { + from = test_object.a + to = test_object.b + } + + terraform { + experiments = [config_driven_move] + } + `, + }) + + state := states.BuildState(func(s *states.SyncState) { + // The prior state tracks test_object.a, which we should treat as + // test_object.b because of the "moved" block in the config. 
+ s.SetResourceInstanceCurrent(addrA, &states.ResourceInstanceObjectSrc{ + AttrsJSON: []byte(`{}`), + Status: states.ObjectReady, + }, mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`)) + }) + + p := simpleMockProvider() + ctx := testContext2(t, &ContextOpts{ + Config: m, + State: state, + Providers: map[addrs.Provider]providers.Factory{ + addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), + }, + ForceReplace: []addrs.AbsResourceInstance{ + addrA, + }, + }) + + plan, diags := ctx.Plan() + if diags.HasErrors() { + t.Fatalf("unexpected errors\n%s", diags.Err().Error()) + } + + t.Run(addrA.String(), func(t *testing.T) { + instPlan := plan.Changes.ResourceInstance(addrA) + if instPlan != nil { + t.Fatalf("unexpected plan for %s; should've moved to %s", addrA, addrB) + } + }) + t.Run(addrB.String(), func(t *testing.T) { + instPlan := plan.Changes.ResourceInstance(addrB) + if instPlan == nil { + t.Fatalf("no plan for %s at all", addrB) + } + + if got, want := instPlan.Addr, addrB; !got.Equal(want) { + t.Errorf("wrong current address\ngot: %s\nwant: %s", got, want) + } + if got, want := instPlan.PrevRunAddr, addrA; !got.Equal(want) { + t.Errorf("wrong previous run address\ngot: %s\nwant: %s", got, want) + } + if got, want := instPlan.Action, plans.NoOp; got != want { + t.Errorf("wrong planned action\ngot: %s\nwant: %s", got, want) + } + if got, want := instPlan.ActionReason, plans.ResourceInstanceChangeNoReason; got != want { + t.Errorf("wrong action reason\ngot: %s\nwant: %s", got, want) + } + }) +} + func TestContext2Plan_refreshOnlyMode(t *testing.T) { addr := mustResourceInstanceAddr("test_object.a") diff --git a/internal/terraform/context_validate_test.go b/internal/terraform/context_validate_test.go index cae189482c38..be3acb7c501a 100644 --- a/internal/terraform/context_validate_test.go +++ b/internal/terraform/context_validate_test.go @@ -1305,7 +1305,7 @@ func TestContext2Validate_PlanGraphBuilder(t *testing.T) { t.Fatalf("errors from PlanGraphBuilder: %s", diags.Err()) } defer c.acquireRun("validate-test")() - walker, diags := c.walk(graph, walkValidate) + walker, diags := c.walk(graph, walkValidate, &graphWalkOpts{}) if diags.HasErrors() { t.Fatal(diags.Err()) } diff --git a/internal/terraform/eval_context.go b/internal/terraform/eval_context.go index 0a711f87323c..2cee1b0711d9 100644 --- a/internal/terraform/eval_context.go +++ b/internal/terraform/eval_context.go @@ -9,6 +9,7 @@ import ( "github.com/hashicorp/terraform/internal/plans" "github.com/hashicorp/terraform/internal/providers" "github.com/hashicorp/terraform/internal/provisioners" + "github.com/hashicorp/terraform/internal/refactoring" "github.com/hashicorp/terraform/internal/states" "github.com/hashicorp/terraform/internal/tfdiags" "github.com/zclconf/go-cty/cty" @@ -165,6 +166,16 @@ type EvalContext interface { // EvalContext objects for a given configuration. InstanceExpander() *instances.Expander + // MoveResults returns a map describing the results of handling any + // resource instance move statements prior to the graph walk, so that + // the graph walk can then record that information appropriately in other + // artifacts produced by the graph walk. + // + // This data structure is created prior to the graph walk and read-only + // thereafter, so callers must not modify the returned map or any other + // objects accessible through it. + MoveResults() map[addrs.UniqueKey]refactoring.MoveResult + // WithPath returns a copy of the context with the internal path set to the // path argument. 
WithPath(path addrs.ModuleInstance) EvalContext diff --git a/internal/terraform/eval_context_builtin.go b/internal/terraform/eval_context_builtin.go index 4e956b8dee9e..1b971fd6b19c 100644 --- a/internal/terraform/eval_context_builtin.go +++ b/internal/terraform/eval_context_builtin.go @@ -10,6 +10,7 @@ import ( "github.com/hashicorp/terraform/internal/plans" "github.com/hashicorp/terraform/internal/providers" "github.com/hashicorp/terraform/internal/provisioners" + "github.com/hashicorp/terraform/internal/refactoring" "github.com/hashicorp/terraform/version" "github.com/hashicorp/terraform/internal/states" @@ -74,6 +75,7 @@ type BuiltinEvalContext struct { RefreshStateValue *states.SyncState PrevRunStateValue *states.SyncState InstanceExpanderValue *instances.Expander + MoveResultsValue map[addrs.UniqueKey]refactoring.MoveResult } // BuiltinEvalContext implements EvalContext @@ -367,3 +369,7 @@ func (ctx *BuiltinEvalContext) PrevRunState() *states.SyncState { func (ctx *BuiltinEvalContext) InstanceExpander() *instances.Expander { return ctx.InstanceExpanderValue } + +func (ctx *BuiltinEvalContext) MoveResults() map[addrs.UniqueKey]refactoring.MoveResult { + return ctx.MoveResultsValue +} diff --git a/internal/terraform/eval_context_mock.go b/internal/terraform/eval_context_mock.go index 7d2e64c6ac12..0e4fd32d6903 100644 --- a/internal/terraform/eval_context_mock.go +++ b/internal/terraform/eval_context_mock.go @@ -10,6 +10,7 @@ import ( "github.com/hashicorp/terraform/internal/plans" "github.com/hashicorp/terraform/internal/providers" "github.com/hashicorp/terraform/internal/provisioners" + "github.com/hashicorp/terraform/internal/refactoring" "github.com/hashicorp/terraform/internal/states" "github.com/hashicorp/terraform/internal/tfdiags" "github.com/zclconf/go-cty/cty" @@ -128,6 +129,9 @@ type MockEvalContext struct { PrevRunStateCalled bool PrevRunStateState *states.SyncState + MoveResultsCalled bool + MoveResultsResults map[addrs.UniqueKey]refactoring.MoveResult + InstanceExpanderCalled bool InstanceExpanderExpander *instances.Expander } @@ -347,6 +351,11 @@ func (c *MockEvalContext) PrevRunState() *states.SyncState { return c.PrevRunStateState } +func (c *MockEvalContext) MoveResults() map[addrs.UniqueKey]refactoring.MoveResult { + c.MoveResultsCalled = true + return c.MoveResultsResults +} + func (c *MockEvalContext) InstanceExpander() *instances.Expander { c.InstanceExpanderCalled = true return c.InstanceExpanderExpander diff --git a/internal/terraform/graph_walk_context.go b/internal/terraform/graph_walk_context.go index 4c16d44c71dd..fcda4fa73442 100644 --- a/internal/terraform/graph_walk_context.go +++ b/internal/terraform/graph_walk_context.go @@ -12,6 +12,7 @@ import ( "github.com/hashicorp/terraform/internal/plans" "github.com/hashicorp/terraform/internal/providers" "github.com/hashicorp/terraform/internal/provisioners" + "github.com/hashicorp/terraform/internal/refactoring" "github.com/hashicorp/terraform/internal/states" "github.com/hashicorp/terraform/internal/tfdiags" ) @@ -23,11 +24,12 @@ type ContextGraphWalker struct { // Configurable values Context *Context - State *states.SyncState // Used for safe concurrent access to state - RefreshState *states.SyncState // Used for safe concurrent access to state - PrevRunState *states.SyncState // Used for safe concurrent access to state - Changes *plans.ChangesSync // Used for safe concurrent writes to changes - InstanceExpander *instances.Expander // Tracks our gradual expansion of module and resource instances + State 
*states.SyncState // Used for safe concurrent access to state + RefreshState *states.SyncState // Used for safe concurrent access to state + PrevRunState *states.SyncState // Used for safe concurrent access to state + Changes *plans.ChangesSync // Used for safe concurrent writes to changes + InstanceExpander *instances.Expander // Tracks our gradual expansion of module and resource instances + MoveResults map[addrs.UniqueKey]refactoring.MoveResult // Read-only record of earlier processing of move statements Operation walkOperation StopContext context.Context RootVariableValues InputValues @@ -88,6 +90,7 @@ func (w *ContextGraphWalker) EvalContext() EvalContext { InstanceExpanderValue: w.InstanceExpander, Components: w.Context.components, Schemas: w.Context.schemas, + MoveResultsValue: w.MoveResults, ProviderCache: w.providerCache, ProviderInputConfig: w.Context.providerInputConfig, ProviderLock: &w.providerLock, diff --git a/internal/terraform/node_resource_abstract_instance.go b/internal/terraform/node_resource_abstract_instance.go index 7735897ce46a..89275809bbb5 100644 --- a/internal/terraform/node_resource_abstract_instance.go +++ b/internal/terraform/node_resource_abstract_instance.go @@ -393,7 +393,7 @@ func (n *NodeAbstractResourceInstance) planDestroy(ctx EvalContext, currentState // vs. that something being entirely excluded e.g. due to -target. noop := &plans.ResourceInstanceChange{ Addr: absAddr, - PrevRunAddr: absAddr, // TODO-PrevRunAddr: If this instance was moved/renamed in this run, record its old address + PrevRunAddr: n.prevRunAddr(ctx), DeposedKey: deposedKey, Change: plans.Change{ Action: plans.NoOp, @@ -421,7 +421,7 @@ func (n *NodeAbstractResourceInstance) planDestroy(ctx EvalContext, currentState // help for this one. plan := &plans.ResourceInstanceChange{ Addr: absAddr, - PrevRunAddr: absAddr, // TODO-PrevRunAddr: If this instance was moved/renamed in this run, record its old address + PrevRunAddr: n.prevRunAddr(ctx), DeposedKey: deposedKey, Change: plans.Change{ Action: plans.Delete, @@ -1066,7 +1066,7 @@ func (n *NodeAbstractResourceInstance) plan( // Update our return plan plan = &plans.ResourceInstanceChange{ Addr: n.Addr, - PrevRunAddr: n.Addr, // TODO-PrevRunAddr: If this instance was moved/renamed in this run, record its old address + PrevRunAddr: n.prevRunAddr(ctx), Private: plannedPrivate, ProviderAddr: n.ResolvedProvider, Change: plans.Change{ @@ -1528,7 +1528,7 @@ func (n *NodeAbstractResourceInstance) planDataSource(ctx EvalContext, currentSt // value containing unknowns from PlanDataResourceObject. plannedChange := &plans.ResourceInstanceChange{ Addr: n.Addr, - PrevRunAddr: n.Addr, // data resources are not refactorable + PrevRunAddr: n.prevRunAddr(ctx), ProviderAddr: n.ResolvedProvider, Change: plans.Change{ Action: plans.Read, @@ -2267,3 +2267,20 @@ func (n *NodeAbstractResourceInstance) apply( return nil, diags } } + +func (n *NodeAbstractResourceInstance) prevRunAddr(ctx EvalContext) addrs.AbsResourceInstance { + return resourceInstancePrevRunAddr(ctx, n.Addr) +} + +func resourceInstancePrevRunAddr(ctx EvalContext, currentAddr addrs.AbsResourceInstance) addrs.AbsResourceInstance { + table := ctx.MoveResults() + + result, ok := table[currentAddr.UniqueKey()] + if !ok { + // If there's no entry in the table then we'll assume it didn't move + // at all, and so its previous address is the same as the current one. 
+ return currentAddr + } + + return result.From +} From 89b05050ec0d1cae275fb9a3072e85364ba37a22 Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Tue, 24 Aug 2021 12:06:38 -0700 Subject: [PATCH 021/644] core: Functional-style API for terraform.Context Previously terraform.Context was built in an unfortunate way where all of the data was provided up front in terraform.NewContext and then mutated directly by subsequent operations. That made the data flow hard to follow, commonly leading to bugs, and also meant that we were forced to take various actions too early in terraform.NewContext, rather than waiting until a more appropriate time during an operation. This (enormous) commit changes terraform.Context so that its fields are broadly just unchanging data about the execution context (current workspace name, available plugins, etc) whereas the main data Terraform works with arrives via individual method arguments and is returned in return values. Specifically, this means that terraform.Context no longer "has-a" config, state, and "planned changes", instead holding on to those only temporarily during an operation. The caller is responsible for propagating the outcome of one step into the next step so that the data flow between operations is actually visible. However, since that's a change to the main entry points in the "terraform" package, this commit also touches every file in the codebase which interacted with those APIs. Most of the noise here is in updating tests to take the same actions using the new API style, but this also affects the main-code callers in the backends and in the command package. My goal here was to refactor without changing observable behavior, but in practice there are a couple externally-visible behavior variations here that seemed okay in service of the broader goal: - The "terraform graph" command is no longer hooked directly into the core graph builders, because that's no longer part of the public API. However, I did include a couple new Context functions whose contract is to produce a UI-oriented graph, and _for now_ those continue to return the physical graph we use for those operations. There's no exported API for generating the "validate" and "eval" graphs, because neither is particularly interesting in its own right, and so "terraform graph" no longer supports those graph types. - terraform.NewContext no longer has the responsibility for collecting all of the provider schemas up front. Instead, we wait until we need them. However, that means that some of our error messages now have a slightly different shape due to unwinding through a differently-shaped call stack. As of this commit we also end up reloading the schemas multiple times in some cases, which is functionally acceptable but likely represents a performance regression. I intend to rework this to use caching, but I'm saving that for a later commit because this one is big enough already. The proximal reason for this change is to resolve the chicken/egg problem whereby there was previously no single point where we could apply "moved" statements to the previous run state before creating a plan. With this change in place, we can now do that as part of Context.Plan, prior to forking the input state into the three separate state artifacts we use during planning. 
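To make the new data flow concrete, the following is a rough sketch of the
plan/apply call pattern that callers now follow. The helper function name and
argument names are placeholders rather than exact signatures, and it assumes
the usual internal package imports (terraform, configs, states, tfdiags); the
real call sites are in the backend changes below, and the operation methods
themselves live in the new context_plan.go and context_apply.go files.

    // planAndApply is a hypothetical helper illustrating the new API shape.
    // The context now carries only long-lived settings (plugins, hooks,
    // UI input), while config, state, and plans flow through arguments
    // and return values.
    func planAndApply(coreOpts *terraform.ContextOpts, config *configs.Config, prevRunState *states.State, planOpts *terraform.PlanOpts) (*states.State, tfdiags.Diagnostics) {
        tfCtx, diags := terraform.NewContext(coreOpts)
        if diags.HasErrors() {
            return nil, diags
        }

        // Planning takes the configuration and the previous run state as
        // arguments and returns the plan, instead of mutating the context.
        plan, planDiags := tfCtx.Plan(config, prevRunState, planOpts)
        diags = diags.Append(planDiags)
        if diags.HasErrors() {
            return nil, diags
        }

        // Applying takes the plan (and the config it was created from)
        // and returns the new state for the caller to persist.
        newState, applyDiags := tfCtx.Apply(plan, config)
        diags = diags.Append(applyDiags)
        return newState, diags
    }

In the backend layer, the new backend.LocalRun type is what carries Config,
InputState, and PlanOpts (or a saved Plan) from one of these steps to the
next, so the propagation of data between operations is explicit rather than
hidden inside the context.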
However, this is at least the third project in a row where the previous API design led to piling more functionality into terraform.NewContext and then working around the incorrect order of operations that produces, so I intend that by paying the cost/risk of this large diff now we can in turn reduce the cost/risk of future projects that relate to our main workflow actions. --- internal/backend/backend.go | 60 +- internal/backend/local/backend.go | 2 +- internal/backend/local/backend_apply.go | 45 +- internal/backend/local/backend_local.go | 146 +- internal/backend/local/backend_local_test.go | 93 +- internal/backend/local/backend_plan.go | 18 +- internal/backend/local/backend_plan_test.go | 2 +- internal/backend/local/backend_refresh.go | 14 +- internal/backend/local/backend_test.go | 6 +- .../backend/local/testdata/invalid/invalid.tf | 6 + internal/backend/remote/backend.go | 2 + internal/backend/remote/backend_context.go | 32 +- .../backend/remote/backend_context_test.go | 2 +- internal/command/add.go | 14 +- internal/command/console.go | 13 +- internal/command/graph.go | 88 +- internal/command/import.go | 11 +- internal/command/import_test.go | 2 +- internal/command/meta.go | 1 - internal/command/plan_test.go | 2 +- internal/command/providers_schema.go | 10 +- internal/command/show.go | 11 +- internal/command/state_show.go | 8 +- internal/command/test.go | 64 +- internal/command/validate.go | 24 +- internal/repl/session_test.go | 8 +- internal/states/sync.go | 10 + internal/terraform/context.go | 851 +----- internal/terraform/context_apply.go | 142 + internal/terraform/context_apply2_test.go | 87 +- internal/terraform/context_apply_test.go | 2631 +++++++---------- internal/terraform/context_eval.go | 104 + internal/terraform/context_eval_test.go | 4 +- internal/terraform/context_fixtures_test.go | 1 - internal/terraform/context_graph_type.go | 30 - internal/terraform/context_import.go | 43 +- internal/terraform/context_import_test.go | 133 +- internal/terraform/context_input.go | 48 +- internal/terraform/context_input_test.go | 110 +- internal/terraform/context_plan.go | 435 +++ internal/terraform/context_plan2_test.go | 132 +- internal/terraform/context_plan_test.go | 660 ++--- internal/terraform/context_refresh.go | 37 + internal/terraform/context_refresh_test.go | 169 +- internal/terraform/context_test.go | 62 +- internal/terraform/context_validate.go | 88 + internal/terraform/context_validate_test.go | 312 +- internal/terraform/context_walk.go | 122 + internal/terraform/graph.go | 12 +- .../terraform/graph_builder_apply_test.go | 4 +- internal/terraform/graph_walk_context.go | 9 +- internal/terraform/graphtype_string.go | 29 - internal/terraform/node_provider.go | 3 + .../node_resource_abstract_instance.go | 1 + internal/terraform/variables.go | 13 + 55 files changed, 3289 insertions(+), 3677 deletions(-) create mode 100644 internal/backend/local/testdata/invalid/invalid.tf create mode 100644 internal/terraform/context_apply.go create mode 100644 internal/terraform/context_eval.go delete mode 100644 internal/terraform/context_graph_type.go create mode 100644 internal/terraform/context_plan.go create mode 100644 internal/terraform/context_refresh.go create mode 100644 internal/terraform/context_validate.go create mode 100644 internal/terraform/context_walk.go delete mode 100644 internal/terraform/graphtype_string.go diff --git a/internal/backend/backend.go b/internal/backend/backend.go index db4370ce1919..caac42cc6731 100644 --- a/internal/backend/backend.go +++ 
b/internal/backend/backend.go @@ -141,9 +141,63 @@ type Enhanced interface { // configurations, variables, and more. Not all backends may support this // so we separate it out into its own optional interface. type Local interface { - // Context returns a runnable terraform Context. The operation parameter - // doesn't need a Type set but it needs other options set such as Module. - Context(*Operation) (*terraform.Context, statemgr.Full, tfdiags.Diagnostics) + // LocalRun uses information in the Operation to prepare a set of objects + // needed to start running that operation. + // + // The operation doesn't need a Type set, but it needs various other + // options set. This is a rather odd API that tries to treat all + // operations as the same when they really aren't; see the local and remote + // backend's implementations of this to understand what this actually + // does, because this operation has no well-defined contract aside from + // "whatever it already does". + LocalRun(*Operation) (*LocalRun, statemgr.Full, tfdiags.Diagnostics) +} + +// LocalRun represents the assortment of objects that we can collect or +// calculate from an Operation object, which we can then use for local +// operations. +// +// The operation methods on terraform.Context (Plan, Apply, Import, etc) each +// generate new artifacts which supersede parts of the LocalRun object that +// started the operation, so callers should be careful to use those subsequent +// artifacts instead of the fields of LocalRun where appropriate. The LocalRun +// data intentionally doesn't update as a result of calling methods on Context, +// in order to make data flow explicit. +// +// This type is a weird architectural wart resulting from the overly-general +// way our backend API models operations, whereby we behave as if all +// Terraform operations have the same inputs and outputs even though they +// are actually all rather different. The exact meaning of the fields in +// this type therefore vary depending on which OperationType was passed to +// Local.Context in order to create an object of this type. +type LocalRun struct { + // Core is an already-initialized Terraform Core context, ready to be + // used to run operations such as Plan and Apply. + Core *terraform.Context + + // Config is the configuration we're working with, which typically comes + // from either config files directly on local disk (when we're creating + // a plan, or similar) or from a snapshot embedded in a plan file + // (when we're applying a saved plan). + Config *configs.Config + + // InputState is the state that should be used for whatever is the first + // method call to a context created with CoreOpts. When creating a plan + // this will be the previous run state, but when applying a saved plan + // this will be the prior state recorded in that plan. + InputState *states.State + + // PlanOpts are options to pass to a Plan or Plan-like operation. + // + // This is nil when we're applying a saved plan, because the plan itself + // contains enough information about its options to apply it. + PlanOpts *terraform.PlanOpts + + // Plan is a plan loaded from a saved plan file, if our operation is to + // apply that saved plan. + // + // This is nil when we're not applying a saved plan. + Plan *plans.Plan } // An operation represents an operation for Terraform to execute. 
diff --git a/internal/backend/local/backend.go b/internal/backend/local/backend.go index a19c1bc1b92f..f5d07b20f039 100644 --- a/internal/backend/local/backend.go +++ b/internal/backend/local/backend.go @@ -284,7 +284,7 @@ func (b *Local) Operation(ctx context.Context, op *backend.Operation) (*backend. f = b.opApply default: return nil, fmt.Errorf( - "Unsupported operation type: %s\n\n"+ + "unsupported operation type: %s\n\n"+ "This is a bug in Terraform and should be reported. The local backend\n"+ "is built-in to Terraform and should always support all operations.", op.Type) diff --git a/internal/backend/local/backend_apply.go b/internal/backend/local/backend_apply.go index 7d68006bef65..5b143a74f46d 100644 --- a/internal/backend/local/backend_apply.go +++ b/internal/backend/local/backend_apply.go @@ -5,7 +5,6 @@ import ( "fmt" "log" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/internal/backend" "github.com/hashicorp/terraform/internal/command/views" "github.com/hashicorp/terraform/internal/plans" @@ -23,7 +22,7 @@ func (b *Local) opApply( runningOp *backend.RunningOperation) { log.Printf("[INFO] backend/local: starting Apply operation") - var diags tfdiags.Diagnostics + var diags, moreDiags tfdiags.Diagnostics // If we have a nil module at this point, then set it to an empty tree // to avoid any potential crashes. @@ -43,7 +42,7 @@ func (b *Local) opApply( op.Hooks = append(op.Hooks, stateHook) // Get our context - tfCtx, _, opState, contextDiags := b.context(op) + lr, _, opState, contextDiags := b.localRun(op) diags = diags.Append(contextDiags) if contextDiags.HasErrors() { op.ReportResult(runningOp, diags) @@ -59,15 +58,26 @@ func (b *Local) opApply( } }() - runningOp.State = tfCtx.State() + // We'll start off with our result being the input state, and replace it + // with the result state only if we eventually complete the apply + // operation. 
+ runningOp.State = lr.InputState + var plan *plans.Plan // If we weren't given a plan, then we refresh/plan if op.PlanFile == nil { // Perform the plan log.Printf("[INFO] backend/local: apply calling Plan") - plan, planDiags := tfCtx.Plan() - diags = diags.Append(planDiags) - if planDiags.HasErrors() { + plan, moreDiags = lr.Core.Plan(lr.Config, lr.InputState, lr.PlanOpts) + diags = diags.Append(moreDiags) + if moreDiags.HasErrors() { + op.ReportResult(runningOp, diags) + return + } + + schemas, moreDiags := lr.Core.Schemas(lr.Config, lr.InputState) + diags = diags.Append(moreDiags) + if moreDiags.HasErrors() { op.ReportResult(runningOp, diags) return } @@ -75,7 +85,7 @@ func (b *Local) opApply( trivialPlan := !plan.CanApply() hasUI := op.UIOut != nil && op.UIIn != nil mustConfirm := hasUI && !op.AutoApprove && !trivialPlan - op.View.Plan(plan, tfCtx.Schemas()) + op.View.Plan(plan, schemas) if mustConfirm { var desc, query string @@ -119,7 +129,7 @@ func (b *Local) opApply( Description: desc, }) if err != nil { - diags = diags.Append(errwrap.Wrapf("Error asking for approval: {{err}}", err)) + diags = diags.Append(fmt.Errorf("error asking for approval: %w", err)) op.ReportResult(runningOp, diags) return } @@ -130,16 +140,7 @@ func (b *Local) opApply( } } } else { - plan, err := op.PlanFile.ReadPlan() - if err != nil { - diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, - "Invalid plan file", - fmt.Sprintf("Failed to read plan from plan file: %s.", err), - )) - op.ReportResult(runningOp, diags) - return - } + plan = lr.Plan for _, change := range plan.Changes.Resources { if change.Action != plans.NoOp { op.View.PlannedChange(change) @@ -157,12 +158,10 @@ func (b *Local) opApply( go func() { defer close(doneCh) log.Printf("[INFO] backend/local: apply calling Apply") - _, applyDiags = tfCtx.Apply() - // we always want the state, even if apply failed - applyState = tfCtx.State() + applyState, applyDiags = lr.Core.Apply(plan, lr.Config) }() - if b.opWait(doneCh, stopCtx, cancelCtx, tfCtx, opState, op.View) { + if b.opWait(doneCh, stopCtx, cancelCtx, lr.Core, opState, op.View) { return } diags = diags.Append(applyDiags) diff --git a/internal/backend/local/backend_local.go b/internal/backend/local/backend_local.go index 0ad9f1fd5719..0ba6e66fc681 100644 --- a/internal/backend/local/backend_local.go +++ b/internal/backend/local/backend_local.go @@ -6,7 +6,6 @@ import ( "log" "sort" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/internal/backend" "github.com/hashicorp/terraform/internal/configs" "github.com/hashicorp/terraform/internal/configs/configload" @@ -18,25 +17,29 @@ import ( ) // backend.Local implementation. -func (b *Local) Context(op *backend.Operation) (*terraform.Context, statemgr.Full, tfdiags.Diagnostics) { +func (b *Local) LocalRun(op *backend.Operation) (*backend.LocalRun, statemgr.Full, tfdiags.Diagnostics) { // Make sure the type is invalid. We use this as a way to know not - // to ask for input/validate. + // to ask for input/validate. We're modifying this through a pointer, + // so we're mutating an object that belongs to the caller here, which + // seems bad but we're preserving it for now until we have time to + // properly design this API, vs. just preserving whatever it currently + // happens to do. 
op.Type = backend.OperationTypeInvalid op.StateLocker = op.StateLocker.WithContext(context.Background()) - ctx, _, stateMgr, diags := b.context(op) - return ctx, stateMgr, diags + lr, _, stateMgr, diags := b.localRun(op) + return lr, stateMgr, diags } -func (b *Local) context(op *backend.Operation) (*terraform.Context, *configload.Snapshot, statemgr.Full, tfdiags.Diagnostics) { +func (b *Local) localRun(op *backend.Operation) (*backend.LocalRun, *configload.Snapshot, statemgr.Full, tfdiags.Diagnostics) { var diags tfdiags.Diagnostics // Get the latest state. log.Printf("[TRACE] backend/local: requesting state manager for workspace %q", op.Workspace) s, err := b.StateMgr(op.Workspace) if err != nil { - diags = diags.Append(errwrap.Wrapf("Error loading state: {{err}}", err)) + diags = diags.Append(fmt.Errorf("error loading state: %w", err)) return nil, nil, nil, diags } log.Printf("[TRACE] backend/local: requesting state lock for workspace %q", op.Workspace) @@ -54,35 +57,20 @@ func (b *Local) context(op *backend.Operation) (*terraform.Context, *configload. log.Printf("[TRACE] backend/local: reading remote state for workspace %q", op.Workspace) if err := s.RefreshState(); err != nil { - diags = diags.Append(errwrap.Wrapf("Error loading state: {{err}}", err)) + diags = diags.Append(fmt.Errorf("error loading state: %w", err)) return nil, nil, nil, diags } + ret := &backend.LocalRun{} + // Initialize our context options - var opts terraform.ContextOpts + var coreOpts terraform.ContextOpts if v := b.ContextOpts; v != nil { - opts = *v + coreOpts = *v } + coreOpts.UIInput = op.UIIn + coreOpts.Hooks = op.Hooks - // Copy set options from the operation - opts.PlanMode = op.PlanMode - opts.Targets = op.Targets - opts.ForceReplace = op.ForceReplace - opts.UIInput = op.UIIn - opts.Hooks = op.Hooks - - opts.SkipRefresh = op.Type != backend.OperationTypeRefresh && !op.PlanRefresh - if opts.SkipRefresh { - log.Printf("[DEBUG] backend/local: skipping refresh of managed resources") - } - - // Load the latest state. If we enter contextFromPlanFile below then the - // state snapshot in the plan file must match this, or else it'll return - // error diagnostics. - log.Printf("[TRACE] backend/local: retrieving local state snapshot for workspace %q", op.Workspace) - opts.State = s.State() - - var tfCtx *terraform.Context var ctxDiags tfdiags.Diagnostics var configSnap *configload.Snapshot if op.PlanFile != nil { @@ -94,8 +82,8 @@ func (b *Local) context(op *backend.Operation) (*terraform.Context, *configload. m := sm.StateSnapshotMeta() stateMeta = &m } - log.Printf("[TRACE] backend/local: building context from plan file") - tfCtx, configSnap, ctxDiags = b.contextFromPlanFile(op.PlanFile, opts, stateMeta) + log.Printf("[TRACE] backend/local: populating backend.LocalRun from plan file") + ret, configSnap, ctxDiags = b.localRunForPlanFile(op.PlanFile, ret, &coreOpts, stateMeta) if ctxDiags.HasErrors() { diags = diags.Append(ctxDiags) return nil, nil, nil, diags @@ -105,14 +93,13 @@ func (b *Local) context(op *backend.Operation) (*terraform.Context, *configload. // available if we need to generate diagnostic message snippets. 
op.ConfigLoader.ImportSourcesFromSnapshot(configSnap) } else { - log.Printf("[TRACE] backend/local: building context for current working directory") - tfCtx, configSnap, ctxDiags = b.contextDirect(op, opts) + log.Printf("[TRACE] backend/local: populating backend.LocalRun for current working directory") + ret, configSnap, ctxDiags = b.localRunDirect(op, ret, &coreOpts, s) } diags = diags.Append(ctxDiags) if diags.HasErrors() { return nil, nil, nil, diags } - log.Printf("[TRACE] backend/local: finished building terraform.Context") // If we have an operation, then we automatically do the input/validate // here since every option requires this. @@ -122,7 +109,7 @@ func (b *Local) context(op *backend.Operation) (*terraform.Context, *configload. mode := terraform.InputModeProvider log.Printf("[TRACE] backend/local: requesting interactive input, if necessary") - inputDiags := tfCtx.Input(mode) + inputDiags := ret.Core.Input(ret.Config, mode) diags = diags.Append(inputDiags) if inputDiags.HasErrors() { return nil, nil, nil, diags @@ -132,15 +119,15 @@ func (b *Local) context(op *backend.Operation) (*terraform.Context, *configload. // If validation is enabled, validate if b.OpValidation { log.Printf("[TRACE] backend/local: running validation operation") - validateDiags := tfCtx.Validate() + validateDiags := ret.Core.Validate(ret.Config) diags = diags.Append(validateDiags) } } - return tfCtx, configSnap, s, diags + return ret, configSnap, s, diags } -func (b *Local) contextDirect(op *backend.Operation, opts terraform.ContextOpts) (*terraform.Context, *configload.Snapshot, tfdiags.Diagnostics) { +func (b *Local) localRunDirect(op *backend.Operation, run *backend.LocalRun, coreOpts *terraform.ContextOpts, s statemgr.Full) (*backend.LocalRun, *configload.Snapshot, tfdiags.Diagnostics) { var diags tfdiags.Diagnostics // Load the configuration using the caller-provided configuration loader. @@ -149,7 +136,7 @@ func (b *Local) contextDirect(op *backend.Operation, opts terraform.ContextOpts) if configDiags.HasErrors() { return nil, nil, diags } - opts.Config = config + run.Config = config var rawVariables map[string]backend.UnparsedVariableValue if op.AllowUnsetVariables { @@ -163,7 +150,7 @@ func (b *Local) contextDirect(op *backend.Operation, opts terraform.ContextOpts) // values through interactive prompts. // TODO: Need to route the operation context through into here, so that // the interactive prompts can be sensitive to its timeouts/etc. - rawVariables = b.interactiveCollectVariables(context.TODO(), op.Variables, config.Module.Variables, opts.UIInput) + rawVariables = b.interactiveCollectVariables(context.TODO(), op.Variables, config.Module.Variables, op.UIIn) } variables, varDiags := backend.ParseVariableValues(rawVariables, config.Module.Variables) @@ -171,14 +158,30 @@ func (b *Local) contextDirect(op *backend.Operation, opts terraform.ContextOpts) if diags.HasErrors() { return nil, nil, diags } - opts.Variables = variables - tfCtx, ctxDiags := terraform.NewContext(&opts) - diags = diags.Append(ctxDiags) - return tfCtx, configSnap, diags + planOpts := &terraform.PlanOpts{ + Mode: op.PlanMode, + Targets: op.Targets, + ForceReplace: op.ForceReplace, + SetVariables: variables, + SkipRefresh: op.Type != backend.OperationTypeRefresh && !op.PlanRefresh, + } + run.PlanOpts = planOpts + + // For a "direct" local run, the input state is the most recently stored + // snapshot, from the previous run. 
+ run.InputState = s.State() + + tfCtx, moreDiags := terraform.NewContext(coreOpts) + diags = diags.Append(moreDiags) + if moreDiags.HasErrors() { + return nil, nil, diags + } + run.Core = tfCtx + return run, configSnap, diags } -func (b *Local) contextFromPlanFile(pf *planfile.Reader, opts terraform.ContextOpts, currentStateMeta *statemgr.SnapshotMeta) (*terraform.Context, *configload.Snapshot, tfdiags.Diagnostics) { +func (b *Local) localRunForPlanFile(pf *planfile.Reader, run *backend.LocalRun, coreOpts *terraform.ContextOpts, currentStateMeta *statemgr.SnapshotMeta) (*backend.LocalRun, *configload.Snapshot, tfdiags.Diagnostics) { var diags tfdiags.Diagnostics const errSummary = "Invalid plan file" @@ -201,7 +204,7 @@ func (b *Local) contextFromPlanFile(pf *planfile.Reader, opts terraform.ContextO if configDiags.HasErrors() { return nil, snap, diags } - opts.Config = config + run.Config = config // A plan file also contains a snapshot of the prior state the changes // are intended to apply to. @@ -230,11 +233,10 @@ func (b *Local) contextFromPlanFile(pf *planfile.Reader, opts terraform.ContextO } } } - // The caller already wrote the "current state" here, but we're overriding - // it here with the prior state. These two should actually be identical in - // normal use, particularly if we validated the state meta above, but - // we do this here anyway to ensure consistent behavior. - opts.State = priorStateFile.State + // When we're applying a saved plan, the input state is the "prior state" + // recorded in the plan, which incorporates the result of all of the + // refreshing we did while building the plan. + run.InputState = priorStateFile.State plan, err := pf.ReadPlan() if err != nil { @@ -245,33 +247,23 @@ func (b *Local) contextFromPlanFile(pf *planfile.Reader, opts terraform.ContextO )) return nil, snap, diags } - - variables := terraform.InputValues{} - for name, dyVal := range plan.VariableValues { - val, err := dyVal.Decode(cty.DynamicPseudoType) - if err != nil { - diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, - errSummary, - fmt.Sprintf("Invalid value for variable %q recorded in plan file: %s.", name, err), - )) - continue - } - - variables[name] = &terraform.InputValue{ - Value: val, - SourceType: terraform.ValueFromPlan, - } + // When we're applying a saved plan, we populate Plan instead of PlanOpts, + // because a plan object incorporates the subset of data from PlanOps that + // we need to apply the plan. + run.Plan = plan + + // When we're applying a saved plan, our context must verify that all of + // the providers it ends up using are identical to those which created + // the plan. 
+ coreOpts.ProviderSHA256s = plan.ProviderSHA256s + + tfCtx, moreDiags := terraform.NewContext(coreOpts) + diags = diags.Append(moreDiags) + if moreDiags.HasErrors() { + return nil, nil, diags } - opts.Variables = variables - opts.Changes = plan.Changes - opts.Targets = plan.TargetAddrs - opts.ForceReplace = plan.ForceReplaceAddrs - opts.ProviderSHA256s = plan.ProviderSHA256s - - tfCtx, ctxDiags := terraform.NewContext(&opts) - diags = diags.Append(ctxDiags) - return tfCtx, snap, diags + run.Core = tfCtx + return run, snap, diags } // interactiveCollectVariables attempts to complete the given existing diff --git a/internal/backend/local/backend_local_test.go b/internal/backend/local/backend_local_test.go index f9db2d6b51be..67314d730c5c 100644 --- a/internal/backend/local/backend_local_test.go +++ b/internal/backend/local/backend_local_test.go @@ -1,6 +1,7 @@ package local import ( + "fmt" "os" "path/filepath" "testing" @@ -10,16 +11,19 @@ import ( "github.com/hashicorp/terraform/internal/command/clistate" "github.com/hashicorp/terraform/internal/command/views" "github.com/hashicorp/terraform/internal/configs/configload" + "github.com/hashicorp/terraform/internal/configs/configschema" "github.com/hashicorp/terraform/internal/initwd" "github.com/hashicorp/terraform/internal/plans" "github.com/hashicorp/terraform/internal/plans/planfile" "github.com/hashicorp/terraform/internal/states" "github.com/hashicorp/terraform/internal/states/statefile" + "github.com/hashicorp/terraform/internal/states/statemgr" "github.com/hashicorp/terraform/internal/terminal" + "github.com/hashicorp/terraform/internal/tfdiags" "github.com/zclconf/go-cty/cty" ) -func TestLocalContext(t *testing.T) { +func TestLocalRun(t *testing.T) { configDir := "./testdata/empty" b, cleanup := TestLocal(t) defer cleanup() @@ -38,20 +42,24 @@ func TestLocalContext(t *testing.T) { StateLocker: stateLocker, } - _, _, diags := b.Context(op) + _, _, diags := b.LocalRun(op) if diags.HasErrors() { t.Fatalf("unexpected error: %s", diags.Err().Error()) } - // Context() retains a lock on success + // LocalRun() retains a lock on success assertBackendStateLocked(t, b) } -func TestLocalContext_error(t *testing.T) { - configDir := "./testdata/apply" +func TestLocalRun_error(t *testing.T) { + configDir := "./testdata/invalid" b, cleanup := TestLocal(t) defer cleanup() + // This backend will return an error when asked to RefreshState, which + // should then cause LocalRun to return with the state unlocked. 
+ b.Backend = backendWithStateStorageThatFailsRefresh{} + _, configLoader, configCleanup := initwd.MustLoadConfigForTests(t, configDir) defer configCleanup() @@ -66,16 +74,16 @@ func TestLocalContext_error(t *testing.T) { StateLocker: stateLocker, } - _, _, diags := b.Context(op) + _, _, diags := b.LocalRun(op) if !diags.HasErrors() { t.Fatal("unexpected success") } - // Context() unlocks the state on failure + // LocalRun() unlocks the state on failure assertBackendStateUnlocked(t, b) } -func TestLocalContext_stalePlan(t *testing.T) { +func TestLocalRun_stalePlan(t *testing.T) { configDir := "./testdata/apply" b, cleanup := TestLocal(t) defer cleanup() @@ -147,11 +155,76 @@ func TestLocalContext_stalePlan(t *testing.T) { StateLocker: stateLocker, } - _, _, diags := b.Context(op) + _, _, diags := b.LocalRun(op) if !diags.HasErrors() { t.Fatal("unexpected success") } - // Context() unlocks the state on failure + // LocalRun() unlocks the state on failure assertBackendStateUnlocked(t, b) } + +type backendWithStateStorageThatFailsRefresh struct { +} + +var _ backend.Backend = backendWithStateStorageThatFailsRefresh{} + +func (b backendWithStateStorageThatFailsRefresh) StateMgr(workspace string) (statemgr.Full, error) { + return &stateStorageThatFailsRefresh{}, nil +} + +func (b backendWithStateStorageThatFailsRefresh) ConfigSchema() *configschema.Block { + return &configschema.Block{} +} + +func (b backendWithStateStorageThatFailsRefresh) PrepareConfig(in cty.Value) (cty.Value, tfdiags.Diagnostics) { + return in, nil +} + +func (b backendWithStateStorageThatFailsRefresh) Configure(cty.Value) tfdiags.Diagnostics { + return nil +} + +func (b backendWithStateStorageThatFailsRefresh) DeleteWorkspace(name string) error { + return fmt.Errorf("unimplemented") +} + +func (b backendWithStateStorageThatFailsRefresh) Workspaces() ([]string, error) { + return []string{"default"}, nil +} + +type stateStorageThatFailsRefresh struct { + locked bool +} + +func (s *stateStorageThatFailsRefresh) Lock(info *statemgr.LockInfo) (string, error) { + if s.locked { + return "", fmt.Errorf("already locked") + } + s.locked = true + return "locked", nil +} + +func (s *stateStorageThatFailsRefresh) Unlock(id string) error { + if !s.locked { + return fmt.Errorf("not locked") + } + s.locked = false + return nil +} + +func (s *stateStorageThatFailsRefresh) State() *states.State { + return nil +} + +func (s *stateStorageThatFailsRefresh) WriteState(*states.State) error { + return fmt.Errorf("unimplemented") +} + +func (s *stateStorageThatFailsRefresh) RefreshState() error { + return fmt.Errorf("intentionally failing for testing purposes") +} + +func (s *stateStorageThatFailsRefresh) PersistState() error { + return fmt.Errorf("unimplemented") +} diff --git a/internal/backend/local/backend_plan.go b/internal/backend/local/backend_plan.go index 507d2ec98666..e25ab3ef6415 100644 --- a/internal/backend/local/backend_plan.go +++ b/internal/backend/local/backend_plan.go @@ -54,7 +54,7 @@ func (b *Local) opPlan( } // Get our context - tfCtx, configSnap, opState, ctxDiags := b.context(op) + lr, configSnap, opState, ctxDiags := b.localRun(op) diags = diags.Append(ctxDiags) if ctxDiags.HasErrors() { op.ReportResult(runningOp, diags) @@ -70,7 +70,9 @@ func (b *Local) opPlan( } }() - runningOp.State = tfCtx.State() + // Since planning doesn't immediately change the persisted state, the + // resulting state is always just the input state. 
+ runningOp.State = lr.InputState // Perform the plan in a goroutine so we can be interrupted var plan *plans.Plan @@ -79,10 +81,10 @@ func (b *Local) opPlan( go func() { defer close(doneCh) log.Printf("[INFO] backend/local: plan calling Plan") - plan, planDiags = tfCtx.Plan() + plan, planDiags = lr.Core.Plan(lr.Config, lr.InputState, lr.PlanOpts) }() - if b.opWait(doneCh, stopCtx, cancelCtx, tfCtx, opState, op.View) { + if b.opWait(doneCh, stopCtx, cancelCtx, lr.Core, opState, op.View) { // If we get in here then the operation was cancelled, which is always // considered to be a failure. log.Printf("[INFO] backend/local: plan operation was force-cancelled by interrupt") @@ -144,7 +146,13 @@ func (b *Local) opPlan( } // Render the plan - op.View.Plan(plan, tfCtx.Schemas()) + schemas, moreDiags := lr.Core.Schemas(lr.Config, lr.InputState) + diags = diags.Append(moreDiags) + if moreDiags.HasErrors() { + op.ReportResult(runningOp, diags) + return + } + op.View.Plan(plan, schemas) // If we've accumulated any warnings along the way then we'll show them // here just before we show the summary and next steps. If we encountered diff --git a/internal/backend/local/backend_plan_test.go b/internal/backend/local/backend_plan_test.go index 0b823b253171..5866048f9f0a 100644 --- a/internal/backend/local/backend_plan_test.go +++ b/internal/backend/local/backend_plan_test.go @@ -136,7 +136,7 @@ func TestLocal_plan_context_error(t *testing.T) { // the backend should be unlocked after a run assertBackendStateUnlocked(t, b) - if got, want := done(t).Stderr(), "Error: Could not load plugin"; !strings.Contains(got, want) { + if got, want := done(t).Stderr(), "Error: Failed to load plugin schemas"; !strings.Contains(got, want) { t.Fatalf("unexpected error output:\n%s\nwant: %s", got, want) } } diff --git a/internal/backend/local/backend_refresh.go b/internal/backend/local/backend_refresh.go index fa6424702f66..988a8b8f3759 100644 --- a/internal/backend/local/backend_refresh.go +++ b/internal/backend/local/backend_refresh.go @@ -6,7 +6,6 @@ import ( "log" "os" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/internal/backend" "github.com/hashicorp/terraform/internal/states" "github.com/hashicorp/terraform/internal/states/statemgr" @@ -45,7 +44,7 @@ func (b *Local) opRefresh( op.PlanRefresh = true // Get our context - tfCtx, _, opState, contextDiags := b.context(op) + lr, _, opState, contextDiags := b.localRun(op) diags = diags.Append(contextDiags) if contextDiags.HasErrors() { op.ReportResult(runningOp, diags) @@ -62,8 +61,9 @@ func (b *Local) opRefresh( } }() - // Set our state - runningOp.State = opState.State() + // If we succeed then we'll overwrite this with the resulting state below, + // but otherwise the resulting state is just the input state. 
+ runningOp.State = lr.InputState if !runningOp.State.HasResources() { diags = diags.Append(tfdiags.Sourceless( tfdiags.Warning, @@ -78,11 +78,11 @@ func (b *Local) opRefresh( doneCh := make(chan struct{}) go func() { defer close(doneCh) - newState, refreshDiags = tfCtx.Refresh() + newState, refreshDiags = lr.Core.Refresh(lr.Config, lr.InputState, lr.PlanOpts) log.Printf("[INFO] backend/local: refresh calling Refresh") }() - if b.opWait(doneCh, stopCtx, cancelCtx, tfCtx, opState, op.View) { + if b.opWait(doneCh, stopCtx, cancelCtx, lr.Core, opState, op.View) { return } @@ -96,7 +96,7 @@ func (b *Local) opRefresh( err := statemgr.WriteAndPersist(opState, newState) if err != nil { - diags = diags.Append(errwrap.Wrapf("Failed to write state: {{err}}", err)) + diags = diags.Append(fmt.Errorf("failed to write state: %w", err)) op.ReportResult(runningOp, diags) return } diff --git a/internal/backend/local/backend_test.go b/internal/backend/local/backend_test.go index 4e9642660a39..d39890fa2384 100644 --- a/internal/backend/local/backend_test.go +++ b/internal/backend/local/backend_test.go @@ -178,9 +178,9 @@ type testDelegateBackend struct { deleteErr bool } -var errTestDelegateState = errors.New("State called") -var errTestDelegateStates = errors.New("States called") -var errTestDelegateDeleteState = errors.New("Delete called") +var errTestDelegateState = errors.New("state called") +var errTestDelegateStates = errors.New("states called") +var errTestDelegateDeleteState = errors.New("delete called") func (b *testDelegateBackend) StateMgr(name string) (statemgr.Full, error) { if b.stateErr { diff --git a/internal/backend/local/testdata/invalid/invalid.tf b/internal/backend/local/testdata/invalid/invalid.tf new file mode 100644 index 000000000000..7f2d0723d3b0 --- /dev/null +++ b/internal/backend/local/testdata/invalid/invalid.tf @@ -0,0 +1,6 @@ +# This configuration is intended to be loadable (valid syntax, etc) but to +# fail terraform.Context.Validate. + +locals { + a = local.nonexist +} diff --git a/internal/backend/remote/backend.go b/internal/backend/remote/backend.go index 97b27b4a6e4a..5e11872391be 100644 --- a/internal/backend/remote/backend.go +++ b/internal/backend/remote/backend.go @@ -91,6 +91,8 @@ type Remote struct { } var _ backend.Backend = (*Remote)(nil) +var _ backend.Enhanced = (*Remote)(nil) +var _ backend.Local = (*Remote)(nil) // New creates a new initialized remote backend. func New(services *disco.Disco) *Remote { diff --git a/internal/backend/remote/backend_context.go b/internal/backend/remote/backend_context.go index b622f3f7872c..5d50d11d432e 100644 --- a/internal/backend/remote/backend_context.go +++ b/internal/backend/remote/backend_context.go @@ -6,7 +6,6 @@ import ( "log" "strings" - "github.com/hashicorp/errwrap" tfe "github.com/hashicorp/go-tfe" "github.com/hashicorp/hcl/v2" "github.com/hashicorp/hcl/v2/hclsyntax" @@ -18,9 +17,15 @@ import ( "github.com/zclconf/go-cty/cty" ) -// Context implements backend.Enhanced. -func (b *Remote) Context(op *backend.Operation) (*terraform.Context, statemgr.Full, tfdiags.Diagnostics) { +// Context implements backend.Local. 
+func (b *Remote) LocalRun(op *backend.Operation) (*backend.LocalRun, statemgr.Full, tfdiags.Diagnostics) { var diags tfdiags.Diagnostics + ret := &backend.LocalRun{ + PlanOpts: &terraform.PlanOpts{ + Mode: op.PlanMode, + Targets: op.Targets, + }, + } op.StateLocker = op.StateLocker.WithContext(context.Background()) @@ -31,7 +36,7 @@ func (b *Remote) Context(op *backend.Operation) (*terraform.Context, statemgr.Fu log.Printf("[TRACE] backend/remote: requesting state manager for workspace %q", remoteWorkspaceName) stateMgr, err := b.StateMgr(op.Workspace) if err != nil { - diags = diags.Append(errwrap.Wrapf("Error loading state: {{err}}", err)) + diags = diags.Append(fmt.Errorf("error loading state: %w", err)) return nil, nil, diags } @@ -50,7 +55,7 @@ func (b *Remote) Context(op *backend.Operation) (*terraform.Context, statemgr.Fu log.Printf("[TRACE] backend/remote: reading remote state for workspace %q", remoteWorkspaceName) if err := stateMgr.RefreshState(); err != nil { - diags = diags.Append(errwrap.Wrapf("Error loading state: {{err}}", err)) + diags = diags.Append(fmt.Errorf("error loading state: %w", err)) return nil, nil, diags } @@ -61,15 +66,13 @@ func (b *Remote) Context(op *backend.Operation) (*terraform.Context, statemgr.Fu } // Copy set options from the operation - opts.PlanMode = op.PlanMode - opts.Targets = op.Targets opts.UIInput = op.UIIn // Load the latest state. If we enter contextFromPlanFile below then the // state snapshot in the plan file must match this, or else it'll return // error diagnostics. log.Printf("[TRACE] backend/remote: retrieving remote state snapshot for workspace %q", remoteWorkspaceName) - opts.State = stateMgr.State() + ret.InputState = stateMgr.State() log.Printf("[TRACE] backend/remote: loading configuration for the current working directory") config, configDiags := op.ConfigLoader.LoadConfig(op.ConfigDir) @@ -77,21 +80,21 @@ func (b *Remote) Context(op *backend.Operation) (*terraform.Context, statemgr.Fu if configDiags.HasErrors() { return nil, nil, diags } - opts.Config = config + ret.Config = config // The underlying API expects us to use the opaque workspace id to request // variables, so we'll need to look that up using our organization name // and workspace name. remoteWorkspaceID, err := b.getRemoteWorkspaceID(context.Background(), op.Workspace) if err != nil { - diags = diags.Append(errwrap.Wrapf("Error finding remote workspace: {{err}}", err)) + diags = diags.Append(fmt.Errorf("error finding remote workspace: %w", err)) return nil, nil, diags } log.Printf("[TRACE] backend/remote: retrieving variables from workspace %s/%s (%s)", remoteWorkspaceName, b.organization, remoteWorkspaceID) tfeVariables, err := b.client.Variables.List(context.Background(), remoteWorkspaceID, tfe.VariableListOptions{}) if err != nil && err != tfe.ErrResourceNotFound { - diags = diags.Append(errwrap.Wrapf("Error loading variables: {{err}}", err)) + diags = diags.Append(fmt.Errorf("error loading variables: %w", err)) return nil, nil, diags } @@ -100,7 +103,7 @@ func (b *Remote) Context(op *backend.Operation) (*terraform.Context, statemgr.Fu // more lax about them, stubbing out any unset ones as unknown. 
// This gives us enough information to produce a consistent context, // but not enough information to run a real operation (plan, apply, etc) - opts.Variables = stubAllVariables(op.Variables, config.Module.Variables) + ret.PlanOpts.SetVariables = stubAllVariables(op.Variables, config.Module.Variables) } else { if tfeVariables != nil { if op.Variables == nil { @@ -121,16 +124,17 @@ func (b *Remote) Context(op *backend.Operation) (*terraform.Context, statemgr.Fu if diags.HasErrors() { return nil, nil, diags } - opts.Variables = variables + ret.PlanOpts.SetVariables = variables } } tfCtx, ctxDiags := terraform.NewContext(&opts) diags = diags.Append(ctxDiags) + ret.Core = tfCtx log.Printf("[TRACE] backend/remote: finished building terraform.Context") - return tfCtx, stateMgr, diags + return ret, stateMgr, diags } func (b *Remote) getRemoteWorkspaceName(localWorkspaceName string) string { diff --git a/internal/backend/remote/backend_context_test.go b/internal/backend/remote/backend_context_test.go index c0470493f5c3..819f583ec844 100644 --- a/internal/backend/remote/backend_context_test.go +++ b/internal/backend/remote/backend_context_test.go @@ -204,7 +204,7 @@ func TestRemoteContextWithVars(t *testing.T) { } b.client.Variables.Create(context.TODO(), workspaceID, *v) - _, _, diags := b.Context(op) + _, _, diags := b.LocalRun(op) if test.WantError != "" { if !diags.HasErrors() { diff --git a/internal/command/add.go b/internal/command/add.go index 4e23d8144847..c1326815705f 100644 --- a/internal/command/add.go +++ b/internal/command/add.go @@ -99,7 +99,7 @@ func (c *AddCommand) Run(rawArgs []string) int { } // Get the context - ctx, _, ctxDiags := local.Context(opReq) + lr, _, ctxDiags := local.LocalRun(opReq) diags = diags.Append(ctxDiags) if ctxDiags.HasErrors() { view.Diagnostics(diags) @@ -118,10 +118,10 @@ func (c *AddCommand) Run(rawArgs []string) int { // already exist in the config. var module *configs.Module if args.Addr.Module.IsRoot() { - module = ctx.Config().Module + module = lr.Config.Module } else { // This is weird, but users can potentially specify non-existant module names - cfg := ctx.Config().Root.Descendent(args.Addr.Module.Module()) + cfg := lr.Config.Root.Descendent(args.Addr.Module.Module()) if cfg != nil { module = cfg.Module } @@ -143,7 +143,12 @@ func (c *AddCommand) Run(rawArgs []string) int { } // Get the schemas from the context - schemas := ctx.Schemas() + schemas, moreDiags := lr.Core.Schemas(lr.Config, lr.InputState) + diags = diags.Append(moreDiags) + if moreDiags.HasErrors() { + view.Diagnostics(diags) + return 1 + } // Determine the correct provider config address. The provider-related // variables may get updated below @@ -154,7 +159,6 @@ func (c *AddCommand) Run(rawArgs []string) int { // If we are getting the values from state, get the AbsProviderConfig // directly from state as well. 
var resource *states.Resource - var moreDiags tfdiags.Diagnostics if args.FromState { resource, moreDiags = c.getResource(b, args.Addr.ContainingResource()) if moreDiags.HasErrors() { diff --git a/internal/command/console.go b/internal/command/console.go index 9423f0e32207..3195988683b0 100644 --- a/internal/command/console.go +++ b/internal/command/console.go @@ -9,6 +9,7 @@ import ( "github.com/hashicorp/terraform/internal/backend" "github.com/hashicorp/terraform/internal/helper/wrappedstreams" "github.com/hashicorp/terraform/internal/repl" + "github.com/hashicorp/terraform/internal/terraform" "github.com/hashicorp/terraform/internal/tfdiags" "github.com/mitchellh/cli" @@ -95,7 +96,7 @@ func (c *ConsoleCommand) Run(args []string) int { } // Get the context - ctx, _, ctxDiags := local.Context(opReq) + lr, _, ctxDiags := local.LocalRun(opReq) diags = diags.Append(ctxDiags) if ctxDiags.HasErrors() { c.showDiagnostics(diags) @@ -116,10 +117,18 @@ func (c *ConsoleCommand) Run(args []string) int { ErrorWriter: wrappedstreams.Stderr(), } + evalOpts := &terraform.EvalOpts{} + if lr.PlanOpts != nil { + // the LocalRun type is built primarily to support the main operations, + // so the variable values end up in the "PlanOpts" even though we're + // not actually making a plan. + evalOpts.SetVariables = lr.PlanOpts.SetVariables + } + // Before we can evaluate expressions, we must compute and populate any // derived values (input variables, local values, output values) // that are not stored in the persistent state. - scope, scopeDiags := ctx.Eval(addrs.RootModuleInstance) + scope, scopeDiags := lr.Core.Eval(lr.Config, lr.InputState, addrs.RootModuleInstance, evalOpts) diags = diags.Append(scopeDiags) if scope == nil { // scope is nil if there are errors so bad that we can't even build a scope. 
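The console change above illustrates the calling convention this series moves toward: a command obtains a `backend.LocalRun` from the backend and passes its `Config`, `InputState`, and variable values explicitly into Terraform Core, instead of relying on data captured inside the `Context` at construction time. The following is a minimal sketch of that pattern, assuming only the `LocalRun` fields and the `Context.Eval`/`EvalOpts` signatures visible in the surrounding diffs; the helper name `evalScopeFromLocalRun` is hypothetical and not part of the patch.

```
// Hypothetical helper (not part of this patch) showing the pattern used by
// ConsoleCommand.Run above: build EvalOpts from the LocalRun and evaluate
// the root module scope with Terraform Core.
package command

import (
	"github.com/hashicorp/terraform/internal/addrs"
	"github.com/hashicorp/terraform/internal/backend"
	"github.com/hashicorp/terraform/internal/lang"
	"github.com/hashicorp/terraform/internal/terraform"
	"github.com/hashicorp/terraform/internal/tfdiags"
)

func evalScopeFromLocalRun(lr *backend.LocalRun) (*lang.Scope, tfdiags.Diagnostics) {
	evalOpts := &terraform.EvalOpts{}
	if lr.PlanOpts != nil {
		// LocalRun is shaped around the main operations, so variable values
		// arrive in PlanOpts even when no plan is being created.
		evalOpts.SetVariables = lr.PlanOpts.SetVariables
	}
	// Configuration and state are passed explicitly, rather than being
	// captured inside the Context at NewContext time.
	return lr.Core.Eval(lr.Config, lr.InputState, addrs.RootModuleInstance, evalOpts)
}
```

A command would call something like this only after `local.LocalRun(opReq)` has returned without errors, exactly as `ConsoleCommand.Run` does above.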
diff --git a/internal/command/graph.go b/internal/command/graph.go index 04a0ab46ccd4..87880a855732 100644 --- a/internal/command/graph.go +++ b/internal/command/graph.go @@ -4,12 +4,12 @@ import ( "fmt" "strings" - "github.com/hashicorp/terraform/internal/plans/planfile" - "github.com/hashicorp/terraform/internal/tfdiags" - "github.com/hashicorp/terraform/internal/backend" "github.com/hashicorp/terraform/internal/dag" + "github.com/hashicorp/terraform/internal/plans" + "github.com/hashicorp/terraform/internal/plans/planfile" "github.com/hashicorp/terraform/internal/terraform" + "github.com/hashicorp/terraform/internal/tfdiags" ) // GraphCommand is a Command implementation that takes a Terraform @@ -103,35 +103,64 @@ func (c *GraphCommand) Run(args []string) int { } // Get the context - ctx, _, ctxDiags := local.Context(opReq) + lr, _, ctxDiags := local.LocalRun(opReq) diags = diags.Append(ctxDiags) if ctxDiags.HasErrors() { c.showDiagnostics(diags) return 1 } - // Determine the graph type - graphType := terraform.GraphTypePlan - if planFile != nil { - graphType = terraform.GraphTypeApply + if graphTypeStr == "" { + switch { + case lr.Plan != nil: + graphTypeStr = "apply" + default: + graphTypeStr = "plan" + } } - if graphTypeStr != "" { - v, ok := terraform.GraphTypeMap[graphTypeStr] - if !ok { - c.Ui.Error(fmt.Sprintf("Invalid graph type requested: %s", graphTypeStr)) - return 1 + var g *terraform.Graph + var graphDiags tfdiags.Diagnostics + switch graphTypeStr { + case "plan": + g, graphDiags = lr.Core.PlanGraphForUI(lr.Config, lr.InputState, plans.NormalMode) + case "plan-refresh-only": + g, graphDiags = lr.Core.PlanGraphForUI(lr.Config, lr.InputState, plans.RefreshOnlyMode) + case "plan-destroy": + g, graphDiags = lr.Core.PlanGraphForUI(lr.Config, lr.InputState, plans.DestroyMode) + case "apply": + plan := lr.Plan + + // Historically "terraform graph" would allow the nonsensical request to + // render an apply graph without a plan, so we continue to support that + // here, though perhaps one day this should be an error. + if lr.Plan == nil { + plan = &plans.Plan{ + Changes: plans.NewChanges(), + UIMode: plans.NormalMode, + PriorState: lr.InputState, + PrevRunState: lr.InputState, + } } - graphType = v + g, graphDiags = lr.Core.ApplyGraphForUI(plan, lr.Config) + case "eval", "validate": + // Terraform v0.12 through v1.0 supported both of these, but the + // graph variants for "eval" and "validate" are purely implementation + // details and don't reveal anything (user-model-wise) that you can't + // see in the plan graph. + graphDiags = graphDiags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Graph type no longer available", + fmt.Sprintf("The graph type %q is no longer available. Use -type=plan instead to get a similar result.", graphTypeStr), + )) + default: + graphDiags = graphDiags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Unsupported graph type", + `The -type=... argument must be either "plan", "plan-refresh-only", "plan-destroy", or "apply".`, + )) } - - // Skip validation during graph generation - we want to see the graph even if - // it is invalid for some reason. 
- g, graphDiags := ctx.Graph(graphType, &terraform.ContextGraphOpts{ - Verbose: verbose, - Validate: false, - }) diags = diags.Append(graphDiags) if graphDiags.HasErrors() { c.showDiagnostics(diags) @@ -165,19 +194,13 @@ func (c *GraphCommand) Help() string { helpText := ` Usage: terraform [global options] graph [options] - Outputs the visual execution graph of Terraform resources according to - either the current configuration or an execution plan. + Produces a representation of the dependency graph between different + objects in the current configuration and state. - The graph is outputted in DOT format. The typical program that can + The graph is presented in the DOT language. The typical program that can read this format is GraphViz, but many web services are also available to read this format. - The -type flag can be used to control the type of graph shown. Terraform - creates different graphs for different operations. See the options below - for the list of types supported. The default type is "plan" if a - configuration is given, and "apply" if a plan file is passed as an - argument. - Options: -plan=tfplan Render graph using the specified plan file instead of the @@ -186,8 +209,9 @@ Options: -draw-cycles Highlight any cycles in the graph with colored edges. This helps when diagnosing cycle errors. - -type=plan Type of graph to output. Can be: plan, plan-destroy, apply, - validate, input, refresh. + -type=plan Type of graph to output. Can be: plan, plan-refresh-only, + plan-destroy, or apply. By default Terraform chooses + "plan", or "apply" if you also set the -plan=... option. -module-depth=n (deprecated) In prior versions of Terraform, specified the depth of modules to show in the output. diff --git a/internal/command/import.go b/internal/command/import.go index ec118b498bf6..7fc61a2f0c9d 100644 --- a/internal/command/import.go +++ b/internal/command/import.go @@ -212,7 +212,7 @@ func (c *ImportCommand) Run(args []string) int { } // Get the context - ctx, state, ctxDiags := local.Context(opReq) + lr, state, ctxDiags := local.LocalRun(opReq) diags = diags.Append(ctxDiags) if ctxDiags.HasErrors() { c.showDiagnostics(diags) @@ -230,13 +230,18 @@ func (c *ImportCommand) Run(args []string) int { // Perform the import. Note that as you can see it is possible for this // API to import more than one resource at once. For now, we only allow // one while we stabilize this feature. - newState, importDiags := ctx.Import(&terraform.ImportOpts{ + newState, importDiags := lr.Core.Import(lr.Config, lr.InputState, &terraform.ImportOpts{ Targets: []*terraform.ImportTarget{ - &terraform.ImportTarget{ + { Addr: addr, ID: args[1], }, }, + + // The LocalRun idea is designed around our primary operations, so + // the input variables end up represented as plan options even though + // this particular operation isn't really a plan. 
+ SetVariables: lr.PlanOpts.SetVariables, }) diags = diags.Append(importDiags) if diags.HasErrors() { diff --git a/internal/command/import_test.go b/internal/command/import_test.go index 28470dfc5bb0..bb56751e72ee 100644 --- a/internal/command/import_test.go +++ b/internal/command/import_test.go @@ -331,7 +331,7 @@ func TestImport_initializationErrorShouldUnlock(t *testing.T) { } // specifically, it should fail due to a missing provider - msg := ui.ErrorWriter.String() + msg := strings.ReplaceAll(ui.ErrorWriter.String(), "\n", " ") if want := `unknown provider "registry.terraform.io/hashicorp/unknown"`; !strings.Contains(msg, want) { t.Errorf("incorrect message\nwant substring: %s\ngot:\n%s", want, msg) } diff --git a/internal/command/meta.go b/internal/command/meta.go index cc4ff3b1a2fb..70d317779429 100644 --- a/internal/command/meta.go +++ b/internal/command/meta.go @@ -444,7 +444,6 @@ func (m *Meta) contextOpts() (*terraform.ContextOpts, error) { var opts terraform.ContextOpts - opts.Targets = m.targets opts.UIInput = m.UIInput() opts.Parallelism = m.parallelism diff --git a/internal/command/plan_test.go b/internal/command/plan_test.go index 84bf5f6ae84f..5638a9abfd3f 100644 --- a/internal/command/plan_test.go +++ b/internal/command/plan_test.go @@ -1051,7 +1051,7 @@ func TestPlan_init_required(t *testing.T) { t.Fatalf("expected error, got success") } got := output.Stderr() - if !strings.Contains(got, `Error: Could not load plugin`) { + if !strings.Contains(got, `Please run "terraform init".`) { t.Fatal("wrong error message in output:", got) } } diff --git a/internal/command/providers_schema.go b/internal/command/providers_schema.go index e2be06ca04b7..372564f12217 100644 --- a/internal/command/providers_schema.go +++ b/internal/command/providers_schema.go @@ -89,14 +89,20 @@ func (c *ProvidersSchemaCommand) Run(args []string) int { } // Get the context - ctx, _, ctxDiags := local.Context(opReq) + lr, _, ctxDiags := local.LocalRun(opReq) diags = diags.Append(ctxDiags) if ctxDiags.HasErrors() { c.showDiagnostics(diags) return 1 } - schemas := ctx.Schemas() + schemas, moreDiags := lr.Core.Schemas(lr.Config, lr.InputState) + diags = diags.Append(moreDiags) + if moreDiags.HasErrors() { + c.showDiagnostics(diags) + return 1 + } + jsonSchemas, err := jsonprovider.Marshal(schemas) if err != nil { c.Ui.Error(fmt.Sprintf("Failed to marshal provider schemas to json: %s", err)) diff --git a/internal/command/show.go b/internal/command/show.go index 7504a1973469..9886768cad91 100644 --- a/internal/command/show.go +++ b/internal/command/show.go @@ -101,7 +101,7 @@ func (c *ShowCommand) Run(args []string) int { } // Get the context - ctx, _, ctxDiags := local.Context(opReq) + lr, _, ctxDiags := local.LocalRun(opReq) diags = diags.Append(ctxDiags) if ctxDiags.HasErrors() { c.showDiagnostics(diags) @@ -109,7 +109,12 @@ func (c *ShowCommand) Run(args []string) int { } // Get the schemas from the context - schemas := ctx.Schemas() + schemas, moreDiags := lr.Core.Schemas(lr.Config, lr.InputState) + diags = diags.Append(moreDiags) + if moreDiags.HasErrors() { + c.showDiagnostics(diags) + return 1 + } var planErr, stateErr error var plan *plans.Plan @@ -148,7 +153,7 @@ func (c *ShowCommand) Run(args []string) int { if plan != nil { if jsonOutput { - config := ctx.Config() + config := lr.Config jsonPlan, err := jsonplan.Marshal(config, plan, stateFile, schemas) if err != nil { diff --git a/internal/command/state_show.go b/internal/command/state_show.go index 073548a09927..e95eca70fe66 100644 --- 
a/internal/command/state_show.go +++ b/internal/command/state_show.go @@ -82,14 +82,18 @@ func (c *StateShowCommand) Run(args []string) int { } // Get the context (required to get the schemas) - ctx, _, ctxDiags := local.Context(opReq) + lr, _, ctxDiags := local.LocalRun(opReq) if ctxDiags.HasErrors() { c.showDiagnostics(ctxDiags) return 1 } // Get the schemas from the context - schemas := ctx.Schemas() + schemas, diags := lr.Core.Schemas(lr.Config, lr.InputState) + if diags.HasErrors() { + c.showDiagnostics(diags) + return 1 + } // Get the state env, err := c.Workspace() diff --git a/internal/command/test.go b/internal/command/test.go index 9c786f7050a0..eff291179a2c 100644 --- a/internal/command/test.go +++ b/internal/command/test.go @@ -495,7 +495,16 @@ func (c *TestCommand) testSuiteProviders(suiteDirs testCommandSuiteDirs, testPro return ret, diags } -func (c *TestCommand) testSuiteContext(suiteDirs testCommandSuiteDirs, providerFactories map[addrs.Provider]providers.Factory, state *states.State, plan *plans.Plan, destroy bool) (*terraform.Context, tfdiags.Diagnostics) { +type testSuiteRunContext struct { + Core *terraform.Context + + PlanMode plans.Mode + Config *configs.Config + InputState *states.State + Changes *plans.Changes +} + +func (c *TestCommand) testSuiteContext(suiteDirs testCommandSuiteDirs, providerFactories map[addrs.Provider]providers.Factory, state *states.State, plan *plans.Plan, destroy bool) (*testSuiteRunContext, tfdiags.Diagnostics) { var changes *plans.Changes if plan != nil { changes = plan.Changes @@ -506,8 +515,7 @@ func (c *TestCommand) testSuiteContext(suiteDirs testCommandSuiteDirs, providerF planMode = plans.DestroyMode } - return terraform.NewContext(&terraform.ContextOpts{ - Config: suiteDirs.Config, + tfCtx, diags := terraform.NewContext(&terraform.ContextOpts{ Providers: providerFactories, // We just use the provisioners from the main Meta here, because @@ -519,73 +527,83 @@ func (c *TestCommand) testSuiteContext(suiteDirs testCommandSuiteDirs, providerF Meta: &terraform.ContextMeta{ Env: "test_" + suiteDirs.SuiteName, }, - - State: state, - Changes: changes, - PlanMode: planMode, }) + if diags.HasErrors() { + return nil, diags + } + return &testSuiteRunContext{ + Core: tfCtx, + + PlanMode: planMode, + Config: suiteDirs.Config, + InputState: state, + Changes: changes, + }, diags } func (c *TestCommand) testSuitePlan(ctx context.Context, suiteDirs testCommandSuiteDirs, providerFactories map[addrs.Provider]providers.Factory) (*plans.Plan, tfdiags.Diagnostics) { log.Printf("[TRACE] terraform test: create plan for suite %q", suiteDirs.SuiteName) - tfCtx, diags := c.testSuiteContext(suiteDirs, providerFactories, nil, nil, false) + runCtx, diags := c.testSuiteContext(suiteDirs, providerFactories, nil, nil, false) if diags.HasErrors() { return nil, diags } - // We'll also validate as part of planning, since the "terraform plan" - // command would typically do that and so inconsistencies we detect only - // during planning typically produce error messages saying that they are - // a bug in Terraform. - // (It's safe to use the same context for both validate and plan, because - // validate doesn't generate any new sticky content inside the context - // as plan and apply both do.) - moreDiags := tfCtx.Validate() + // We'll also validate as part of planning, to ensure that the test + // configuration would pass "terraform validate". 
This is actually + // largely redundant with the runCtx.Core.Plan call below, but was + // included here originally because Plan did _originally_ assume that + // an earlier Validate had already passed, but now does its own + // validation work as (mostly) a superset of validate. + moreDiags := runCtx.Core.Validate(runCtx.Config) diags = diags.Append(moreDiags) if diags.HasErrors() { return nil, diags } - plan, moreDiags := tfCtx.Plan() + plan, moreDiags := runCtx.Core.Plan( + runCtx.Config, runCtx.InputState, &terraform.PlanOpts{Mode: runCtx.PlanMode}, + ) diags = diags.Append(moreDiags) return plan, diags } func (c *TestCommand) testSuiteApply(ctx context.Context, plan *plans.Plan, suiteDirs testCommandSuiteDirs, providerFactories map[addrs.Provider]providers.Factory) (*states.State, tfdiags.Diagnostics) { log.Printf("[TRACE] terraform test: apply plan for suite %q", suiteDirs.SuiteName) - tfCtx, diags := c.testSuiteContext(suiteDirs, providerFactories, nil, plan, false) + runCtx, diags := c.testSuiteContext(suiteDirs, providerFactories, nil, plan, false) if diags.HasErrors() { // To make things easier on the caller, we'll return a valid empty // state even in this case. return states.NewState(), diags } - state, moreDiags := tfCtx.Apply() + state, moreDiags := runCtx.Core.Apply(plan, runCtx.Config) diags = diags.Append(moreDiags) return state, diags } func (c *TestCommand) testSuiteDestroy(ctx context.Context, state *states.State, suiteDirs testCommandSuiteDirs, providerFactories map[addrs.Provider]providers.Factory) (*states.State, tfdiags.Diagnostics) { log.Printf("[TRACE] terraform test: plan to destroy any existing objects for suite %q", suiteDirs.SuiteName) - tfCtx, diags := c.testSuiteContext(suiteDirs, providerFactories, state, nil, true) + runCtx, diags := c.testSuiteContext(suiteDirs, providerFactories, state, nil, true) if diags.HasErrors() { return state, diags } - plan, moreDiags := tfCtx.Plan() + plan, moreDiags := runCtx.Core.Plan( + runCtx.Config, runCtx.InputState, &terraform.PlanOpts{Mode: runCtx.PlanMode}, + ) diags = diags.Append(moreDiags) if diags.HasErrors() { return state, diags } log.Printf("[TRACE] terraform test: apply the plan to destroy any existing objects for suite %q", suiteDirs.SuiteName) - tfCtx, moreDiags = c.testSuiteContext(suiteDirs, providerFactories, state, plan, true) + runCtx, moreDiags = c.testSuiteContext(suiteDirs, providerFactories, state, plan, true) diags = diags.Append(moreDiags) if diags.HasErrors() { return state, diags } - state, moreDiags = tfCtx.Apply() + state, moreDiags = runCtx.Core.Apply(plan, runCtx.Config) diags = diags.Append(moreDiags) return state, diags } diff --git a/internal/command/validate.go b/internal/command/validate.go index 7801ffd718ee..110fcec8c32e 100644 --- a/internal/command/validate.go +++ b/internal/command/validate.go @@ -5,8 +5,6 @@ import ( "path/filepath" "strings" - "github.com/zclconf/go-cty/cty" - "github.com/hashicorp/terraform/internal/command/arguments" "github.com/hashicorp/terraform/internal/command/views" "github.com/hashicorp/terraform/internal/terraform" @@ -73,31 +71,11 @@ func (c *ValidateCommand) validate(dir string) tfdiags.Diagnostics { return diags } - // "validate" is to check if the given module is valid regardless of - // input values, current state, etc. Therefore we populate all of the - // input values with unknown values of the expected type, allowing us - // to perform a type check without assuming any particular values. 
- varValues := make(terraform.InputValues) - for name, variable := range cfg.Module.Variables { - ty := variable.Type - if ty == cty.NilType { - // Can't predict the type at all, so we'll just mark it as - // cty.DynamicVal (unknown value of cty.DynamicPseudoType). - ty = cty.DynamicPseudoType - } - varValues[name] = &terraform.InputValue{ - Value: cty.UnknownVal(ty), - SourceType: terraform.ValueFromCLIArg, - } - } - opts, err := c.contextOpts() if err != nil { diags = diags.Append(err) return diags } - opts.Config = cfg - opts.Variables = varValues tfCtx, ctxDiags := terraform.NewContext(opts) diags = diags.Append(ctxDiags) @@ -105,7 +83,7 @@ func (c *ValidateCommand) validate(dir string) tfdiags.Diagnostics { return diags } - validateDiags := tfCtx.Validate() + validateDiags := tfCtx.Validate(cfg) diags = diags.Append(validateDiags) return diags } diff --git a/internal/repl/session_test.go b/internal/repl/session_test.go index 9eb73f5e5f85..7110324e1ecc 100644 --- a/internal/repl/session_test.go +++ b/internal/repl/session_test.go @@ -204,17 +204,19 @@ func testSession(t *testing.T, test testSessionTest) { // Build the TF context ctx, diags := terraform.NewContext(&terraform.ContextOpts{ - State: test.State, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): providers.FactoryFixed(p), }, - Config: config, }) if diags.HasErrors() { t.Fatalf("failed to create context: %s", diags.Err()) } - scope, diags := ctx.Eval(addrs.RootModuleInstance) + state := test.State + if state == nil { + state = states.NewState() + } + scope, diags := ctx.Eval(config, state, addrs.RootModuleInstance, &terraform.EvalOpts{}) if diags.HasErrors() { t.Fatalf("failed to create scope: %s", diags.Err()) } diff --git a/internal/states/sync.go b/internal/states/sync.go index 286eae965593..c70714f9df49 100644 --- a/internal/states/sync.go +++ b/internal/states/sync.go @@ -533,6 +533,16 @@ func (s *SyncState) Unlock() { s.lock.Unlock() } +// Close extracts the underlying state from inside this wrapper, making the +// wrapper invalid for any future operations. +func (s *SyncState) Close() *State { + s.lock.Lock() + ret := s.state + s.state = nil // make sure future operations can't still modify it + s.lock.Unlock() + return ret +} + // maybePruneModule will remove a module from the state altogether if it is // empty, unless it's the root module which must always be present. // diff --git a/internal/terraform/context.go b/internal/terraform/context.go index 4e96fd25e750..abb20528a448 100644 --- a/internal/terraform/context.go +++ b/internal/terraform/context.go @@ -10,12 +10,8 @@ import ( "github.com/apparentlymart/go-versions/versions" "github.com/hashicorp/terraform/internal/addrs" "github.com/hashicorp/terraform/internal/configs" - "github.com/hashicorp/terraform/internal/instances" - "github.com/hashicorp/terraform/internal/lang" - "github.com/hashicorp/terraform/internal/plans" "github.com/hashicorp/terraform/internal/providers" "github.com/hashicorp/terraform/internal/provisioners" - "github.com/hashicorp/terraform/internal/refactoring" "github.com/hashicorp/terraform/internal/states" "github.com/hashicorp/terraform/internal/tfdiags" "github.com/zclconf/go-cty/cty" @@ -41,16 +37,7 @@ const ( // ContextOpts are the user-configurable options to create a context with // NewContext. 
type ContextOpts struct { - Config *configs.Config - Changes *plans.Changes - State *states.State - Targets []addrs.Targetable - ForceReplace []addrs.AbsResourceInstance - Variables InputValues Meta *ContextMeta - PlanMode plans.Mode - SkipRefresh bool - Hooks []Hook Parallelism int Providers map[addrs.Provider]providers.Factory @@ -96,58 +83,18 @@ type ContextMeta struct { // perform operations on infrastructure. This structure is built using // NewContext. type Context struct { - config *configs.Config - changes *plans.Changes - skipRefresh bool - targets []addrs.Targetable - forceReplace []addrs.AbsResourceInstance - variables InputValues - meta *ContextMeta - planMode plans.Mode - - // state, refreshState, and prevRunState simultaneously track three - // different incarnations of the Terraform state: - // - // "state" is always the most "up-to-date". During planning it represents - // our best approximation of the planned new state, and during applying - // it represents the results of all of the actions we've taken so far. - // - // "refreshState" is populated and relevant only during planning, where we - // update it to reflect a provider's sense of the current state of the - // remote object each resource instance is bound to but don't include - // any changes implied by the configuration. - // - // "prevRunState" is similar to refreshState except that it doesn't even - // include the result of the provider's refresh step, and instead reflects - // the state as we found it prior to any changes, although it does reflect - // the result of running the provider's schema upgrade actions so that the - // resource instance objects will all conform to the _current_ resource - // type schemas if planning is successful, so that in that case it will - // be meaningful to compare prevRunState to refreshState to detect changes - // made outside of Terraform. - state *states.State - refreshState *states.State - prevRunState *states.State + // meta captures some misc. information about the working directory where + // we're taking these actions, and thus which should remain steady between + // operations. + meta *ContextMeta - // NOTE: If you're considering adding something new here, consider first - // whether it'd work to add it to type graphWalkOpts instead, possibly by - // adding new arguments to one of the exported operation methods, to scope - // it only to a particular operation rather than having it survive from one - // operation to the next as global mutable state. - // - // Historically we used fields here as a bit of a dumping ground for - // data that needed to ambiently pass between methods of Context, but - // that has tended to cause surprising misbehavior when data from one - // walk inadvertently bleeds into another walk against the same context. - // Perhaps one day we'll move changes, state, refreshState, and prevRunState - // to graphWalkOpts too. Ideally there shouldn't be anything in here which - // changes after NewContext returns. + components contextComponentFactory + dependencyLocks *depsfile.Locks + providersInDevelopment map[addrs.Provider]struct{} - hooks []Hook - components contextComponentFactory - schemas *Schemas - sh *stopHook - uiInput UIInput + hooks []Hook + sh *stopHook + uiInput UIInput l sync.Mutex // Lock acquired during any task parallelSem Semaphore @@ -168,14 +115,9 @@ type Context struct { // If the returned diagnostics contains errors then the resulting context is // invalid and must not be used. 
func NewContext(opts *ContextOpts) (*Context, tfdiags.Diagnostics) { + var diags tfdiags.Diagnostics + log.Printf("[TRACE] terraform.NewContext: starting") - diags := CheckCoreVersionRequirements(opts.Config) - // If version constraints are not met then we'll bail early since otherwise - // we're likely to just see a bunch of other errors related to - // incompatibilities, which could be overwhelming for the user. - if diags.HasErrors() { - return nil, diags - } // Copy all the hooks and add our stop hook. We don't append directly // to the Config so that we're not modifying that in-place. @@ -184,11 +126,6 @@ func NewContext(opts *ContextOpts) (*Context, tfdiags.Diagnostics) { copy(hooks, opts.Hooks) hooks[len(opts.Hooks)] = sh - state := opts.State - if state == nil { - state = states.NewState() - } - // Determine parallelism, default to 10. We do this both to limit // CPU pressure but also to have an extra guard against rate throttling // from providers. @@ -207,55 +144,47 @@ func NewContext(opts *ContextOpts) (*Context, tfdiags.Diagnostics) { par = 10 } - // Set up the variables in the following sequence: - // 0 - Take default values from the configuration - // 1 - Take values from TF_VAR_x environment variables - // 2 - Take values specified in -var flags, overriding values - // set by environment variables if necessary. This includes - // values taken from -var-file in addition. - var variables InputValues - if opts.Config != nil { - // Default variables from the configuration seed our map. - variables = DefaultVariableValues(opts.Config.Module.Variables) - } - // Variables provided by the caller (from CLI, environment, etc) can - // override the defaults. - variables = variables.Override(opts.Variables) - components := &basicComponentFactory{ providers: opts.Providers, provisioners: opts.Provisioners, } - log.Printf("[TRACE] terraform.NewContext: loading provider schemas") - schemas, err := LoadSchemas(opts.Config, opts.State, components) - if err != nil { - diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, - "Could not load plugin", - fmt.Sprintf(errPluginInit, err), - )) - return nil, diags - } + log.Printf("[TRACE] terraform.NewContext: complete") - changes := opts.Changes - if changes == nil { - changes = plans.NewChanges() - } + return &Context{ + hooks: hooks, + meta: opts.Meta, + uiInput: opts.UIInput, - config := opts.Config - if config == nil { - config = configs.NewEmptyConfig() - } + components: components, + dependencyLocks: opts.LockedDependencies, + providersInDevelopment: opts.ProvidersInDevelopment, + + parallelSem: NewSemaphore(par), + providerInputConfig: make(map[string]map[string]cty.Value), + providerSHA256s: opts.ProviderSHA256s, + sh: sh, + }, diags +} + +func (c *Context) Schemas(config *configs.Config, state *states.State) (*Schemas, tfdiags.Diagnostics) { + // TODO: This method gets called multiple times on the same context with + // the same inputs by different parts of Terraform that all need the + // schemas, and it's typically quite expensive because it has to spin up + // plugins to gather their schemas, so it'd be good to have some caching + // here to remember plugin schemas we already loaded since the plugin + // selections can't change during the life of a *Context object. + + var diags tfdiags.Diagnostics // If we have a configuration and a set of locked dependencies, verify that // the provider requirements from the configuration can be satisfied by the // locked dependencies. 
- if opts.LockedDependencies != nil { + if c.dependencyLocks != nil && config != nil { reqs, providerDiags := config.ProviderRequirements() diags = diags.Append(providerDiags) - locked := opts.LockedDependencies.AllProviders() + locked := c.dependencyLocks.AllProviders() unmetReqs := make(getproviders.Requirements) for provider, versionConstraints := range reqs { // Builtin providers are not listed in the locks file @@ -263,7 +192,7 @@ func NewContext(opts *ContextOpts) (*Context, tfdiags.Diagnostics) { continue } // Development providers must be excluded from this check - if _, ok := opts.ProvidersInDevelopment[provider]; ok { + if _, ok := c.providersInDevelopment[provider]; ok { continue } // If the required provider doesn't exist in the lock, or the @@ -292,81 +221,16 @@ func NewContext(opts *ContextOpts) (*Context, tfdiags.Diagnostics) { } } - switch opts.PlanMode { - case plans.NormalMode, plans.DestroyMode: - // OK - case plans.RefreshOnlyMode: - if opts.SkipRefresh { - // The CLI layer (and other similar callers) should prevent this - // combination of options. - diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, - "Incompatible plan options", - "Cannot skip refreshing in refresh-only mode. This is a bug in Terraform.", - )) - return nil, diags - } - default: - // The CLI layer (and other similar callers) should not try to - // create a context for a mode that Terraform Core doesn't support. - diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, - "Unsupported plan mode", - fmt.Sprintf("Terraform Core doesn't know how to handle plan mode %s. This is a bug in Terraform.", opts.PlanMode), - )) - return nil, diags - } - if len(opts.ForceReplace) > 0 && opts.PlanMode != plans.NormalMode { - // The other modes don't generate no-op or update actions that we might - // upgrade to be "replace", so doesn't make sense to combine those. + ret, err := LoadSchemas(config, state, c.components) + if err != nil { diags = diags.Append(tfdiags.Sourceless( tfdiags.Error, - "Unsupported plan mode", - fmt.Sprintf("Forcing resource instance replacement (with -replace=...) is allowed only in normal planning mode."), + "Failed to load plugin schemas", + fmt.Sprintf("Error while loading schemas for plugin components: %s.", err), )) return nil, diags } - - log.Printf("[TRACE] terraform.NewContext: complete") - - // By the time we get here, we should have values defined for all of - // the root module variables, even if some of them are "unknown". It's the - // caller's responsibility to have already handled the decoding of these - // from the various ways the CLI allows them to be set and to produce - // user-friendly error messages if they are not all present, and so - // the error message from checkInputVariables should never be seen and - // includes language asking the user to report a bug. 
- if config != nil { - varDiags := checkInputVariables(config.Module.Variables, variables) - diags = diags.Append(varDiags) - } - - return &Context{ - components: components, - schemas: schemas, - planMode: opts.PlanMode, - changes: changes, - hooks: hooks, - meta: opts.Meta, - config: config, - state: state, - refreshState: state.DeepCopy(), - prevRunState: state.DeepCopy(), - skipRefresh: opts.SkipRefresh, - targets: opts.Targets, - forceReplace: opts.ForceReplace, - uiInput: opts.UIInput, - variables: variables, - - parallelSem: NewSemaphore(par), - providerInputConfig: make(map[string]map[string]cty.Value), - providerSHA256s: opts.ProviderSHA256s, - sh: sh, - }, diags -} - -func (c *Context) Schemas() *Schemas { - return c.schemas + return ret, diags } type ContextGraphOpts struct { @@ -377,510 +241,6 @@ type ContextGraphOpts struct { Verbose bool } -// Graph returns the graph used for the given operation type. -// -// The most extensive or complex graph type is GraphTypePlan. -func (c *Context) Graph(typ GraphType, opts *ContextGraphOpts) (*Graph, tfdiags.Diagnostics) { - if opts == nil { - opts = &ContextGraphOpts{Validate: true} - } - - log.Printf("[INFO] terraform: building graph: %s", typ) - switch typ { - case GraphTypeApply: - return (&ApplyGraphBuilder{ - Config: c.config, - Changes: c.changes, - State: c.state, - Components: c.components, - Schemas: c.schemas, - Targets: c.targets, - ForceReplace: c.forceReplace, - Validate: opts.Validate, - }).Build(addrs.RootModuleInstance) - - case GraphTypeValidate: - // The validate graph is just a slightly modified plan graph: an empty - // state is substituted in for Validate. - return ValidateGraphBuilder(&PlanGraphBuilder{ - Config: c.config, - Components: c.components, - Schemas: c.schemas, - Targets: c.targets, - Validate: opts.Validate, - State: states.NewState(), - }).Build(addrs.RootModuleInstance) - - case GraphTypePlan: - // Create the plan graph builder - return (&PlanGraphBuilder{ - Config: c.config, - State: c.state, - Components: c.components, - Schemas: c.schemas, - Targets: c.targets, - ForceReplace: c.forceReplace, - Validate: opts.Validate, - skipRefresh: c.skipRefresh, - }).Build(addrs.RootModuleInstance) - - case GraphTypePlanDestroy: - return (&DestroyPlanGraphBuilder{ - Config: c.config, - State: c.state, - Components: c.components, - Schemas: c.schemas, - Targets: c.targets, - Validate: opts.Validate, - skipRefresh: c.skipRefresh, - }).Build(addrs.RootModuleInstance) - - case GraphTypePlanRefreshOnly: - // Create the plan graph builder, with skipPlanChanges set to - // activate the "refresh only" mode. - return (&PlanGraphBuilder{ - Config: c.config, - State: c.state, - Components: c.components, - Schemas: c.schemas, - Targets: c.targets, - Validate: opts.Validate, - skipRefresh: c.skipRefresh, - skipPlanChanges: true, // this activates "refresh only" mode. - }).Build(addrs.RootModuleInstance) - - case GraphTypeEval: - return (&EvalGraphBuilder{ - Config: c.config, - State: c.state, - Components: c.components, - Schemas: c.schemas, - }).Build(addrs.RootModuleInstance) - - default: - // Should never happen, because the above is exhaustive for all graph types. - panic(fmt.Errorf("unsupported graph type %s", typ)) - } -} - -// State returns a copy of the current state associated with this context. -// -// This cannot safely be called in parallel with any other Context function. 
-func (c *Context) State() *states.State { - return c.state.DeepCopy() -} - -// Eval produces a scope in which expressions can be evaluated for -// the given module path. -// -// This method must first evaluate any ephemeral values (input variables, local -// values, and output values) in the configuration. These ephemeral values are -// not included in the persisted state, so they must be re-computed using other -// values in the state before they can be properly evaluated. The updated -// values are retained in the main state associated with the receiving context. -// -// This function takes no action against remote APIs but it does need access -// to all provider and provisioner instances in order to obtain their schemas -// for type checking. -// -// The result is an evaluation scope that can be used to resolve references -// against the root module. If the returned diagnostics contains errors then -// the returned scope may be nil. If it is not nil then it may still be used -// to attempt expression evaluation or other analysis, but some expressions -// may not behave as expected. -func (c *Context) Eval(path addrs.ModuleInstance) (*lang.Scope, tfdiags.Diagnostics) { - // This is intended for external callers such as the "terraform console" - // command. Internally, we create an evaluator in c.walk before walking - // the graph, and create scopes in ContextGraphWalker. - - var diags tfdiags.Diagnostics - defer c.acquireRun("eval")() - - // Start with a copy of state so that we don't affect any instances - // that other methods may have already returned. - c.state = c.state.DeepCopy() - var walker *ContextGraphWalker - - graph, graphDiags := c.Graph(GraphTypeEval, nil) - diags = diags.Append(graphDiags) - if !diags.HasErrors() { - var walkDiags tfdiags.Diagnostics - walker, walkDiags = c.walk(graph, walkEval, &graphWalkOpts{}) - diags = diags.Append(walker.NonFatalDiagnostics) - diags = diags.Append(walkDiags) - } - - if walker == nil { - // If we skipped walking the graph (due to errors) then we'll just - // use a placeholder graph walker here, which'll refer to the - // unmodified state. - walker = c.graphWalker(walkEval, &graphWalkOpts{}) - } - - // This is a bit weird since we don't normally evaluate outside of - // the context of a walk, but we'll "re-enter" our desired path here - // just to get hold of an EvalContext for it. GraphContextBuiltin - // caches its contexts, so we should get hold of the context that was - // previously used for evaluation here, unless we skipped walking. - evalCtx := walker.EnterPath(path) - return evalCtx.EvaluationScope(nil, EvalDataForNoInstanceKey), diags -} - -// Apply applies the changes represented by this context and returns -// the resulting state. -// -// Even in the case an error is returned, the state may be returned and will -// potentially be partially updated. In addition to returning the resulting -// state, this context is updated with the latest state. -// -// If the state is required after an error, the caller should call -// Context.State, rather than rely on the return value. -// -// TODO: Apply and Refresh should either always return a state, or rely on the -// State() method. Currently the helper/resource testing framework relies -// on the absence of a returned state to determine if Destroy can be -// called, so that will need to be refactored before this can be changed. 
-func (c *Context) Apply() (*states.State, tfdiags.Diagnostics) { - defer c.acquireRun("apply")() - - // Copy our own state - c.state = c.state.DeepCopy() - - // Build the graph. - graph, diags := c.Graph(GraphTypeApply, nil) - if diags.HasErrors() { - return nil, diags - } - - // Determine the operation - operation := walkApply - if c.planMode == plans.DestroyMode { - operation = walkDestroy - } - - // Walk the graph - walker, walkDiags := c.walk(graph, operation, &graphWalkOpts{}) - diags = diags.Append(walker.NonFatalDiagnostics) - diags = diags.Append(walkDiags) - - if c.planMode == plans.DestroyMode && !diags.HasErrors() { - // If we know we were trying to destroy objects anyway, and we - // completed without any errors, then we'll also prune out any - // leftover empty resource husks (left after all of the instances - // of a resource with "count" or "for_each" are destroyed) to - // help ensure we end up with an _actually_ empty state, assuming - // we weren't destroying with -target here. - // - // (This doesn't actually take into account -target, but that should - // be okay because it doesn't throw away anything we can't recompute - // on a subsequent "terraform plan" run, if the resources are still - // present in the configuration. However, this _will_ cause "count = 0" - // resources to read as unknown during the next refresh walk, which - // may cause some additional churn if used in a data resource or - // provider block, until we remove refreshing as a separate walk and - // just do it as part of the plan walk.) - c.state.PruneResourceHusks() - } - - if len(c.targets) > 0 { - diags = diags.Append(tfdiags.Sourceless( - tfdiags.Warning, - "Applied changes may be incomplete", - `The plan was created with the -target option in effect, so some changes requested in the configuration may have been ignored and the output values may not be fully updated. Run the following command to verify that no other changes are pending: - terraform plan - -Note that the -target option is not suitable for routine use, and is provided only for exceptional situations such as recovering from errors or mistakes, or when Terraform specifically suggests to use it as part of an error message.`, - )) - } - - // This isn't technically needed, but don't leave an old refreshed state - // around in case we re-use the context in internal tests. - c.refreshState = c.state.DeepCopy() - - return c.state, diags -} - -// Plan generates an execution plan for the given context, and returns the -// refreshed state. -// -// The execution plan encapsulates the context and can be stored -// in order to reinstantiate a context later for Apply. -// -// Plan also updates the diff of this context to be the diff generated -// by the plan, so Apply can be called after. -func (c *Context) Plan() (*plans.Plan, tfdiags.Diagnostics) { - defer c.acquireRun("plan")() - c.changes = plans.NewChanges() - var diags tfdiags.Diagnostics - - if len(c.targets) > 0 { - diags = diags.Append(tfdiags.Sourceless( - tfdiags.Warning, - "Resource targeting is in effect", - `You are creating a plan with the -target option, which means that the result of this plan may not represent all of the changes requested by the current configuration. 
- -The -target option is not for routine use, and is provided only for exceptional situations such as recovering from errors or mistakes, or when Terraform specifically suggests to use it as part of an error message.`, - )) - } - - var plan *plans.Plan - var planDiags tfdiags.Diagnostics - switch c.planMode { - case plans.NormalMode: - plan, planDiags = c.plan() - case plans.DestroyMode: - plan, planDiags = c.destroyPlan() - case plans.RefreshOnlyMode: - plan, planDiags = c.refreshOnlyPlan() - default: - panic(fmt.Sprintf("unsupported plan mode %s", c.planMode)) - } - diags = diags.Append(planDiags) - if diags.HasErrors() { - return nil, diags - } - - // convert the variables into the format expected for the plan - varVals := make(map[string]plans.DynamicValue, len(c.variables)) - for k, iv := range c.variables { - // We use cty.DynamicPseudoType here so that we'll save both the - // value _and_ its dynamic type in the plan, so we can recover - // exactly the same value later. - dv, err := plans.NewDynamicValue(iv.Value, cty.DynamicPseudoType) - if err != nil { - diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, - "Failed to prepare variable value for plan", - fmt.Sprintf("The value for variable %q could not be serialized to store in the plan: %s.", k, err), - )) - continue - } - varVals[k] = dv - } - - // insert the run-specific data from the context into the plan; variables, - // targets and provider SHAs. - plan.VariableValues = varVals - plan.TargetAddrs = c.targets - plan.ProviderSHA256s = c.providerSHA256s - - return plan, diags -} - -func (c *Context) plan() (*plans.Plan, tfdiags.Diagnostics) { - var diags tfdiags.Diagnostics - - moveStmts, moveResults := c.prePlanFindAndApplyMoves() - - graph, graphDiags := c.Graph(GraphTypePlan, nil) - diags = diags.Append(graphDiags) - if graphDiags.HasErrors() { - return nil, diags - } - - // Do the walk - walker, walkDiags := c.walk(graph, walkPlan, &graphWalkOpts{ - MoveResults: moveResults, - }) - diags = diags.Append(walker.NonFatalDiagnostics) - diags = diags.Append(walkDiags) - if walkDiags.HasErrors() { - return nil, diags - } - plan := &plans.Plan{ - UIMode: plans.NormalMode, - Changes: c.changes, - ForceReplaceAddrs: c.forceReplace, - PrevRunState: c.prevRunState.DeepCopy(), - } - - c.refreshState.SyncWrapper().RemovePlannedResourceInstanceObjects() - - refreshedState := c.refreshState.DeepCopy() - plan.PriorState = refreshedState - - // replace the working state with the updated state, so that immediate calls - // to Apply work as expected. - c.state = refreshedState - - // TODO: Record the move results in the plan - diags = diags.Append(c.postPlanValidateMoves(moveStmts, walker.InstanceExpander.AllInstances())) - - return plan, diags -} - -func (c *Context) destroyPlan() (*plans.Plan, tfdiags.Diagnostics) { - var diags tfdiags.Diagnostics - destroyPlan := &plans.Plan{ - PriorState: c.state.DeepCopy(), - } - c.changes = plans.NewChanges() - - moveStmts, moveResults := c.prePlanFindAndApplyMoves() - - // A destroy plan starts by running Refresh to read any pending data - // sources, and remove missing managed resources. This is required because - // a "destroy plan" is only creating delete changes, and is essentially a - // local operation. 
- // - // NOTE: if skipRefresh _is_ set then we'll rely on the destroy-plan walk - // below to upgrade the prevRunState and priorState both to the latest - // resource type schemas, so NodePlanDestroyableResourceInstance.Execute - // must coordinate with this by taking that action only when c.skipRefresh - // _is_ set. This coupling between the two is unfortunate but necessary - // to work within our current structure. - if !c.skipRefresh { - refreshPlan, refreshDiags := c.plan() - diags = diags.Append(refreshDiags) - if diags.HasErrors() { - return nil, diags - } - - // insert the refreshed state into the destroy plan result, and discard - // the changes recorded from the refresh. - destroyPlan.PriorState = refreshPlan.PriorState.DeepCopy() - destroyPlan.PrevRunState = refreshPlan.PrevRunState.DeepCopy() - c.changes = plans.NewChanges() - } - - graph, graphDiags := c.Graph(GraphTypePlanDestroy, nil) - diags = diags.Append(graphDiags) - if graphDiags.HasErrors() { - return nil, diags - } - - // Do the walk - walker, walkDiags := c.walk(graph, walkPlanDestroy, &graphWalkOpts{ - MoveResults: moveResults, - }) - diags = diags.Append(walker.NonFatalDiagnostics) - diags = diags.Append(walkDiags) - if walkDiags.HasErrors() { - return nil, diags - } - - if c.skipRefresh { - // If we didn't do refreshing then both the previous run state and - // the prior state are the result of upgrading the previous run state, - // which we should've upgraded as part of the plan-destroy walk - // in NodePlanDestroyableResourceInstance.Execute, so they'll have the - // current schema but neither will reflect any out-of-band changes in - // the remote system. - destroyPlan.PrevRunState = c.prevRunState.DeepCopy() - destroyPlan.PriorState = c.prevRunState.DeepCopy() - } - - destroyPlan.UIMode = plans.DestroyMode - destroyPlan.Changes = c.changes - - // TODO: Record the move results in the plan - diags = diags.Append(c.postPlanValidateMoves(moveStmts, walker.InstanceExpander.AllInstances())) - - return destroyPlan, diags -} - -func (c *Context) refreshOnlyPlan() (*plans.Plan, tfdiags.Diagnostics) { - var diags tfdiags.Diagnostics - - moveStmts, moveResults := c.prePlanFindAndApplyMoves() - - graph, graphDiags := c.Graph(GraphTypePlanRefreshOnly, nil) - diags = diags.Append(graphDiags) - if graphDiags.HasErrors() { - return nil, diags - } - - // Do the walk - walker, walkDiags := c.walk(graph, walkPlan, &graphWalkOpts{ - MoveResults: moveResults, - }) - diags = diags.Append(walker.NonFatalDiagnostics) - diags = diags.Append(walkDiags) - if walkDiags.HasErrors() { - return nil, diags - } - plan := &plans.Plan{ - UIMode: plans.RefreshOnlyMode, - Changes: c.changes, - PrevRunState: c.prevRunState.DeepCopy(), - } - - // If the graph builder and graph nodes correctly obeyed our directive - // to refresh only, the set of resource changes should always be empty. - // We'll safety-check that here so we can return a clear message about it, - // rather than probably just generating confusing output at the UI layer. - if len(plan.Changes.Resources) != 0 { - // Some extra context in the logs in case the user reports this message - // as a bug, as a starting point for debugging. 
- for _, rc := range plan.Changes.Resources { - if depKey := rc.DeposedKey; depKey == states.NotDeposed { - log.Printf("[DEBUG] Refresh-only plan includes %s change for %s", rc.Action, rc.Addr) - } else { - log.Printf("[DEBUG] Refresh-only plan includes %s change for %s deposed object %s", rc.Action, rc.Addr, depKey) - } - } - diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, - "Invalid refresh-only plan", - "Terraform generated planned resource changes in a refresh-only plan. This is a bug in Terraform.", - )) - } - - c.refreshState.SyncWrapper().RemovePlannedResourceInstanceObjects() - - refreshedState := c.refreshState - plan.PriorState = refreshedState.DeepCopy() - - // replace the working state with the updated state, so that immediate calls - // to Apply work as expected. DeepCopy because such an apply should not - // mutate - c.state = refreshedState - - // TODO: Record the move results in the plan - diags = diags.Append(c.postPlanValidateMoves(moveStmts, walker.InstanceExpander.AllInstances())) - - return plan, diags -} - -func (c *Context) prePlanFindAndApplyMoves() ([]refactoring.MoveStatement, map[addrs.UniqueKey]refactoring.MoveResult) { - moveStmts := refactoring.FindMoveStatements(c.config) - moveResults := refactoring.ApplyMoves(moveStmts, c.prevRunState) - if len(c.targets) > 0 { - for _, result := range moveResults { - matchesTarget := false - for _, targetAddr := range c.targets { - if targetAddr.TargetContains(result.From) { - matchesTarget = true - break - } - } - if !matchesTarget { - // TODO: Return an error stating that a targeted plan is - // only valid if it includes this address that was moved. - } - } - } - return moveStmts, moveResults -} - -func (c *Context) postPlanValidateMoves(stmts []refactoring.MoveStatement, allInsts instances.Set) tfdiags.Diagnostics { - return refactoring.ValidateMoves(stmts, c.config, allInsts) -} - -// Refresh goes through all the resources in the state and refreshes them -// to their latest state. This is done by executing a plan, and retaining the -// state while discarding the change set. -// -// In the case of an error, there is no state returned. -func (c *Context) Refresh() (*states.State, tfdiags.Diagnostics) { - p, diags := c.Plan() - if diags.HasErrors() { - return nil, diags - } - - return p.PriorState, diags -} - // Stop stops the running task. // // Stop will block until the task completes. @@ -911,63 +271,6 @@ func (c *Context) Stop() { log.Printf("[WARN] terraform: stop complete") } -// Validate performs semantic validation of the configuration, and returning -// any warnings or errors. -// -// Syntax and structural checks are performed by the configuration loader, -// and so are not repeated here. -func (c *Context) Validate() tfdiags.Diagnostics { - defer c.acquireRun("validate")() - - var diags tfdiags.Diagnostics - - // If we have errors at this point then we probably won't be able to - // construct a graph without producing redundant errors, so we'll halt early. - if diags.HasErrors() { - return diags - } - - // Build the graph so we can walk it and run Validate on nodes. - // We also validate the graph generated here, but this graph doesn't - // necessarily match the graph that Plan will generate, so we'll validate the - // graph again later after Planning. 
- graph, graphDiags := c.Graph(GraphTypeValidate, nil) - diags = diags.Append(graphDiags) - if graphDiags.HasErrors() { - return diags - } - - // Walk - walker, walkDiags := c.walk(graph, walkValidate, &graphWalkOpts{}) - diags = diags.Append(walker.NonFatalDiagnostics) - diags = diags.Append(walkDiags) - if walkDiags.HasErrors() { - return diags - } - - return diags -} - -// Config returns the configuration tree associated with this context. -func (c *Context) Config() *configs.Config { - return c.config -} - -// Variables will return the mapping of variables that were defined -// for this Context. If Input was called, this mapping may be different -// than what was given. -func (c *Context) Variables() InputValues { - return c.variables -} - -// SetVariable sets a variable after a context has already been built. -func (c *Context) SetVariable(k string, v cty.Value) { - c.variables[k] = &InputValue{ - Value: v, - SourceType: ValueFromCaller, - } -} - func (c *Context) acquireRun(phase string) func() { // With the run lock held, grab the context lock to make changes // to the run context. @@ -1011,70 +314,6 @@ func (c *Context) releaseRun() { c.runContext = nil } -// graphWalkOpts is an assortment of options and inputs we need when -// constructing a graph walker. -type graphWalkOpts struct { - // MoveResults is a table of the results of applying move statements prior - // to a plan walk. Irrelevant and totally ignored for non-plan walks. - MoveResults map[addrs.UniqueKey]refactoring.MoveResult -} - -func (c *Context) walk(graph *Graph, operation walkOperation, opts *graphWalkOpts) (*ContextGraphWalker, tfdiags.Diagnostics) { - log.Printf("[DEBUG] Starting graph walk: %s", operation.String()) - - walker := c.graphWalker(operation, opts) - - // Watch for a stop so we can call the provider Stop() API. - watchStop, watchWait := c.watchStop(walker) - - // Walk the real graph, this will block until it completes - diags := graph.Walk(walker) - - // Close the channel so the watcher stops, and wait for it to return. - close(watchStop) - <-watchWait - - return walker, diags -} - -func (c *Context) graphWalker(operation walkOperation, opts *graphWalkOpts) *ContextGraphWalker { - var state *states.SyncState - var refreshState *states.SyncState - var prevRunState *states.SyncState - - switch operation { - case walkValidate: - // validate should not use any state - state = states.NewState().SyncWrapper() - - // validate currently uses the plan graph, so we have to populate the - // refreshState and the prevRunState. - refreshState = states.NewState().SyncWrapper() - prevRunState = states.NewState().SyncWrapper() - - case walkPlan, walkPlanDestroy: - state = c.state.SyncWrapper() - refreshState = c.refreshState.SyncWrapper() - prevRunState = c.prevRunState.SyncWrapper() - - default: - state = c.state.SyncWrapper() - } - - return &ContextGraphWalker{ - Context: c, - State: state, - RefreshState: refreshState, - PrevRunState: prevRunState, - Changes: c.changes.SyncWrapper(), - InstanceExpander: instances.NewExpander(), - MoveResults: opts.MoveResults, - Operation: operation, - StopContext: c.runContext, - RootVariableValues: c.variables, - } -} - // watchStop immediately returns a `stop` and a `wait` chan after dispatching // the watchStop goroutine. This will watch the runContext for cancellation and // stop the providers accordingly. 
When the watch is no longer needed, the diff --git a/internal/terraform/context_apply.go b/internal/terraform/context_apply.go new file mode 100644 index 000000000000..4ba7e8dc0eb9 --- /dev/null +++ b/internal/terraform/context_apply.go @@ -0,0 +1,142 @@ +package terraform + +import ( + "fmt" + "log" + + "github.com/hashicorp/terraform/internal/addrs" + "github.com/hashicorp/terraform/internal/configs" + "github.com/hashicorp/terraform/internal/plans" + "github.com/hashicorp/terraform/internal/states" + "github.com/hashicorp/terraform/internal/tfdiags" + "github.com/zclconf/go-cty/cty" +) + +// Apply performs the actions described by the given Plan object and returns +// the resulting updated state. +// +// The given configuration *must* be the same configuration that was passed +// earlier to Context.Plan in order to create this plan. +// +// Even if the returned diagnostics contains errors, Apply always returns the +// resulting state which is likely to have been partially-updated. +func (c *Context) Apply(plan *plans.Plan, config *configs.Config) (*states.State, tfdiags.Diagnostics) { + defer c.acquireRun("apply")() + var diags tfdiags.Diagnostics + + schemas, moreDiags := c.Schemas(config, plan.PriorState) + diags = diags.Append(moreDiags) + if moreDiags.HasErrors() { + return nil, diags + } + + log.Printf("[DEBUG] Building and walking apply graph for %s plan", plan.UIMode) + + graph, operation, moreDiags := c.applyGraph(plan, config, schemas, true) + if moreDiags.HasErrors() { + return nil, diags + } + + variables := InputValues{} + for name, dyVal := range plan.VariableValues { + val, err := dyVal.Decode(cty.DynamicPseudoType) + if err != nil { + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Invalid variable value in plan", + fmt.Sprintf("Invalid value for variable %q recorded in plan file: %s.", name, err), + )) + continue + } + + variables[name] = &InputValue{ + Value: val, + SourceType: ValueFromPlan, + } + } + + workingState := plan.PriorState.DeepCopy() + walker, walkDiags := c.walk(graph, operation, &graphWalkOpts{ + Config: config, + Schemas: schemas, + InputState: workingState, + Changes: plan.Changes, + RootVariableValues: variables, + }) + diags = diags.Append(walker.NonFatalDiagnostics) + diags = diags.Append(walkDiags) + + newState := walker.State.Close() + if plan.UIMode == plans.DestroyMode && !diags.HasErrors() { + // NOTE: This is a vestigial violation of the rule that we mustn't + // use plan.UIMode to affect apply-time behavior. + // We ideally ought to just call newState.PruneResourceHusks + // unconditionally here, but we historically didn't and haven't yet + // verified that it'd be safe to do so. + newState.PruneResourceHusks() + } + + if len(plan.TargetAddrs) > 0 { + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Warning, + "Applied changes may be incomplete", + `The plan was created with the -target option in effect, so some changes requested in the configuration may have been ignored and the output values may not be fully updated. 
Run the following command to verify that no other changes are pending:
+    terraform plan
+
+Note that the -target option is not suitable for routine use, and is provided only for exceptional situations such as recovering from errors or mistakes, or when Terraform specifically suggests to use it as part of an error message.`,
+		))
+	}
+
+	return newState, diags
+}
+
+func (c *Context) applyGraph(plan *plans.Plan, config *configs.Config, schemas *Schemas, validate bool) (*Graph, walkOperation, tfdiags.Diagnostics) {
+	graph, diags := (&ApplyGraphBuilder{
+		Config: config,
+		Changes: plan.Changes,
+		State: plan.PriorState,
+		Components: c.components,
+		Schemas: schemas,
+		Targets: plan.TargetAddrs,
+		ForceReplace: plan.ForceReplaceAddrs,
+		Validate: validate,
+	}).Build(addrs.RootModuleInstance)
+
+	operation := walkApply
+	if plan.UIMode == plans.DestroyMode {
+		// NOTE: This is a vestigial violation of the rule that we mustn't
+		// use plan.UIMode to affect apply-time behavior. It's a design error
+		// if anything downstream switches behavior when operation is set
+		// to walkDestroy, but we've not yet fully audited that.
+		// TODO: Audit that and remove walkDestroy as an operation mode.
+		operation = walkDestroy
+	}
+
+	return graph, operation, diags
+}
+
+// ApplyGraphForUI is a last vestige of graphs in the public interface of
+// Context (as opposed to graphs as an implementation detail) intended only for
+// use by the "terraform graph" command when asked to render an apply-time
+// graph.
+//
+// The result of this is intended only for rendering to the user as a dot
+// graph, and so may change in future in order to make the result more useful
+// in that context, even if it drifts away from the physical graph that Terraform
+// Core currently uses as an implementation detail of planning.
+func (c *Context) ApplyGraphForUI(plan *plans.Plan, config *configs.Config) (*Graph, tfdiags.Diagnostics) {
+	// For now though, this really is just the internal graph, confusing
+	// implementation details and all.
+ + var diags tfdiags.Diagnostics + + schemas, moreDiags := c.Schemas(config, plan.PriorState) + diags = diags.Append(moreDiags) + if diags.HasErrors() { + return nil, diags + } + + graph, _, moreDiags := c.applyGraph(plan, config, schemas, false) + diags = diags.Append(moreDiags) + return graph, diags +} diff --git a/internal/terraform/context_apply2_test.go b/internal/terraform/context_apply2_test.go index 68159595951d..82a48e2d7e56 100644 --- a/internal/terraform/context_apply2_test.go +++ b/internal/terraform/context_apply2_test.go @@ -47,21 +47,20 @@ func TestContext2Apply_createBeforeDestroy_deposedKeyPreApply(t *testing.T) { hook := new(MockHook) ctx := testContext2(t, &ContextOpts{ - Config: m, - Hooks: []Hook{hook}, + Hooks: []Hook{hook}, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - if p, diags := ctx.Plan(); diags.HasErrors() { + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } else { - t.Logf(legacyDiffComparisonString(p.Changes)) + t.Logf(legacyDiffComparisonString(plan.Changes)) } - state, diags := ctx.Apply() + _, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -145,28 +144,27 @@ output "data" { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: ps, }) - _, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatal(diags.Err()) } - _, diags = ctx.Apply() + _, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatal(diags.Err()) } // now destroy the whole thing ctx = testContext2(t, &ContextOpts{ - Config: m, Providers: ps, - PlanMode: plans.DestroyMode, }) - _, diags = ctx.Plan() + plan, diags = ctx.Plan(m, states.NewState(), &PlanOpts{ + Mode: plans.DestroyMode, + }) if diags.HasErrors() { t.Fatal(diags.Err()) } @@ -177,7 +175,7 @@ output "data" { return resp } - _, diags = ctx.Apply() + _, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatal(diags.Err()) } @@ -231,18 +229,15 @@ resource "test_instance" "a" { }) ctx := testContext2(t, &ContextOpts{ - Config: m, - State: state, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatal(diags.Err()) - } + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + assertNoErrors(t, diags) - _, diags := ctx.Apply() + _, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatal(diags.Err()) } @@ -322,17 +317,13 @@ resource "aws_instance" "bin" { p := testProvider("aws") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - plan, diags := ctx.Plan() - if diags.HasErrors() { - t.Fatal(diags.Err()) - } + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + assertNoErrors(t, diags) bar := plan.PriorState.ResourceInstance(barAddr) if len(bar.Current.Dependencies) == 0 || !bar.Current.Dependencies[0].Equal(fooAddr.ContainingResource().Config()) { @@ -354,7 +345,7 @@ resource "aws_instance" "bin" { t.Fatalf("baz should depend on bam after refresh, but got %s", baz.Current.Dependencies) } - state, diags = ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatal(diags.Err()) } @@ -430,19 +421,15 @@ resource "test_resource" "b" { }) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: 
map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, - State: state, }) - _, diags := ctx.Plan() - if diags.HasErrors() { - t.Fatal(diags.ErrWithWarnings()) - } + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + assertNoErrors(t, diags) - _, diags = ctx.Apply() + _, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatal(diags.ErrWithWarnings()) } @@ -475,18 +462,15 @@ output "out" { p := simpleMockProvider() ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - _, diags := ctx.Plan() - if diags.HasErrors() { - t.Fatal(diags.ErrWithWarnings()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatal(diags.ErrWithWarnings()) } @@ -496,10 +480,8 @@ output "out" { t.Fatalf("Expected 1 sensitive mark for test_object.a, got %#v\n", obj.Current.AttrSensitivePaths) } - plan, diags := ctx.Plan() - if diags.HasErrors() { - t.Fatal(diags.ErrWithWarnings()) - } + plan, diags = ctx.Plan(m, state, DefaultPlanOpts) + assertNoErrors(t, diags) // make sure the same marks are compared in the next plan as well for _, c := range plan.Changes.Resources { @@ -543,27 +525,20 @@ resource "test_object" "y" { p := simpleMockProvider() ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - _, diags := ctx.Plan() - if diags.HasErrors() { - t.Fatal(diags.ErrWithWarnings()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - _, diags = ctx.Apply() - if diags.HasErrors() { - t.Fatal(diags.ErrWithWarnings()) - } + state, diags := ctx.Apply(plan, m) + assertNoErrors(t, diags) // FINAL PLAN: - plan, diags := ctx.Plan() - if diags.HasErrors() { - t.Fatal(diags.ErrWithWarnings()) - } + plan, diags = ctx.Plan(m, state, DefaultPlanOpts) + assertNoErrors(t, diags) // make sure the same marks are compared in the next plan as well for _, c := range plan.Changes.Resources { diff --git a/internal/terraform/context_apply_test.go b/internal/terraform/context_apply_test.go index 4eb4f62490b3..9d155e73e865 100644 --- a/internal/terraform/context_apply_test.go +++ b/internal/terraform/context_apply_test.go @@ -38,17 +38,15 @@ func TestContext2Apply_basic(t *testing.T) { p.PlanResourceChangeFn = testDiffFn p.ApplyResourceChangeFn = testApplyFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -78,13 +76,12 @@ func TestContext2Apply_unstable(t *testing.T) { p := testProvider("test") p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected error during Plan: %s", diags.Err()) } @@ 
-104,7 +101,7 @@ func TestContext2Apply_unstable(t *testing.T) { t.Fatalf("Attribute 'random' has known value %#v; should be unknown in plan", rd.After.GetAttr("random")) } - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("unexpected error during Apply: %s", diags.Err()) } @@ -135,17 +132,15 @@ func TestContext2Apply_escape(t *testing.T) { p.PlanResourceChangeFn = testDiffFn p.ApplyResourceChangeFn = testApplyFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -165,17 +160,15 @@ func TestContext2Apply_resourceCountOneList(t *testing.T) { p.PlanResourceChangeFn = testDiffFn p.ApplyResourceChangeFn = testApplyFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("null"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) assertNoDiagnostics(t, diags) got := strings.TrimSpace(state.String()) @@ -195,17 +188,15 @@ func TestContext2Apply_resourceCountZeroList(t *testing.T) { p := testProvider("null") p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("null"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -250,17 +241,15 @@ func TestContext2Apply_resourceDependsOnModule(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -324,18 +313,15 @@ func TestContext2Apply_resourceDependsOnModuleStateOnly(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("diags: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) assertNoErrors(t, diags) if !reflect.DeepEqual(order, []string{"child", "parent"}) { @@ -354,17 +340,15 @@ func TestContext2Apply_resourceDependsOnModuleDestroy(t *testing.T) { var globalState *states.State { ctx := testContext2(t, &ContextOpts{ - Config: m, 
Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("diags: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -397,19 +381,17 @@ func TestContext2Apply_resourceDependsOnModuleDestroy(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: globalState, - PlanMode: plans.DestroyMode, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("diags: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, globalState, &PlanOpts{ + Mode: plans.DestroyMode, + }) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -452,17 +434,15 @@ func TestContext2Apply_resourceDependsOnModuleGrandchild(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("diags: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -505,17 +485,15 @@ func TestContext2Apply_resourceDependsOnModuleInModule(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("diags: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -534,17 +512,15 @@ func TestContext2Apply_mapVarBetweenModules(t *testing.T) { p.PlanResourceChangeFn = testDiffFn p.ApplyResourceChangeFn = testApplyFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("null"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -574,17 +550,15 @@ func TestContext2Apply_refCount(t *testing.T) { p.PlanResourceChangeFn = testDiffFn p.ApplyResourceChangeFn = testApplyFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -607,17 +581,15 @@ func TestContext2Apply_providerAlias(t *testing.T) { 
p.PlanResourceChangeFn = testDiffFn p.ApplyResourceChangeFn = testApplyFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -643,16 +615,16 @@ func TestContext2Apply_providerAliasConfigure(t *testing.T) { p2.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("another"): testProviderFuncFixed(p2), }, }) - if p, diags := ctx.Plan(); diags.HasErrors() { + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } else { - t.Logf(legacyDiffComparisonString(p.Changes)) + t.Logf(legacyDiffComparisonString(plan.Changes)) } // Configure to record calls AFTER Plan above @@ -668,7 +640,7 @@ func TestContext2Apply_providerAliasConfigure(t *testing.T) { return } - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -695,17 +667,15 @@ func TestContext2Apply_providerWarning(t *testing.T) { return } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -732,17 +702,15 @@ func TestContext2Apply_emptyModule(t *testing.T) { p := testProvider("aws") p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -771,20 +739,19 @@ func TestContext2Apply_createBeforeDestroy(t *testing.T) { mustProviderConfig(`provider["registry.terraform.io/hashicorp/aws"]`), ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - if p, diags := ctx.Plan(); diags.HasErrors() { + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } else { - t.Logf(legacyDiffComparisonString(p.Changes)) + t.Logf(legacyDiffComparisonString(plan.Changes)) } - state, diags := ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -852,20 +819,19 @@ func TestContext2Apply_createBeforeDestroyUpdate(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - 
State: state, }) - if p, diags := ctx.Plan(); diags.HasErrors() { + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } else { - t.Logf(legacyDiffComparisonString(p.Changes)) + t.Logf(legacyDiffComparisonString(plan.Changes)) } - state, diags := ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -910,20 +876,19 @@ func TestContext2Apply_createBeforeDestroy_dependsNonCBD(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - if p, diags := ctx.Plan(); diags.HasErrors() { + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } else { - t.Logf(legacyDiffComparisonString(p.Changes)) + t.Logf(legacyDiffComparisonString(plan.Changes)) } - state, diags := ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -974,21 +939,20 @@ func TestContext2Apply_createBeforeDestroy_hook(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, - Hooks: []Hook{h}, + Hooks: []Hook{h}, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - if p, diags := ctx.Plan(); diags.HasErrors() { + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } else { - t.Logf(legacyDiffComparisonString(p.Changes)) + t.Logf(legacyDiffComparisonString(plan.Changes)) } - if _, diags := ctx.Apply(); diags.HasErrors() { + if _, diags := ctx.Apply(plan, m); diags.HasErrors() { t.Fatalf("apply errors: %s", diags.Err()) } @@ -1052,20 +1016,19 @@ func TestContext2Apply_createBeforeDestroy_deposedCount(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - if p, diags := ctx.Plan(); diags.HasErrors() { + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } else { - t.Logf(legacyDiffComparisonString(p.Changes)) + t.Logf(legacyDiffComparisonString(plan.Changes)) } - state, diags := ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -1113,20 +1076,19 @@ func TestContext2Apply_createBeforeDestroy_deposedOnly(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - if p, diags := ctx.Plan(); diags.HasErrors() { + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } else { - t.Logf(legacyDiffComparisonString(p.Changes)) + t.Logf(legacyDiffComparisonString(plan.Changes)) } - state, diags := ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -1154,22 +1116,22 @@ func TestContext2Apply_destroyComputed(t *testing.T) { mustProviderConfig(`provider["registry.terraform.io/hashicorp/aws"]`), ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, - PlanMode: plans.DestroyMode, }) - if p, diags := 
ctx.Plan(); diags.HasErrors() { + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) + if diags.HasErrors() { logDiagnostics(t, diags) t.Fatal("plan failed") } else { - t.Logf("plan:\n\n%s", legacyDiffComparisonString(p.Changes)) + t.Logf("plan:\n\n%s", legacyDiffComparisonString(plan.Changes)) } - if _, diags := ctx.Apply(); diags.HasErrors() { + if _, diags := ctx.Apply(plan, m); diags.HasErrors() { logDiagnostics(t, diags) t.Fatal("apply failed") } @@ -1222,20 +1184,18 @@ func testContext2Apply_destroyDependsOn(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, - PlanMode: plans.DestroyMode, Parallelism: 1, // To check ordering }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) + assertNoErrors(t, diags) - if _, diags := ctx.Apply(); diags.HasErrors() { + if _, diags := ctx.Apply(plan, m); diags.HasErrors() { t.Fatalf("apply errors: %s", diags.Err()) } @@ -1276,7 +1236,7 @@ func TestContext2Apply_destroyDependsOnStateOnly(t *testing.T) { Status: states.ObjectReady, AttrsJSON: []byte(`{"id":"bar"}`), Dependencies: []addrs.ConfigResource{ - addrs.ConfigResource{ + { Resource: addrs.Resource{ Mode: addrs.ManagedResourceMode, Type: "aws_instance", @@ -1318,20 +1278,18 @@ func testContext2Apply_destroyDependsOnStateOnly(t *testing.T, state *states.Sta } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, - PlanMode: plans.DestroyMode, Parallelism: 1, // To check ordering }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) + assertNoErrors(t, diags) - if _, diags := ctx.Apply(); diags.HasErrors() { + if _, diags := ctx.Apply(plan, m); diags.HasErrors() { t.Fatalf("apply errors: %s", diags.Err()) } @@ -1372,7 +1330,7 @@ func TestContext2Apply_destroyDependsOnStateOnlyModule(t *testing.T) { Status: states.ObjectReady, AttrsJSON: []byte(`{"id":"bar"}`), Dependencies: []addrs.ConfigResource{ - addrs.ConfigResource{ + { Resource: addrs.Resource{ Mode: addrs.ManagedResourceMode, Type: "aws_instance", @@ -1415,20 +1373,18 @@ func testContext2Apply_destroyDependsOnStateOnlyModule(t *testing.T, state *stat } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, - PlanMode: plans.DestroyMode, Parallelism: 1, // To check ordering }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) + assertNoErrors(t, diags) - if _, diags := ctx.Apply(); diags.HasErrors() { + if _, diags := ctx.Apply(plan, m); diags.HasErrors() { t.Fatalf("apply errors: %s", diags.Err()) } @@ -1450,19 +1406,19 @@ func TestContext2Apply_dataBasic(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("null"): testProviderFuncFixed(p), }, }) - if p, diags := ctx.Plan(); diags.HasErrors() { + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + if 
diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } else { - t.Logf(legacyDiffComparisonString(p.Changes)) + t.Logf(legacyDiffComparisonString(plan.Changes)) } - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) assertNoErrors(t, diags) actual := strings.TrimSpace(state.String()) @@ -1495,22 +1451,22 @@ func TestContext2Apply_destroyData(t *testing.T) { hook := &testHook{} ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("null"): testProviderFuncFixed(p), }, - State: state, - PlanMode: plans.DestroyMode, - Hooks: []Hook{hook}, + Hooks: []Hook{hook}, }) - if p, diags := ctx.Plan(); diags.HasErrors() { + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) + if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } else { - t.Logf(legacyDiffComparisonString(p.Changes)) + t.Logf(legacyDiffComparisonString(plan.Changes)) } - newState, diags := ctx.Apply() + newState, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -1563,21 +1519,21 @@ func TestContext2Apply_destroySkipsCBD(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, - PlanMode: plans.DestroyMode, }) - if p, diags := ctx.Plan(); diags.HasErrors() { + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) + if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } else { - t.Logf(legacyDiffComparisonString(p.Changes)) + t.Logf(legacyDiffComparisonString(plan.Changes)) } - if _, diags := ctx.Apply(); diags.HasErrors() { + if _, diags := ctx.Apply(plan, m); diags.HasErrors() { t.Fatalf("apply errors: %s", diags.Err()) } } @@ -1597,19 +1553,17 @@ func TestContext2Apply_destroyModuleVarProviderConfig(t *testing.T) { mustProviderConfig(`provider["registry.terraform.io/hashicorp/aws"]`), ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, - PlanMode: plans.DestroyMode, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) + assertNoErrors(t, diags) - _, diags := ctx.Apply() + _, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -1650,20 +1604,20 @@ func TestContext2Apply_destroyCrossProviders(t *testing.T) { addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p_aws), } - ctx := getContextForApply_destroyCrossProviders(t, m, providers) + ctx, m, state := getContextForApply_destroyCrossProviders(t, m, providers) - if _, diags := ctx.Plan(); diags.HasErrors() { - logDiagnostics(t, diags) - t.Fatal("plan failed") - } + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) + assertNoErrors(t, diags) - if _, diags := ctx.Apply(); diags.HasErrors() { + if _, diags := ctx.Apply(plan, m); diags.HasErrors() { logDiagnostics(t, diags) t.Fatal("apply failed") } } -func getContextForApply_destroyCrossProviders(t *testing.T, m *configs.Config, providerFactories map[addrs.Provider]providers.Factory) *Context { +func getContextForApply_destroyCrossProviders(t *testing.T, m *configs.Config, providerFactories map[addrs.Provider]providers.Factory) (*Context, *configs.Config, *states.State) { state := states.NewState() root := 
state.EnsureModule(addrs.RootModuleInstance) root.SetResourceInstanceCurrent( @@ -1685,13 +1639,10 @@ func getContextForApply_destroyCrossProviders(t *testing.T, m *configs.Config, p ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: providerFactories, - State: state, - PlanMode: plans.DestroyMode, }) - return ctx + return ctx, m, state } func TestContext2Apply_minimal(t *testing.T) { @@ -1700,17 +1651,15 @@ func TestContext2Apply_minimal(t *testing.T) { p.PlanResourceChangeFn = testDiffFn p.ApplyResourceChangeFn = testApplyFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -1728,7 +1677,6 @@ func TestContext2Apply_cancel(t *testing.T) { m := testModule(t, "apply-cancel") p := testProvider("aws") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -1750,15 +1698,14 @@ func TestContext2Apply_cancel(t *testing.T) { } p.PlanResourceChangeFn = testDiffFn - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) // Start the Apply in a goroutine var applyDiags tfdiags.Diagnostics stateCh := make(chan *states.State) go func() { - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) applyDiags = diags stateCh <- state @@ -1792,7 +1739,6 @@ func TestContext2Apply_cancelBlock(t *testing.T) { m := testModule(t, "apply-cancel-block") p := testProvider("aws") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -1814,15 +1760,14 @@ func TestContext2Apply_cancelBlock(t *testing.T) { return testApplyFn(req) } - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) // Start the Apply in a goroutine var applyDiags tfdiags.Diagnostics stateCh := make(chan *states.State) go func() { - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) applyDiags = diags stateCh <- state @@ -1891,7 +1836,6 @@ func TestContext2Apply_cancelProvisioner(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -1913,15 +1857,14 @@ func TestContext2Apply_cancelProvisioner(t *testing.T) { return nil } - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) // Start the Apply in a goroutine var applyDiags tfdiags.Diagnostics stateCh := make(chan *states.State) go func() { - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) applyDiags = diags stateCh <- state @@ -1998,24 +1941,22 @@ func TestContext2Apply_compute(t *testing.T) { }) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ 
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - ctx.variables = InputValues{ - "value": &InputValue{ - Value: cty.NumberIntVal(1), - SourceType: ValueFromCaller, + plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ + SetVariables: InputValues{ + "value": &InputValue{ + Value: cty.NumberIntVal(1), + SourceType: ValueFromCaller, + }, }, - } - - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + }) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -2060,19 +2001,15 @@ func TestContext2Apply_countDecrease(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - logDiagnostics(t, diags) - t.Fatal("plan failed") - } + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + assertNoErrors(t, diags) - s, diags := ctx.Apply() + s, diags := ctx.Apply(plan, m) assertNoErrors(t, diags) actual := strings.TrimSpace(s.String()) @@ -2114,18 +2051,15 @@ func TestContext2Apply_countDecreaseToOneX(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + assertNoErrors(t, diags) - s, diags := ctx.Apply() + s, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -2166,24 +2100,22 @@ func TestContext2Apply_countDecreaseToOneCorrupted(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - if p, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("diags: %s", diags.Err()) - } else { - got := strings.TrimSpace(legacyPlanComparisonString(ctx.State(), p.Changes)) + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + assertNoErrors(t, diags) + { + got := strings.TrimSpace(legacyPlanComparisonString(state, plan.Changes)) want := strings.TrimSpace(testTerraformApplyCountDecToOneCorruptedPlanStr) if got != want { t.Fatalf("wrong plan result\ngot:\n%s\nwant:\n%s", got, want) } } - s, diags := ctx.Apply() + s, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -2211,16 +2143,14 @@ func TestContext2Apply_countTainted(t *testing.T) { mustProviderConfig(`provider["registry.terraform.io/hashicorp/aws"]`), ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + assertNoErrors(t, diags) { - plan, diags := ctx.Plan() - assertNoErrors(t, diags) got := strings.TrimSpace(legacyDiffComparisonString(plan.Changes)) want := strings.TrimSpace(` DESTROY/CREATE: aws_instance.foo[0] @@ -2237,7 +2167,7 @@ CREATE: aws_instance.foo[1] } } - s, diags := ctx.Apply() + s, diags := ctx.Apply(plan, m) assertNoErrors(t, diags) got := strings.TrimSpace(s.String()) @@ -2264,17 +2194,15 @@ func TestContext2Apply_countVariable(t *testing.T) { p.PlanResourceChangeFn = testDiffFn 
p.ApplyResourceChangeFn = testApplyFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -2292,17 +2220,15 @@ func TestContext2Apply_countVariableRef(t *testing.T) { p.PlanResourceChangeFn = testDiffFn p.ApplyResourceChangeFn = testApplyFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -2334,20 +2260,17 @@ func TestContext2Apply_provisionerInterpCount(t *testing.T) { "local-exec": testProvisionerFuncFixed(pr), } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: Providers, Provisioners: provisioners, }) - plan, diags := ctx.Plan() - if diags.HasErrors() { - t.Fatalf("plan failed unexpectedly: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) // We'll marshal and unmarshal the plan here, to ensure that we have // a clean new context as would be created if we separately ran // terraform plan -out=tfplan && terraform apply tfplan - ctxOpts, err := contextOptsForPlanViaFile(snap, plan) + ctxOpts, m, plan, err := contextOptsForPlanViaFile(snap, plan) if err != nil { t.Fatal(err) } @@ -2359,7 +2282,7 @@ func TestContext2Apply_provisionerInterpCount(t *testing.T) { } // Applying the plan should now succeed - _, diags = ctx.Apply() + _, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("apply failed unexpectedly: %s", diags.Err()) } @@ -2376,22 +2299,22 @@ func TestContext2Apply_foreachVariable(t *testing.T) { p.PlanResourceChangeFn = testDiffFn p.ApplyResourceChangeFn = testApplyFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - Variables: InputValues{ + }) + + plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ + Mode: plans.NormalMode, + SetVariables: InputValues{ "foo": &InputValue{ Value: cty.StringVal("hello"), }, }, }) + assertNoErrors(t, diags) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } - - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -2409,17 +2332,15 @@ func TestContext2Apply_moduleBasic(t *testing.T) { p.PlanResourceChangeFn = testDiffFn p.ApplyResourceChangeFn = testApplyFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if 
diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -2490,19 +2411,17 @@ func TestContext2Apply_moduleDestroyOrder(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, - PlanMode: plans.DestroyMode, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -2542,17 +2461,15 @@ func TestContext2Apply_moduleInheritAlias(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -2591,14 +2508,13 @@ func TestContext2Apply_orphanResource(t *testing.T) { // Step 1: create the resources and instances m := testModule(t, "apply-orphan-resource") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - _, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) assertNoErrors(t, diags) // At this point both resources should be recorded in the state, along @@ -2627,15 +2543,13 @@ func TestContext2Apply_orphanResource(t *testing.T) { // Step 2: update with an empty config, to destroy everything m = testModule(t, "empty") ctx = testContext2(t, &ContextOpts{ - Config: m, - State: state, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - _, diags = ctx.Plan() + plan, diags = ctx.Plan(m, state, DefaultPlanOpts) assertNoErrors(t, diags) - state, diags = ctx.Apply() + state, diags = ctx.Apply(plan, m) assertNoErrors(t, diags) // The state should now be _totally_ empty, with just an empty root module @@ -2679,18 +2593,15 @@ func TestContext2Apply_moduleOrphanInheritAlias(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, - State: state, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -2729,18 +2640,15 @@ func TestContext2Apply_moduleOrphanProvider(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, - State: state, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + assertNoErrors(t, diags) - if _, diags := ctx.Apply(); diags.HasErrors() { + if _, diags := 
ctx.Apply(plan, m); diags.HasErrors() { t.Fatalf("apply errors: %s", diags.Err()) } } @@ -2772,18 +2680,15 @@ func TestContext2Apply_moduleOrphanGrandchildProvider(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, - State: state, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + assertNoErrors(t, diags) - if _, diags := ctx.Apply(); diags.HasErrors() { + if _, diags := ctx.Apply(plan, m); diags.HasErrors() { t.Fatalf("apply errors: %s", diags.Err()) } } @@ -2809,17 +2714,15 @@ func TestContext2Apply_moduleGrandchildProvider(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - if _, diags := ctx.Apply(); diags.HasErrors() { + if _, diags := ctx.Apply(plan, m); diags.HasErrors() { t.Fatalf("apply errors: %s", diags.Err()) } @@ -2844,18 +2747,16 @@ func TestContext2Apply_moduleOnlyProvider(t *testing.T) { pTest.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), addrs.NewDefaultProvider("test"): testProviderFuncFixed(pTest), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -2873,17 +2774,15 @@ func TestContext2Apply_moduleProviderAlias(t *testing.T) { p.PlanResourceChangeFn = testDiffFn p.ApplyResourceChangeFn = testApplyFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -2900,10 +2799,13 @@ func TestContext2Apply_moduleProviderAliasTargets(t *testing.T) { p := testProvider("aws") p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, + }) + + plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ + Mode: plans.NormalMode, Targets: []addrs.Targetable{ addrs.ConfigResource{ Module: addrs.RootModule, @@ -2915,12 +2817,9 @@ func TestContext2Apply_moduleProviderAliasTargets(t *testing.T) { }, }, }) + assertNoErrors(t, diags) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } - - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -2950,19 +2849,17 @@ func TestContext2Apply_moduleProviderCloseNested(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: 
m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, - PlanMode: plans.DestroyMode, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) + assertNoErrors(t, diags) - if _, diags := ctx.Apply(); diags.HasErrors() { + if _, diags := ctx.Apply(plan, m); diags.HasErrors() { t.Fatalf("apply errors: %s", diags.Err()) } } @@ -2989,18 +2886,15 @@ func TestContext2Apply_moduleVarRefExisting(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -3017,45 +2911,43 @@ func TestContext2Apply_moduleVarResourceCount(t *testing.T) { p := testProvider("aws") p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - Variables: InputValues{ + }) + + plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ + Mode: plans.DestroyMode, + SetVariables: InputValues{ "num": &InputValue{ Value: cty.NumberIntVal(2), SourceType: ValueFromCaller, }, }, - PlanMode: plans.DestroyMode, }) + assertNoErrors(t, diags) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } - - if _, diags := ctx.Apply(); diags.HasErrors() { - t.Fatalf("apply errors: %s", diags.Err()) - } + state, diags := ctx.Apply(plan, m) + assertNoErrors(t, diags) ctx = testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - Variables: InputValues{ + }) + + plan, diags = ctx.Plan(m, state, &PlanOpts{ + Mode: plans.NormalMode, + SetVariables: InputValues{ "num": &InputValue{ Value: cty.NumberIntVal(5), SourceType: ValueFromCaller, }, }, }) + assertNoErrors(t, diags) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } - - if _, diags := ctx.Apply(); diags.HasErrors() { + if _, diags := ctx.Apply(plan, m); diags.HasErrors() { t.Fatalf("apply errors: %s", diags.Err()) } } @@ -3067,17 +2959,15 @@ func TestContext2Apply_moduleBool(t *testing.T) { p.PlanResourceChangeFn = testDiffFn p.ApplyResourceChangeFn = testApplyFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -3097,20 +2987,20 @@ func TestContext2Apply_moduleTarget(t *testing.T) { p.PlanResourceChangeFn = testDiffFn p.ApplyResourceChangeFn = testApplyFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, + }) + + 
plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ + Mode: plans.NormalMode, Targets: []addrs.Targetable{ addrs.RootModuleInstance.Child("B", addrs.NoKey), }, }) + assertNoErrors(t, diags) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } - - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -3150,18 +3040,16 @@ func TestContext2Apply_multiProvider(t *testing.T) { pDO.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), addrs.NewDefaultProvider("do"): testProviderFuncFixed(pDO), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -3216,21 +3104,17 @@ func TestContext2Apply_multiProviderDestroy(t *testing.T) { // First, create the instances { ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), addrs.NewDefaultProvider("vault"): testProviderFuncFixed(p2), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("errors during create plan: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - s, diags := ctx.Apply() - if diags.HasErrors() { - t.Fatalf("errors during create apply: %s", diags.Err()) - } + s, diags := ctx.Apply(plan, m) + assertNoErrors(t, diags) state = s } @@ -3267,23 +3151,19 @@ func TestContext2Apply_multiProviderDestroy(t *testing.T) { p2.ApplyResourceChangeFn = applyFn ctx := testContext2(t, &ContextOpts{ - PlanMode: plans.DestroyMode, - State: state, - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), addrs.NewDefaultProvider("vault"): testProviderFuncFixed(p2), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("errors during destroy plan: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) + assertNoErrors(t, diags) - s, diags := ctx.Apply() - if diags.HasErrors() { - t.Fatalf("errors during destroy apply: %s", diags.Err()) - } + s, diags := ctx.Apply(plan, m) + assertNoErrors(t, diags) if !checked { t.Fatal("should be checked") @@ -3337,18 +3217,16 @@ func TestContext2Apply_multiProviderDestroyChild(t *testing.T) { // First, create the instances { ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), addrs.NewDefaultProvider("vault"): testProviderFuncFixed(p2), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("diags: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - s, diags := ctx.Apply() + s, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -3388,20 +3266,18 @@ func TestContext2Apply_multiProviderDestroyChild(t *testing.T) { p2.ApplyResourceChangeFn = applyFn ctx := testContext2(t, &ContextOpts{ - PlanMode: plans.DestroyMode, - State: state, - Config: m, Providers: map[addrs.Provider]providers.Factory{ 
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), addrs.NewDefaultProvider("vault"): testProviderFuncFixed(p2), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("diags: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) + assertNoErrors(t, diags) - s, diags := ctx.Apply() + s, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -3425,23 +3301,23 @@ func TestContext2Apply_multiVar(t *testing.T) { // First, apply with a count of 3 ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - Variables: InputValues{ + }) + + plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ + Mode: plans.NormalMode, + SetVariables: InputValues{ "num": &InputValue{ Value: cty.NumberIntVal(3), SourceType: ValueFromCaller, }, }, }) + assertNoErrors(t, diags) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } - - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -3457,24 +3333,23 @@ func TestContext2Apply_multiVar(t *testing.T) { // Apply again, reduce the count to 1 { ctx := testContext2(t, &ContextOpts{ - Config: m, - State: state, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - Variables: InputValues{ + }) + + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.NormalMode, + SetVariables: InputValues{ "num": &InputValue{ Value: cty.NumberIntVal(1), SourceType: ValueFromCaller, }, }, }) + assertNoErrors(t, diags) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("diags: %s", diags.Err()) - } - - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -3563,22 +3438,21 @@ func TestContext2Apply_multiVarComprehensive(t *testing.T) { // First, apply with a count of 3 ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, - Variables: InputValues{ + }) + + plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ + Mode: plans.NormalMode, + SetVariables: InputValues{ "num": &InputValue{ Value: cty.NumberIntVal(3), SourceType: ValueFromCaller, }, }, }) - - if _, diags := ctx.Plan(); diags.HasErrors() { - logDiagnostics(t, diags) - t.Fatalf("errors during plan") - } + assertNoErrors(t, diags) checkConfig := func(key string, want cty.Value) { configsLock.Lock() @@ -3675,7 +3549,7 @@ func TestContext2Apply_multiVarComprehensive(t *testing.T) { })) t.Run("apply", func(t *testing.T) { - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("error during apply: %s", diags.Err()) } @@ -3710,17 +3584,15 @@ func TestContext2Apply_multiVarOrder(t *testing.T) { // First, apply with a count of 3 ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -3743,17 +3615,15 @@ 
func TestContext2Apply_multiVarOrderInterp(t *testing.T) { // First, apply with a count of 3 ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -3779,25 +3649,25 @@ func TestContext2Apply_multiVarCountDec(t *testing.T) { p.PlanResourceChangeFn = testDiffFn p.ApplyResourceChangeFn = testApplyFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - Variables: InputValues{ + }) + + log.Print("\n========\nStep 1 Plan\n========") + plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ + Mode: plans.NormalMode, + SetVariables: InputValues{ "num": &InputValue{ Value: cty.NumberIntVal(2), SourceType: ValueFromCaller, }, }, }) - - log.Print("\n========\nStep 1 Plan\n========") - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("diags: %s", diags.Err()) - } + assertNoErrors(t, diags) log.Print("\n========\nStep 1 Apply\n========") - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -3843,29 +3713,27 @@ func TestContext2Apply_multiVarCountDec(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - State: s, - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - Variables: InputValues{ + }) + + log.Print("\n========\nStep 2 Plan\n========") + plan, diags := ctx.Plan(m, s, &PlanOpts{ + Mode: plans.NormalMode, + SetVariables: InputValues{ "num": &InputValue{ Value: cty.NumberIntVal(1), SourceType: ValueFromCaller, }, }, }) - - log.Print("\n========\nStep 2 Plan\n========") - plan, diags := ctx.Plan() - if diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + assertNoErrors(t, diags) t.Logf("Step 2 plan:\n%s", legacyDiffComparisonString(plan.Changes)) log.Print("\n========\nStep 2 Apply\n========") - _, diags = ctx.Apply() + _, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("apply errors: %s", diags.Err()) } @@ -3897,18 +3765,16 @@ func TestContext2Apply_multiVarMissingState(t *testing.T) { // First, apply with a count of 3 ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan failed: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - // Before the relevant bug was fixed, Tdiagsaform would panic during apply. - if _, diags := ctx.Apply(); diags.HasErrors() { + // Before the relevant bug was fixed, Terraform would panic during apply. 
+ if _, diags := ctx.Apply(plan, m); diags.HasErrors() { t.Fatalf("apply failed: %s", diags.Err()) } @@ -3926,18 +3792,15 @@ func TestContext2Apply_outputOrphan(t *testing.T) { root.SetOutputValue("bar", cty.StringVal("baz"), false) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -3957,18 +3820,15 @@ func TestContext2Apply_outputOrphanModule(t *testing.T) { state := states.NewState() ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + assertNoErrors(t, diags) - s, diags := ctx.Apply() + s, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -3982,18 +3842,23 @@ func TestContext2Apply_outputOrphanModule(t *testing.T) { // now apply with no module in the config, which should remove the // remaining output ctx = testContext2(t, &ContextOpts{ - Config: configs.NewEmptyConfig(), Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state.DeepCopy(), }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + emptyConfig := configs.NewEmptyConfig() + + // NOTE: While updating this test to pass the state in as a Plan argument, + // rather than into the testContext2 call above, it previously said + // State: state.DeepCopy(), which is a little weird since we just + // created "s" above as the result of the previous apply, but I've preserved + // it to avoid changing the flow of this test in case that's important + // for some reason. 
+ plan, diags = ctx.Plan(emptyConfig, state.DeepCopy(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags = ctx.Apply() + state, diags = ctx.Apply(plan, emptyConfig) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -4013,7 +3878,6 @@ func TestContext2Apply_providerComputedVar(t *testing.T) { pTest.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), addrs.NewDefaultProvider("test"): testProviderFuncFixed(pTest), @@ -4029,11 +3893,10 @@ func TestContext2Apply_providerComputedVar(t *testing.T) { return } - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - if _, diags := ctx.Apply(); diags.HasErrors() { + if _, diags := ctx.Apply(plan, m); diags.HasErrors() { t.Fatalf("apply errors: %s", diags.Err()) } } @@ -4053,17 +3916,15 @@ func TestContext2Apply_providerConfigureDisabled(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - if _, diags := ctx.Apply(); diags.HasErrors() { + if _, diags := ctx.Apply(plan, m); diags.HasErrors() { t.Fatalf("apply errors: %s", diags.Err()) } @@ -4089,7 +3950,6 @@ func TestContext2Apply_provisionerModule(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -4098,11 +3958,10 @@ func TestContext2Apply_provisionerModule(t *testing.T) { }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -4137,27 +3996,27 @@ func TestContext2Apply_Provisioner_compute(t *testing.T) { } h := new(MockHook) ctx := testContext2(t, &ContextOpts{ - Config: m, - Hooks: []Hook{h}, + Hooks: []Hook{h}, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, Provisioners: map[string]provisioners.Factory{ "shell": testProvisionerFuncFixed(pr), }, - Variables: InputValues{ + }) + + plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ + Mode: plans.NormalMode, + SetVariables: InputValues{ "value": &InputValue{ Value: cty.NumberIntVal(1), SourceType: ValueFromCaller, }, }, }) + assertNoErrors(t, diags) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } - - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -4196,7 +4055,6 @@ func TestContext2Apply_provisionerCreateFail(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -4205,11 +4063,10 @@ func TestContext2Apply_provisionerCreateFail(t *testing.T) { }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", 
diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags == nil { t.Fatal("should error") } @@ -4233,7 +4090,6 @@ func TestContext2Apply_provisionerCreateFailNoId(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -4242,11 +4098,10 @@ func TestContext2Apply_provisionerCreateFailNoId(t *testing.T) { }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags == nil { t.Fatal("should error") } @@ -4270,7 +4125,6 @@ func TestContext2Apply_provisionerFail(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -4279,11 +4133,10 @@ func TestContext2Apply_provisionerFail(t *testing.T) { }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags == nil { t.Fatal("should error") } @@ -4318,22 +4171,19 @@ func TestContext2Apply_provisionerFail_createBeforeDestroy(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, Provisioners: map[string]provisioners.Factory{ "shell": testProvisionerFuncFixed(pr), }, - State: state, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() - if diags == nil { + state, diags = ctx.Apply(plan, m) + if !diags.HasErrors() { t.Fatal("should error") } @@ -4360,11 +4210,9 @@ func TestContext2Apply_error_createBeforeDestroy(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) p.ApplyResourceChangeFn = func(req providers.ApplyResourceChangeRequest) (resp providers.ApplyResourceChangeResponse) { resp.Diagnostics = resp.Diagnostics.Append(fmt.Errorf("placeholder error from ApplyFn")) @@ -4372,12 +4220,11 @@ func TestContext2Apply_error_createBeforeDestroy(t *testing.T) { } p.PlanResourceChangeFn = testDiffFn - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() - if diags == nil { + state, diags = ctx.Apply(plan, m) + if !diags.HasErrors() { t.Fatal("should have error") } if got, want := diags.Err().Error(), "placeholder error from ApplyFn"; got != want { @@ -4409,11 +4256,9 @@ func TestContext2Apply_errorDestroy_createBeforeDestroy(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) p.ApplyResourceChangeFn = func(req providers.ApplyResourceChangeRequest) (resp 
providers.ApplyResourceChangeResponse) { // Fail the destroy! @@ -4427,12 +4272,11 @@ func TestContext2Apply_errorDestroy_createBeforeDestroy(t *testing.T) { } p.PlanResourceChangeFn = testDiffFn - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() - if diags == nil { + state, diags = ctx.Apply(plan, m) + if !diags.HasErrors() { t.Fatal("should have error") } @@ -4472,14 +4316,7 @@ func TestContext2Apply_multiDepose_createBeforeDestroy(t *testing.T) { p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: ps, - State: state, - Variables: InputValues{ - "require_new": &InputValue{ - Value: cty.StringVal("yes"), - }, - }, }) createdInstanceId := "bar" // Create works @@ -4504,14 +4341,20 @@ func TestContext2Apply_multiDepose_createBeforeDestroy(t *testing.T) { } } - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.NormalMode, + SetVariables: InputValues{ + "require_new": &InputValue{ + Value: cty.StringVal("yes"), + }, + }, + }) + assertNoErrors(t, diags) // Destroy is broken, so even though CBD successfully replaces the instance, // we'll have to save the Deposed instance to destroy later - state, diags := ctx.Apply() - if diags == nil { + state, diags = ctx.Apply(plan, m) + if !diags.HasErrors() { t.Fatal("should have error") } @@ -4525,24 +4368,23 @@ aws_instance.web: (1 deposed) createdInstanceId = "baz" ctx = testContext2(t, &ContextOpts{ - Config: m, Providers: ps, - State: state, - Variables: InputValues{ + }) + + plan, diags = ctx.Plan(m, state, &PlanOpts{ + Mode: plans.NormalMode, + SetVariables: InputValues{ "require_new": &InputValue{ Value: cty.StringVal("baz"), }, }, }) - - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + assertNoErrors(t, diags) // We're replacing the primary instance once again. 
Destroy is _still_ // broken, so the Deposed list gets longer - state, diags = ctx.Apply() - if diags == nil { + state, diags = ctx.Apply(plan, m) + if !diags.HasErrors() { t.Fatal("should have error") } @@ -4592,22 +4434,21 @@ aws_instance.web: (1 deposed) createdInstanceId = "qux" ctx = testContext2(t, &ContextOpts{ - Config: m, Providers: ps, - State: state, - Variables: InputValues{ + }) + plan, diags = ctx.Plan(m, state, &PlanOpts{ + Mode: plans.NormalMode, + SetVariables: InputValues{ "require_new": &InputValue{ Value: cty.StringVal("qux"), }, }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + assertNoErrors(t, diags) - state, diags = ctx.Apply() + state, diags = ctx.Apply(plan, m) // Expect error because 1/2 of Deposed destroys failed - if diags == nil { + if !diags.HasErrors() { t.Fatal("should have error") } @@ -4628,19 +4469,18 @@ aws_instance.web: (1 deposed) createdInstanceId = "quux" ctx = testContext2(t, &ContextOpts{ - Config: m, Providers: ps, - State: state, - Variables: InputValues{ + }) + plan, diags = ctx.Plan(m, state, &PlanOpts{ + Mode: plans.NormalMode, + SetVariables: InputValues{ "require_new": &InputValue{ Value: cty.StringVal("quux"), }, }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } - state, diags = ctx.Apply() + assertNoErrors(t, diags) + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatal("should not have error:", diags.Err()) } @@ -4669,7 +4509,6 @@ func TestContext2Apply_provisionerFailContinue(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -4678,11 +4517,10 @@ func TestContext2Apply_provisionerFailContinue(t *testing.T) { }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -4715,8 +4553,7 @@ func TestContext2Apply_provisionerFailContinueHook(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, - Hooks: []Hook{h}, + Hooks: []Hook{h}, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -4725,11 +4562,10 @@ func TestContext2Apply_provisionerFailContinueHook(t *testing.T) { }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - if _, diags := ctx.Apply(); diags.HasErrors() { + if _, diags := ctx.Apply(plan, m); diags.HasErrors() { t.Fatalf("apply errors: %s", diags.Err()) } @@ -4767,9 +4603,6 @@ func TestContext2Apply_provisionerDestroy(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, - State: state, - PlanMode: plans.DestroyMode, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -4778,11 +4611,12 @@ func TestContext2Apply_provisionerDestroy(t *testing.T) { }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { 
t.Fatalf("diags: %s", diags.Err()) } @@ -4818,9 +4652,6 @@ func TestContext2Apply_provisionerDestroyFail(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, - State: state, - PlanMode: plans.DestroyMode, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -4829,11 +4660,12 @@ func TestContext2Apply_provisionerDestroyFail(t *testing.T) { }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags == nil { t.Fatal("should error") } @@ -4886,9 +4718,6 @@ func TestContext2Apply_provisionerDestroyFailContinue(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, - State: state, - PlanMode: plans.DestroyMode, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -4897,11 +4726,12 @@ func TestContext2Apply_provisionerDestroyFailContinue(t *testing.T) { }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -4955,9 +4785,6 @@ func TestContext2Apply_provisionerDestroyFailContinueFail(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, - State: state, - PlanMode: plans.DestroyMode, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -4966,11 +4793,12 @@ func TestContext2Apply_provisionerDestroyFailContinueFail(t *testing.T) { }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags == nil { t.Fatal("apply succeeded; wanted error from second provisioner") } @@ -5023,15 +4851,17 @@ func TestContext2Apply_provisionerDestroyTainted(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, - State: state, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, Provisioners: map[string]provisioners.Factory{ "shell": testProvisionerFuncFixed(pr), }, - Variables: InputValues{ + }) + + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.NormalMode, + SetVariables: InputValues{ "input": &InputValue{ Value: cty.MapVal(map[string]cty.Value{ "a": cty.StringVal("b"), @@ -5040,12 +4870,9 @@ func TestContext2Apply_provisionerDestroyTainted(t *testing.T) { }, }, }) + assertNoErrors(t, diags) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } - - state, diags := ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -5085,7 +4912,6 @@ func TestContext2Apply_provisionerResourceRef(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -5094,11 +4920,10 @@ func TestContext2Apply_provisionerResourceRef(t *testing.T) { }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - 
t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -5131,7 +4956,6 @@ func TestContext2Apply_provisionerSelfRef(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -5140,11 +4964,10 @@ func TestContext2Apply_provisionerSelfRef(t *testing.T) { }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -5184,7 +5007,6 @@ func TestContext2Apply_provisionerMultiSelfRef(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -5193,11 +5015,10 @@ func TestContext2Apply_provisionerMultiSelfRef(t *testing.T) { }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -5244,7 +5065,6 @@ func TestContext2Apply_provisionerMultiSelfRefSingle(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -5253,11 +5073,10 @@ func TestContext2Apply_provisionerMultiSelfRefSingle(t *testing.T) { }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -5298,7 +5117,6 @@ func TestContext2Apply_provisionerExplicitSelfRef(t *testing.T) { var state *states.State { ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -5307,12 +5125,12 @@ func TestContext2Apply_provisionerExplicitSelfRef(t *testing.T) { }, }) - _, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } - state, diags = ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -5325,9 +5143,6 @@ func TestContext2Apply_provisionerExplicitSelfRef(t *testing.T) { { ctx := testContext2(t, &ContextOpts{ - Config: m, - PlanMode: plans.DestroyMode, - State: state, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -5336,12 +5151,14 @@ func TestContext2Apply_provisionerExplicitSelfRef(t *testing.T) { }, }) - _, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } - state, diags = ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", 
diags.Err()) } @@ -5366,7 +5183,6 @@ func TestContext2Apply_provisionerForEachSelfRef(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -5375,11 +5191,10 @@ func TestContext2Apply_provisionerForEachSelfRef(t *testing.T) { }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - _, diags := ctx.Apply() + _, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -5393,7 +5208,6 @@ func TestContext2Apply_Provisioner_Diff(t *testing.T) { p.PlanResourceChangeFn = testDiffFn p.ApplyResourceChangeFn = testApplyFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -5402,12 +5216,10 @@ func TestContext2Apply_Provisioner_Diff(t *testing.T) { }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - logDiagnostics(t, diags) - t.Fatal("plan failed") - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { logDiagnostics(t, diags) t.Fatal("apply failed") @@ -5441,22 +5253,18 @@ func TestContext2Apply_Provisioner_Diff(t *testing.T) { // Re-create context with state ctx = testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, Provisioners: map[string]provisioners.Factory{ "shell": testProvisionerFuncFixed(pr), }, - State: state, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - logDiagnostics(t, diags) - t.Fatal("plan failed") - } + plan, diags = ctx.Plan(m, state, DefaultPlanOpts) + assertNoErrors(t, diags) - state2, diags := ctx.Apply() + state2, diags := ctx.Apply(plan, m) if diags.HasErrors() { logDiagnostics(t, diags) t.Fatal("apply failed") @@ -5489,11 +5297,9 @@ func TestContext2Apply_outputDiffVars(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) p.PlanResourceChangeFn = testDiffFn @@ -5524,14 +5330,11 @@ func TestContext2Apply_outputDiffVars(t *testing.T) { // return d, nil //} - if _, diags := ctx.Plan(); diags.HasErrors() { - logDiagnostics(t, diags) - t.Fatal("plan failed") - } - if _, diags := ctx.Apply(); diags.HasErrors() { - logDiagnostics(t, diags) - t.Fatal("apply failed") - } + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + assertNoErrors(t, diags) + + _, diags = ctx.Apply(plan, m) + assertNoErrors(t, diags) } func TestContext2Apply_destroyX(t *testing.T) { @@ -5540,19 +5343,17 @@ func TestContext2Apply_destroyX(t *testing.T) { p := testProvider("aws") p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, - Hooks: []Hook{h}, + Hooks: []Hook{h}, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) // First plan and apply a create operation - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if 
diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -5560,20 +5361,18 @@ func TestContext2Apply_destroyX(t *testing.T) { // Next, plan and apply a destroy operation h.Active = true ctx = testContext2(t, &ContextOpts{ - PlanMode: plans.DestroyMode, - State: state, - Config: m, - Hooks: []Hook{h}, + Hooks: []Hook{h}, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags = ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) + assertNoErrors(t, diags) - state, diags = ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -5599,19 +5398,17 @@ func TestContext2Apply_destroyOrder(t *testing.T) { p := testProvider("aws") p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, - Hooks: []Hook{h}, + Hooks: []Hook{h}, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) // First plan and apply a create operation - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -5621,20 +5418,18 @@ func TestContext2Apply_destroyOrder(t *testing.T) { // Next, plan and apply a destroy h.Active = true ctx = testContext2(t, &ContextOpts{ - PlanMode: plans.DestroyMode, - State: state, - Config: m, - Hooks: []Hook{h}, + Hooks: []Hook{h}, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags = ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) + assertNoErrors(t, diags) - state, diags = ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -5661,19 +5456,17 @@ func TestContext2Apply_destroyModulePrefix(t *testing.T) { p := testProvider("aws") p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, - Hooks: []Hook{h}, + Hooks: []Hook{h}, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) // First plan and apply a create operation - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -5686,20 +5479,18 @@ func TestContext2Apply_destroyModulePrefix(t *testing.T) { // Next, plan and apply a destroy operation and reset the hook h = new(MockHook) ctx = testContext2(t, &ContextOpts{ - PlanMode: plans.DestroyMode, - State: state, - Config: m, - Hooks: []Hook{h}, + Hooks: []Hook{h}, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags = ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) + assertNoErrors(t, diags) - _, diags = ctx.Apply() + _, diags = ctx.Apply(plan, m) if 
diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -5727,19 +5518,16 @@ func TestContext2Apply_destroyNestedModule(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) // First plan and apply a create operation - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + assertNoErrors(t, diags) - s, diags := ctx.Apply() + s, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -5768,19 +5556,16 @@ func TestContext2Apply_destroyDeeplyNestedModule(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) // First plan and apply a create operation - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + assertNoErrors(t, diags) - s, diags := ctx.Apply() + s, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -5800,21 +5585,20 @@ func TestContext2Apply_destroyModuleWithAttrsReferencingResource(t *testing.T) { var state *states.State { ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) // First plan and apply a create operation - if p, diags := ctx.Plan(); diags.HasErrors() { + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + if diags.HasErrors() { t.Fatalf("plan diags: %s", diags.Err()) } else { - t.Logf("Step 1 plan: %s", legacyDiffComparisonString(p.Changes)) + t.Logf("Step 1 plan: %s", legacyDiffComparisonString(plan.Changes)) } - var diags tfdiags.Diagnostics - state, diags = ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("apply errs: %s", diags.Err()) } @@ -5827,24 +5611,23 @@ func TestContext2Apply_destroyModuleWithAttrsReferencingResource(t *testing.T) { { ctx := testContext2(t, &ContextOpts{ - PlanMode: plans.DestroyMode, - Config: m, - State: state, - Hooks: []Hook{h}, + Hooks: []Hook{h}, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) // First plan and apply a create operation - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) if diags.HasErrors() { t.Fatalf("destroy plan err: %s", diags.Err()) } t.Logf("Step 2 plan: %s", legacyDiffComparisonString(plan.Changes)) - ctxOpts, err := contextOptsForPlanViaFile(snap, plan) + ctxOpts, m, plan, err := contextOptsForPlanViaFile(snap, plan) if err != nil { t.Fatalf("failed to round-trip through planfile: %s", err) } @@ -5858,7 +5641,7 @@ func TestContext2Apply_destroyModuleWithAttrsReferencingResource(t *testing.T) { t.Fatalf("err: %s", diags.Err()) } - state, diags = ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("destroy apply err: %s", diags.Err()) } @@ -5878,21 +5661,18 @@ func TestContext2Apply_destroyWithModuleVariableAndCount(t *testing.T) { p.PlanResourceChangeFn = testDiffFn var state *states.State - var diags tfdiags.Diagnostics { ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): 
testProviderFuncFixed(p), }, }) // First plan and apply a create operation - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan err: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags = ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("apply err: %s", diags.Err()) } @@ -5903,22 +5683,21 @@ func TestContext2Apply_destroyWithModuleVariableAndCount(t *testing.T) { { ctx := testContext2(t, &ContextOpts{ - PlanMode: plans.DestroyMode, - Config: m, - State: state, - Hooks: []Hook{h}, + Hooks: []Hook{h}, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) // First plan and apply a create operation - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) if diags.HasErrors() { t.Fatalf("destroy plan err: %s", diags.Err()) } - ctxOpts, err := contextOptsForPlanViaFile(snap, plan) + ctxOpts, m, plan, err := contextOptsForPlanViaFile(snap, plan) if err != nil { t.Fatalf("failed to round-trip through planfile: %s", err) } @@ -5933,7 +5712,7 @@ func TestContext2Apply_destroyWithModuleVariableAndCount(t *testing.T) { t.Fatalf("err: %s", diags.Err()) } - state, diags = ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("destroy apply err: %s", diags.Err()) } @@ -5954,21 +5733,18 @@ func TestContext2Apply_destroyTargetWithModuleVariableAndCount(t *testing.T) { p.PlanResourceChangeFn = testDiffFn var state *states.State - var diags tfdiags.Diagnostics { ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) // First plan and apply a create operation - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan err: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags = ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("apply err: %s", diags.Err()) } @@ -5976,18 +5752,17 @@ func TestContext2Apply_destroyTargetWithModuleVariableAndCount(t *testing.T) { { ctx := testContext2(t, &ContextOpts{ - PlanMode: plans.DestroyMode, - Config: m, - State: state, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, + }) + + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, Targets: []addrs.Targetable{ addrs.RootModuleInstance.Child("child", addrs.NoKey), }, }) - - _, diags := ctx.Plan() if diags.HasErrors() { t.Fatalf("plan err: %s", diags) } @@ -6003,7 +5778,7 @@ func TestContext2Apply_destroyTargetWithModuleVariableAndCount(t *testing.T) { } // Destroy, targeting the module explicitly - state, diags = ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("destroy apply err: %s", diags) } @@ -6032,21 +5807,18 @@ func TestContext2Apply_destroyWithModuleVariableAndCountNested(t *testing.T) { p.PlanResourceChangeFn = testDiffFn var state *states.State - var diags tfdiags.Diagnostics { ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) // First plan and apply a create operation - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan err: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + 
assertNoErrors(t, diags) - state, diags = ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("apply err: %s", diags.Err()) } @@ -6057,22 +5829,21 @@ func TestContext2Apply_destroyWithModuleVariableAndCountNested(t *testing.T) { { ctx := testContext2(t, &ContextOpts{ - PlanMode: plans.DestroyMode, - Config: m, - State: state, - Hooks: []Hook{h}, + Hooks: []Hook{h}, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) // First plan and apply a create operation - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) if diags.HasErrors() { t.Fatalf("destroy plan err: %s", diags.Err()) } - ctxOpts, err := contextOptsForPlanViaFile(snap, plan) + ctxOpts, m, plan, err := contextOptsForPlanViaFile(snap, plan) if err != nil { t.Fatalf("failed to round-trip through planfile: %s", err) } @@ -6087,7 +5858,7 @@ func TestContext2Apply_destroyWithModuleVariableAndCountNested(t *testing.T) { t.Fatalf("err: %s", diags.Err()) } - state, diags = ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("destroy apply err: %s", diags.Err()) } @@ -6118,18 +5889,16 @@ func TestContext2Apply_destroyOutputs(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) // First plan and apply a create operation - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) @@ -6137,19 +5906,17 @@ func TestContext2Apply_destroyOutputs(t *testing.T) { // Next, plan and apply a destroy operation ctx = testContext2(t, &ContextOpts{ - PlanMode: plans.DestroyMode, - State: state, - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags = ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) + assertNoErrors(t, diags) - state, diags = ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -6161,18 +5928,16 @@ func TestContext2Apply_destroyOutputs(t *testing.T) { // destroying again should produce no errors ctx = testContext2(t, &ContextOpts{ - PlanMode: plans.DestroyMode, - State: state, - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatal(diags.Err()) - } + plan, diags = ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) + assertNoErrors(t, diags) - if _, diags := ctx.Apply(); diags.HasErrors() { + if _, diags := ctx.Apply(plan, m); diags.HasErrors() { t.Fatal(diags.Err()) } } @@ -6191,20 +5956,17 @@ func TestContext2Apply_destroyOrphan(t *testing.T) { mustProviderConfig(`provider["registry.terraform.io/hashicorp/aws"]`), ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) p.PlanResourceChangeFn = testDiffFn - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", 
diags.Err()) - } + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + assertNoErrors(t, diags) - s, diags := ctx.Apply() + s, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -6233,22 +5995,20 @@ func TestContext2Apply_destroyTaintedProvisioner(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, Provisioners: map[string]provisioners.Factory{ "shell": testProvisionerFuncFixed(pr), }, - State: state, - PlanMode: plans.DestroyMode, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) + assertNoErrors(t, diags) - s, diags := ctx.Apply() + s, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -6270,7 +6030,6 @@ func TestContext2Apply_error(t *testing.T) { m := testModule(t, "apply-error") p := testProvider("aws") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -6288,11 +6047,10 @@ func TestContext2Apply_error(t *testing.T) { } p.PlanResourceChangeFn = testDiffFn - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags == nil { t.Fatal("should have error") } @@ -6334,35 +6092,33 @@ func TestContext2Apply_errorDestroy(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, - State: states.BuildState(func(ss *states.SyncState) { - ss.SetResourceInstanceCurrent( - addrs.Resource{ - Mode: addrs.ManagedResourceMode, - Type: "test_thing", - Name: "foo", - }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), - &states.ResourceInstanceObjectSrc{ - Status: states.ObjectReady, - AttrsJSON: []byte(`{"id":"baz"}`), - }, - addrs.AbsProviderConfig{ - Provider: addrs.NewDefaultProvider("test"), - Module: addrs.RootModule, - }, - ) - }), Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + state := states.BuildState(func(ss *states.SyncState) { + ss.SetResourceInstanceCurrent( + addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "test_thing", + Name: "foo", + }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"baz"}`), + }, + addrs.AbsProviderConfig{ + Provider: addrs.NewDefaultProvider("test"), + Module: addrs.RootModule, + }, + ) + }) + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() - if diags == nil { + state, diags = ctx.Apply(plan, m) + if !diags.HasErrors() { t.Fatal("should have error") } @@ -6410,17 +6166,15 @@ func TestContext2Apply_errorCreateInvalidNew(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, 
diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags == nil { t.Fatal("should have error") } @@ -6470,35 +6224,33 @@ func TestContext2Apply_errorUpdateNullNew(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, - State: states.BuildState(func(ss *states.SyncState) { - ss.SetResourceInstanceCurrent( - addrs.Resource{ - Mode: addrs.ManagedResourceMode, - Type: "aws_instance", - Name: "foo", - }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), - &states.ResourceInstanceObjectSrc{ - Status: states.ObjectReady, - AttrsJSON: []byte(`{"value":"old"}`), - }, - addrs.AbsProviderConfig{ - Provider: addrs.NewDefaultProvider("aws"), - Module: addrs.RootModule, - }, - ) - }), Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + state := states.BuildState(func(ss *states.SyncState) { + ss.SetResourceInstanceCurrent( + addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "aws_instance", + Name: "foo", + }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"value":"old"}`), + }, + addrs.AbsProviderConfig{ + Provider: addrs.NewDefaultProvider("aws"), + Module: addrs.RootModule, + }, + ) + }) + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() - if diags == nil { + state, diags = ctx.Apply(plan, m) + if !diags.HasErrors() { t.Fatal("should have error") } if got, want := len(diags), 1; got != want { @@ -6545,11 +6297,9 @@ func TestContext2Apply_errorPartial(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) p.ApplyResourceChangeFn = func(req providers.ApplyResourceChangeRequest) (resp providers.ApplyResourceChangeResponse) { @@ -6563,11 +6313,10 @@ func TestContext2Apply_errorPartial(t *testing.T) { } p.PlanResourceChangeFn = testDiffFn - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + assertNoErrors(t, diags) - s, diags := ctx.Apply() + s, diags := ctx.Apply(plan, m) if diags == nil { t.Fatal("should have error") } @@ -6590,18 +6339,16 @@ func TestContext2Apply_hook(t *testing.T) { p := testProvider("aws") p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, - Hooks: []Hook{h}, + Hooks: []Hook{h}, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - if _, diags := ctx.Apply(); diags.HasErrors() { + if _, diags := ctx.Apply(plan, m); diags.HasErrors() { t.Fatalf("apply errors: %s", diags.Err()) } @@ -6634,19 +6381,16 @@ func TestContext2Apply_hookOrphan(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, - State: state, - Hooks: []Hook{h}, + Hooks: []Hook{h}, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, state, 
DefaultPlanOpts) + assertNoErrors(t, diags) - if _, diags := ctx.Apply(); diags.HasErrors() { + if _, diags := ctx.Apply(plan, m); diags.HasErrors() { t.Fatalf("apply errors: %s", diags.Err()) } @@ -6665,7 +6409,6 @@ func TestContext2Apply_idAttr(t *testing.T) { m := testModule(t, "apply-idattr") p := testProvider("aws") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -6674,11 +6417,10 @@ func TestContext2Apply_idAttr(t *testing.T) { p.PlanResourceChangeFn = testDiffFn p.ApplyResourceChangeFn = testApplyFn - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("apply errors: %s", diags.Err()) } @@ -6704,17 +6446,15 @@ func TestContext2Apply_outputBasic(t *testing.T) { p.PlanResourceChangeFn = testDiffFn p.ApplyResourceChangeFn = testApplyFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -6732,17 +6472,15 @@ func TestContext2Apply_outputAdd(t *testing.T) { p1.ApplyResourceChangeFn = testApplyFn p1.PlanResourceChangeFn = testDiffFn ctx1 := testContext2(t, &ContextOpts{ - Config: m1, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p1), }, }) - if _, diags := ctx1.Plan(); diags.HasErrors() { - t.Fatalf("diags: %s", diags.Err()) - } + plan1, diags := ctx1.Plan(m1, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state1, diags := ctx1.Apply() + state1, diags := ctx1.Apply(plan1, m1) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -6752,19 +6490,15 @@ func TestContext2Apply_outputAdd(t *testing.T) { p2.ApplyResourceChangeFn = testApplyFn p2.PlanResourceChangeFn = testDiffFn ctx2 := testContext2(t, &ContextOpts{ - Config: m2, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p2), }, - - State: state1, }) - if _, diags := ctx2.Plan(); diags.HasErrors() { - t.Fatalf("diags: %s", diags.Err()) - } + plan2, diags := ctx1.Plan(m2, state1, DefaultPlanOpts) + assertNoErrors(t, diags) - state2, diags := ctx2.Apply() + state2, diags := ctx2.Apply(plan2, m2) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -6782,17 +6516,15 @@ func TestContext2Apply_outputList(t *testing.T) { p.PlanResourceChangeFn = testDiffFn p.ApplyResourceChangeFn = testApplyFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -6810,17 +6542,15 @@ func TestContext2Apply_outputMulti(t 
*testing.T) { p.PlanResourceChangeFn = testDiffFn p.ApplyResourceChangeFn = testApplyFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -6838,17 +6568,15 @@ func TestContext2Apply_outputMultiIndex(t *testing.T) { p.PlanResourceChangeFn = testDiffFn p.ApplyResourceChangeFn = testApplyFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -6892,20 +6620,19 @@ func TestContext2Apply_taintX(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - if p, diags := ctx.Plan(); diags.HasErrors() { + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } else { - t.Logf("plan: %s", legacyDiffComparisonString(p.Changes)) + t.Logf("plan: %s", legacyDiffComparisonString(plan.Changes)) } - s, diags := ctx.Apply() + s, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -6948,20 +6675,19 @@ func TestContext2Apply_taintDep(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - if p, diags := ctx.Plan(); diags.HasErrors() { + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } else { - t.Logf("plan: %s", legacyDiffComparisonString(p.Changes)) + t.Logf("plan: %s", legacyDiffComparisonString(plan.Changes)) } - s, diags := ctx.Apply() + s, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -7000,20 +6726,19 @@ func TestContext2Apply_taintDepRequiresNew(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - if p, diags := ctx.Plan(); diags.HasErrors() { + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } else { - t.Logf("plan: %s", legacyDiffComparisonString(p.Changes)) + t.Logf("plan: %s", legacyDiffComparisonString(plan.Changes)) } - s, diags := ctx.Apply() + s, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -7031,22 +6756,22 @@ func TestContext2Apply_targeted(t *testing.T) { p.PlanResourceChangeFn = testDiffFn p.ApplyResourceChangeFn = testApplyFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, + }) + + plan, diags := 
ctx.Plan(m, states.NewState(), &PlanOpts{ + Mode: plans.NormalMode, Targets: []addrs.Targetable{ addrs.RootModuleInstance.Resource( addrs.ManagedResourceMode, "aws_instance", "foo", ), }, }) + assertNoErrors(t, diags) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } - - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -7071,22 +6796,22 @@ func TestContext2Apply_targetedCount(t *testing.T) { p.PlanResourceChangeFn = testDiffFn p.ApplyResourceChangeFn = testApplyFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, + }) + + plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ + Mode: plans.NormalMode, Targets: []addrs.Targetable{ addrs.RootModuleInstance.Resource( addrs.ManagedResourceMode, "aws_instance", "foo", ), }, }) + assertNoErrors(t, diags) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } - - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -7113,22 +6838,22 @@ func TestContext2Apply_targetedCountIndex(t *testing.T) { p.PlanResourceChangeFn = testDiffFn p.ApplyResourceChangeFn = testApplyFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, + }) + + plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ + Mode: plans.NormalMode, Targets: []addrs.Targetable{ addrs.RootModuleInstance.ResourceInstance( addrs.ManagedResourceMode, "aws_instance", "foo", addrs.IntKey(1), ), }, }) + assertNoErrors(t, diags) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } - - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -7169,28 +6894,26 @@ func TestContext2Apply_targetedDestroy(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, + }) + + if diags := ctx.Validate(m); diags.HasErrors() { + t.Fatalf("validate errors: %s", diags.Err()) + } + + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, Targets: []addrs.Targetable{ addrs.RootModuleInstance.Resource( addrs.ManagedResourceMode, "aws_instance", "a", ), }, - PlanMode: plans.DestroyMode, }) + assertNoErrors(t, diags) - if diags := ctx.Validate(); diags.HasErrors() { - t.Fatalf("validate errors: %s", diags.Err()) - } - - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } - - state, diags := ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -7248,24 +6971,22 @@ func TestContext2Apply_targetedDestroyCountDeps(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, + }) + + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, Targets: []addrs.Targetable{ addrs.RootModuleInstance.Resource( addrs.ManagedResourceMode, "aws_instance", "foo", ), }, - PlanMode: plans.DestroyMode, }) + assertNoErrors(t, diags) - if _, diags := ctx.Plan(); 
diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } - - state, diags := ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -7316,24 +7037,22 @@ func TestContext2Apply_targetedDestroyModule(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, + }) + + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, Targets: []addrs.Targetable{ addrs.RootModuleInstance.Child("child", addrs.NoKey).Resource( addrs.ManagedResourceMode, "aws_instance", "foo", ), }, - PlanMode: plans.DestroyMode, }) + assertNoErrors(t, diags) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } - - state, diags := ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -7401,11 +7120,13 @@ func TestContext2Apply_targetedDestroyCountIndex(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, + }) + + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, Targets: []addrs.Targetable{ addrs.RootModuleInstance.ResourceInstance( addrs.ManagedResourceMode, "aws_instance", "foo", addrs.IntKey(2), @@ -7414,14 +7135,10 @@ func TestContext2Apply_targetedDestroyCountIndex(t *testing.T) { addrs.ManagedResourceMode, "aws_instance", "bar", addrs.IntKey(1), ), }, - PlanMode: plans.DestroyMode, }) + assertNoErrors(t, diags) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } - - state, diags := ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -7448,20 +7165,20 @@ func TestContext2Apply_targetedModule(t *testing.T) { p.PlanResourceChangeFn = testDiffFn p.ApplyResourceChangeFn = testApplyFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, + }) + + plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ + Mode: plans.NormalMode, Targets: []addrs.Targetable{ addrs.RootModuleInstance.Child("child", addrs.NoKey), }, }) + assertNoErrors(t, diags) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } - - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -7497,24 +7214,26 @@ func TestContext2Apply_targetedModuleDep(t *testing.T) { p.PlanResourceChangeFn = testDiffFn p.ApplyResourceChangeFn = testApplyFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, + }) + + plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ + Mode: plans.NormalMode, Targets: []addrs.Targetable{ addrs.RootModuleInstance.Resource( addrs.ManagedResourceMode, "aws_instance", "foo", ), }, }) - - if p, diags := ctx.Plan(); diags.HasErrors() { + if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } else { - t.Logf("Diff: %s", legacyDiffComparisonString(p.Changes)) + t.Logf("Diff: %s", legacyDiffComparisonString(plan.Changes)) } - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) 
} @@ -7553,21 +7272,20 @@ func TestContext2Apply_targetedModuleUnrelatedOutputs(t *testing.T) { _ = state.EnsureModule(addrs.RootModuleInstance.Child("child2", addrs.NoKey)) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, + }) + + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.NormalMode, Targets: []addrs.Targetable{ addrs.RootModuleInstance.Child("child2", addrs.NoKey), }, - State: state, }) + assertNoErrors(t, diags) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } - - s, diags := ctx.Apply() + s, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -7600,22 +7318,22 @@ func TestContext2Apply_targetedModuleResource(t *testing.T) { p.PlanResourceChangeFn = testDiffFn p.ApplyResourceChangeFn = testApplyFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, + }) + + plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ + Mode: plans.NormalMode, Targets: []addrs.Targetable{ addrs.RootModuleInstance.Child("child", addrs.NoKey).Resource( addrs.ManagedResourceMode, "aws_instance", "foo", ), }, }) + assertNoErrors(t, diags) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } - - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -7653,23 +7371,22 @@ func TestContext2Apply_targetedResourceOrphanModule(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, + }) + + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.NormalMode, Targets: []addrs.Targetable{ addrs.RootModuleInstance.Resource( addrs.ManagedResourceMode, "aws_instance", "foo", ), }, }) + assertNoErrors(t, diags) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } - - if _, diags := ctx.Apply(); diags.HasErrors() { + if _, diags := ctx.Apply(plan, m); diags.HasErrors() { t.Fatalf("apply errors: %s", diags.Err()) } } @@ -7700,17 +7417,15 @@ func TestContext2Apply_unknownAttribute(t *testing.T) { }) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if !diags.HasErrors() { t.Error("should error, because attribute 'unknown' is still unknown after apply") } @@ -7727,13 +7442,12 @@ func TestContext2Apply_unknownAttributeInterpolate(t *testing.T) { p := testProvider("aws") p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags == nil { + if _, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts); diags == nil { t.Fatal("should error") } } @@ -7741,7 +7455,15 @@ func TestContext2Apply_unknownAttributeInterpolate(t *testing.T) { func TestContext2Apply_vars(t *testing.T) { fixture 
:= contextFixtureApplyVars(t) opts := fixture.ContextOpts() - opts.Variables = InputValues{ + ctx := testContext2(t, opts) + m := fixture.Config + + diags := ctx.Validate(m) + if len(diags) != 0 { + t.Fatalf("bad: %s", diags.ErrWithWarnings()) + } + + variables := InputValues{ "foo": &InputValue{ Value: cty.StringVal("us-east-1"), SourceType: ValueFromCaller, @@ -7768,18 +7490,14 @@ func TestContext2Apply_vars(t *testing.T) { SourceType: ValueFromCaller, }, } - ctx := testContext2(t, opts) - - diags := ctx.Validate() - if len(diags) != 0 { - t.Fatalf("bad: %s", diags.ErrWithWarnings()) - } - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("err: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ + Mode: plans.NormalMode, + SetVariables: variables, + }) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("err: %s", diags.Err()) } @@ -7794,7 +7512,15 @@ func TestContext2Apply_vars(t *testing.T) { func TestContext2Apply_varsEnv(t *testing.T) { fixture := contextFixtureApplyVarsEnv(t) opts := fixture.ContextOpts() - opts.Variables = InputValues{ + ctx := testContext2(t, opts) + m := fixture.Config + + diags := ctx.Validate(m) + if len(diags) != 0 { + t.Fatalf("bad: %s", diags.ErrWithWarnings()) + } + + variables := InputValues{ "string": &InputValue{ Value: cty.StringVal("baz"), SourceType: ValueFromEnvVar, @@ -7815,18 +7541,14 @@ func TestContext2Apply_varsEnv(t *testing.T) { SourceType: ValueFromEnvVar, }, } - ctx := testContext2(t, opts) - - diags := ctx.Validate() - if len(diags) != 0 { - t.Fatalf("bad: %s", diags.ErrWithWarnings()) - } - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("err: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ + Mode: plans.NormalMode, + SetVariables: variables, + }) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("err: %s", diags.Err()) } @@ -7872,7 +7594,7 @@ func TestContext2Apply_createBefore_depends(t *testing.T) { Status: states.ObjectReady, AttrsJSON: []byte(`{"id":"baz","instance":"bar"}`), Dependencies: []addrs.ConfigResource{ - addrs.ConfigResource{ + { Resource: addrs.Resource{ Mode: addrs.ManagedResourceMode, Type: "aws_instance", @@ -7889,23 +7611,22 @@ func TestContext2Apply_createBefore_depends(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, - Hooks: []Hook{h}, + Hooks: []Hook{h}, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - if p, diags := ctx.Plan(); diags.HasErrors() { + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + if diags.HasErrors() { logDiagnostics(t, diags) t.Fatal("plan failed") } else { - t.Logf("plan:\n%s", legacyDiffComparisonString(p.Changes)) + t.Logf("plan:\n%s", legacyDiffComparisonString(plan.Changes)) } h.Active = true - state, diags := ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { logDiagnostics(t, diags) t.Fatal("apply failed") @@ -8001,7 +7722,7 @@ func TestContext2Apply_singleDestroy(t *testing.T) { Status: states.ObjectReady, AttrsJSON: []byte(`{"id":"baz","instance":"bar"}`), Dependencies: []addrs.ConfigResource{ - addrs.ConfigResource{ + { Resource: addrs.Resource{ Mode: addrs.ManagedResourceMode, Type: "aws_instance", @@ -8018,20 +7739,17 @@ func TestContext2Apply_singleDestroy(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, - Hooks: 
[]Hook{h}, + Hooks: []Hook{h}, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + assertNoErrors(t, diags) h.Active = true - _, diags := ctx.Apply() + _, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -8060,19 +7778,18 @@ func TestContext2Apply_issue7824(t *testing.T) { // Apply cleanly step 0 ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("template"): testProviderFuncFixed(p), }, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("err: %s", diags.Err()) } // Write / Read plan to simulate running it through a Plan file - ctxOpts, err := contextOptsForPlanViaFile(snap, plan) + ctxOpts, m, plan, err := contextOptsForPlanViaFile(snap, plan) if err != nil { t.Fatalf("failed to round-trip through planfile: %s", err) } @@ -8087,7 +7804,7 @@ func TestContext2Apply_issue7824(t *testing.T) { t.Fatalf("err: %s", diags.Err()) } - _, diags = ctx.Apply() + _, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("err: %s", diags.Err()) } @@ -8115,19 +7832,19 @@ func TestContext2Apply_issue5254(t *testing.T) { }) // Apply cleanly step 0 + m := testModule(t, "issue-5254/step-0") ctx := testContext2(t, &ContextOpts{ - Config: testModule(t, "issue-5254/step-0"), Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("template"): testProviderFuncFixed(p), }, }) - _, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("err: %s", diags.Err()) } - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("err: %s", diags.Err()) } @@ -8136,20 +7853,18 @@ func TestContext2Apply_issue5254(t *testing.T) { // Application success. 
Now make the modification and store a plan ctx = testContext2(t, &ContextOpts{ - Config: m, - State: state, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("template"): testProviderFuncFixed(p), }, }) - plan, diags := ctx.Plan() + plan, diags = ctx.Plan(m, state, DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("err: %s", diags.Err()) } // Write / Read plan to simulate running it through a Plan file - ctxOpts, err := contextOptsForPlanViaFile(snap, plan) + ctxOpts, m, plan, err := contextOptsForPlanViaFile(snap, plan) if err != nil { t.Fatalf("failed to round-trip through planfile: %s", err) } @@ -8163,7 +7878,7 @@ func TestContext2Apply_issue5254(t *testing.T) { t.Fatalf("err: %s", diags.Err()) } - state, diags = ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("err: %s", diags.Err()) } @@ -8208,25 +7923,25 @@ func TestContext2Apply_targetedWithTaintedInState(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, + }) + + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.NormalMode, Targets: []addrs.Targetable{ addrs.RootModuleInstance.Resource( addrs.ManagedResourceMode, "aws_instance", "iambeingadded", ), }, - State: state, }) - - plan, diags := ctx.Plan() if diags.HasErrors() { t.Fatalf("err: %s", diags.Err()) } // Write / Read plan to simulate running it through a Plan file - ctxOpts, err := contextOptsForPlanViaFile(snap, plan) + ctxOpts, m, plan, err := contextOptsForPlanViaFile(snap, plan) if err != nil { t.Fatalf("failed to round-trip through planfile: %s", err) } @@ -8240,7 +7955,7 @@ func TestContext2Apply_targetedWithTaintedInState(t *testing.T) { t.Fatalf("err: %s", diags.Err()) } - s, diags := ctx.Apply() + s, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("err: %s", diags.Err()) } @@ -8275,19 +7990,19 @@ func TestContext2Apply_ignoreChangesCreate(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if p, diags := ctx.Plan(); diags.HasErrors() { + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } else { - t.Logf(legacyDiffComparisonString(p.Changes)) + t.Logf(legacyDiffComparisonString(plan.Changes)) } - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -8353,7 +8068,7 @@ func TestContext2Apply_ignoreChangesWithDep(t *testing.T) { Status: states.ObjectReady, AttrsJSON: []byte(`{"id":"eip-abc123","instance":"i-abc123"}`), Dependencies: []addrs.ConfigResource{ - addrs.ConfigResource{ + { Resource: addrs.Resource{ Mode: addrs.ManagedResourceMode, Type: "aws_instance", @@ -8371,7 +8086,7 @@ func TestContext2Apply_ignoreChangesWithDep(t *testing.T) { Status: states.ObjectReady, AttrsJSON: []byte(`{"id":"eip-bcd234","instance":"i-bcd234"}`), Dependencies: []addrs.ConfigResource{ - addrs.ConfigResource{ + { Resource: addrs.Resource{ Mode: addrs.ManagedResourceMode, Type: "aws_instance", @@ -8385,17 +8100,15 @@ func TestContext2Apply_ignoreChangesWithDep(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state.DeepCopy(), }) - _, diags := ctx.Plan() + plan, diags := 
ctx.Plan(m, state.DeepCopy(), DefaultPlanOpts) assertNoErrors(t, diags) - s, diags := ctx.Apply() + s, diags := ctx.Apply(plan, m) assertNoErrors(t, diags) actual := strings.TrimSpace(s.String()) @@ -8418,20 +8131,20 @@ func TestContext2Apply_ignoreChangesAll(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if p, diags := ctx.Plan(); diags.HasErrors() { + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + if diags.HasErrors() { logDiagnostics(t, diags) t.Fatal("plan failed") } else { - t.Logf(legacyDiffComparisonString(p.Changes)) + t.Logf(legacyDiffComparisonString(plan.Changes)) } - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) assertNoErrors(t, diags) mod := state.RootModule() @@ -8460,21 +8173,18 @@ func TestContext2Apply_destroyNestedModuleWithAttrsReferencingResource(t *testin p.PlanResourceChangeFn = testDiffFn var state *states.State - var diags tfdiags.Diagnostics { ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("null"): testProviderFuncFixed(p), }, }) // First plan and apply a create operation - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan err: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags = ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("apply err: %s", diags.Err()) } @@ -8482,20 +8192,19 @@ func TestContext2Apply_destroyNestedModuleWithAttrsReferencingResource(t *testin { ctx := testContext2(t, &ContextOpts{ - PlanMode: plans.DestroyMode, - Config: m, - State: state, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("null"): testProviderFuncFixed(p), }, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) if diags.HasErrors() { t.Fatalf("destroy plan err: %s", diags.Err()) } - ctxOpts, err := contextOptsForPlanViaFile(snap, plan) + ctxOpts, m, plan, err := contextOptsForPlanViaFile(snap, plan) if err != nil { t.Fatalf("failed to round-trip through planfile: %s", err) } @@ -8509,7 +8218,7 @@ func TestContext2Apply_destroyNestedModuleWithAttrsReferencingResource(t *testin t.Fatalf("err: %s", diags.Err()) } - state, diags = ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("destroy apply err: %s", diags.Err()) } @@ -8541,7 +8250,6 @@ resource "null_instance" "depends" { `}) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("null"): testProviderFuncFixed(p), }, @@ -8568,10 +8276,10 @@ resource "null_instance" "depends" { } } - _, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) assertNoErrors(t, diags) root := state.Module(addrs.RootModuleInstance) @@ -8595,7 +8303,7 @@ resource "null_instance" "depends" { } // run another plan to make sure the data source doesn't show as a change - plan, diags := ctx.Plan() + plan, diags = ctx.Plan(m, state, DefaultPlanOpts) assertNoErrors(t, diags) for _, c := range plan.Changes.Resources { @@ -8629,14 +8337,12 @@ resource "null_instance" "depends" { } ctx = testContext2(t, &ContextOpts{ - Config: m, - State: state, Providers: map[addrs.Provider]providers.Factory{ 
addrs.NewDefaultProvider("null"): testProviderFuncFixed(p), }, }) - plan, diags = ctx.Plan() + plan, diags = ctx.Plan(m, state, DefaultPlanOpts) assertNoErrors(t, diags) expectedChanges := map[string]plans.Action{ @@ -8658,18 +8364,16 @@ func TestContext2Apply_terraformWorkspace(t *testing.T) { p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Meta: &ContextMeta{Env: "foo"}, - Config: m, + Meta: &ContextMeta{Env: "foo"}, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -8687,17 +8391,15 @@ func TestContext2Apply_multiRef(t *testing.T) { p := testProvider("aws") p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("err: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("err: %s", diags.Err()) } @@ -8714,20 +8416,20 @@ func TestContext2Apply_targetedModuleRecursive(t *testing.T) { p.PlanResourceChangeFn = testDiffFn p.ApplyResourceChangeFn = testApplyFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, + }) + + plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ + Mode: plans.NormalMode, Targets: []addrs.Targetable{ addrs.RootModuleInstance.Child("child", addrs.NoKey), }, }) + assertNoErrors(t, diags) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("err: %s", diags.Err()) - } - - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("err: %s", diags.Err()) } @@ -8756,15 +8458,13 @@ module.child.subchild: func TestContext2Apply_localVal(t *testing.T) { m := testModule(t, "apply-local-val") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{}, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("error during plan: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("error during apply: %s", diags.Err()) } @@ -8800,19 +8500,17 @@ func TestContext2Apply_destroyWithLocals(t *testing.T) { root.SetOutputValue("name", cty.StringVal("test-bar"), false) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, - PlanMode: plans.DestroyMode, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("err: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) + assertNoErrors(t, diags) - s, diags := ctx.Apply() + s, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("error during apply: %s", diags.Err()) } @@ -8841,35 +8539,31 @@ func TestContext2Apply_providerWithLocals(t 
*testing.T) { p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("err: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("err: %s", diags.Err()) } ctx = testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, - PlanMode: plans.DestroyMode, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("err: %s", diags.Err()) - } + plan, diags = ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) + assertNoErrors(t, diags) - state, diags = ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("err: %s", diags.Err()) } @@ -8900,16 +8594,13 @@ func TestContext2Apply_destroyWithProviders(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, - PlanMode: plans.DestroyMode, }) // test that we can't destroy if the provider is missing - if _, diags := ctx.Plan(); diags == nil { + if _, diags := ctx.Plan(m, state, &PlanOpts{Mode: plans.DestroyMode}); diags == nil { t.Fatal("expected plan error, provider.aws.baz doesn't exist") } @@ -8917,18 +8608,17 @@ func TestContext2Apply_destroyWithProviders(t *testing.T) { state.Modules["module.mod.module.removed"].Resources["aws_instance.child"].ProviderConfig = mustProviderConfig(`provider["registry.terraform.io/hashicorp/aws"].bar`) ctx = testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, - PlanMode: plans.DestroyMode, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatal(diags.Err()) - } - state, diags := ctx.Apply() + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) + assertNoErrors(t, diags) + + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("error during apply: %s", diags.Err()) } @@ -9009,14 +8699,12 @@ func TestContext2Apply_providersFromState(t *testing.T) { } { t.Run(tc.name, func(t *testing.T) { ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: tc.state, }) - _, diags := ctx.Plan() + plan, diags := ctx.Plan(m, tc.state, DefaultPlanOpts) if tc.err { if diags == nil { t.Fatal("expected error") @@ -9028,7 +8716,7 @@ func TestContext2Apply_providersFromState(t *testing.T) { t.Fatal(diags.Err()) } - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -9061,12 +8749,10 @@ func TestContext2Apply_plannedInterpolatedCount(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: Providers, - State: state, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("plan failed: %s", diags.Err()) } @@ -9074,7 +8760,7 @@ func TestContext2Apply_plannedInterpolatedCount(t *testing.T) { // We'll marshal and unmarshal the plan here, to ensure that we have // a clean new 
context as would be created if we separately ran // terraform plan -out=tfplan && terraform apply tfplan - ctxOpts, err := contextOptsForPlanViaFile(snap, plan) + ctxOpts, m, plan, err := contextOptsForPlanViaFile(snap, plan) if err != nil { t.Fatalf("failed to round-trip through planfile: %s", err) } @@ -9086,7 +8772,7 @@ func TestContext2Apply_plannedInterpolatedCount(t *testing.T) { } // Applying the plan should now succeed - _, diags = ctx.Apply() + _, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("apply failed: %s", diags.Err()) } @@ -9122,13 +8808,12 @@ func TestContext2Apply_plannedDestroyInterpolatedCount(t *testing.T) { root.SetOutputValue("out", cty.ListVal([]cty.Value{cty.StringVal("foo"), cty.StringVal("foo")}), false) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: providers, - State: state, - PlanMode: plans.DestroyMode, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) if diags.HasErrors() { t.Fatalf("plan failed: %s", diags.Err()) } @@ -9136,7 +8821,7 @@ func TestContext2Apply_plannedDestroyInterpolatedCount(t *testing.T) { // We'll marshal and unmarshal the plan here, to ensure that we have // a clean new context as would be created if we separately ran // terraform plan -out=tfplan && terraform apply tfplan - ctxOpts, err := contextOptsForPlanViaFile(snap, plan) + ctxOpts, m, plan, err := contextOptsForPlanViaFile(snap, plan) if err != nil { t.Fatalf("failed to round-trip through planfile: %s", err) } @@ -9148,7 +8833,7 @@ func TestContext2Apply_plannedDestroyInterpolatedCount(t *testing.T) { } // Applying the plan should now succeed - state, diags = ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("apply failed: %s", diags.Err()) } @@ -9187,22 +8872,22 @@ func TestContext2Apply_scaleInMultivarRef(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: Providers, - State: state, - Variables: InputValues{ + }) + + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.NormalMode, + SetVariables: InputValues{ "instance_count": { Value: cty.NumberIntVal(0), SourceType: ValueFromCaller, }, }, }) - - _, diags := ctx.Plan() assertNoErrors(t, diags) // Applying the plan should now succeed - _, diags = ctx.Apply() + _, diags = ctx.Apply(plan, m) assertNoErrors(t, diags) } @@ -9235,17 +8920,15 @@ func TestContext2Apply_inconsistentWithPlan(t *testing.T) { } } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - _, diags := ctx.Apply() + _, diags = ctx.Apply(plan, m) if !diags.HasErrors() { t.Fatalf("apply succeeded; want error") } @@ -9284,34 +8967,33 @@ func TestContext2Apply_issue19908(t *testing.T) { } } ctx := testContext2(t, &ContextOpts{ - Config: m, - State: states.BuildState(func(s *states.SyncState) { - s.SetResourceInstanceCurrent( - addrs.Resource{ - Mode: addrs.ManagedResourceMode, - Type: "test", - Name: "foo", - }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), - &states.ResourceInstanceObjectSrc{ - AttrsJSON: []byte(`{"baz":"old"}`), - Status: states.ObjectReady, - }, - addrs.AbsProviderConfig{ - Provider: addrs.NewDefaultProvider("test"), - Module: addrs.RootModule, - }, - ) - }), Providers: 
map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + state := states.BuildState(func(s *states.SyncState) { + s.SetResourceInstanceCurrent( + addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "test", + Name: "foo", + }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), + &states.ResourceInstanceObjectSrc{ + AttrsJSON: []byte(`{"baz":"old"}`), + Status: states.ObjectReady, + }, + addrs.AbsProviderConfig{ + Provider: addrs.NewDefaultProvider("test"), + Module: addrs.RootModule, + }, + ) + }) + + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags = ctx.Apply(plan, m) if !diags.HasErrors() { t.Fatalf("apply succeeded; want error") } @@ -9356,18 +9038,17 @@ func TestContext2Apply_invalidIndexRef(t *testing.T) { m := testModule(t, "apply-invalid-index") c := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - diags := c.Validate() + diags := c.Validate(m) if diags.HasErrors() { t.Fatalf("unexpected validation failure: %s", diags.Err()) } wantErr := `The given key does not identify an element in this collection value` - _, diags = c.Plan() + _, diags = c.Plan(m, states.NewState(), DefaultPlanOpts) if !diags.HasErrors() { t.Fatalf("plan succeeded; want error") @@ -9471,6 +9152,12 @@ func TestContext2Apply_moduleReplaceCycle(t *testing.T) { aAction = plans.CreateThenDelete } + ctx := testContext2(t, &ContextOpts{ + Providers: map[addrs.Provider]providers.Factory{ + addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), + }, + }) + changes := &plans.Changes{ Resources: []*plans.ResourceInstanceChangeSrc{ { @@ -9508,17 +9195,15 @@ func TestContext2Apply_moduleReplaceCycle(t *testing.T) { }, } - ctx := testContext2(t, &ContextOpts{ - Config: m, - Providers: map[addrs.Provider]providers.Factory{ - addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), - }, - State: state, - Changes: changes, - }) + plan := &plans.Plan{ + UIMode: plans.NormalMode, + Changes: changes, + PriorState: state.DeepCopy(), + PrevRunState: state.DeepCopy(), + } t.Run(mode, func(t *testing.T) { - _, diags := ctx.Apply() + _, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatal(diags.Err()) } @@ -9569,7 +9254,7 @@ func TestContext2Apply_destroyDataCycle(t *testing.T) { Status: states.ObjectReady, AttrsJSON: []byte(`{"id":"a"}`), Dependencies: []addrs.ConfigResource{ - addrs.ConfigResource{ + { Resource: addrs.Resource{ Mode: addrs.DataResourceMode, Type: "null_data_source", @@ -9606,13 +9291,12 @@ func TestContext2Apply_destroyDataCycle(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: Providers, - State: state, - PlanMode: plans.DestroyMode, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) diags.HasErrors() if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) @@ -9621,7 +9305,7 @@ func TestContext2Apply_destroyDataCycle(t *testing.T) { // We'll marshal and unmarshal the plan here, to ensure that we have // a clean new context as would be created if we separately ran // terraform plan -out=tfplan && terraform apply tfplan - ctxOpts, err := contextOptsForPlanViaFile(snap, plan) + ctxOpts, m, plan, err := contextOptsForPlanViaFile(snap, plan) if err != nil { t.Fatal(err) } @@ -9644,7 
+9328,7 @@ func TestContext2Apply_destroyDataCycle(t *testing.T) { return resp } - _, diags = ctx.Apply() + _, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -9743,19 +9427,17 @@ func TestContext2Apply_taintedDestroyFailure(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: Providers, - State: state, Hooks: []Hook{&testHook{}}, }) - _, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) diags.HasErrors() if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } - state, diags = ctx.Apply() + state, diags = ctx.Apply(plan, m) if !diags.HasErrors() { t.Fatal("expected error") } @@ -9857,19 +9539,18 @@ func TestContext2Apply_plannedConnectionRefs(t *testing.T) { hook := &testHook{} ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: Providers, Provisioners: provisioners, Hooks: []Hook{hook}, }) - _, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) diags.HasErrors() if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } - _, diags = ctx.Apply() + _, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -9892,7 +9573,7 @@ func TestContext2Apply_cbdCycle(t *testing.T) { Status: states.ObjectReady, AttrsJSON: []byte(`{"id":"a","require_new":"old","foo":"b"}`), Dependencies: []addrs.ConfigResource{ - addrs.ConfigResource{ + { Resource: addrs.Resource{ Mode: addrs.ManagedResourceMode, Type: "test_instance", @@ -9900,7 +9581,7 @@ func TestContext2Apply_cbdCycle(t *testing.T) { }, Module: addrs.RootModule, }, - addrs.ConfigResource{ + { Resource: addrs.Resource{ Mode: addrs.ManagedResourceMode, Type: "test_instance", @@ -9925,7 +9606,7 @@ func TestContext2Apply_cbdCycle(t *testing.T) { Status: states.ObjectReady, AttrsJSON: []byte(`{"id":"b","require_new":"old","foo":"c"}`), Dependencies: []addrs.ConfigResource{ - addrs.ConfigResource{ + { Resource: addrs.Resource{ Mode: addrs.ManagedResourceMode, Type: "test_instance", @@ -9962,13 +9643,11 @@ func TestContext2Apply_cbdCycle(t *testing.T) { hook := &testHook{} ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: Providers, - State: state, Hooks: []Hook{hook}, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) diags.HasErrors() if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) @@ -9977,7 +9656,7 @@ func TestContext2Apply_cbdCycle(t *testing.T) { // We'll marshal and unmarshal the plan here, to ensure that we have // a clean new context as would be created if we separately ran // terraform plan -out=tfplan && terraform apply tfplan - ctxOpts, err := contextOptsForPlanViaFile(snap, plan) + ctxOpts, m, plan, err := contextOptsForPlanViaFile(snap, plan) if err != nil { t.Fatal(err) } @@ -9987,7 +9666,7 @@ func TestContext2Apply_cbdCycle(t *testing.T) { t.Fatalf("failed to create context for plan: %s", diags.Err()) } - _, diags = ctx.Apply() + _, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -10026,16 +9705,15 @@ func TestContext2Apply_ProviderMeta_apply_set(t *testing.T) { } p.GetProviderSchemaResponse = getProviderSchemaResponseFromProviderSchema(schema) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - _, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) assertNoErrors(t, diags) - _, diags = ctx.Apply() + _, diags = ctx.Apply(plan, m) 
assertNoErrors(t, diags) if !p.ApplyResourceChangeCalled { @@ -10107,16 +9785,15 @@ func TestContext2Apply_ProviderMeta_apply_unset(t *testing.T) { } p.GetProviderSchemaResponse = getProviderSchemaResponseFromProviderSchema(schema) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - _, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) assertNoErrors(t, diags) - _, diags = ctx.Apply() + _, diags = ctx.Apply(plan, m) assertNoErrors(t, diags) if !p.ApplyResourceChangeCalled { @@ -10157,13 +9834,12 @@ func TestContext2Apply_ProviderMeta_plan_set(t *testing.T) { } p.GetProviderSchemaResponse = getProviderSchemaResponseFromProviderSchema(schema) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - _, diags := ctx.Plan() + _, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) assertNoErrors(t, diags) if !p.PlanResourceChangeCalled { @@ -10225,13 +9901,12 @@ func TestContext2Apply_ProviderMeta_plan_unset(t *testing.T) { } p.GetProviderSchemaResponse = getProviderSchemaResponseFromProviderSchema(schema) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - _, diags := ctx.Plan() + _, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) assertNoErrors(t, diags) if !p.PlanResourceChangeCalled { @@ -10256,13 +9931,12 @@ func TestContext2Apply_ProviderMeta_plan_setNoSchema(t *testing.T) { p := testProvider("test") p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - _, diags := ctx.Plan() + _, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if !diags.HasErrors() { t.Fatalf("plan supposed to error, has no errors") } @@ -10305,13 +9979,12 @@ func TestContext2Apply_ProviderMeta_plan_setInvalid(t *testing.T) { } p.GetProviderSchemaResponse = getProviderSchemaResponseFromProviderSchema(schema) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - _, diags := ctx.Plan() + _, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if !diags.HasErrors() { t.Fatalf("plan supposed to error, has no errors") } @@ -10368,19 +10041,18 @@ func TestContext2Apply_ProviderMeta_refresh_set(t *testing.T) { } p.GetProviderSchemaResponse = getProviderSchemaResponseFromProviderSchema(schema) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - _, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) assertNoErrors(t, diags) - _, diags = ctx.Apply() + state, diags := ctx.Apply(plan, m) assertNoErrors(t, diags) - _, diags = ctx.Refresh() + _, diags = ctx.Refresh(m, state, DefaultPlanOpts) assertNoErrors(t, diags) if !p.ReadResourceCalled { @@ -10438,30 +10110,27 @@ func TestContext2Apply_ProviderMeta_refresh_setNoSchema(t *testing.T) { } p.GetProviderSchemaResponse = getProviderSchemaResponseFromProviderSchema(schema) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ 
addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - _, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) assertNoErrors(t, diags) - _, diags = ctx.Apply() + state, diags := ctx.Apply(plan, m) assertNoErrors(t, diags) // drop the schema before refresh, to test that it errors schema.ProviderMeta = nil p.GetProviderSchemaResponse = getProviderSchemaResponseFromProviderSchema(schema) ctx = testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, - State: ctx.State(), }) - _, diags = ctx.Refresh() + _, diags = ctx.Refresh(m, state, DefaultPlanOpts) if !diags.HasErrors() { t.Fatalf("refresh supposed to error, has no errors") } @@ -10506,16 +10175,15 @@ func TestContext2Apply_ProviderMeta_refresh_setInvalid(t *testing.T) { } p.GetProviderSchemaResponse = getProviderSchemaResponseFromProviderSchema(schema) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - _, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) assertNoErrors(t, diags) - _, diags = ctx.Apply() + state, diags := ctx.Apply(plan, m) assertNoErrors(t, diags) // change the schema before refresh, to test that it errors @@ -10529,14 +10197,12 @@ func TestContext2Apply_ProviderMeta_refresh_setInvalid(t *testing.T) { } p.GetProviderSchemaResponse = getProviderSchemaResponseFromProviderSchema(schema) ctx = testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, - State: ctx.State(), }) - _, diags = ctx.Refresh() + _, diags = ctx.Refresh(m, state, DefaultPlanOpts) if !diags.HasErrors() { t.Fatalf("refresh supposed to error, has no errors") } @@ -10583,7 +10249,6 @@ func TestContext2Apply_ProviderMeta_refreshdata_set(t *testing.T) { } p.GetProviderSchemaResponse = getProviderSchemaResponseFromProviderSchema(schema) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, @@ -10616,13 +10281,13 @@ func TestContext2Apply_ProviderMeta_refreshdata_set(t *testing.T) { } } - _, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) assertNoErrors(t, diags) - _, diags = ctx.Apply() + state, diags := ctx.Apply(plan, m) assertNoErrors(t, diags) - _, diags = ctx.Refresh() + _, diags = ctx.Refresh(m, state, DefaultPlanOpts) assertNoErrors(t, diags) if !p.ReadDataSourceCalled { @@ -10678,7 +10343,6 @@ func TestContext2Apply_ProviderMeta_refreshdata_unset(t *testing.T) { } p.GetProviderSchemaResponse = getProviderSchemaResponseFromProviderSchema(schema) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, @@ -10708,10 +10372,10 @@ func TestContext2Apply_ProviderMeta_refreshdata_unset(t *testing.T) { } } - _, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) assertNoErrors(t, diags) - _, diags = ctx.Apply() + _, diags = ctx.Apply(plan, m) assertNoErrors(t, diags) if !p.ReadDataSourceCalled { @@ -10736,7 +10400,6 @@ func TestContext2Apply_ProviderMeta_refreshdata_setNoSchema(t *testing.T) { p := testProvider("test") p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - 
Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, @@ -10748,7 +10411,7 @@ func TestContext2Apply_ProviderMeta_refreshdata_setNoSchema(t *testing.T) { }), } - _, diags := ctx.Refresh() + _, diags := ctx.Refresh(m, states.NewState(), DefaultPlanOpts) if !diags.HasErrors() { t.Fatalf("refresh supposed to error, has no errors") } @@ -10791,7 +10454,6 @@ func TestContext2Apply_ProviderMeta_refreshdata_setInvalid(t *testing.T) { } p.GetProviderSchemaResponse = getProviderSchemaResponseFromProviderSchema(schema) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, @@ -10803,7 +10465,7 @@ func TestContext2Apply_ProviderMeta_refreshdata_setInvalid(t *testing.T) { }), } - _, diags := ctx.Refresh() + _, diags := ctx.Refresh(m, states.NewState(), DefaultPlanOpts) if !diags.HasErrors() { t.Fatalf("refresh supposed to error, has no errors") } @@ -10868,18 +10530,17 @@ output "out" { p.PlanResourceChangeFn = testDiffFn p.ApplyResourceChangeFn = testApplyFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - _, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatal(diags.ErrWithWarnings()) } - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatal(diags.ErrWithWarnings()) } @@ -10928,18 +10589,17 @@ resource "aws_instance" "cbd" { p := testProvider("aws") p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - _, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatal(diags.ErrWithWarnings()) } - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatal(diags.ErrWithWarnings()) } @@ -11001,23 +10661,22 @@ func TestContext2Apply_moduleDependsOn(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - _, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatal(diags.ErrWithWarnings()) } - _, diags = ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatal(diags.ErrWithWarnings()) } - plan, diags := ctx.Plan() + plan, diags = ctx.Plan(m, state, DefaultPlanOpts) if diags.HasErrors() { t.Fatal(diags.ErrWithWarnings()) } @@ -11059,36 +10718,35 @@ output "c" { p := testProvider("test") p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - _, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatal(diags.ErrWithWarnings()) } - _, diags = ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatal(diags.ErrWithWarnings()) } ctx = testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, - PlanMode: plans.DestroyMode, }) - _, diags = ctx.Plan() + plan, diags = 
ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) if diags.HasErrors() { t.Fatal(diags.ErrWithWarnings()) } - state, diags := ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatal(diags.ErrWithWarnings()) } @@ -11127,37 +10785,35 @@ output "myoutput" { p := testProvider("test") p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - _, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatal(diags.ErrWithWarnings()) } - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatal(diags.ErrWithWarnings()) } ctx = testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, - PlanMode: plans.DestroyMode, - State: state, }) - _, diags = ctx.Plan() + plan, diags = ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) if diags.HasErrors() { t.Fatal(diags.ErrWithWarnings()) } - state, diags = ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatal(diags.ErrWithWarnings()) } @@ -11274,25 +10930,25 @@ locals { // reduce the count to 1 ctx := testContext2(t, &ContextOpts{ - Variables: InputValues{ + Providers: map[addrs.Provider]providers.Factory{ + addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), + }, + }) + + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.NormalMode, + SetVariables: InputValues{ "ct": &InputValue{ Value: cty.NumberIntVal(1), SourceType: ValueFromCaller, }, }, - Config: m, - Providers: map[addrs.Provider]providers.Factory{ - addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), - }, - State: state, }) - - _, diags := ctx.Plan() if diags.HasErrors() { t.Fatal(diags.ErrWithWarnings()) } - state, diags = ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { log.Fatal(diags.ErrWithWarnings()) } @@ -11305,25 +10961,25 @@ locals { // reduce the count to 0 ctx = testContext2(t, &ContextOpts{ - Variables: InputValues{ + Providers: map[addrs.Provider]providers.Factory{ + addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), + }, + }) + + plan, diags = ctx.Plan(m, state, &PlanOpts{ + Mode: plans.NormalMode, + SetVariables: InputValues{ "ct": &InputValue{ Value: cty.NumberIntVal(0), SourceType: ValueFromCaller, }, }, - Config: m, - Providers: map[addrs.Provider]providers.Factory{ - addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), - }, - State: state, }) - - _, diags = ctx.Plan() if diags.HasErrors() { t.Fatal(diags.ErrWithWarnings()) } - state, diags = ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatal(diags.ErrWithWarnings()) } @@ -11449,38 +11105,33 @@ output "output" { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(testP), addrs.NewDefaultProvider("null"): testProviderFuncFixed(nullP), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("apply errors: %s", diags.Err()) } ctx = testContext2(t, &ContextOpts{ - Config: m, Providers: 
map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(testP), addrs.NewDefaultProvider("null"): testProviderFuncFixed(nullP), }, - - State: state, - PlanMode: plans.DestroyMode, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("destroy plan errors: %s", diags.Err()) - } + plan, diags = ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) + assertNoErrors(t, diags) - if _, diags := ctx.Apply(); diags.HasErrors() { + if _, diags := ctx.Apply(plan, m); diags.HasErrors() { t.Fatalf("destroy apply errors: %s", diags.Err()) } } @@ -11548,37 +11199,32 @@ output "outputs" { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("apply errors: %s", diags.Err()) } destroy := func() { ctx = testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, - - State: state, - PlanMode: plans.DestroyMode, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("destroy plan errors: %s", diags.Err()) - } + plan, diags = ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) + assertNoErrors(t, diags) - state, diags = ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("destroy apply errors: %s", diags.Err()) } @@ -11621,49 +11267,48 @@ resource "test_resource" "a" { proposed["id"] = cty.UnknownVal(cty.String) return providers.PlanResourceChangeResponse{ PlannedState: cty.ObjectVal(proposed), - RequiresReplace: []cty.Path{cty.Path{cty.GetAttrStep{Name: "value"}}}, + RequiresReplace: []cty.Path{{cty.GetAttrStep{Name: "value"}}}, } } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, - Variables: InputValues{ + }) + + plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ + Mode: plans.NormalMode, + SetVariables: InputValues{ "v": &InputValue{ Value: cty.StringVal("A"), }, }, }) + assertNoErrors(t, diags) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } - - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("apply errors: %s", diags.Err()) } ctx = testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, - Variables: InputValues{ + }) + + plan, diags = ctx.Plan(m, state, &PlanOpts{ + Mode: plans.NormalMode, + SetVariables: InputValues{ "v": &InputValue{ Value: cty.StringVal("B"), }, }, - State: state, }) + assertNoErrors(t, diags) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } - - _, diags = ctx.Apply() + _, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("apply errors: %s", diags.Err()) } @@ -11690,44 +11335,43 @@ resource "test_instance" "b" { p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, - Variables: InputValues{ + 
}) + + plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ + Mode: plans.NormalMode, + SetVariables: InputValues{ "v": &InputValue{ Value: cty.StringVal("A"), }, }, }) + assertNoErrors(t, diags) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } - - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("apply errors: %s", diags.Err()) } ctx = testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, - Variables: InputValues{ + }) + + plan, diags = ctx.Plan(m, state, &PlanOpts{ + Mode: plans.NormalMode, + SetVariables: InputValues{ "v": &InputValue{ Value: cty.StringVal("B"), }, }, - State: state, }) + assertNoErrors(t, diags) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } - - _, diags = ctx.Apply() + _, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("apply errors: %s", diags.Err()) } @@ -11752,44 +11396,43 @@ resource "test_resource" "c" { p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, - Variables: InputValues{ + }) + + plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ + Mode: plans.NormalMode, + SetVariables: InputValues{ "ct": &InputValue{ Value: cty.NumberIntVal(1), }, }, }) + assertNoErrors(t, diags) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } - - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("apply errors: %s", diags.Err()) } ctx = testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, - Variables: InputValues{ + }) + + plan, diags = ctx.Plan(m, state, &PlanOpts{ + Mode: plans.NormalMode, + SetVariables: InputValues{ "ct": &InputValue{ Value: cty.NumberIntVal(0), }, }, - State: state, }) + assertNoErrors(t, diags) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } - - _, diags = ctx.Apply() + _, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("apply errors: %s", diags.Err()) } @@ -11853,58 +11496,52 @@ resource "test_resource" "foo" { p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("apply errors: %s", diags.Err()) } // Run a second apply with no changes ctx = testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, - State: state, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags = ctx.Plan(m, state, DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags = ctx.Apply() + state, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("apply errors: %s", diags.Err()) } // Now change the variable value for sensitive_var ctx = 
testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, - Variables: InputValues{ + }) + + plan, diags = ctx.Plan(m, state, &PlanOpts{ + Mode: plans.NormalMode, + SetVariables: InputValues{ "sensitive_var": &InputValue{ Value: cty.StringVal("bar"), }, }, - State: state, }) + assertNoErrors(t, diags) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } - - _, diags = ctx.Apply() + _, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("apply errors: %s", diags.Err()) } @@ -11931,13 +11568,12 @@ resource "test_resource" "foo" { p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("plan errors: %s", diags.Err()) } @@ -11959,7 +11595,7 @@ resource "test_resource" "foo" { fooChangeSrc := plan.Changes.ResourceInstance(addr) verifySensitiveValue(fooChangeSrc.AfterValMarks) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("apply errors: %s", diags.Err()) } @@ -11995,13 +11631,12 @@ resource "test_resource" "baz" { p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("plan errors: %s", diags.Err()) } @@ -12030,7 +11665,7 @@ resource "test_resource" "baz" { bazChangeSrc := plan.Changes.ResourceInstance(bazAddr) verifySensitiveValue(bazChangeSrc.AfterValMarks) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("apply errors: %s", diags.Err()) } @@ -12059,36 +11694,36 @@ resource "test_resource" "foo" { p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, - State: states.BuildState(func(s *states.SyncState) { - s.SetResourceInstanceCurrent( - addrs.Resource{ - Mode: addrs.ManagedResourceMode, - Type: "test_resource", - Name: "foo", - }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), - &states.ResourceInstanceObjectSrc{ - Status: states.ObjectReady, - AttrsJSON: []byte(`{"id":"foo", "value":"hello"}`), - // No AttrSensitivePaths present - }, - addrs.AbsProviderConfig{ - Provider: addrs.NewDefaultProvider("test"), - Module: addrs.RootModule, - }, - ) - }), }) - _, diags := ctx.Plan() + state := states.BuildState(func(s *states.SyncState) { + s.SetResourceInstanceCurrent( + addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "test_resource", + Name: "foo", + }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"foo", "value":"hello"}`), + // No AttrSensitivePaths present + }, + addrs.AbsProviderConfig{ + Provider: addrs.NewDefaultProvider("test"), + Module: addrs.RootModule, + }, + ) + }) + + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) assertNoErrors(t, diags) addr := mustResourceInstanceAddr("test_resource.foo") - state, diags := ctx.Apply() + state, diags = 
ctx.Apply(plan, m) assertNoErrors(t, diags) fooState := state.ResourceInstance(addr) @@ -12119,22 +11754,32 @@ resource "test_resource" "foo" { }) ctx2 := testContext2(t, &ContextOpts{ - Config: m2, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, - State: state, }) - _, diags = ctx2.Plan() + // NOTE: Prior to our refactoring to make the state an explicit argument + // of Plan, as opposed to hidden state inside Context, this test was + // calling ctx.Apply instead of ctx2.Apply and thus using the previous + // plan instead of this new plan. "Fixing" it to use the new plan seems + // to break the test, so we've preserved that oddity here by saving the + // old plan as oldPlan and essentially discarding the new plan entirely, + // but this seems rather suspicious and we should ideally figure out what + // this test was originally intending to do and make it do that. + oldPlan := plan + _, diags = ctx2.Plan(m2, state, DefaultPlanOpts) assertNoErrors(t, diags) - - stateWithoutSensitive, diags := ctx.Apply() + stateWithoutSensitive, diags := ctx.Apply(oldPlan, m) assertNoErrors(t, diags) fooState2 := stateWithoutSensitive.ResourceInstance(addr) if len(fooState2.Current.AttrSensitivePaths) > 0 { - t.Fatalf("wrong number of sensitive paths, expected 0, got, %v", len(fooState2.Current.AttrSensitivePaths)) + t.Fatalf( + "wrong number of sensitive paths, expected 0, got, %v\n%s", + len(fooState2.Current.AttrSensitivePaths), + spew.Sdump(fooState2.Current.AttrSensitivePaths), + ) } } @@ -12157,8 +11802,11 @@ output "out" { } `}) - ctx := testContext2(t, &ContextOpts{ - Variables: InputValues{ + ctx := testContext2(t, &ContextOpts{}) + + plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ + Mode: plans.NormalMode, + SetVariables: InputValues{ "in": &InputValue{ Value: cty.MapVal(map[string]cty.Value{ "required": cty.StringVal("boop"), @@ -12166,15 +11814,12 @@ output "out" { SourceType: ValueFromCaller, }, }, - Config: m, }) - - _, diags := ctx.Plan() if diags.HasErrors() { t.Fatal(diags.ErrWithWarnings()) } - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatal(diags.ErrWithWarnings()) } @@ -12213,31 +11858,30 @@ func TestContext2Apply_provisionerSensitive(t *testing.T) { h := new(MockHook) ctx := testContext2(t, &ContextOpts{ - Config: m, - Hooks: []Hook{h}, + Hooks: []Hook{h}, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, Provisioners: map[string]provisioners.Factory{ "shell": testProvisionerFuncFixed(pr), }, - Variables: InputValues{ + }) + + plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ + Mode: plans.NormalMode, + SetVariables: InputValues{ "password": &InputValue{ Value: cty.StringVal("secret"), SourceType: ValueFromCaller, }, }, }) - - if _, diags := ctx.Plan(); diags.HasErrors() { - logDiagnostics(t, diags) - t.Fatal("plan failed") - } + assertNoErrors(t, diags) // "restart" provisioner pr.CloseCalled = false - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { logDiagnostics(t, diags) t.Fatal("apply failed") @@ -12284,17 +11928,15 @@ resource "test_resource" "foo" { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), 
DefaultPlanOpts) + assertNoErrors(t, diags) - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } @@ -12332,17 +11974,16 @@ resource "test_instance" "a" { }) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - _, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatal(diags.Err()) } - _, diags = ctx.Apply() + _, diags = ctx.Apply(plan, m) if diags.HasErrors() { t.Fatal(diags.Err()) } @@ -12374,19 +12015,19 @@ func TestContext2Apply_dataSensitive(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("null"): testProviderFuncFixed(p), }, }) - if p, diags := ctx.Plan(); diags.HasErrors() { + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) } else { - t.Logf(legacyDiffComparisonString(p.Changes)) + t.Logf(legacyDiffComparisonString(plan.Changes)) } - state, diags := ctx.Apply() + state, diags := ctx.Apply(plan, m) assertNoErrors(t, diags) addr := mustResourceInstanceAddr("data.null_data_source.testing") @@ -12429,19 +12070,17 @@ func TestContext2Apply_errorRestorePrivateData(t *testing.T) { }) ctx := testContext2(t, &ContextOpts{ - Config: m, - State: state, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - _, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) if diags.HasErrors() { t.Fatal(diags.Err()) } - state, _ = ctx.Apply() + state, _ = ctx.Apply(plan, m) if string(state.ResourceInstance(addr).Current.Private) != "private" { t.Fatal("missing private data in state") } @@ -12476,19 +12115,17 @@ func TestContext2Apply_errorRestoreStatus(t *testing.T) { }) ctx := testContext2(t, &ContextOpts{ - Config: m, - State: state, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - _, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) if diags.HasErrors() { t.Fatal(diags.Err()) } - state, diags = ctx.Apply() + state, diags = ctx.Apply(plan, m) errString := diags.ErrWithWarnings().Error() if !strings.Contains(errString, "oops") || !strings.Contains(errString, "warned") { @@ -12540,18 +12177,17 @@ resource "test_object" "a" { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - _, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatal(diags.Err()) } - _, diags = ctx.Apply() + _, diags = ctx.Apply(plan, m) errString := diags.ErrWithWarnings().Error() if !strings.Contains(errString, "oops") || !strings.Contains(errString, "warned") { t.Fatalf("error missing expected info: %q", errString) @@ -12577,18 +12213,17 @@ resource "test_object" "a" { p.ApplyResourceChangeResponse = &providers.ApplyResourceChangeResponse{} ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - _, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatal(diags.Err()) } - _, diags = ctx.Apply() + _, diags = 
ctx.Apply(plan, m) if !diags.HasErrors() { t.Fatal("expected and error") } diff --git a/internal/terraform/context_eval.go b/internal/terraform/context_eval.go new file mode 100644 index 000000000000..8be9b9367846 --- /dev/null +++ b/internal/terraform/context_eval.go @@ -0,0 +1,104 @@ +package terraform + +import ( + "log" + + "github.com/hashicorp/terraform/internal/addrs" + "github.com/hashicorp/terraform/internal/configs" + "github.com/hashicorp/terraform/internal/lang" + "github.com/hashicorp/terraform/internal/states" + "github.com/hashicorp/terraform/internal/tfdiags" +) + +type EvalOpts struct { + SetVariables InputValues +} + +// Eval produces a scope in which expressions can be evaluated for +// the given module path. +// +// This method must first evaluate any ephemeral values (input variables, local +// values, and output values) in the configuration. These ephemeral values are +// not included in the persisted state, so they must be re-computed using other +// values in the state before they can be properly evaluated. The updated +// values are retained in the main state associated with the receiving context. +// +// This function takes no action against remote APIs but it does need access +// to all provider and provisioner instances in order to obtain their schemas +// for type checking. +// +// The result is an evaluation scope that can be used to resolve references +// against the root module. If the returned diagnostics contains errors then +// the returned scope may be nil. If it is not nil then it may still be used +// to attempt expression evaluation or other analysis, but some expressions +// may not behave as expected. +func (c *Context) Eval(config *configs.Config, state *states.State, moduleAddr addrs.ModuleInstance, opts *EvalOpts) (*lang.Scope, tfdiags.Diagnostics) { + // This is intended for external callers such as the "terraform console" + // command. Internally, we create an evaluator in c.walk before walking + // the graph, and create scopes in ContextGraphWalker. + + var diags tfdiags.Diagnostics + defer c.acquireRun("eval")() + + schemas, moreDiags := c.Schemas(config, state) + diags = diags.Append(moreDiags) + if moreDiags.HasErrors() { + return nil, diags + } + + // Start with a copy of state so that we don't affect the instance that + // the caller is holding. + state = state.DeepCopy() + var walker *ContextGraphWalker + + variables := mergeDefaultInputVariableValues(opts.SetVariables, config.Module.Variables) + + // By the time we get here, we should have values defined for all of + // the root module variables, even if some of them are "unknown". It's the + // caller's responsibility to have already handled the decoding of these + // from the various ways the CLI allows them to be set and to produce + // user-friendly error messages if they are not all present, and so + // the error message from checkInputVariables should never be seen and + // includes language asking the user to report a bug. 
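// [Editor's illustrative aside -- not part of the patch.] The new Eval
// signature above takes the configuration, state, and target module address
// as explicit arguments instead of reading them from ContextOpts. The call
// pattern, mirroring the updated context_eval_test.go shown further down in
// this patch (m is a parsed *configs.Config and p a mock provider from
// testProvider), is roughly:
//
//     ctx := testContext2(t, &ContextOpts{
//         Providers: map[addrs.Provider]providers.Factory{
//             addrs.NewDefaultProvider("test"): testProviderFuncFixed(p),
//         },
//     })
//     scope, diags := ctx.Eval(m, states.NewState(), addrs.RootModuleInstance, &EvalOpts{})
//     if diags.HasErrors() {
//         t.Fatalf("Eval errors: %s", diags.Err())
//     }
//     // scope (if non-nil) can then resolve references such as variables,
//     // locals, and outputs against the root module.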
+ varDiags := checkInputVariables(config.Module.Variables, variables) + diags = diags.Append(varDiags) + + log.Printf("[DEBUG] Building and walking 'eval' graph") + + graph, moreDiags := (&EvalGraphBuilder{ + Config: config, + State: state, + Components: c.components, + Schemas: schemas, + }).Build(addrs.RootModuleInstance) + diags = diags.Append(moreDiags) + if moreDiags.HasErrors() { + return nil, diags + } + + walkOpts := &graphWalkOpts{ + InputState: state, + Config: config, + Schemas: schemas, + RootVariableValues: variables, + } + + walker, moreDiags = c.walk(graph, walkEval, walkOpts) + diags = diags.Append(moreDiags) + if walker != nil { + diags = diags.Append(walker.NonFatalDiagnostics) + } else { + // If we skipped walking the graph (due to errors) then we'll just + // use a placeholder graph walker here, which'll refer to the + // unmodified state. + walker = c.graphWalker(walkEval, walkOpts) + } + + // This is a bit weird since we don't normally evaluate outside of + // the context of a walk, but we'll "re-enter" our desired path here + // just to get hold of an EvalContext for it. ContextGraphWalker + // caches its contexts, so we should get hold of the context that was + // previously used for evaluation here, unless we skipped walking. + evalCtx := walker.EnterPath(moduleAddr) + return evalCtx.EvaluationScope(nil, EvalDataForNoInstanceKey), diags +} diff --git a/internal/terraform/context_eval_test.go b/internal/terraform/context_eval_test.go index 0fbd20e33bfd..dff6879833e7 100644 --- a/internal/terraform/context_eval_test.go +++ b/internal/terraform/context_eval_test.go @@ -7,6 +7,7 @@ import ( "github.com/hashicorp/hcl/v2/hclsyntax" "github.com/hashicorp/terraform/internal/addrs" "github.com/hashicorp/terraform/internal/providers" + "github.com/hashicorp/terraform/internal/states" "github.com/zclconf/go-cty/cty" ) @@ -48,13 +49,12 @@ func TestContextEval(t *testing.T) { m := testModule(t, "eval-context-basic") p := testProvider("test") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - scope, diags := ctx.Eval(addrs.RootModuleInstance) + scope, diags := ctx.Eval(m, states.NewState(), addrs.RootModuleInstance, &EvalOpts{}) if diags.HasErrors() { t.Fatalf("Eval errors: %s", diags.Err()) } diff --git a/internal/terraform/context_fixtures_test.go b/internal/terraform/context_fixtures_test.go index 4853f4a4226d..2e9e9c27511d 100644 --- a/internal/terraform/context_fixtures_test.go +++ b/internal/terraform/context_fixtures_test.go @@ -25,7 +25,6 @@ type contextTestFixture struct { // _shallow_ modifications to the options as needed. func (f *contextTestFixture) ContextOpts() *ContextOpts { return &ContextOpts{ - Config: f.Config, Providers: f.Providers, Provisioners: f.Provisioners, } diff --git a/internal/terraform/context_graph_type.go b/internal/terraform/context_graph_type.go deleted file mode 100644 index 658779e6ae39..000000000000 --- a/internal/terraform/context_graph_type.go +++ /dev/null @@ -1,30 +0,0 @@ -package terraform - -//go:generate go run golang.org/x/tools/cmd/stringer -type=GraphType context_graph_type.go - -// GraphType is an enum of the type of graph to create with a Context. -// The values of the constants may change so they shouldn't be depended on; -// always use the constant name. 
-type GraphType byte - -const ( - GraphTypeInvalid GraphType = iota - GraphTypePlan - GraphTypePlanDestroy - GraphTypePlanRefreshOnly - GraphTypeApply - GraphTypeValidate - GraphTypeEval // only visits in-memory elements such as variables, locals, and outputs. -) - -// GraphTypeMap is a mapping of human-readable string to GraphType. This -// is useful to use as the mechanism for human input for configurable -// graph types. -var GraphTypeMap = map[string]GraphType{ - "apply": GraphTypeApply, - "plan": GraphTypePlan, - "plan-destroy": GraphTypePlanDestroy, - "plan-refresh-only": GraphTypePlanRefreshOnly, - "validate": GraphTypeValidate, - "eval": GraphTypeEval, -} diff --git a/internal/terraform/context_import.go b/internal/terraform/context_import.go index ccee059d7799..48a5858a3df1 100644 --- a/internal/terraform/context_import.go +++ b/internal/terraform/context_import.go @@ -1,7 +1,10 @@ package terraform import ( + "log" + "github.com/hashicorp/terraform/internal/addrs" + "github.com/hashicorp/terraform/internal/configs" "github.com/hashicorp/terraform/internal/states" "github.com/hashicorp/terraform/internal/tfdiags" ) @@ -10,6 +13,10 @@ import ( type ImportOpts struct { // Targets are the targets to import Targets []*ImportTarget + + // SetVariables are the variables set outside of the configuration, + // such as on the command line, in variables files, etc. + SetVariables InputValues } // ImportTarget is a single resource to import. @@ -35,36 +42,52 @@ type ImportTarget struct { // Further, this operation also gracefully handles partial state. If during // an import there is a failure, all previously imported resources remain // imported. -func (c *Context) Import(opts *ImportOpts) (*states.State, tfdiags.Diagnostics) { +func (c *Context) Import(config *configs.Config, prevRunState *states.State, opts *ImportOpts) (*states.State, tfdiags.Diagnostics) { var diags tfdiags.Diagnostics // Hold a lock since we can modify our own state here defer c.acquireRun("import")() - // Copy our own state - c.state = c.state.DeepCopy() + schemas, moreDiags := c.Schemas(config, prevRunState) + diags = diags.Append(moreDiags) + if moreDiags.HasErrors() { + return nil, diags + } + + // Don't modify our caller's state + state := prevRunState.DeepCopy() + + log.Printf("[DEBUG] Building and walking import graph") // Initialize our graph builder builder := &ImportGraphBuilder{ ImportTargets: opts.Targets, - Config: c.config, + Config: config, Components: c.components, - Schemas: c.schemas, + Schemas: schemas, } - // Build the graph! 
+ // Build the graph graph, graphDiags := builder.Build(addrs.RootModuleInstance) diags = diags.Append(graphDiags) if graphDiags.HasErrors() { - return c.state, diags + return state, diags } + variables := mergeDefaultInputVariableValues(opts.SetVariables, config.Module.Variables) + // Walk it - _, walkDiags := c.walk(graph, walkImport, &graphWalkOpts{}) + walker, walkDiags := c.walk(graph, walkImport, &graphWalkOpts{ + Config: config, + Schemas: schemas, + InputState: state, + RootVariableValues: variables, + }) diags = diags.Append(walkDiags) if walkDiags.HasErrors() { - return c.state, diags + return state, diags } - return c.state, diags + newState := walker.State.Close() + return newState, diags } diff --git a/internal/terraform/context_import_test.go b/internal/terraform/context_import_test.go index 5a645c7e5150..605010d17569 100644 --- a/internal/terraform/context_import_test.go +++ b/internal/terraform/context_import_test.go @@ -16,7 +16,6 @@ func TestContextImport_basic(t *testing.T) { p := testProvider("aws") m := testModule(t, "import-provider") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -33,9 +32,9 @@ func TestContextImport_basic(t *testing.T) { }, } - state, diags := ctx.Import(&ImportOpts{ + state, diags := ctx.Import(m, states.NewState(), &ImportOpts{ Targets: []*ImportTarget{ - &ImportTarget{ + { Addr: addrs.RootModuleInstance.ResourceInstance( addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey, ), @@ -49,7 +48,7 @@ func TestContextImport_basic(t *testing.T) { actual := strings.TrimSpace(state.String()) expected := strings.TrimSpace(testImportStr) if actual != expected { - t.Fatalf("bad: \n%s", actual) + t.Fatalf("wrong final state\ngot:\n%s\nwant:\n%s", actual, expected) } } @@ -57,7 +56,6 @@ func TestContextImport_countIndex(t *testing.T) { p := testProvider("aws") m := testModule(t, "import-provider") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -74,9 +72,9 @@ func TestContextImport_countIndex(t *testing.T) { }, } - state, diags := ctx.Import(&ImportOpts{ + state, diags := ctx.Import(m, states.NewState(), &ImportOpts{ Targets: []*ImportTarget{ - &ImportTarget{ + { Addr: addrs.RootModuleInstance.ResourceInstance( addrs.ManagedResourceMode, "aws_instance", "foo", addrs.IntKey(0), ), @@ -99,30 +97,29 @@ func TestContextImport_collision(t *testing.T) { p := testProvider("aws") m := testModule(t, "import-provider") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, + }) - State: states.BuildState(func(s *states.SyncState) { - s.SetResourceInstanceCurrent( - addrs.Resource{ - Mode: addrs.ManagedResourceMode, - Type: "aws_instance", - Name: "foo", - }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), - &states.ResourceInstanceObjectSrc{ - AttrsFlat: map[string]string{ - "id": "bar", - }, - Status: states.ObjectReady, + state := states.BuildState(func(s *states.SyncState) { + s.SetResourceInstanceCurrent( + addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "aws_instance", + Name: "foo", + }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), + &states.ResourceInstanceObjectSrc{ + AttrsFlat: map[string]string{ + "id": "bar", }, - addrs.AbsProviderConfig{ - Provider: addrs.NewDefaultProvider("aws"), - 
Module: addrs.RootModule, - }, - ) - }), + Status: states.ObjectReady, + }, + addrs.AbsProviderConfig{ + Provider: addrs.NewDefaultProvider("aws"), + Module: addrs.RootModule, + }, + ) }) p.ImportResourceStateResponse = &providers.ImportResourceStateResponse{ @@ -136,9 +133,9 @@ func TestContextImport_collision(t *testing.T) { }, } - state, diags := ctx.Import(&ImportOpts{ + state, diags := ctx.Import(m, state, &ImportOpts{ Targets: []*ImportTarget{ - &ImportTarget{ + { Addr: addrs.RootModuleInstance.ResourceInstance( addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey, ), @@ -175,15 +172,14 @@ func TestContextImport_missingType(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - state, diags := ctx.Import(&ImportOpts{ + state, diags := ctx.Import(m, states.NewState(), &ImportOpts{ Targets: []*ImportTarget{ - &ImportTarget{ + { Addr: addrs.RootModuleInstance.ResourceInstance( addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey, ), @@ -227,15 +223,14 @@ func TestContextImport_moduleProvider(t *testing.T) { m := testModule(t, "import-provider") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - state, diags := ctx.Import(&ImportOpts{ + state, diags := ctx.Import(m, states.NewState(), &ImportOpts{ Targets: []*ImportTarget{ - &ImportTarget{ + { Addr: addrs.RootModuleInstance.ResourceInstance( addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey, ), @@ -263,7 +258,6 @@ func TestContextImport_providerModule(t *testing.T) { p := testProvider("aws") m := testModule(t, "import-module") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -289,9 +283,9 @@ func TestContextImport_providerModule(t *testing.T) { return } - _, diags := ctx.Import(&ImportOpts{ + _, diags := ctx.Import(m, states.NewState(), &ImportOpts{ Targets: []*ImportTarget{ - &ImportTarget{ + { Addr: addrs.RootModuleInstance.Child("child", addrs.NoKey).ResourceInstance( addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey, ), @@ -329,16 +323,9 @@ func TestContextImport_providerConfig(t *testing.T) { p := testProvider("aws") m := testModule(t, test.module) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - Variables: InputValues{ - "foo": &InputValue{ - Value: cty.StringVal("bar"), - SourceType: ValueFromCaller, - }, - }, }) p.ImportResourceStateResponse = &providers.ImportResourceStateResponse{ @@ -352,15 +339,21 @@ func TestContextImport_providerConfig(t *testing.T) { }, } - state, diags := ctx.Import(&ImportOpts{ + state, diags := ctx.Import(m, states.NewState(), &ImportOpts{ Targets: []*ImportTarget{ - &ImportTarget{ + { Addr: addrs.RootModuleInstance.ResourceInstance( addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey, ), ID: "bar", }, }, + SetVariables: InputValues{ + "foo": &InputValue{ + Value: cty.StringVal("bar"), + SourceType: ValueFromCaller, + }, + }, }) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) @@ -389,7 +382,6 @@ func TestContextImport_providerConfigResources(t *testing.T) { pTest := testProvider("test") m := testModule(t, "import-provider-resources") ctx := testContext2(t, 
&ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), addrs.NewDefaultProvider("test"): testProviderFuncFixed(pTest), @@ -407,9 +399,9 @@ func TestContextImport_providerConfigResources(t *testing.T) { }, } - _, diags := ctx.Import(&ImportOpts{ + _, diags := ctx.Import(m, states.NewState(), &ImportOpts{ Targets: []*ImportTarget{ - &ImportTarget{ + { Addr: addrs.RootModuleInstance.ResourceInstance( addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey, ), @@ -429,7 +421,6 @@ func TestContextImport_refresh(t *testing.T) { p := testProvider("aws") m := testModule(t, "import-provider") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -455,9 +446,9 @@ func TestContextImport_refresh(t *testing.T) { }), } - state, diags := ctx.Import(&ImportOpts{ + state, diags := ctx.Import(m, states.NewState(), &ImportOpts{ Targets: []*ImportTarget{ - &ImportTarget{ + { Addr: addrs.RootModuleInstance.ResourceInstance( addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey, ), @@ -480,7 +471,6 @@ func TestContextImport_refreshNil(t *testing.T) { p := testProvider("aws") m := testModule(t, "import-provider") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -503,9 +493,9 @@ func TestContextImport_refreshNil(t *testing.T) { } } - state, diags := ctx.Import(&ImportOpts{ + state, diags := ctx.Import(m, states.NewState(), &ImportOpts{ Targets: []*ImportTarget{ - &ImportTarget{ + { Addr: addrs.RootModuleInstance.ResourceInstance( addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey, ), @@ -528,7 +518,6 @@ func TestContextImport_module(t *testing.T) { p := testProvider("aws") m := testModule(t, "import-module") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -545,9 +534,9 @@ func TestContextImport_module(t *testing.T) { }, } - state, diags := ctx.Import(&ImportOpts{ + state, diags := ctx.Import(m, states.NewState(), &ImportOpts{ Targets: []*ImportTarget{ - &ImportTarget{ + { Addr: addrs.RootModuleInstance.Child("child", addrs.IntKey(0)).ResourceInstance( addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey, ), @@ -570,7 +559,6 @@ func TestContextImport_moduleDepth2(t *testing.T) { p := testProvider("aws") m := testModule(t, "import-module") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -587,9 +575,9 @@ func TestContextImport_moduleDepth2(t *testing.T) { }, } - state, diags := ctx.Import(&ImportOpts{ + state, diags := ctx.Import(m, states.NewState(), &ImportOpts{ Targets: []*ImportTarget{ - &ImportTarget{ + { Addr: addrs.RootModuleInstance.Child("child", addrs.IntKey(0)).Child("nested", addrs.NoKey).ResourceInstance( addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey, ), @@ -612,7 +600,6 @@ func TestContextImport_moduleDiff(t *testing.T) { p := testProvider("aws") m := testModule(t, "import-module") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -629,9 +616,9 @@ func TestContextImport_moduleDiff(t *testing.T) { }, } - state, 
diags := ctx.Import(&ImportOpts{ + state, diags := ctx.Import(m, states.NewState(), &ImportOpts{ Targets: []*ImportTarget{ - &ImportTarget{ + { Addr: addrs.RootModuleInstance.Child("child", addrs.IntKey(0)).ResourceInstance( addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey, ), @@ -692,15 +679,14 @@ func TestContextImport_multiState(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - state, diags := ctx.Import(&ImportOpts{ + state, diags := ctx.Import(m, states.NewState(), &ImportOpts{ Targets: []*ImportTarget{ - &ImportTarget{ + { Addr: addrs.RootModuleInstance.ResourceInstance( addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey, ), @@ -767,15 +753,14 @@ func TestContextImport_multiStateSame(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - state, diags := ctx.Import(&ImportOpts{ + state, diags := ctx.Import(m, states.NewState(), &ImportOpts{ Targets: []*ImportTarget{ - &ImportTarget{ + { Addr: addrs.RootModuleInstance.ResourceInstance( addrs.ManagedResourceMode, "aws_instance", "foo", addrs.NoKey, ), @@ -866,15 +851,14 @@ resource "test_resource" "unused" { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - state, diags := ctx.Import(&ImportOpts{ + state, diags := ctx.Import(m, states.NewState(), &ImportOpts{ Targets: []*ImportTarget{ - &ImportTarget{ + { Addr: addrs.RootModuleInstance.ResourceInstance( addrs.ManagedResourceMode, "test_resource", "test", addrs.NoKey, ), @@ -888,6 +872,9 @@ resource "test_resource" "unused" { ri := state.ResourceInstance(mustResourceInstanceAddr("test_resource.test")) expected := `{"id":"test"}` + if ri == nil || ri.Current == nil { + t.Fatal("no state is recorded for resource instance test_resource.test") + } if string(ri.Current.AttrsJSON) != expected { t.Fatalf("expected %q, got %q\n", expected, ri.Current.AttrsJSON) } diff --git a/internal/terraform/context_input.go b/internal/terraform/context_input.go index 1a94e94bb016..153546d2868b 100644 --- a/internal/terraform/context_input.go +++ b/internal/terraform/context_input.go @@ -17,9 +17,21 @@ import ( // Input asks for input to fill unset required arguments in provider // configurations. // -// This modifies the configuration in-place, so asking for Input twice -// may result in different UI output showing different current values. -func (c *Context) Input(mode InputMode) tfdiags.Diagnostics { +// Unlike the other better-behaved operation methods, this one actually +// modifies some internal state inside the receving context so that the +// captured values will be implicitly available to a subsequent call to Plan, +// or to some other operation entry point. Hopefully a future iteration of +// this will change design to make that data flow more explicit. +// +// Because Input saves the results inside the Context object, asking for +// input twice on the same Context is invalid and will lead to undefined +// behavior. +// +// Once you've called Input with a particular config, it's invalid to call +// any other Context method with a different config, because the aforementioned +// modified internal state won't match. Again, this is an architectural wart +// that we'll hopefully resolve in future. 
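// [Editor's illustrative aside -- not part of the patch.] The doc comment
// above describes the intended calling sequence: build the Context once, let
// Input capture any missing provider arguments for a given config, and then
// keep using that same config for the follow-up Plan and Apply. A minimal
// sketch of that sequence, matching the reworked tests later in this patch
// (m is the parsed configuration, state the prior state):
//
//     if diags := ctx.Input(m, InputModeStd); diags.HasErrors() {
//         // handle input errors
//     }
//     plan, diags := ctx.Plan(m, state, DefaultPlanOpts)
//     if diags.HasErrors() {
//         // handle plan errors
//     }
//     state, diags = ctx.Apply(plan, m)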
+func (c *Context) Input(config *configs.Config, mode InputMode) tfdiags.Diagnostics { // This function used to be responsible for more than it is now, so its // interface is more general than its current functionality requires. // It now exists only to handle interactive prompts for provider @@ -33,6 +45,12 @@ func (c *Context) Input(mode InputMode) tfdiags.Diagnostics { var diags tfdiags.Diagnostics defer c.acquireRun("input")() + schemas, moreDiags := c.Schemas(config, nil) + diags = diags.Append(moreDiags) + if moreDiags.HasErrors() { + return diags + } + if c.uiInput == nil { log.Printf("[TRACE] Context.Input: uiInput is nil, so skipping") return diags @@ -44,17 +62,15 @@ func (c *Context) Input(mode InputMode) tfdiags.Diagnostics { log.Printf("[TRACE] Context.Input: Prompting for provider arguments") // We prompt for input only for provider configurations defined in - // the root module. At the time of writing that is an arbitrary - // restriction, but we have future plans to support "count" and - // "for_each" on modules that will then prevent us from supporting - // input for child module configurations anyway (since we'd need to - // dynamic-expand first), and provider configurations in child modules - // are not recommended since v0.11 anyway, so this restriction allows - // us to keep this relatively simple without significant hardship. + // the root module. Provider configurations in other modules are a + // legacy thing we no longer recommend, and even if they weren't we + // can't practically prompt for their inputs here because we've not + // yet done "expansion" and so we don't know whether the modules are + // using count or for_each. pcs := make(map[string]*configs.Provider) pas := make(map[string]addrs.LocalProviderConfig) - for _, pc := range c.config.Module.ProviderConfigs { + for _, pc := range config.Module.ProviderConfigs { addr := pc.Addr() pcs[addr.String()] = pc pas[addr.String()] = addr @@ -63,7 +79,7 @@ func (c *Context) Input(mode InputMode) tfdiags.Diagnostics { // We also need to detect _implied_ provider configs from resources. // These won't have *configs.Provider objects, but they will still // exist in the map and we'll just treat them as empty below. - for _, rc := range c.config.Module.ManagedResources { + for _, rc := range config.Module.ManagedResources { pa := rc.ProviderConfigAddr() if pa.Alias != "" { continue // alias configurations cannot be implied @@ -74,7 +90,7 @@ func (c *Context) Input(mode InputMode) tfdiags.Diagnostics { log.Printf("[TRACE] Context.Input: Provider %s implied by resource block at %s", pa, rc.DeclRange) } } - for _, rc := range c.config.Module.DataResources { + for _, rc := range config.Module.DataResources { pa := rc.ProviderConfigAddr() if pa.Alias != "" { continue // alias configurations cannot be implied @@ -96,8 +112,8 @@ func (c *Context) Input(mode InputMode) tfdiags.Diagnostics { UIInput: c.uiInput, } - providerFqn := c.config.Module.ProviderForLocalConfig(pa) - schema := c.schemas.ProviderConfig(providerFqn) + providerFqn := config.Module.ProviderForLocalConfig(pa) + schema := schemas.ProviderConfig(providerFqn) if schema == nil { // Could either be an incorrect config or just an incomplete // mock in tests. 
We'll let a later pass decide, and just @@ -160,7 +176,7 @@ func (c *Context) Input(mode InputMode) tfdiags.Diagnostics { absConfigAddr := addrs.AbsProviderConfig{ Provider: providerFqn, Alias: pa.Alias, - Module: c.Config().Path, + Module: config.Path, } c.providerInputConfig[absConfigAddr.String()] = vals diff --git a/internal/terraform/context_input_test.go b/internal/terraform/context_input_test.go index 92aaaccb13f3..819856a6c7d6 100644 --- a/internal/terraform/context_input_test.go +++ b/internal/terraform/context_input_test.go @@ -10,6 +10,7 @@ import ( "github.com/hashicorp/terraform/internal/addrs" "github.com/hashicorp/terraform/internal/configs/configschema" + "github.com/hashicorp/terraform/internal/plans" "github.com/hashicorp/terraform/internal/providers" "github.com/hashicorp/terraform/internal/states" ) @@ -46,7 +47,6 @@ func TestContext2Input_provider(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -59,7 +59,7 @@ func TestContext2Input_provider(t *testing.T) { return } - if diags := ctx.Input(InputModeStd); diags.HasErrors() { + if diags := ctx.Input(m, InputModeStd); diags.HasErrors() { t.Fatalf("input errors: %s", diags.Err()) } @@ -70,11 +70,10 @@ func TestContext2Input_provider(t *testing.T) { t.Errorf("wrong description\ngot: %q\nwant: %q", got, want) } - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - if _, diags := ctx.Apply(); diags.HasErrors() { + if _, diags := ctx.Apply(plan, m); diags.HasErrors() { t.Fatalf("apply errors: %s", diags.Err()) } @@ -117,7 +116,6 @@ func TestContext2Input_providerMulti(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -127,13 +125,12 @@ func TestContext2Input_providerMulti(t *testing.T) { var actual []interface{} var lock sync.Mutex - if diags := ctx.Input(InputModeStd); diags.HasErrors() { + if diags := ctx.Input(m, InputModeStd); diags.HasErrors() { t.Fatalf("input errors: %s", diags.Err()) } - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) p.ConfigureProviderFn = func(req providers.ConfigureProviderRequest) (resp providers.ConfigureProviderResponse) { lock.Lock() @@ -141,7 +138,7 @@ func TestContext2Input_providerMulti(t *testing.T) { actual = append(actual, req.Config.GetAttr("foo").AsString()) return } - if _, diags := ctx.Apply(); diags.HasErrors() { + if _, diags := ctx.Apply(plan, m); diags.HasErrors() { t.Fatalf("apply errors: %s", diags.Err()) } @@ -155,13 +152,12 @@ func TestContext2Input_providerOnce(t *testing.T) { m := testModule(t, "input-provider-once") p := testProvider("aws") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if diags := ctx.Input(InputModeStd); diags.HasErrors() { + if diags := ctx.Input(m, InputModeStd); diags.HasErrors() { t.Fatalf("input errors: %s", diags.Err()) } } @@ -195,7 +191,6 @@ func TestContext2Input_providerId(t *testing.T) { }) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ 
addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -212,15 +207,14 @@ func TestContext2Input_providerId(t *testing.T) { "provider.aws.foo": "bar", } - if diags := ctx.Input(InputModeStd); diags.HasErrors() { + if diags := ctx.Input(m, InputModeStd); diags.HasErrors() { t.Fatalf("input errors: %s", diags.Err()) } - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) - if _, diags := ctx.Apply(); diags.HasErrors() { + if _, diags := ctx.Apply(plan, m); diags.HasErrors() { t.Fatalf("apply errors: %s", diags.Err()) } @@ -255,16 +249,9 @@ func TestContext2Input_providerOnly(t *testing.T) { }) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - Variables: InputValues{ - "foo": &InputValue{ - Value: cty.StringVal("us-west-2"), - SourceType: ValueFromCaller, - }, - }, UIInput: input, }) @@ -278,15 +265,30 @@ func TestContext2Input_providerOnly(t *testing.T) { return } - if err := ctx.Input(InputModeProvider); err != nil { + if err := ctx.Input(m, InputModeProvider); err != nil { t.Fatalf("err: %s", err) } - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + // NOTE: This is a stale test case from an older version of Terraform + // where Input was responsible for prompting for both input variables _and_ + // provider configuration arguments, where it was trying to test the case + // where we were turning off the mode of prompting for input variables. + // That's now always disabled, and so this is essentially the same as the + // normal Input test, but we're preserving it until we have time to review + // and make sure this isn't inadvertently providing unique test coverage + // other than what it set out to test. 
+ plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ + Mode: plans.NormalMode, + SetVariables: InputValues{ + "foo": &InputValue{ + Value: cty.StringVal("us-west-2"), + SourceType: ValueFromCaller, + }, + }, + }) + assertNoErrors(t, diags) - state, err := ctx.Apply() + state, err := ctx.Apply(plan, m) if err != nil { t.Fatalf("err: %s", err) } @@ -307,16 +309,9 @@ func TestContext2Input_providerVars(t *testing.T) { m := testModule(t, "input-provider-with-vars") p := testProvider("aws") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - Variables: InputValues{ - "foo": &InputValue{ - Value: cty.StringVal("bar"), - SourceType: ValueFromCaller, - }, - }, UIInput: input, }) @@ -329,15 +324,22 @@ func TestContext2Input_providerVars(t *testing.T) { actual = req.Config.GetAttr("foo").AsString() return } - if diags := ctx.Input(InputModeStd); diags.HasErrors() { + if diags := ctx.Input(m, InputModeStd); diags.HasErrors() { t.Fatalf("input errors: %s", diags.Err()) } - if _, diags := ctx.Plan(); diags.HasErrors() { - t.Fatalf("plan errors: %s", diags.Err()) - } + plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ + Mode: plans.NormalMode, + SetVariables: InputValues{ + "foo": &InputValue{ + Value: cty.StringVal("bar"), + SourceType: ValueFromCaller, + }, + }, + }) + assertNoErrors(t, diags) - if _, diags := ctx.Apply(); diags.HasErrors() { + if _, diags := ctx.Apply(plan, m); diags.HasErrors() { t.Fatalf("apply errors: %s", diags.Err()) } @@ -351,14 +353,13 @@ func TestContext2Input_providerVarsModuleInherit(t *testing.T) { m := testModule(t, "input-provider-with-vars-and-module") p := testProvider("aws") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, UIInput: input, }) - if diags := ctx.Input(InputModeStd); diags.HasErrors() { + if diags := ctx.Input(m, InputModeStd); diags.HasErrors() { t.Fatalf("input errors: %s", diags.Err()) } } @@ -369,14 +370,13 @@ func TestContext2Input_submoduleTriggersInvalidCount(t *testing.T) { m := testModule(t, "input-submodule-count") p := testProvider("aws") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, UIInput: input, }) - if diags := ctx.Input(InputModeStd); diags.HasErrors() { + if diags := ctx.Input(m, InputModeStd); diags.HasErrors() { t.Fatalf("input errors: %s", diags.Err()) } } @@ -427,23 +427,25 @@ func TestContext2Input_dataSourceRequiresRefresh(t *testing.T) { }) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("null"): testProviderFuncFixed(p), }, - State: state, UIInput: input, }) - if diags := ctx.Input(InputModeStd); diags.HasErrors() { + if diags := ctx.Input(m, InputModeStd); diags.HasErrors() { t.Fatalf("input errors: %s", diags.Err()) } - // ensure that plan works after Refresh - if _, diags := ctx.Refresh(); diags.HasErrors() { + // ensure that plan works after Refresh. This is a legacy test that + // doesn't really make sense anymore, because Refresh is really just + // a wrapper around plan anyway, but we're keeping it until we get a + // chance to review and check whether it's giving us any additional + // test coverage aside from what it's specifically intending to test. 
+ if _, diags := ctx.Refresh(m, state, DefaultPlanOpts); diags.HasErrors() { t.Fatalf("refresh errors: %s", diags.Err()) } - if _, diags := ctx.Plan(); diags.HasErrors() { + if _, diags := ctx.Plan(m, state, DefaultPlanOpts); diags.HasErrors() { t.Fatalf("plan errors: %s", diags.Err()) } } diff --git a/internal/terraform/context_plan.go b/internal/terraform/context_plan.go new file mode 100644 index 000000000000..be5d30e61376 --- /dev/null +++ b/internal/terraform/context_plan.go @@ -0,0 +1,435 @@ +package terraform + +import ( + "fmt" + "log" + + "github.com/zclconf/go-cty/cty" + + "github.com/hashicorp/terraform/internal/addrs" + "github.com/hashicorp/terraform/internal/configs" + "github.com/hashicorp/terraform/internal/instances" + "github.com/hashicorp/terraform/internal/plans" + "github.com/hashicorp/terraform/internal/refactoring" + "github.com/hashicorp/terraform/internal/states" + "github.com/hashicorp/terraform/internal/tfdiags" +) + +// PlanOpts are the various options that affect the details of how Terraform +// will build a plan. +type PlanOpts struct { + Mode plans.Mode + SkipRefresh bool + SetVariables InputValues + Targets []addrs.Targetable + ForceReplace []addrs.AbsResourceInstance +} + +// Plan generates an execution plan for the given context, and returns the +// refreshed state. +// +// The execution plan encapsulates the context and can be stored +// in order to reinstantiate a context later for Apply. +// +// Plan also updates the diff of this context to be the diff generated +// by the plan, so Apply can be called after. +func (c *Context) Plan(config *configs.Config, prevRunState *states.State, opts *PlanOpts) (*plans.Plan, tfdiags.Diagnostics) { + defer c.acquireRun("plan")() + var diags tfdiags.Diagnostics + + // Save the downstream functions from needing to deal with these broken situations. + // No real callers should rely on these, but we have a bunch of old and + // sloppy tests that don't always populate arguments properly. + if config == nil { + config = configs.NewEmptyConfig() + } + if prevRunState == nil { + prevRunState = states.NewState() + } + if opts == nil { + opts = &PlanOpts{ + Mode: plans.NormalMode, + } + } + + moreDiags := CheckCoreVersionRequirements(config) + diags = diags.Append(moreDiags) + // If version constraints are not met then we'll bail early since otherwise + // we're likely to just see a bunch of other errors related to + // incompatibilities, which could be overwhelming for the user. + if diags.HasErrors() { + return nil, diags + } + + switch opts.Mode { + case plans.NormalMode, plans.DestroyMode: + // OK + case plans.RefreshOnlyMode: + if opts.SkipRefresh { + // The CLI layer (and other similar callers) should prevent this + // combination of options. + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Incompatible plan options", + "Cannot skip refreshing in refresh-only mode. This is a bug in Terraform.", + )) + return nil, diags + } + default: + // The CLI layer (and other similar callers) should not try to + // create a context for a mode that Terraform Core doesn't support. + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Unsupported plan mode", + fmt.Sprintf("Terraform Core doesn't know how to handle plan mode %s. This is a bug in Terraform.", opts.Mode), + )) + return nil, diags + } + if len(opts.ForceReplace) > 0 && opts.Mode != plans.NormalMode { + // The other modes don't generate no-op or update actions that we might + // upgrade to be "replace", so doesn't make sense to combine those. 
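// [Editor's illustrative aside -- not part of the patch.] Callers now select
// the planning behaviour through PlanOpts rather than through fields on
// ContextOpts, as the reworked tests elsewhere in this patch show. Two
// representative calls, assuming ctx, m, and state are already in hand:
//
//     // normal plan with caller-supplied root module variables:
//     plan, diags := ctx.Plan(m, state, &PlanOpts{
//         Mode: plans.NormalMode,
//         SetVariables: InputValues{
//             "foo": &InputValue{Value: cty.StringVal("bar"), SourceType: ValueFromCaller},
//         },
//     })
//
//     // destroy plan against the same state:
//     destroyPlan, moreDiags := ctx.Plan(m, state, &PlanOpts{Mode: plans.DestroyMode})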
+ diags = diags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Unsupported plan mode", + "Forcing resource instance replacement (with -replace=...) is allowed only in normal planning mode.", + )) + return nil, diags + } + + variables := mergeDefaultInputVariableValues(opts.SetVariables, config.Module.Variables) + + // By the time we get here, we should have values defined for all of + // the root module variables, even if some of them are "unknown". It's the + // caller's responsibility to have already handled the decoding of these + // from the various ways the CLI allows them to be set and to produce + // user-friendly error messages if they are not all present, and so + // the error message from checkInputVariables should never be seen and + // includes language asking the user to report a bug. + varDiags := checkInputVariables(config.Module.Variables, variables) + diags = diags.Append(varDiags) + + if len(opts.Targets) > 0 { + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Warning, + "Resource targeting is in effect", + `You are creating a plan with the -target option, which means that the result of this plan may not represent all of the changes requested by the current configuration. + +The -target option is not for routine use, and is provided only for exceptional situations such as recovering from errors or mistakes, or when Terraform specifically suggests to use it as part of an error message.`, + )) + } + + var plan *plans.Plan + var planDiags tfdiags.Diagnostics + switch opts.Mode { + case plans.NormalMode: + plan, planDiags = c.plan(config, prevRunState, variables, opts) + case plans.DestroyMode: + plan, planDiags = c.destroyPlan(config, prevRunState, variables, opts) + case plans.RefreshOnlyMode: + plan, planDiags = c.refreshOnlyPlan(config, prevRunState, variables, opts) + default: + panic(fmt.Sprintf("unsupported plan mode %s", opts.Mode)) + } + diags = diags.Append(planDiags) + if diags.HasErrors() { + return nil, diags + } + + // convert the variables into the format expected for the plan + varVals := make(map[string]plans.DynamicValue, len(variables)) + for k, iv := range variables { + // We use cty.DynamicPseudoType here so that we'll save both the + // value _and_ its dynamic type in the plan, so we can recover + // exactly the same value later. + dv, err := plans.NewDynamicValue(iv.Value, cty.DynamicPseudoType) + if err != nil { + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Failed to prepare variable value for plan", + fmt.Sprintf("The value for variable %q could not be serialized to store in the plan: %s.", k, err), + )) + continue + } + varVals[k] = dv + } + + // insert the run-specific data from the context into the plan; variables, + // targets and provider SHAs. 
+ if plan != nil { + plan.VariableValues = varVals + plan.TargetAddrs = opts.Targets + plan.ProviderSHA256s = c.providerSHA256s + } else if !diags.HasErrors() { + panic("nil plan but no errors") + } + + return plan, diags +} + +var DefaultPlanOpts = &PlanOpts{ + Mode: plans.NormalMode, +} + +func (c *Context) plan(config *configs.Config, prevRunState *states.State, rootVariables InputValues, opts *PlanOpts) (*plans.Plan, tfdiags.Diagnostics) { + var diags tfdiags.Diagnostics + + if opts.Mode != plans.NormalMode { + panic(fmt.Sprintf("called Context.plan with %s", opts.Mode)) + } + + plan, walkDiags := c.planWalk(config, prevRunState, rootVariables, opts) + diags = diags.Append(walkDiags) + if diags.HasErrors() { + return nil, diags + } + + // The refreshed state ends up with some placeholder objects in it for + // objects pending creation. We only really care about those being in + // the working state, since that's what we're going to use when applying, + // so we'll prune them all here. + plan.PriorState.SyncWrapper().RemovePlannedResourceInstanceObjects() + + return plan, diags +} + +func (c *Context) refreshOnlyPlan(config *configs.Config, prevRunState *states.State, rootVariables InputValues, opts *PlanOpts) (*plans.Plan, tfdiags.Diagnostics) { + var diags tfdiags.Diagnostics + + if opts.Mode != plans.RefreshOnlyMode { + panic(fmt.Sprintf("called Context.refreshOnlyPlan with %s", opts.Mode)) + } + + plan, walkDiags := c.planWalk(config, prevRunState, rootVariables, opts) + diags = diags.Append(walkDiags) + if diags.HasErrors() { + return nil, diags + } + + // If the graph builder and graph nodes correctly obeyed our directive + // to refresh only, the set of resource changes should always be empty. + // We'll safety-check that here so we can return a clear message about it, + // rather than probably just generating confusing output at the UI layer. + if len(plan.Changes.Resources) != 0 { + // Some extra context in the logs in case the user reports this message + // as a bug, as a starting point for debugging. + for _, rc := range plan.Changes.Resources { + if depKey := rc.DeposedKey; depKey == states.NotDeposed { + log.Printf("[DEBUG] Refresh-only plan includes %s change for %s", rc.Action, rc.Addr) + } else { + log.Printf("[DEBUG] Refresh-only plan includes %s change for %s deposed object %s", rc.Action, rc.Addr, depKey) + } + } + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Invalid refresh-only plan", + "Terraform generated planned resource changes in a refresh-only plan. This is a bug in Terraform.", + )) + } + + // Prune out any placeholder objects we put in the state to represent + // objects that would need to be created. + plan.PriorState.SyncWrapper().RemovePlannedResourceInstanceObjects() + + return plan, diags +} + +func (c *Context) destroyPlan(config *configs.Config, prevRunState *states.State, rootVariables InputValues, opts *PlanOpts) (*plans.Plan, tfdiags.Diagnostics) { + var diags tfdiags.Diagnostics + pendingPlan := &plans.Plan{} + + if opts.Mode != plans.DestroyMode { + panic(fmt.Sprintf("called Context.destroyPlan with %s", opts.Mode)) + } + + priorState := prevRunState + + // A destroy plan starts by running Refresh to read any pending data + // sources, and remove missing managed resources. This is required because + // a "destroy plan" is only creating delete changes, and is essentially a + // local operation. 
+ // + // NOTE: if skipRefresh _is_ set then we'll rely on the destroy-plan walk + // below to upgrade the prevRunState and priorState both to the latest + // resource type schemas, so NodePlanDestroyableResourceInstance.Execute + // must coordinate with this by taking that action only when c.skipRefresh + // _is_ set. This coupling between the two is unfortunate but necessary + // to work within our current structure. + if !opts.SkipRefresh { + log.Printf("[TRACE] Context.destroyPlan: calling Context.plan to get the effect of refreshing the prior state") + normalOpts := *opts + normalOpts.Mode = plans.NormalMode + refreshPlan, refreshDiags := c.plan(config, prevRunState, rootVariables, &normalOpts) + diags = diags.Append(refreshDiags) + if diags.HasErrors() { + return nil, diags + } + + // insert the refreshed state into the destroy plan result, and ignore + // the changes recorded from the refresh. + pendingPlan.PriorState = refreshPlan.PriorState.DeepCopy() + pendingPlan.PrevRunState = refreshPlan.PrevRunState.DeepCopy() + log.Printf("[TRACE] Context.destroyPlan: now _really_ creating a destroy plan") + + // We'll use the refreshed state -- which is the "prior state" from + // the perspective of this "pending plan" -- as the starting state + // for our destroy-plan walk, so it can take into account if we + // detected during refreshing that anything was already deleted outside + // of Terraform. + priorState = pendingPlan.PriorState + } + + destroyPlan, walkDiags := c.planWalk(config, priorState, rootVariables, opts) + diags = diags.Append(walkDiags) + if walkDiags.HasErrors() { + return nil, diags + } + + if !opts.SkipRefresh { + // If we didn't skip refreshing then we want the previous run state + // prior state to be the one we originally fed into the c.plan call + // above, not the refreshed version we used for the destroy walk. + destroyPlan.PrevRunState = pendingPlan.PrevRunState + } + + return destroyPlan, diags +} + +func (c *Context) prePlanFindAndApplyMoves(config *configs.Config, prevRunState *states.State, targets []addrs.Targetable) ([]refactoring.MoveStatement, map[addrs.UniqueKey]refactoring.MoveResult) { + moveStmts := refactoring.FindMoveStatements(config) + moveResults := refactoring.ApplyMoves(moveStmts, prevRunState) + if len(targets) > 0 { + for _, result := range moveResults { + matchesTarget := false + for _, targetAddr := range targets { + if targetAddr.TargetContains(result.From) { + matchesTarget = true + break + } + } + if !matchesTarget { + // TODO: Return an error stating that a targeted plan is + // only valid if it includes this address that was moved. 
+ } + } + } + return moveStmts, moveResults +} + +func (c *Context) postPlanValidateMoves(config *configs.Config, stmts []refactoring.MoveStatement, allInsts instances.Set) tfdiags.Diagnostics { + return refactoring.ValidateMoves(stmts, config, allInsts) +} + +func (c *Context) planWalk(config *configs.Config, prevRunState *states.State, rootVariables InputValues, opts *PlanOpts) (*plans.Plan, tfdiags.Diagnostics) { + var diags tfdiags.Diagnostics + log.Printf("[DEBUG] Building and walking plan graph for %s", opts.Mode) + + schemas, moreDiags := c.Schemas(config, prevRunState) + diags = diags.Append(moreDiags) + if diags.HasErrors() { + return nil, diags + } + + prevRunState = prevRunState.DeepCopy() // don't modify the caller's object when we process the moves + moveStmts, moveResults := c.prePlanFindAndApplyMoves(config, prevRunState, opts.Targets) + + graph, walkOp, moreDiags := c.planGraph(config, prevRunState, opts, schemas, true) + diags = diags.Append(moreDiags) + if diags.HasErrors() { + return nil, diags + } + + // If we get here then we should definitely have a non-nil "graph", which + // we can now walk. + changes := plans.NewChanges() + walker, walkDiags := c.walk(graph, walkOp, &graphWalkOpts{ + Config: config, + Schemas: schemas, + InputState: prevRunState, + Changes: changes, + MoveResults: moveResults, + RootVariableValues: rootVariables, + }) + diags = diags.Append(walker.NonFatalDiagnostics) + diags = diags.Append(walkDiags) + diags = diags.Append(c.postPlanValidateMoves(config, moveStmts, walker.InstanceExpander.AllInstances())) + + plan := &plans.Plan{ + UIMode: opts.Mode, + Changes: changes, + PriorState: walker.RefreshState.Close(), + PrevRunState: walker.PrevRunState.Close(), + + // Other fields get populated by Context.Plan after we return + } + return plan, diags +} + +func (c *Context) planGraph(config *configs.Config, prevRunState *states.State, opts *PlanOpts, schemas *Schemas, validate bool) (*Graph, walkOperation, tfdiags.Diagnostics) { + switch mode := opts.Mode; mode { + case plans.NormalMode: + graph, diags := (&PlanGraphBuilder{ + Config: config, + State: prevRunState, + Components: c.components, + Schemas: schemas, + Targets: opts.Targets, + ForceReplace: opts.ForceReplace, + Validate: validate, + skipRefresh: opts.SkipRefresh, + }).Build(addrs.RootModuleInstance) + return graph, walkPlan, diags + case plans.RefreshOnlyMode: + graph, diags := (&PlanGraphBuilder{ + Config: config, + State: prevRunState, + Components: c.components, + Schemas: schemas, + Targets: opts.Targets, + Validate: validate, + skipRefresh: opts.SkipRefresh, + skipPlanChanges: true, // this activates "refresh only" mode. + }).Build(addrs.RootModuleInstance) + return graph, walkPlan, diags + case plans.DestroyMode: + graph, diags := (&DestroyPlanGraphBuilder{ + Config: config, + State: prevRunState, + Components: c.components, + Schemas: schemas, + Targets: opts.Targets, + Validate: validate, + skipRefresh: opts.SkipRefresh, + }).Build(addrs.RootModuleInstance) + return graph, walkPlanDestroy, diags + default: + // The above should cover all plans.Mode values + panic(fmt.Sprintf("unsupported plan mode %s", mode)) + } +} + +// PlanGraphForUI is a last vestage of graphs in the public interface of Context +// (as opposed to graphs as an implementation detail) intended only for use +// by the "terraform graph" command when asked to render a plan-time graph. 
+// +// The result of this is intended only for rendering ot the user as a dot +// graph, and so may change in future in order to make the result more useful +// in that context, even if drifts away from the physical graph that Terraform +// Core currently uses as an implementation detail of planning. +func (c *Context) PlanGraphForUI(config *configs.Config, prevRunState *states.State, mode plans.Mode) (*Graph, tfdiags.Diagnostics) { + // For now though, this really is just the internal graph, confusing + // implementation details and all. + + var diags tfdiags.Diagnostics + + opts := &PlanOpts{Mode: mode} + + schemas, moreDiags := c.Schemas(config, prevRunState) + diags = diags.Append(moreDiags) + if diags.HasErrors() { + return nil, diags + } + + graph, _, moreDiags := c.planGraph(config, prevRunState, opts, schemas, false) + diags = diags.Append(moreDiags) + return graph, diags +} diff --git a/internal/terraform/context_plan2_test.go b/internal/terraform/context_plan2_test.go index a5242ad89967..08796a3c1bf2 100644 --- a/internal/terraform/context_plan2_test.go +++ b/internal/terraform/context_plan2_test.go @@ -69,17 +69,13 @@ resource "test_object" "a" { }) ctx := testContext2(t, &ContextOpts{ - Config: m, - State: state, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - plan, diags := ctx.Plan() - if diags.HasErrors() { - t.Fatal(diags.Err()) - } + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + assertNoErrors(t, diags) if !p.UpgradeResourceStateCalled { t.Errorf("Provider's UpgradeResourceState wasn't called; should've been") @@ -184,17 +180,13 @@ data "test_data_source" "foo" { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, - State: state, }) - plan, diags := ctx.Plan() - if diags.HasErrors() { - t.Fatal(diags.ErrWithWarnings()) - } + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + assertNoErrors(t, diags) for _, res := range plan.Changes.Resources { if res.Action != plans.NoOp { @@ -231,17 +223,13 @@ output "out" { }) ctx := testContext2(t, &ContextOpts{ - Config: m, - State: state, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - plan, diags := ctx.Plan() - if diags.HasErrors() { - t.Fatal(diags.Err()) - } + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + assertNoErrors(t, diags) change, err := plan.Changes.Outputs[0].Decode() if err != nil { @@ -300,16 +288,13 @@ resource "test_object" "a" { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - _, diags := ctx.Plan() - if diags.HasErrors() { - t.Fatal(diags.Err()) - } + _, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) + assertNoErrors(t, diags) } func TestContext2Plan_dataReferencesResourceInModules(t *testing.T) { @@ -376,14 +361,12 @@ resource "test_resource" "b" { }) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, - State: state, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) assertNoErrors(t, diags) oldMod := oldDataAddr.Module @@ -466,19 +449,16 @@ resource "test_object" "a" { }) ctx := testContext2(t, &ContextOpts{ - Config: m, - State: state, Providers: 
map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, - PlanMode: plans.DestroyMode, - SkipRefresh: false, }) - plan, diags := ctx.Plan() - if diags.HasErrors() { - t.Fatal(diags.Err()) - } + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + SkipRefresh: false, // the default + }) + assertNoErrors(t, diags) if !p.UpgradeResourceStateCalled { t.Errorf("Provider's UpgradeResourceState wasn't called; should've been") @@ -569,19 +549,16 @@ resource "test_object" "a" { }) ctx := testContext2(t, &ContextOpts{ - Config: m, - State: state, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, - PlanMode: plans.DestroyMode, - SkipRefresh: true, }) - plan, diags := ctx.Plan() - if diags.HasErrors() { - t.Fatal(diags.Err()) - } + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + SkipRefresh: true, + }) + assertNoErrors(t, diags) if !p.UpgradeResourceStateCalled { t.Errorf("Provider's UpgradeResourceState wasn't called; should've been") @@ -665,17 +642,13 @@ output "result" { state := states.NewState() ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, - State: state, }) - plan, diags := ctx.Plan() - if diags.HasErrors() { - t.Fatal(diags.ErrWithWarnings()) - } + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + assertNoErrors(t, diags) for _, res := range plan.Changes.Resources { if res.Action != plans.Create { @@ -716,18 +689,15 @@ provider "test" { }) ctx := testContext2(t, &ContextOpts{ - Config: m, - State: state, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, - PlanMode: plans.DestroyMode, }) - _, diags := ctx.Plan() - if diags.HasErrors() { - t.Fatal(diags.Err()) - } + _, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) + assertNoErrors(t, diags) } func TestContext2Plan_movedResourceBasic(t *testing.T) { @@ -762,17 +732,17 @@ func TestContext2Plan_movedResourceBasic(t *testing.T) { p := simpleMockProvider() ctx := testContext2(t, &ContextOpts{ - Config: m, - State: state, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, + }) + + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.NormalMode, ForceReplace: []addrs.AbsResourceInstance{ addrA, }, }) - - plan, diags := ctx.Plan() if diags.HasErrors() { t.Fatalf("unexpected errors\n%s", diags.Err().Error()) } @@ -873,15 +843,14 @@ func TestContext2Plan_refreshOnlyMode(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, - State: state, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, - PlanMode: plans.RefreshOnlyMode, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.RefreshOnlyMode, + }) if diags.HasErrors() { t.Fatalf("unexpected errors\n%s", diags.Err().Error()) } @@ -1010,15 +979,14 @@ func TestContext2Plan_refreshOnlyMode_deposed(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, - State: state, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, - PlanMode: plans.RefreshOnlyMode, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.RefreshOnlyMode, + }) if diags.HasErrors() { t.Fatalf("unexpected 
errors\n%s", diags.Err().Error()) } @@ -1089,11 +1057,9 @@ output "root" { }`, }) - ctx := testContext2(t, &ContextOpts{ - Config: m, - }) + ctx := testContext2(t, &ContextOpts{}) - _, diags := ctx.Plan() + _, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if !diags.HasErrors() { t.Fatal("succeeded; want errors") } @@ -1189,17 +1155,13 @@ data "test_data_source" "foo" { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, - State: state, }) - plan, diags := ctx.Plan() - if diags.HasErrors() { - t.Fatal(diags.ErrWithWarnings()) - } + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + assertNoErrors(t, diags) for _, res := range plan.Changes.Resources { switch res.Addr.String() { @@ -1242,17 +1204,17 @@ func TestContext2Plan_forceReplace(t *testing.T) { p := simpleMockProvider() ctx := testContext2(t, &ContextOpts{ - Config: m, - State: state, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, + }) + + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.NormalMode, ForceReplace: []addrs.AbsResourceInstance{ addrA, }, }) - - plan, diags := ctx.Plan() if diags.HasErrors() { t.Fatalf("unexpected errors\n%s", diags.Err().Error()) } @@ -1310,17 +1272,17 @@ func TestContext2Plan_forceReplaceIncompleteAddr(t *testing.T) { p := simpleMockProvider() ctx := testContext2(t, &ContextOpts{ - Config: m, - State: state, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, + }) + + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.NormalMode, ForceReplace: []addrs.AbsResourceInstance{ addrBare, }, }) - - plan, diags := ctx.Plan() if diags.HasErrors() { t.Fatalf("unexpected errors\n%s", diags.Err().Error()) } diff --git a/internal/terraform/context_plan_test.go b/internal/terraform/context_plan_test.go index 8a5bc9e6f9af..475bcb661f13 100644 --- a/internal/terraform/context_plan_test.go +++ b/internal/terraform/context_plan_test.go @@ -30,7 +30,6 @@ func TestContext2Plan_basic(t *testing.T) { m := testModule(t, "plan-good") p := testProvider("aws") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -39,7 +38,7 @@ func TestContext2Plan_basic(t *testing.T) { }, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -52,10 +51,6 @@ func TestContext2Plan_basic(t *testing.T) { t.Errorf("wrong ProviderSHA256s %#v; want %#v", plan.ProviderSHA256s, ctx.providerSHA256s) } - if !ctx.State().Empty() { - t.Fatalf("expected empty state, got %#v\n", ctx.State()) - } - schema := p.GetProviderSchemaResponse.ResourceTypes["aws_instance"].Block ty := schema.ImpliedType() for _, r := range plan.Changes.Resources { @@ -112,14 +107,12 @@ func TestContext2Plan_createBefore_deposed(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -132,8 +125,8 @@ func TestContext2Plan_createBefore_deposed(t *testing.T) { type = aws_instance Deposed ID 1 = foo`) - if 
ctx.State().String() != expectedState { - t.Fatalf("\nexpected: %q\ngot: %q\n", expectedState, ctx.State().String()) + if plan.PriorState.String() != expectedState { + t.Fatalf("\nexpected: %q\ngot: %q\n", expectedState, plan.PriorState.String()) } schema := p.GetProviderSchemaResponse.ResourceTypes["aws_instance"].Block @@ -205,19 +198,18 @@ func TestContext2Plan_createBefore_maintainRoot(t *testing.T) { m := testModule(t, "plan-cbd-maintain-root") p := testProvider("aws") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } - if !ctx.State().Empty() { - t.Fatal("expected empty state, got:", ctx.State()) + if !plan.PriorState.Empty() { + t.Fatal("expected empty prior state, got:", plan.PriorState) } if len(plan.Changes.Resources) != 4 { @@ -241,19 +233,18 @@ func TestContext2Plan_emptyDiff(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } - if !ctx.State().Empty() { - t.Fatal("expected empty state, got:", ctx.State()) + if !plan.PriorState.Empty() { + t.Fatal("expected empty state, got:", plan.PriorState) } if len(plan.Changes.Resources) != 2 { @@ -280,13 +271,12 @@ func TestContext2Plan_escapedVar(t *testing.T) { p := testProvider("aws") p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -321,19 +311,18 @@ func TestContext2Plan_minimal(t *testing.T) { m := testModule(t, "plan-empty") p := testProvider("aws") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } - if !ctx.State().Empty() { - t.Fatal("expected empty state, got:", ctx.State()) + if !plan.PriorState.Empty() { + t.Fatal("expected empty state, got:", plan.PriorState) } if len(plan.Changes.Resources) != 2 { @@ -360,13 +349,12 @@ func TestContext2Plan_modules(t *testing.T) { p := testProvider("aws") p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -419,13 +407,12 @@ func TestContext2Plan_moduleExpand(t *testing.T) { m := testModule(t, "plan-modules-expand") p := testProvider("aws") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - plan, diags := ctx.Plan() + 
plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -434,13 +421,13 @@ func TestContext2Plan_moduleExpand(t *testing.T) { ty := schema.ImpliedType() expected := map[string]struct{}{ - `aws_instance.foo["a"]`: struct{}{}, - `module.count_child[1].aws_instance.foo[0]`: struct{}{}, - `module.count_child[1].aws_instance.foo[1]`: struct{}{}, - `module.count_child[0].aws_instance.foo[0]`: struct{}{}, - `module.count_child[0].aws_instance.foo[1]`: struct{}{}, - `module.for_each_child["a"].aws_instance.foo[1]`: struct{}{}, - `module.for_each_child["a"].aws_instance.foo[0]`: struct{}{}, + `aws_instance.foo["a"]`: {}, + `module.count_child[1].aws_instance.foo[0]`: {}, + `module.count_child[1].aws_instance.foo[1]`: {}, + `module.count_child[0].aws_instance.foo[0]`: {}, + `module.count_child[0].aws_instance.foo[1]`: {}, + `module.for_each_child["a"].aws_instance.foo[1]`: {}, + `module.for_each_child["a"].aws_instance.foo[0]`: {}, } for _, res := range plan.Changes.Resources { @@ -480,13 +467,12 @@ func TestContext2Plan_moduleCycle(t *testing.T) { }) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -535,13 +521,12 @@ func TestContext2Plan_moduleDeadlock(t *testing.T) { p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - plan, err := ctx.Plan() + plan, err := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if err != nil { t.Fatalf("err: %s", err) } @@ -580,13 +565,12 @@ func TestContext2Plan_moduleInput(t *testing.T) { p := testProvider("aws") p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -635,13 +619,12 @@ func TestContext2Plan_moduleInputComputed(t *testing.T) { p := testProvider("aws") p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -687,19 +670,20 @@ func TestContext2Plan_moduleInputFromVar(t *testing.T) { p := testProvider("aws") p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - Variables: InputValues{ + }) + + plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ + Mode: plans.NormalMode, + SetVariables: InputValues{ "foo": &InputValue{ Value: cty.StringVal("52"), SourceType: ValueFromCaller, }, }, }) - - plan, diags := ctx.Plan() if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -755,13 +739,12 @@ func TestContext2Plan_moduleMultiVar(t *testing.T) { }) ctx := testContext2(t, 
&ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -830,14 +813,12 @@ func TestContext2Plan_moduleOrphans(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -880,8 +861,8 @@ module.child: ID = baz provider = provider["registry.terraform.io/hashicorp/aws"]` - if ctx.State().String() != expectedState { - t.Fatalf("\nexpected state: %q\n\ngot: %q", expectedState, ctx.State().String()) + if plan.PriorState.String() != expectedState { + t.Fatalf("\nexpected state: %q\n\ngot: %q", expectedState, plan.PriorState.String()) } } @@ -922,17 +903,15 @@ func TestContext2Plan_moduleOrphansWithProvisioner(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, Provisioners: map[string]provisioners.Factory{ "shell": testProvisionerFuncFixed(pr), }, - State: state, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -985,8 +964,8 @@ module.parent.child2: provider = provider["registry.terraform.io/hashicorp/aws"] type = aws_instance` - if expectedState != ctx.State().String() { - t.Fatalf("\nexpect state:\n%s\n\ngot state:\n%s\n", expectedState, ctx.State().String()) + if expectedState != plan.PriorState.String() { + t.Fatalf("\nexpect state:\n%s\n\ngot state:\n%s\n", expectedState, plan.PriorState.String()) } } @@ -996,7 +975,6 @@ func TestContext2Plan_moduleProviderInherit(t *testing.T) { m := testModule(t, "plan-module-provider-inherit") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): func() (providers.Interface, error) { l.Lock() @@ -1038,7 +1016,7 @@ func TestContext2Plan_moduleProviderInherit(t *testing.T) { }, }) - _, err := ctx.Plan() + _, err := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if err != nil { t.Fatalf("err: %s", err) } @@ -1058,7 +1036,6 @@ func TestContext2Plan_moduleProviderInheritDeep(t *testing.T) { m := testModule(t, "plan-module-provider-inherit-deep") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): func() (providers.Interface, error) { l.Lock() @@ -1103,7 +1080,7 @@ func TestContext2Plan_moduleProviderInheritDeep(t *testing.T) { }, }) - _, err := ctx.Plan() + _, err := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if err != nil { t.Fatalf("err: %s", err) } @@ -1115,7 +1092,6 @@ func TestContext2Plan_moduleProviderDefaultsVar(t *testing.T) { m := testModule(t, "plan-module-provider-defaults-var") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): func() (providers.Interface, error) { l.Lock() @@ -1157,15 +1133,17 @@ func TestContext2Plan_moduleProviderDefaultsVar(t *testing.T) { return p, nil }, }, - Variables: InputValues{ + }) + + _, err := ctx.Plan(m, 
states.NewState(), &PlanOpts{ + Mode: plans.NormalMode, + SetVariables: InputValues{ "foo": &InputValue{ Value: cty.StringVal("root"), SourceType: ValueFromCaller, }, }, }) - - _, err := ctx.Plan() if err != nil { t.Fatalf("err: %s", err) } @@ -1199,13 +1177,12 @@ func TestContext2Plan_moduleProviderVar(t *testing.T) { }) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -1242,13 +1219,12 @@ func TestContext2Plan_moduleVar(t *testing.T) { p := testProvider("aws") p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -1296,13 +1272,12 @@ func TestContext2Plan_moduleVarWrongTypeBasic(t *testing.T) { m := testModule(t, "plan-module-wrong-var-type") p := testProvider("aws") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - _, diags := ctx.Plan() + _, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if !diags.HasErrors() { t.Fatalf("succeeded; want errors") } @@ -1312,13 +1287,12 @@ func TestContext2Plan_moduleVarWrongTypeNested(t *testing.T) { m := testModule(t, "plan-module-wrong-var-type-nested") p := testProvider("null") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("null"): testProviderFuncFixed(p), }, }) - _, diags := ctx.Plan() + _, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if !diags.HasErrors() { t.Fatalf("succeeded; want errors") } @@ -1328,13 +1302,12 @@ func TestContext2Plan_moduleVarWithDefaultValue(t *testing.T) { m := testModule(t, "plan-module-var-with-default-value") p := testProvider("null") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("null"): testProviderFuncFixed(p), }, }) - _, diags := ctx.Plan() + _, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -1345,13 +1318,12 @@ func TestContext2Plan_moduleVarComputed(t *testing.T) { p := testProvider("aws") p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -1407,14 +1379,12 @@ func TestContext2Plan_preventDestroy_bad(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - plan, err := ctx.Plan() + plan, err := ctx.Plan(m, state, DefaultPlanOpts) expectedErr := "aws_instance.foo has lifecycle.prevent_destroy" if !strings.Contains(fmt.Sprintf("%s", err), expectedErr) { @@ -1442,14 +1412,12 @@ func 
TestContext2Plan_preventDestroy_good(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -1483,14 +1451,12 @@ func TestContext2Plan_preventDestroy_countBad(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - plan, err := ctx.Plan() + plan, err := ctx.Plan(m, state, DefaultPlanOpts) expectedErr := "aws_instance.foo[1] has lifecycle.prevent_destroy" if !strings.Contains(fmt.Sprintf("%s", err), expectedErr) { @@ -1535,14 +1501,12 @@ func TestContext2Plan_preventDestroy_countGood(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -1580,14 +1544,12 @@ func TestContext2Plan_preventDestroy_countGoodNoChange(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -1613,22 +1575,21 @@ func TestContext2Plan_preventDestroy_destroyPlan(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, - PlanMode: plans.DestroyMode, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) expectedErr := "aws_instance.foo has lifecycle.prevent_destroy" if !strings.Contains(fmt.Sprintf("%s", diags.Err()), expectedErr) { if plan != nil { t.Logf(legacyDiffComparisonString(plan.Changes)) } - t.Fatalf("expected err would contain %q\nerr: %s", expectedErr, diags.Err()) + t.Fatalf("expected diagnostics would contain %q\nactual diags: %s", expectedErr, diags.Err()) } } @@ -1637,7 +1598,6 @@ func TestContext2Plan_provisionerCycle(t *testing.T) { p := testProvider("aws") pr := testProvisioner() ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -1646,7 +1606,7 @@ func TestContext2Plan_provisionerCycle(t *testing.T) { }, }) - _, diags := ctx.Plan() + _, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if !diags.HasErrors() { t.Fatalf("succeeded; want errors") } @@ -1657,13 +1617,12 @@ func TestContext2Plan_computed(t *testing.T) { p := testProvider("aws") p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -1730,13 +1689,12 @@ func TestContext2Plan_blockNestingGroup(t 
*testing.T) { } } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -1800,13 +1758,12 @@ func TestContext2Plan_computedDataResource(t *testing.T) { }) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -1866,16 +1823,15 @@ func TestContext2Plan_computedInFunction(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - diags := ctx.Validate() + diags := ctx.Validate(m) assertNoErrors(t, diags) - _, diags = ctx.Plan() + _, diags = ctx.Plan(m, states.NewState(), DefaultPlanOpts) assertNoErrors(t, diags) if !p.ReadDataSourceCalled { @@ -1906,13 +1862,12 @@ func TestContext2Plan_computedDataCountResource(t *testing.T) { }) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -1935,13 +1890,12 @@ func TestContext2Plan_localValueCount(t *testing.T) { m := testModule(t, "plan-local-value-count") p := testProvider("test") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -2015,19 +1969,12 @@ func TestContext2Plan_dataResourceBecomesComputed(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - _, diags := ctx.Refresh() - if diags.HasErrors() { - t.Fatalf("unexpected errors during refresh: %s", diags.Err()) - } - - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors during plan: %s", diags.Err()) } @@ -2072,13 +2019,12 @@ func TestContext2Plan_computedList(t *testing.T) { }) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -2136,13 +2082,12 @@ func TestContext2Plan_computedMultiIndex(t *testing.T) { }) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -2191,13 +2136,12 @@ func 
TestContext2Plan_count(t *testing.T) { p := testProvider("aws") p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -2265,13 +2209,12 @@ func TestContext2Plan_countComputed(t *testing.T) { m := testModule(t, "plan-count-computed") p := testProvider("aws") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - _, err := ctx.Plan() + _, err := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if err == nil { t.Fatal("should error") } @@ -2282,13 +2225,12 @@ func TestContext2Plan_countComputedModule(t *testing.T) { p := testProvider("aws") p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - _, err := ctx.Plan() + _, err := ctx.Plan(m, states.NewState(), DefaultPlanOpts) expectedErr := `The "count" value depends on resource attributes` if !strings.Contains(fmt.Sprintf("%s", err), expectedErr) { @@ -2302,13 +2244,12 @@ func TestContext2Plan_countModuleStatic(t *testing.T) { p := testProvider("aws") p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -2356,13 +2297,12 @@ func TestContext2Plan_countModuleStaticGrandchild(t *testing.T) { p := testProvider("aws") p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -2410,13 +2350,12 @@ func TestContext2Plan_countIndex(t *testing.T) { p := testProvider("aws") p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -2461,19 +2400,20 @@ func TestContext2Plan_countVar(t *testing.T) { p := testProvider("aws") p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - Variables: InputValues{ + }) + + plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ + Mode: plans.NormalMode, + SetVariables: InputValues{ "instance_count": &InputValue{ Value: cty.StringVal("3"), SourceType: ValueFromCaller, }, }, }) - - plan, diags := ctx.Plan() if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -2545,13 +2485,12 @@ func TestContext2Plan_countZero(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, 
Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -2586,13 +2525,12 @@ func TestContext2Plan_countOneIndex(t *testing.T) { p := testProvider("aws") p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -2665,14 +2603,12 @@ func TestContext2Plan_countDecreaseToOne(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -2729,8 +2665,8 @@ aws_instance.foo.2: ID = bar provider = provider["registry.terraform.io/hashicorp/aws"]` - if ctx.State().String() != expectedState { - t.Fatalf("epected state:\n%q\n\ngot state:\n%q\n", expectedState, ctx.State().String()) + if plan.PriorState.String() != expectedState { + t.Fatalf("epected state:\n%q\n\ngot state:\n%q\n", expectedState, plan.PriorState.String()) } } @@ -2751,14 +2687,12 @@ func TestContext2Plan_countIncreaseFromNotSet(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -2830,14 +2764,12 @@ func TestContext2Plan_countIncreaseFromOne(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -2923,14 +2855,12 @@ func TestContext2Plan_countIncreaseFromOneCorrupted(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -3049,14 +2979,12 @@ func TestContext2Plan_countIncreaseWithSplatReference(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -3105,13 +3033,12 @@ func TestContext2Plan_forEach(t *testing.T) { m := testModule(t, "plan-for-each") p := testProvider("aws") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - plan, 
diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -3140,19 +3067,20 @@ func TestContext2Plan_forEachUnknownValue(t *testing.T) { m := testModule(t, "plan-for-each-unknown-value") p := testProvider("aws") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - Variables: InputValues{ + }) + + _, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ + Mode: plans.NormalMode, + SetVariables: InputValues{ "foo": { Value: cty.UnknownVal(cty.String), SourceType: ValueFromCLIArg, }, }, }) - - _, diags := ctx.Plan() if !diags.HasErrors() { // Should get this error: // Invalid for_each argument: The "for_each" value depends on resource attributes that cannot be determined until apply... @@ -3190,15 +3118,14 @@ func TestContext2Plan_destroy(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, - PlanMode: plans.DestroyMode, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -3253,15 +3180,14 @@ func TestContext2Plan_moduleDestroy(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, - PlanMode: plans.DestroyMode, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -3316,15 +3242,14 @@ func TestContext2Plan_moduleDestroyCycle(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, - PlanMode: plans.DestroyMode, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -3378,15 +3303,14 @@ func TestContext2Plan_moduleDestroyMultivar(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, - PlanMode: plans.DestroyMode, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -3437,13 +3361,12 @@ func TestContext2Plan_pathVar(t *testing.T) { }) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("err: %s", diags.Err()) } @@ -3493,14 +3416,12 @@ func TestContext2Plan_diffVar(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: 
%s", diags.Err()) } @@ -3553,14 +3474,13 @@ func TestContext2Plan_hook(t *testing.T) { h := new(MockHook) p := testProvider("aws") ctx := testContext2(t, &ContextOpts{ - Config: m, - Hooks: []Hook{h}, + Hooks: []Hook{h}, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - _, diags := ctx.Plan() + _, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -3580,13 +3500,12 @@ func TestContext2Plan_closeProvider(t *testing.T) { m := testModule(t, "plan-close-module-provider") p := testProvider("aws") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - _, diags := ctx.Plan() + _, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -3612,14 +3531,12 @@ func TestContext2Plan_orphan(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -3669,13 +3586,12 @@ func TestContext2Plan_shadowUuid(t *testing.T) { m := testModule(t, "plan-shadow-uuid") p := testProvider("aws") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - _, diags := ctx.Plan() + _, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -3696,14 +3612,12 @@ func TestContext2Plan_state(t *testing.T) { mustProviderConfig(`provider["registry.terraform.io/hashicorp/aws"]`), ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -3768,7 +3682,7 @@ func TestContext2Plan_requiresReplace(t *testing.T) { Block: &configschema.Block{}, }, ResourceTypes: map[string]providers.Schema{ - "test_thing": providers.Schema{ + "test_thing": { Block: &configschema.Block{ Attributes: map[string]*configschema.Attribute{ "v": { @@ -3801,14 +3715,12 @@ func TestContext2Plan_requiresReplace(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, - State: state, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -3869,14 +3781,12 @@ func TestContext2Plan_taint(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -3949,14 +3859,12 @@ func TestContext2Plan_taintIgnoreChanges(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: 
map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -4032,14 +3940,12 @@ func TestContext2Plan_taintDestroyInterpolatedCountRace(t *testing.T) { for i := 0; i < 100; i++ { ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state.DeepCopy(), }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state.DeepCopy(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -4089,18 +3995,19 @@ func TestContext2Plan_targeted(t *testing.T) { p := testProvider("aws") p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, + }) + + plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ + Mode: plans.NormalMode, Targets: []addrs.Targetable{ addrs.RootModuleInstance.Resource( addrs.ManagedResourceMode, "aws_instance", "foo", ), }, }) - - plan, diags := ctx.Plan() if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -4140,16 +4047,17 @@ func TestContext2Plan_targetedCrossModule(t *testing.T) { p := testProvider("aws") p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, + }) + + plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ + Mode: plans.NormalMode, Targets: []addrs.Targetable{ addrs.RootModuleInstance.Child("B", addrs.NoKey), }, }) - - plan, diags := ctx.Plan() if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -4204,16 +4112,17 @@ func TestContext2Plan_targetedModuleWithProvider(t *testing.T) { }) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("null"): testProviderFuncFixed(p), }, + }) + + plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ + Mode: plans.NormalMode, Targets: []addrs.Targetable{ addrs.RootModuleInstance.Child("child2", addrs.NoKey), }, }) - - plan, diags := ctx.Plan() if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -4260,20 +4169,19 @@ func TestContext2Plan_targetedOrphan(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, - PlanMode: plans.DestroyMode, + }) + + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, Targets: []addrs.Targetable{ addrs.RootModuleInstance.Resource( addrs.ManagedResourceMode, "aws_instance", "orphan", ), }, }) - - plan, diags := ctx.Plan() if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -4327,20 +4235,19 @@ func TestContext2Plan_targetedModuleOrphan(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, - PlanMode: plans.DestroyMode, + }) + + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, Targets: []addrs.Targetable{ addrs.RootModuleInstance.Child("child", addrs.NoKey).Resource( 
addrs.ManagedResourceMode, "aws_instance", "orphan", ), }, }) - - plan, diags := ctx.Plan() if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -4371,10 +4278,12 @@ func TestContext2Plan_targetedModuleUntargetedVariable(t *testing.T) { p := testProvider("aws") p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, + }) + + plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ Targets: []addrs.Targetable{ addrs.RootModuleInstance.Resource( addrs.ManagedResourceMode, "aws_instance", "blue", @@ -4382,8 +4291,6 @@ func TestContext2Plan_targetedModuleUntargetedVariable(t *testing.T) { addrs.RootModuleInstance.Child("blue_mod", addrs.NoKey), }, }) - - plan, diags := ctx.Plan() if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -4427,18 +4334,18 @@ func TestContext2Plan_outputContainsTargetedResource(t *testing.T) { m := testModule(t, "plan-untargeted-resource-output") p := testProvider("aws") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, + }) + + _, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ Targets: []addrs.Targetable{ addrs.RootModuleInstance.Child("mod", addrs.NoKey).Resource( addrs.ManagedResourceMode, "aws_instance", "a", ), }, }) - - _, diags := ctx.Plan() if diags.HasErrors() { t.Fatalf("err: %s", diags) } @@ -4477,19 +4384,18 @@ func TestContext2Plan_targetedOverTen(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, + }) + + plan, diags := ctx.Plan(m, state, &PlanOpts{ Targets: []addrs.Targetable{ addrs.RootModuleInstance.ResourceInstance( addrs.ManagedResourceMode, "aws_instance", "foo", addrs.IntKey(1), ), }, }) - - plan, diags := ctx.Plan() if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -4519,19 +4425,21 @@ func TestContext2Plan_provider(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - Variables: InputValues{ + }) + opts := &PlanOpts{ + Mode: plans.NormalMode, + SetVariables: InputValues{ "foo": &InputValue{ Value: cty.StringVal("bar"), SourceType: ValueFromCaller, }, }, - }) + } - if _, err := ctx.Plan(); err != nil { + if _, err := ctx.Plan(m, states.NewState(), opts); err != nil { t.Fatalf("err: %s", err) } @@ -4544,13 +4452,12 @@ func TestContext2Plan_varListErr(t *testing.T) { m := testModule(t, "plan-var-list-err") p := testProvider("aws") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - _, err := ctx.Plan() + _, err := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if err == nil { t.Fatal("should error") @@ -4574,20 +4481,20 @@ func TestContext2Plan_ignoreChanges(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - Variables: InputValues{ + }) + + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.NormalMode, + SetVariables: InputValues{ "foo": &InputValue{ Value: cty.StringVal("ami-1234abcd"), SourceType: 
ValueFromCaller, }, }, - State: state, }) - - plan, diags := ctx.Plan() if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -4633,11 +4540,14 @@ func TestContext2Plan_ignoreChangesWildcard(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - Variables: InputValues{ + }) + + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.NormalMode, + SetVariables: InputValues{ "foo": &InputValue{ Value: cty.StringVal("ami-1234abcd"), SourceType: ValueFromCaller, @@ -4647,10 +4557,7 @@ func TestContext2Plan_ignoreChangesWildcard(t *testing.T) { SourceType: ValueFromCaller, }, }, - State: state, }) - - plan, diags := ctx.Plan() if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -4700,14 +4607,12 @@ func TestContext2Plan_ignoreChangesInMap(t *testing.T) { m := testModule(t, "plan-ignore-changes-in-map") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, - State: s, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, s, DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -4757,20 +4662,20 @@ func TestContext2Plan_ignoreChangesSensitive(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - Variables: InputValues{ + }) + + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.NormalMode, + SetVariables: InputValues{ "foo": &InputValue{ Value: cty.StringVal("ami-1234abcd"), SourceType: ValueFromCaller, }, }, - State: state, }) - - plan, diags := ctx.Plan() if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -4827,13 +4732,12 @@ func TestContext2Plan_moduleMapLiteral(t *testing.T) { return testDiffFn(req) } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - _, diags := ctx.Plan() + _, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -4870,13 +4774,12 @@ func TestContext2Plan_computedValueInMap(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -4926,13 +4829,12 @@ func TestContext2Plan_moduleVariableFromSplat(t *testing.T) { }) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -5009,13 +4911,12 @@ func TestContext2Plan_createBeforeDestroy_depends_datasource(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { 
t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -5062,8 +4963,8 @@ func TestContext2Plan_createBeforeDestroy_depends_datasource(t *testing.T) { } wantAddrs := map[string]struct{}{ - "aws_instance.foo[0]": struct{}{}, - "aws_instance.foo[1]": struct{}{}, + "aws_instance.foo[0]": {}, + "aws_instance.foo[1]": {}, } if !cmp.Equal(seenAddrs, wantAddrs) { t.Errorf("incorrect addresses in changeset:\n%s", cmp.Diff(wantAddrs, seenAddrs)) @@ -5084,13 +4985,12 @@ func TestContext2Plan_listOrder(t *testing.T) { }, }) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -5153,14 +5053,12 @@ func TestContext2Plan_ignoreChangesWithFlatmaps(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -5269,24 +5167,17 @@ func TestContext2Plan_resourceNestedCount(t *testing.T) { mustProviderConfig(`provider["registry.terraform.io/hashicorp/aws"]`), ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - diags := ctx.Validate() + diags := ctx.Validate(m) if diags.HasErrors() { t.Fatalf("validate errors: %s", diags.Err()) } - _, diags = ctx.Refresh() - if diags.HasErrors() { - t.Fatalf("refresh errors: %s", diags.Err()) - } - - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("plan errors: %s", diags.Err()) } @@ -5330,13 +5221,12 @@ func TestContext2Plan_computedAttrRefTypeMismatch(t *testing.T) { return } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - _, diags := ctx.Plan() + _, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if !diags.HasErrors() { t.Fatalf("Succeeded; want type mismatch error for 'ami' argument") } @@ -5361,18 +5251,17 @@ func TestContext2Plan_selfRef(t *testing.T) { m := testModule(t, "plan-self-ref") c := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - diags := c.Validate() + diags := c.Validate(m) if diags.HasErrors() { t.Fatalf("unexpected validation failure: %s", diags.Err()) } - _, diags = c.Plan() + _, diags = c.Plan(m, states.NewState(), DefaultPlanOpts) if !diags.HasErrors() { t.Fatalf("plan succeeded; want error") } @@ -5398,18 +5287,17 @@ func TestContext2Plan_selfRefMulti(t *testing.T) { m := testModule(t, "plan-self-ref-multi") c := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - diags := c.Validate() + diags := c.Validate(m) if diags.HasErrors() { t.Fatalf("unexpected validation failure: %s", diags.Err()) } - _, diags = c.Plan() + _, diags = c.Plan(m, states.NewState(), DefaultPlanOpts) if !diags.HasErrors() { t.Fatalf("plan succeeded; want error") } @@ -5435,18 +5323,17 @@ func 
TestContext2Plan_selfRefMultiAll(t *testing.T) { m := testModule(t, "plan-self-ref-multi-all") c := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - diags := c.Validate() + diags := c.Validate(m) if diags.HasErrors() { t.Fatalf("unexpected validation failure: %s", diags.Err()) } - _, diags = c.Plan() + _, diags = c.Plan(m, states.NewState(), DefaultPlanOpts) if !diags.HasErrors() { t.Fatalf("plan succeeded; want error") } @@ -5481,13 +5368,12 @@ output "out" { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - _, diags := ctx.Plan() + _, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if !diags.HasErrors() { // Should get this error: // Unsupported attribute: This object does not have an attribute named "missing" @@ -5528,13 +5414,12 @@ resource "aws_instance" "foo" { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - _, diags := ctx.Plan() + _, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if !diags.HasErrors() { // Should get this error: // Unsupported attribute: This object does not have an attribute named "missing" @@ -5575,13 +5460,12 @@ resource "aws_instance" "foo" { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - _, diags := ctx.Plan() + _, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if !diags.HasErrors() { // Should get this error: // Unsupported attribute: This object does not have an attribute named "missing" @@ -5599,13 +5483,12 @@ func TestContext2Plan_variableSensitivity(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -5660,19 +5543,20 @@ func TestContext2Plan_variableSensitivityModule(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - Variables: InputValues{ + }) + + plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ + Mode: plans.NormalMode, + SetVariables: InputValues{ "another_var": &InputValue{ Value: cty.StringVal("boop"), SourceType: ValueFromCaller, }, }, }) - - plan, diags := ctx.Plan() if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -5768,13 +5652,12 @@ func TestContext2Plan_requiredModuleOutput(t *testing.T) { }) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -5832,13 +5715,12 @@ func TestContext2Plan_requiredModuleObject(t *testing.T) { }) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - plan, diags := ctx.Plan() + plan, 
diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -5916,14 +5798,12 @@ resource "aws_instance" "foo" { p := testProvider("aws") p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) if diags.HasErrors() { t.Fatal(diags.ErrWithWarnings()) } @@ -5977,13 +5857,12 @@ output"out" { p := testProvider("aws") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - _, diags := ctx.Plan() + _, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatal(diags.ErrWithWarnings()) } @@ -6020,14 +5899,15 @@ resource "aws_instance" "foo" { targets = append(targets, target.Subject) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - Targets: targets, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ + Mode: plans.NormalMode, + Targets: targets, + }) if diags.HasErrors() { t.Fatal(diags.ErrWithWarnings()) } @@ -6077,14 +5957,15 @@ resource "aws_instance" "foo" { targets := []addrs.Targetable{target.Subject} ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - Targets: targets, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ + Mode: plans.NormalMode, + Targets: targets, + }) if diags.HasErrors() { t.Fatal(diags.ErrWithWarnings()) } @@ -6139,13 +6020,12 @@ resource "aws_instance" "foo" { p := testProvider("aws") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - _, diags := ctx.Plan() + _, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatal(diags.ErrWithWarnings()) } @@ -6195,14 +6075,12 @@ data "test_data_source" "foo" {} ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, - State: state, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) if diags.HasErrors() { t.Fatal(diags.ErrWithWarnings()) } @@ -6255,14 +6133,12 @@ resource "test_instance" "b" { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, - State: state, }) - _, diags := ctx.Plan() + _, diags := ctx.Plan(m, state, DefaultPlanOpts) assertNoErrors(t, diags) } @@ -6271,16 +6147,17 @@ func TestContext2Plan_targetedModuleInstance(t *testing.T) { p := testProvider("aws") p.PlanResourceChangeFn = testDiffFn ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, + }) + + plan, diags := ctx.Plan(m, states.NewState(), &PlanOpts{ + Mode: plans.NormalMode, Targets: []addrs.Targetable{ addrs.RootModuleInstance.Child("mod", addrs.IntKey(0)), }, }) - - plan, diags := ctx.Plan() if diags.HasErrors() { 
t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -6329,13 +6206,12 @@ data "test_data_source" "d" { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatal(diags.ErrWithWarnings()) } @@ -6387,13 +6263,12 @@ data "test_data_source" "e" { `}) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - _, diags := ctx.Plan() + _, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) assertNoErrors(t, diags) } @@ -6420,15 +6295,15 @@ resource "test_instance" "a" { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, - State: state, - SkipRefresh: true, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.NormalMode, + SkipRefresh: true, + }) assertNoErrors(t, diags) if p.ReadResourceCalled { @@ -6482,13 +6357,12 @@ data "test_data_source" "b" { }) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - _, diags := ctx.Plan() + _, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) assertNoErrors(t, diags) // The change to data source a should not prevent data source b from being @@ -6524,12 +6398,11 @@ resource "test_instance" "a" { }) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - _, diags := ctx.Plan() + _, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatal(diags.Err()) } @@ -6594,13 +6467,11 @@ resource "test_instance" "a" { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, - State: state, }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) if diags.HasErrors() { t.Fatal(diags.Err()) } @@ -6659,13 +6530,11 @@ resource "test_instance" "a" { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, - State: state, }) - _, diags := ctx.Plan() + _, diags := ctx.Plan(m, state, DefaultPlanOpts) if diags.HasErrors() { t.Fatal(diags.Err()) } @@ -6706,7 +6575,6 @@ resource "test_instance" "a" { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), // We still need to be able to locate the provider to decode the @@ -6714,9 +6582,8 @@ resource "test_instance" "a" { // only used for an orphaned data source. 
addrs.NewProvider("registry.terraform.io", "local", "test"): testProviderFuncFixed(p), }, - State: state, }) - _, diags := ctx.Plan() + _, diags := ctx.Plan(m, state, DefaultPlanOpts) if diags.HasErrors() { t.Fatal(diags.Err()) } @@ -6739,33 +6606,32 @@ resource "test_resource" "foo" { p := testProvider("test") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, - State: states.BuildState(func(s *states.SyncState) { - s.SetResourceInstanceCurrent( - addrs.Resource{ - Mode: addrs.ManagedResourceMode, - Type: "test_resource", - Name: "foo", - }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), - &states.ResourceInstanceObjectSrc{ - Status: states.ObjectReady, - AttrsJSON: []byte(`{"id":"foo", "value":"hello", "sensitive_value":"hello"}`), - AttrSensitivePaths: []cty.PathValueMarks{ - {Path: cty.Path{cty.GetAttrStep{Name: "value"}}, Marks: cty.NewValueMarks(marks.Sensitive)}, - {Path: cty.Path{cty.GetAttrStep{Name: "sensitive_value"}}, Marks: cty.NewValueMarks(marks.Sensitive)}, - }, - }, - addrs.AbsProviderConfig{ - Provider: addrs.NewDefaultProvider("test"), - Module: addrs.RootModule, + }) + state := states.BuildState(func(s *states.SyncState) { + s.SetResourceInstanceCurrent( + addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "test_resource", + Name: "foo", + }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"foo", "value":"hello", "sensitive_value":"hello"}`), + AttrSensitivePaths: []cty.PathValueMarks{ + {Path: cty.Path{cty.GetAttrStep{Name: "value"}}, Marks: cty.NewValueMarks(marks.Sensitive)}, + {Path: cty.Path{cty.GetAttrStep{Name: "sensitive_value"}}, Marks: cty.NewValueMarks(marks.Sensitive)}, }, - ) - }), + }, + addrs.AbsProviderConfig{ + Provider: addrs.NewDefaultProvider("test"), + Module: addrs.RootModule, + }, + ) }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) if diags.HasErrors() { t.Fatal(diags.Err()) } @@ -6782,13 +6648,12 @@ func TestContext2Plan_variableCustomValidationsSensitive(t *testing.T) { p := testProvider("test") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - _, diags := ctx.Plan() + _, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if !diags.HasErrors() { t.Fatal("succeeded; want errors") } @@ -6807,14 +6672,12 @@ output "planned" { `, }) - ctx := testContext2(t, &ContextOpts{ - Config: m, - State: states.BuildState(func(s *states.SyncState) { - r := s.Module(addrs.RootModuleInstance) - r.SetOutputValue("planned", cty.NullVal(cty.DynamicPseudoType), false) - }), + ctx := testContext2(t, &ContextOpts{}) + state := states.BuildState(func(s *states.SyncState) { + r := s.Module(addrs.RootModuleInstance) + r.SetOutputValue("planned", cty.NullVal(cty.DynamicPseudoType), false) }) - plan, diags := ctx.Plan() + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) if diags.HasErrors() { t.Fatal(diags.Err()) } @@ -6836,11 +6699,8 @@ output "planned" { `, }) - ctx := testContext2(t, &ContextOpts{ - Config: m, - State: states.NewState(), - }) - plan, diags := ctx.Plan() + ctx := testContext2(t, &ContextOpts{}) + plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) if diags.HasErrors() { t.Fatal(diags.Err()) } diff --git a/internal/terraform/context_refresh.go 
b/internal/terraform/context_refresh.go new file mode 100644 index 000000000000..cac5232b0d0f --- /dev/null +++ b/internal/terraform/context_refresh.go @@ -0,0 +1,37 @@ +package terraform + +import ( + "log" + + "github.com/hashicorp/terraform/internal/configs" + "github.com/hashicorp/terraform/internal/plans" + "github.com/hashicorp/terraform/internal/states" + "github.com/hashicorp/terraform/internal/tfdiags" +) + +// Refresh is a vestigial operation that is equivalent to call to Plan and +// then taking the prior state of the resulting plan. +// +// We retain this only as a measure of semi-backward-compatibility for +// automation relying on the "terraform refresh" subcommand. The modern way +// to get this effect is to create and then apply a plan in the refresh-only +// mode. +func (c *Context) Refresh(config *configs.Config, prevRunState *states.State, opts *PlanOpts) (*states.State, tfdiags.Diagnostics) { + if opts == nil { + // This fallback is only here for tests, not for real code. + opts = &PlanOpts{ + Mode: plans.NormalMode, + } + } + if opts.Mode != plans.NormalMode { + panic("can only Refresh in the normal planning mode") + } + + log.Printf("[DEBUG] Refresh is really just plan now, so creating a %s plan", opts.Mode) + p, diags := c.Plan(config, prevRunState, opts) + if diags.HasErrors() { + return nil, diags + } + + return p.PriorState, diags +} diff --git a/internal/terraform/context_refresh_test.go b/internal/terraform/context_refresh_test.go index 7f58f1a0a04a..dd319254a6f2 100644 --- a/internal/terraform/context_refresh_test.go +++ b/internal/terraform/context_refresh_test.go @@ -14,6 +14,7 @@ import ( "github.com/hashicorp/terraform/internal/addrs" "github.com/hashicorp/terraform/internal/configs/configschema" "github.com/hashicorp/terraform/internal/configs/hcl2shim" + "github.com/hashicorp/terraform/internal/plans" "github.com/hashicorp/terraform/internal/providers" "github.com/hashicorp/terraform/internal/states" ) @@ -34,11 +35,9 @@ func TestContext2Refresh(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) schema := p.GetProviderSchemaResponse.ResourceTypes["aws_instance"].Block @@ -52,7 +51,7 @@ func TestContext2Refresh(t *testing.T) { NewState: readState, } - s, diags := ctx.Refresh() + s, diags := ctx.Refresh(m, state, &PlanOpts{Mode: plans.NormalMode}) if diags.HasErrors() { t.Fatal(diags.Err()) } @@ -123,17 +122,15 @@ func TestContext2Refresh_dynamicAttr(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, - State: startingState, }) schema := p.GetProviderSchemaResponse.ResourceTypes["test_instance"].Block ty := schema.ImpliedType() - s, diags := ctx.Refresh() + s, diags := ctx.Refresh(m, startingState, &PlanOpts{Mode: plans.NormalMode}) if diags.HasErrors() { t.Fatal(diags.Err()) } @@ -200,13 +197,12 @@ func TestContext2Refresh_dataComputedModuleVar(t *testing.T) { }) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - s, diags := ctx.Refresh() + s, diags := ctx.Refresh(m, states.NewState(), &PlanOpts{Mode: plans.NormalMode}) if diags.HasErrors() { t.Fatalf("refresh errors: %s", diags.Err()) } @@ -261,16 +257,9 @@ func TestContext2Refresh_targeted(t *testing.T) { m := 
testModule(t, "refresh-targeted") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, - Targets: []addrs.Targetable{ - addrs.RootModuleInstance.Resource( - addrs.ManagedResourceMode, "aws_instance", "me", - ), - }, }) refreshedResources := make([]string, 0, 2) @@ -281,7 +270,14 @@ func TestContext2Refresh_targeted(t *testing.T) { } } - _, diags := ctx.Refresh() + _, diags := ctx.Refresh(m, state, &PlanOpts{ + Mode: plans.NormalMode, + Targets: []addrs.Targetable{ + addrs.RootModuleInstance.Resource( + addrs.ManagedResourceMode, "aws_instance", "me", + ), + }, + }) if diags.HasErrors() { t.Fatalf("refresh errors: %s", diags.Err()) } @@ -339,16 +335,9 @@ func TestContext2Refresh_targetedCount(t *testing.T) { m := testModule(t, "refresh-targeted-count") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, - Targets: []addrs.Targetable{ - addrs.RootModuleInstance.Resource( - addrs.ManagedResourceMode, "aws_instance", "me", - ), - }, }) refreshedResources := make([]string, 0, 2) @@ -359,7 +348,14 @@ func TestContext2Refresh_targetedCount(t *testing.T) { } } - _, diags := ctx.Refresh() + _, diags := ctx.Refresh(m, state, &PlanOpts{ + Mode: plans.NormalMode, + Targets: []addrs.Targetable{ + addrs.RootModuleInstance.Resource( + addrs.ManagedResourceMode, "aws_instance", "me", + ), + }, + }) if diags.HasErrors() { t.Fatalf("refresh errors: %s", diags.Err()) } @@ -425,16 +421,9 @@ func TestContext2Refresh_targetedCountIndex(t *testing.T) { m := testModule(t, "refresh-targeted-count") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, - Targets: []addrs.Targetable{ - addrs.RootModuleInstance.ResourceInstance( - addrs.ManagedResourceMode, "aws_instance", "me", addrs.IntKey(0), - ), - }, }) refreshedResources := make([]string, 0, 2) @@ -445,7 +434,14 @@ func TestContext2Refresh_targetedCountIndex(t *testing.T) { } } - _, diags := ctx.Refresh() + _, diags := ctx.Refresh(m, state, &PlanOpts{ + Mode: plans.NormalMode, + Targets: []addrs.Targetable{ + addrs.RootModuleInstance.ResourceInstance( + addrs.ManagedResourceMode, "aws_instance", "me", addrs.IntKey(0), + ), + }, + }) if diags.HasErrors() { t.Fatalf("refresh errors: %s", diags.Err()) } @@ -478,7 +474,6 @@ func TestContext2Refresh_moduleComputedVar(t *testing.T) { m := testModule(t, "refresh-module-computed-var") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -486,7 +481,7 @@ func TestContext2Refresh_moduleComputedVar(t *testing.T) { // This was failing (see GH-2188) at some point, so this test just // verifies that the failure goes away. 
- if _, diags := ctx.Refresh(); diags.HasErrors() { + if _, diags := ctx.Refresh(m, states.NewState(), &PlanOpts{Mode: plans.NormalMode}); diags.HasErrors() { t.Fatalf("refresh errs: %s", diags.Err()) } } @@ -500,18 +495,16 @@ func TestContext2Refresh_delete(t *testing.T) { testSetResourceInstanceCurrent(root, "aws_instance.web", `{"id":"foo"}`, `provider["registry.terraform.io/hashicorp/aws"]`) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) p.ReadResourceResponse = &providers.ReadResourceResponse{ NewState: cty.NullVal(p.GetProviderSchemaResponse.ResourceTypes["aws_instance"].Block.ImpliedType()), } - s, diags := ctx.Refresh() + s, diags := ctx.Refresh(m, state, &PlanOpts{Mode: plans.NormalMode}) if diags.HasErrors() { t.Fatalf("refresh errors: %s", diags.Err()) } @@ -526,11 +519,9 @@ func TestContext2Refresh_ignoreUncreated(t *testing.T) { p := testProvider("aws") m := testModule(t, "refresh-basic") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: nil, }) p.ReadResourceResponse = &providers.ReadResourceResponse{ @@ -539,7 +530,7 @@ func TestContext2Refresh_ignoreUncreated(t *testing.T) { }), } - _, diags := ctx.Refresh() + _, diags := ctx.Refresh(m, states.NewState(), &PlanOpts{Mode: plans.NormalMode}) if diags.HasErrors() { t.Fatalf("refresh errors: %s", diags.Err()) } @@ -558,15 +549,13 @@ func TestContext2Refresh_hook(t *testing.T) { testSetResourceInstanceCurrent(root, "aws_instance.web", `{"id":"foo"}`, `provider["registry.terraform.io/hashicorp/aws"]`) ctx := testContext2(t, &ContextOpts{ - Config: m, - Hooks: []Hook{h}, + Hooks: []Hook{h}, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - if _, diags := ctx.Refresh(); diags.HasErrors() { + if _, diags := ctx.Refresh(m, state, &PlanOpts{Mode: plans.NormalMode}); diags.HasErrors() { t.Fatalf("refresh errs: %s", diags.Err()) } if !h.PreRefreshCalled { @@ -588,11 +577,9 @@ func TestContext2Refresh_modules(t *testing.T) { testSetResourceInstanceCurrent(child, "aws_instance.web", `{"id":"baz"}`, `provider["registry.terraform.io/hashicorp/aws"]`) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) p.ReadResourceFn = func(req providers.ReadResourceRequest) providers.ReadResourceResponse { @@ -613,7 +600,7 @@ func TestContext2Refresh_modules(t *testing.T) { } } - s, diags := ctx.Refresh() + s, diags := ctx.Refresh(m, state, &PlanOpts{Mode: plans.NormalMode}) if diags.HasErrors() { t.Fatalf("refresh errors: %s", diags.Err()) } @@ -648,13 +635,12 @@ func TestContext2Refresh_moduleInputComputedOutput(t *testing.T) { }) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Refresh(); diags.HasErrors() { + if _, diags := ctx.Refresh(m, states.NewState(), &PlanOpts{Mode: plans.NormalMode}); diags.HasErrors() { t.Fatalf("refresh errs: %s", diags.Err()) } } @@ -663,13 +649,12 @@ func TestContext2Refresh_moduleVarModule(t *testing.T) { m := testModule(t, "refresh-module-var-module") p := testProvider("aws") ctx := testContext2(t, &ContextOpts{ - Config: m, 
Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - if _, diags := ctx.Refresh(); diags.HasErrors() { + if _, diags := ctx.Refresh(m, states.NewState(), &PlanOpts{Mode: plans.NormalMode}); diags.HasErrors() { t.Fatalf("refresh errs: %s", diags.Err()) } } @@ -679,7 +664,6 @@ func TestContext2Refresh_noState(t *testing.T) { p := testProvider("aws") m := testModule(t, "refresh-no-state") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -691,7 +675,7 @@ func TestContext2Refresh_noState(t *testing.T) { }), } - if _, diags := ctx.Refresh(); diags.HasErrors() { + if _, diags := ctx.Refresh(m, states.NewState(), &PlanOpts{Mode: plans.NormalMode}); diags.HasErrors() { t.Fatalf("refresh errs: %s", diags.Err()) } } @@ -726,14 +710,12 @@ func TestContext2Refresh_output(t *testing.T) { root.SetOutputValue("foo", cty.StringVal("foo"), false) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - s, diags := ctx.Refresh() + s, diags := ctx.Refresh(m, state, &PlanOpts{Mode: plans.NormalMode}) if diags.HasErrors() { t.Fatalf("refresh errors: %s", diags.Err()) } @@ -776,14 +758,12 @@ func TestContext2Refresh_outputPartial(t *testing.T) { testSetResourceInstanceCurrent(root, "aws_instance.foo", `{}`, `provider["registry.terraform.io/hashicorp/aws"]`) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - s, diags := ctx.Refresh() + s, diags := ctx.Refresh(m, state, &PlanOpts{Mode: plans.NormalMode}) if diags.HasErrors() { t.Fatalf("refresh errors: %s", diags.Err()) } @@ -804,11 +784,9 @@ func TestContext2Refresh_stateBasic(t *testing.T) { testSetResourceInstanceCurrent(root, "aws_instance.web", `{"id":"bar"}`, `provider["registry.terraform.io/hashicorp/aws"]`) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) schema := p.GetProviderSchemaResponse.ResourceTypes["aws_instance"].Block @@ -825,7 +803,7 @@ func TestContext2Refresh_stateBasic(t *testing.T) { NewState: readStateVal, } - s, diags := ctx.Refresh() + s, diags := ctx.Refresh(m, state, &PlanOpts{Mode: plans.NormalMode}) if diags.HasErrors() { t.Fatalf("refresh errors: %s", diags.Err()) } @@ -879,10 +857,9 @@ func TestContext2Refresh_dataCount(t *testing.T) { Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, - Config: m, }) - s, diags := ctx.Refresh() + s, diags := ctx.Refresh(m, states.NewState(), &PlanOpts{Mode: plans.NormalMode}) if diags.HasErrors() { t.Fatalf("refresh errors: %s", diags.Err()) @@ -912,11 +889,9 @@ func TestContext2Refresh_dataState(t *testing.T) { }) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("null"): testProviderFuncFixed(p), }, - State: state, }) var readStateVal cty.Value @@ -930,7 +905,7 @@ func TestContext2Refresh_dataState(t *testing.T) { } } - s, diags := ctx.Refresh() + s, diags := ctx.Refresh(m, state, &PlanOpts{Mode: plans.NormalMode}) if diags.HasErrors() { t.Fatalf("refresh errors: %s", diags.Err()) } @@ -978,11 +953,9 @@ func 
TestContext2Refresh_dataStateRefData(t *testing.T) { m := testModule(t, "refresh-data-ref-data") state := states.NewState() ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("null"): testProviderFuncFixed(p), }, - State: state, }) p.ReadDataSourceFn = func(req providers.ReadDataSourceRequest) providers.ReadDataSourceResponse { @@ -995,7 +968,7 @@ func TestContext2Refresh_dataStateRefData(t *testing.T) { } } - s, diags := ctx.Refresh() + s, diags := ctx.Refresh(m, state, &PlanOpts{Mode: plans.NormalMode}) if diags.HasErrors() { t.Fatalf("refresh errors: %s", diags.Err()) } @@ -1016,11 +989,9 @@ func TestContext2Refresh_tainted(t *testing.T) { testSetResourceInstanceTainted(root, "aws_instance.web", `{"id":"bar"}`, `provider["registry.terraform.io/hashicorp/aws"]`) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) p.ReadResourceFn = func(req providers.ReadResourceRequest) providers.ReadResourceResponse { // add the required id @@ -1032,7 +1003,7 @@ func TestContext2Refresh_tainted(t *testing.T) { } } - s, diags := ctx.Refresh() + s, diags := ctx.Refresh(m, state, &PlanOpts{Mode: plans.NormalMode}) if diags.HasErrors() { t.Fatalf("refresh errors: %s", diags.Err()) } @@ -1058,14 +1029,14 @@ func TestContext2Refresh_unknownProvider(t *testing.T) { root := state.EnsureModule(addrs.RootModuleInstance) testSetResourceInstanceCurrent(root, "aws_instance.web", `{"id":"foo"}`, `provider["registry.terraform.io/hashicorp/aws"]`) - _, diags := NewContext(&ContextOpts{ - Config: m, + c, diags := NewContext(&ContextOpts{ Providers: map[addrs.Provider]providers.Factory{}, - State: state, }) + assertNoDiagnostics(t, diags) + _, diags = c.Refresh(m, states.NewState(), &PlanOpts{Mode: plans.NormalMode}) if !diags.HasErrors() { - t.Fatal("successfully created context; want error") + t.Fatal("successfully refreshed; want error") } if !regexp.MustCompile(`failed to instantiate provider ".+"`).MatchString(diags.Err().Error()) { @@ -1100,11 +1071,9 @@ func TestContext2Refresh_vars(t *testing.T) { testSetResourceInstanceCurrent(root, "aws_instance.web", `{"id":"foo"}`, `provider["registry.terraform.io/hashicorp/aws"]`) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) readStateVal, err := schema.CoerceValue(cty.ObjectVal(map[string]cty.Value{ @@ -1124,7 +1093,7 @@ func TestContext2Refresh_vars(t *testing.T) { } } - s, diags := ctx.Refresh() + s, diags := ctx.Refresh(m, state, &PlanOpts{Mode: plans.NormalMode}) if diags.HasErrors() { t.Fatalf("refresh errors: %s", diags.Err()) } @@ -1176,8 +1145,8 @@ func TestContext2Refresh_orphanModule(t *testing.T) { Status: states.ObjectReady, AttrsJSON: []byte(`{"id":"i-abc123"}`), Dependencies: []addrs.ConfigResource{ - addrs.ConfigResource{Module: addrs.Module{"module.child"}}, - addrs.ConfigResource{Module: addrs.Module{"module.child"}}, + {Module: addrs.Module{"module.child"}}, + {Module: addrs.Module{"module.child"}}, }, }, mustProviderConfig(`provider["registry.terraform.io/hashicorp/aws"]`), @@ -1188,7 +1157,7 @@ func TestContext2Refresh_orphanModule(t *testing.T) { &states.ResourceInstanceObjectSrc{ Status: states.ObjectReady, AttrsJSON: []byte(`{"id":"i-bcd23"}`), - Dependencies: []addrs.ConfigResource{addrs.ConfigResource{Module: 
addrs.Module{"module.grandchild"}}}, + Dependencies: []addrs.ConfigResource{{Module: addrs.Module{"module.grandchild"}}}, }, mustProviderConfig(`provider["registry.terraform.io/hashicorp/aws"]`), ) @@ -1196,15 +1165,13 @@ func TestContext2Refresh_orphanModule(t *testing.T) { testSetResourceInstanceCurrent(grandchild, "aws_instance.baz", `{"id":"i-cde345"}`, `provider["registry.terraform.io/hashicorp/aws"]`) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) testCheckDeadlock(t, func() { - _, err := ctx.Refresh() + _, err := ctx.Refresh(m, state, &PlanOpts{Mode: plans.NormalMode}) if err != nil { t.Fatalf("err: %s", err.Err()) } @@ -1239,13 +1206,12 @@ func TestContext2Validate(t *testing.T) { m := testModule(t, "validate-good") c := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - diags := c.Validate() + diags := c.Validate(m) if len(diags) != 0 { t.Fatalf("unexpected error: %#v", diags.ErrWithWarnings()) } @@ -1260,11 +1226,9 @@ func TestContext2Refresh_updateProviderInState(t *testing.T) { testSetResourceInstanceCurrent(root, "aws_instance.bar", `{"id":"foo"}`, `provider["registry.terraform.io/hashicorp/aws"].baz`) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) expected := strings.TrimSpace(` @@ -1272,7 +1236,7 @@ aws_instance.bar: ID = foo provider = provider["registry.terraform.io/hashicorp/aws"].foo`) - s, diags := ctx.Refresh() + s, diags := ctx.Refresh(m, state, &PlanOpts{Mode: plans.NormalMode}) if diags.HasErrors() { t.Fatal(diags.Err()) } @@ -1329,14 +1293,12 @@ func TestContext2Refresh_schemaUpgradeFlatmap(t *testing.T) { }) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, - State: s, }) - state, diags := ctx.Refresh() + state, diags := ctx.Refresh(m, s, &PlanOpts{Mode: plans.NormalMode}) if diags.HasErrors() { t.Fatal(diags.Err()) } @@ -1413,14 +1375,12 @@ func TestContext2Refresh_schemaUpgradeJSON(t *testing.T) { }) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, - State: s, }) - state, diags := ctx.Refresh() + state, diags := ctx.Refresh(m, s, &PlanOpts{Mode: plans.NormalMode}) if diags.HasErrors() { t.Fatal(diags.Err()) } @@ -1471,13 +1431,12 @@ data "aws_data_source" "foo" { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - _, diags := ctx.Refresh() + _, diags := ctx.Refresh(m, states.NewState(), &PlanOpts{Mode: plans.NormalMode}) if diags.HasErrors() { // Should get this error: // Unsupported attribute: This object does not have an attribute named "missing" @@ -1520,14 +1479,12 @@ func TestContext2Refresh_dataResourceDependsOn(t *testing.T) { testSetResourceInstanceCurrent(root, "test_resource.a", `{"id":"a"}`, `provider["registry.terraform.io/hashicorp/test"]`) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, - State: state, }) - _, diags := ctx.Refresh() + 
_, diags := ctx.Refresh(m, state, &PlanOpts{Mode: plans.NormalMode}) if diags.HasErrors() { t.Fatalf("unexpected errors: %s", diags.Err()) } @@ -1566,14 +1523,12 @@ resource "aws_instance" "bar" { p := testProvider("aws") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) - state, diags := ctx.Refresh() + state, diags := ctx.Refresh(m, state, &PlanOpts{Mode: plans.NormalMode}) if diags.HasErrors() { t.Fatalf("plan errors: %s", diags.Err()) } @@ -1614,14 +1569,12 @@ func TestContext2Refresh_dataSourceOrphan(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, - State: state, }) - _, diags := ctx.Refresh() + _, diags := ctx.Refresh(m, state, &PlanOpts{Mode: plans.NormalMode}) if diags.HasErrors() { t.Fatal(diags.Err()) } diff --git a/internal/terraform/context_test.go b/internal/terraform/context_test.go index f7107922c716..47fd32f4bc15 100644 --- a/internal/terraform/context_test.go +++ b/internal/terraform/context_test.go @@ -109,9 +109,12 @@ func TestNewContextRequiredVersion(t *testing.T) { Required: constraint, }) } - _, diags := NewContext(&ContextOpts{ - Config: mod, - }) + c, diags := NewContext(&ContextOpts{}) + if diags.HasErrors() { + t.Fatalf("unexpected NewContext errors: %s", diags.Err()) + } + + diags = c.Validate(mod) if diags.HasErrors() != tc.Err { t.Fatalf("err: %s", diags.Err()) } @@ -262,9 +265,6 @@ Please run "terraform init".`, devProviders[provider] = struct{}{} } opts := &ContextOpts{ - Config: testModuleInline(t, map[string]string{ - "main.tf": tc.Config, - }), LockedDependencies: locks, ProvidersInDevelopment: devProviders, Providers: map[addrs.Provider]providers.Factory{ @@ -274,7 +274,16 @@ Please run "terraform init".`, }, } - ctx, diags := NewContext(opts) + m := testModuleInline(t, map[string]string{ + "main.tf": tc.Config, + }) + + c, diags := NewContext(opts) + if diags.HasErrors() { + t.Fatalf("unexpected NewContext error: %s", diags.Err()) + } + + diags = c.Validate(m) if tc.WantErr != "" { if len(diags) == 0 { t.Fatal("expected diags but none returned") @@ -286,9 +295,6 @@ Please run "terraform init".`, if len(diags) > 0 { t.Errorf("unexpected diags: %s", diags.Err()) } - if ctx == nil { - t.Error("ctx is nil") - } } }) } @@ -717,10 +723,10 @@ func testProviderSchema(name string) *providers.GetProviderSchemaResponse { // our context tests try to exercise lots of stuff at once and so having them // round-trip things through on-disk files is often an important part of // fully representing an old bug in a regression test. 
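The contextOptsForPlanViaFile helper whose signature changes just below now returns the decoded configuration and plan alongside the ContextOpts, rather than folding state, changes, variables, and targets into the options. Its call sites are outside this hunk, so the following is only a hypothetical sketch of how a caller might adapt; configSnap and origPlan stand in for values produced by the surrounding test:

	opts, config, plan, err := contextOptsForPlanViaFile(configSnap, origPlan)
	if err != nil {
		t.Fatalf("failed to round-trip the plan through a plan file: %s", err)
	}
	ctx := testContext2(t, opts)

	// The recovered config and plan are handed to the operation under test
	// explicitly instead of being read back out of ContextOpts.
	_, _, _ = ctx, config, plan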
-func contextOptsForPlanViaFile(configSnap *configload.Snapshot, plan *plans.Plan) (*ContextOpts, error) { +func contextOptsForPlanViaFile(configSnap *configload.Snapshot, plan *plans.Plan) (*ContextOpts, *configs.Config, *plans.Plan, error) { dir, err := ioutil.TempDir("", "terraform-contextForPlanViaFile") if err != nil { - return nil, err + return nil, nil, nil, err } defer os.RemoveAll(dir) @@ -751,49 +757,27 @@ func contextOptsForPlanViaFile(configSnap *configload.Snapshot, plan *plans.Plan filename := filepath.Join(dir, "tfplan") err = planfile.Create(filename, configSnap, prevStateFile, stateFile, plan) if err != nil { - return nil, err + return nil, nil, nil, err } pr, err := planfile.Open(filename) if err != nil { - return nil, err + return nil, nil, nil, err } config, diags := pr.ReadConfig() if diags.HasErrors() { - return nil, diags.Err() - } - - stateFile, err = pr.ReadStateFile() - if err != nil { - return nil, err + return nil, nil, nil, diags.Err() } plan, err = pr.ReadPlan() if err != nil { - return nil, err - } - - vars := make(InputValues) - for name, vv := range plan.VariableValues { - val, err := vv.Decode(cty.DynamicPseudoType) - if err != nil { - return nil, fmt.Errorf("can't decode value for variable %q: %s", name, err) - } - vars[name] = &InputValue{ - Value: val, - SourceType: ValueFromPlan, - } + return nil, nil, nil, err } return &ContextOpts{ - Config: config, - State: stateFile.State, - Changes: plan.Changes, - Variables: vars, - Targets: plan.TargetAddrs, ProviderSHA256s: plan.ProviderSHA256s, - }, nil + }, config, plan, nil } // legacyPlanComparisonString produces a string representation of the changes diff --git a/internal/terraform/context_validate.go b/internal/terraform/context_validate.go new file mode 100644 index 000000000000..bda38633ab16 --- /dev/null +++ b/internal/terraform/context_validate.go @@ -0,0 +1,88 @@ +package terraform + +import ( + "log" + + "github.com/hashicorp/terraform/internal/addrs" + "github.com/hashicorp/terraform/internal/configs" + "github.com/hashicorp/terraform/internal/states" + "github.com/hashicorp/terraform/internal/tfdiags" + "github.com/zclconf/go-cty/cty" +) + +// Validate performs semantic validation of a configuration, and returns +// any warnings or errors. +// +// Syntax and structural checks are performed by the configuration loader, +// and so are not repeated here. +// +// Validate considers only the configuration and so it won't catch any +// errors caused by current values in the state, or other external information +// such as root module input variables. However, the Plan function includes +// all of the same checks as Validate, in addition to the other work it does +// to consider the previous run state and the planning options. +func (c *Context) Validate(config *configs.Config) tfdiags.Diagnostics { + defer c.acquireRun("validate")() + + var diags tfdiags.Diagnostics + + moreDiags := CheckCoreVersionRequirements(config) + diags = diags.Append(moreDiags) + // If version constraints are not met then we'll bail early since otherwise + // we're likely to just see a bunch of other errors related to + // incompatibilities, which could be overwhelming for the user. 
+ if diags.HasErrors() { + return diags + } + + schemas, moreDiags := c.Schemas(config, nil) + diags = diags.Append(moreDiags) + if moreDiags.HasErrors() { + return diags + } + + log.Printf("[DEBUG] Building and walking validate graph") + + graph, moreDiags := ValidateGraphBuilder(&PlanGraphBuilder{ + Config: config, + Components: c.components, + Schemas: schemas, + Validate: true, + State: states.NewState(), + }).Build(addrs.RootModuleInstance) + diags = diags.Append(moreDiags) + if moreDiags.HasErrors() { + return diags + } + + // Validate is to check if the given module is valid regardless of + // input values, current state, etc. Therefore we populate all of the + // input values with unknown values of the expected type, allowing us + // to perform a type check without assuming any particular values. + varValues := make(InputValues) + for name, variable := range config.Module.Variables { + ty := variable.Type + if ty == cty.NilType { + // Can't predict the type at all, so we'll just mark it as + // cty.DynamicVal (unknown value of cty.DynamicPseudoType). + ty = cty.DynamicPseudoType + } + varValues[name] = &InputValue{ + Value: cty.UnknownVal(ty), + SourceType: ValueFromUnknown, + } + } + + walker, walkDiags := c.walk(graph, walkValidate, &graphWalkOpts{ + Config: config, + Schemas: schemas, + RootVariableValues: varValues, + }) + diags = diags.Append(walker.NonFatalDiagnostics) + diags = diags.Append(walkDiags) + if walkDiags.HasErrors() { + return diags + } + + return diags +} diff --git a/internal/terraform/context_validate_test.go b/internal/terraform/context_validate_test.go index be3acb7c501a..b25bed35e9ee 100644 --- a/internal/terraform/context_validate_test.go +++ b/internal/terraform/context_validate_test.go @@ -10,7 +10,6 @@ import ( "github.com/hashicorp/terraform/internal/addrs" "github.com/hashicorp/terraform/internal/configs/configschema" - "github.com/hashicorp/terraform/internal/plans" "github.com/hashicorp/terraform/internal/providers" "github.com/hashicorp/terraform/internal/provisioners" "github.com/hashicorp/terraform/internal/states" @@ -29,13 +28,12 @@ func TestContext2Validate_badCount(t *testing.T) { m := testModule(t, "validate-bad-count") c := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - diags := c.Validate() + diags := c.Validate(m) if !diags.HasErrors() { t.Fatalf("succeeded; want error") } @@ -53,13 +51,12 @@ func TestContext2Validate_badResource_reference(t *testing.T) { m := testModule(t, "validate-bad-resource-count") c := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - diags := c.Validate() + diags := c.Validate(m) if !diags.HasErrors() { t.Fatalf("succeeded; want error") } @@ -80,52 +77,33 @@ func TestContext2Validate_badVar(t *testing.T) { m := testModule(t, "validate-bad-var") c := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - diags := c.Validate() + diags := c.Validate(m) if !diags.HasErrors() { t.Fatalf("succeeded; want error") } } -func TestContext2Validate_varMapOverrideOld(t *testing.T) { - m := testModule(t, "validate-module-pc-vars") - p := testProvider("aws") - p.GetProviderSchemaResponse = getProviderSchemaResponseFromProviderSchema(&ProviderSchema{ - Provider: &configschema.Block{ - Attributes: 
map[string]*configschema.Attribute{ - "foo": {Type: cty.String, Optional: true}, - }, - }, - ResourceTypes: map[string]*configschema.Block{ - "aws_instance": { - Attributes: map[string]*configschema.Attribute{}, - }, - }, - }) - - _, diags := NewContext(&ContextOpts{ - Config: m, - Providers: map[addrs.Provider]providers.Factory{ - addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), - }, - Variables: InputValues{}, - }) - if !diags.HasErrors() { - // Error should be: The input variable "provider_var" has not been assigned a value. - t.Fatalf("succeeded; want error") - } -} - func TestContext2Validate_varNoDefaultExplicitType(t *testing.T) { m := testModule(t, "validate-var-no-default-explicit-type") - _, diags := NewContext(&ContextOpts{ - Config: m, - }) + c, diags := NewContext(&ContextOpts{}) + if diags.HasErrors() { + t.Fatalf("unexpected NewContext errors: %s", diags.Err()) + } + + // NOTE: This test has grown idiosyncratic because originally Terraform + // would (optionally) check variables during validation, and then in + // Terraform v0.12 we switched to checking variables during NewContext, + // and now most recently we've switched to checking variables only during + // planning because root variables are a plan option. Therefore this has + // grown into a plan test rather than a validate test, but it lives on + // here in order to make it easier to navigate through that history in + // version control. + _, diags = c.Plan(m, states.NewState(), DefaultPlanOpts) if !diags.HasErrors() { // Error should be: The input variable "maybe_a_map" has not been assigned a value. t.Fatalf("succeeded; want error") @@ -166,7 +144,6 @@ func TestContext2Validate_computedVar(t *testing.T) { m := testModule(t, "validate-computed-var") c := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), addrs.NewDefaultProvider("test"): testProviderFuncFixed(pt), @@ -182,7 +159,7 @@ func TestContext2Validate_computedVar(t *testing.T) { return } - diags := c.Validate() + diags := c.Validate(m) if diags.HasErrors() { t.Fatalf("unexpected error: %s", diags.Err()) } @@ -217,13 +194,12 @@ func TestContext2Validate_computedInFunction(t *testing.T) { m := testModule(t, "validate-computed-in-function") c := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - diags := c.Validate() + diags := c.Validate(m) if diags.HasErrors() { t.Fatalf("unexpected error: %s", diags.Err()) } @@ -256,13 +232,12 @@ func TestContext2Validate_countComputed(t *testing.T) { m := testModule(t, "validate-count-computed") c := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - diags := c.Validate() + diags := c.Validate(m) if diags.HasErrors() { t.Fatalf("unexpected error: %s", diags.Err()) } @@ -281,13 +256,12 @@ func TestContext2Validate_countNegative(t *testing.T) { } m := testModule(t, "validate-count-negative") c := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - diags := c.Validate() + diags := c.Validate(m) if !diags.HasErrors() { t.Fatalf("succeeded; want error") } @@ -308,13 +282,12 @@ func TestContext2Validate_countVariable(t *testing.T) { } m := testModule(t, "apply-count-variable") c := testContext2(t, 
&ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - diags := c.Validate() + diags := c.Validate(m) if diags.HasErrors() { t.Fatalf("unexpected error: %s", diags.Err()) } @@ -334,12 +307,14 @@ func TestContext2Validate_countVariableNoDefault(t *testing.T) { }, }, } - _, diags := NewContext(&ContextOpts{ - Config: m, + c, diags := NewContext(&ContextOpts{ Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) + assertNoDiagnostics(t, diags) + + _, diags = c.Plan(m, nil, &PlanOpts{}) if !diags.HasErrors() { // Error should be: The input variable "foo" has not been assigned a value. t.Fatalf("succeeded; want error") @@ -361,13 +336,12 @@ func TestContext2Validate_moduleBadOutput(t *testing.T) { } m := testModule(t, "validate-bad-module-output") c := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - diags := c.Validate() + diags := c.Validate(m) if !diags.HasErrors() { t.Fatalf("succeeded; want error") } @@ -388,13 +362,12 @@ func TestContext2Validate_moduleGood(t *testing.T) { } m := testModule(t, "validate-good-module") c := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - diags := c.Validate() + diags := c.Validate(m) if diags.HasErrors() { t.Fatalf("unexpected error: %s", diags.Err()) } @@ -414,7 +387,6 @@ func TestContext2Validate_moduleBadResource(t *testing.T) { } c := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -424,7 +396,7 @@ func TestContext2Validate_moduleBadResource(t *testing.T) { Diagnostics: tfdiags.Diagnostics{}.Append(fmt.Errorf("bad")), } - diags := c.Validate() + diags := c.Validate(m) if !diags.HasErrors() { t.Fatalf("succeeded; want error") } @@ -446,13 +418,12 @@ func TestContext2Validate_moduleDepsShouldNotCycle(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - diags := ctx.Validate() + diags := ctx.Validate(m) if diags.HasErrors() { t.Fatalf("unexpected error: %s", diags.Err()) } @@ -481,16 +452,9 @@ func TestContext2Validate_moduleProviderVar(t *testing.T) { } c := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - Variables: InputValues{ - "provider_var": &InputValue{ - Value: cty.StringVal("bar"), - SourceType: ValueFromCaller, - }, - }, }) p.ValidateProviderConfigFn = func(req providers.ValidateProviderConfigRequest) (resp providers.ValidateProviderConfigResponse) { @@ -500,7 +464,7 @@ func TestContext2Validate_moduleProviderVar(t *testing.T) { return } - diags := c.Validate() + diags := c.Validate(m) if diags.HasErrors() { t.Fatalf("unexpected error: %s", diags.Err()) } @@ -529,7 +493,6 @@ func TestContext2Validate_moduleProviderInheritUnused(t *testing.T) { } c := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -542,7 +505,7 @@ func TestContext2Validate_moduleProviderInheritUnused(t *testing.T) { return } - diags := c.Validate() + diags := 
c.Validate(m) if diags.HasErrors() { t.Fatalf("unexpected error: %s", diags.Err()) } @@ -565,16 +528,10 @@ func TestContext2Validate_orphans(t *testing.T) { m := testModule(t, "validate-good") - state := states.NewState() - root := state.EnsureModule(addrs.RootModuleInstance) - testSetResourceInstanceCurrent(root, "aws_instance.web", `{"id":"bar"}`, `provider["registry.terraform.io/hashicorp/aws"]`) - c := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) p.ValidateResourceConfigFn = func(req providers.ValidateResourceConfigRequest) providers.ValidateResourceConfigResponse { @@ -587,7 +544,7 @@ func TestContext2Validate_orphans(t *testing.T) { } } - diags := c.Validate() + diags := c.Validate(m) if diags.HasErrors() { t.Fatalf("unexpected error: %s", diags.Err()) } @@ -614,7 +571,6 @@ func TestContext2Validate_providerConfig_bad(t *testing.T) { } c := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -624,7 +580,7 @@ func TestContext2Validate_providerConfig_bad(t *testing.T) { Diagnostics: tfdiags.Diagnostics{}.Append(fmt.Errorf("bad")), } - diags := c.Validate() + diags := c.Validate(m) if len(diags) != 1 { t.Fatalf("wrong number of diagnostics %d; want %d", len(diags), 1) } @@ -654,7 +610,6 @@ func TestContext2Validate_providerConfig_skippedEmpty(t *testing.T) { } c := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -664,7 +619,7 @@ func TestContext2Validate_providerConfig_skippedEmpty(t *testing.T) { Diagnostics: tfdiags.Diagnostics{}.Append(fmt.Errorf("should not be called")), } - diags := c.Validate() + diags := c.Validate(m) if diags.HasErrors() { t.Fatalf("unexpected error: %s", diags.Err()) } @@ -691,13 +646,12 @@ func TestContext2Validate_providerConfig_good(t *testing.T) { } c := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - diags := c.Validate() + diags := c.Validate(m) if diags.HasErrors() { t.Fatalf("unexpected error: %s", diags.Err()) } @@ -727,13 +681,12 @@ func TestContext2Validate_requiredProviderConfig(t *testing.T) { } c := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - diags := c.Validate() + diags := c.Validate(m) if diags.HasErrors() { t.Fatalf("unexpected error: %s", diags.Err()) } @@ -757,7 +710,6 @@ func TestContext2Validate_provisionerConfig_bad(t *testing.T) { pr := simpleMockProvisioner() c := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -770,7 +722,7 @@ func TestContext2Validate_provisionerConfig_bad(t *testing.T) { Diagnostics: tfdiags.Diagnostics{}.Append(fmt.Errorf("bad")), } - diags := c.Validate() + diags := c.Validate(m) if !diags.HasErrors() { t.Fatalf("succeeded; want error") } @@ -794,7 +746,6 @@ func TestContext2Validate_badResourceConnection(t *testing.T) { pr := simpleMockProvisioner() c := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -803,7 +754,7 @@ func 
TestContext2Validate_badResourceConnection(t *testing.T) { }, }) - diags := c.Validate() + diags := c.Validate(m) t.Log(diags.Err()) if !diags.HasErrors() { t.Fatalf("succeeded; want error") @@ -828,7 +779,6 @@ func TestContext2Validate_badProvisionerConnection(t *testing.T) { pr := simpleMockProvisioner() c := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -837,7 +787,7 @@ func TestContext2Validate_badProvisionerConnection(t *testing.T) { }, }) - diags := c.Validate() + diags := c.Validate(m) t.Log(diags.Err()) if !diags.HasErrors() { t.Fatalf("succeeded; want error") @@ -878,7 +828,6 @@ func TestContext2Validate_provisionerConfig_good(t *testing.T) { } c := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -887,7 +836,7 @@ func TestContext2Validate_provisionerConfig_good(t *testing.T) { }, }) - diags := c.Validate() + diags := c.Validate(m) if diags.HasErrors() { t.Fatalf("unexpected error: %s", diags.Err()) } @@ -907,12 +856,22 @@ func TestContext2Validate_requiredVar(t *testing.T) { }, }, } - _, diags := NewContext(&ContextOpts{ - Config: m, + c, diags := NewContext(&ContextOpts{ Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) + assertNoDiagnostics(t, diags) + + // NOTE: This test has grown idiosyncratic because originally Terraform + // would (optionally) check variables during validation, and then in + // Terraform v0.12 we switched to checking variables during NewContext, + // and now most recently we've switched to checking variables only during + // planning because root variables are a plan option. Therefore this has + // grown into a plan test rather than a validate test, but it lives on + // here in order to make it easier to navigate through that history in + // version control. + _, diags = c.Plan(m, states.NewState(), DefaultPlanOpts) if !diags.HasErrors() { // Error should be: The input variable "foo" has not been assigned a value. 
t.Fatalf("succeeded; want error") @@ -934,7 +893,6 @@ func TestContext2Validate_resourceConfig_bad(t *testing.T) { }, } c := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -944,7 +902,7 @@ func TestContext2Validate_resourceConfig_bad(t *testing.T) { Diagnostics: tfdiags.Diagnostics{}.Append(fmt.Errorf("bad")), } - diags := c.Validate() + diags := c.Validate(m) if !diags.HasErrors() { t.Fatalf("succeeded; want error") } @@ -965,13 +923,12 @@ func TestContext2Validate_resourceConfig_good(t *testing.T) { }, } c := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - diags := c.Validate() + diags := c.Validate(m) if diags.HasErrors() { t.Fatalf("unexpected error: %s", diags.Err()) } @@ -993,16 +950,10 @@ func TestContext2Validate_tainted(t *testing.T) { } m := testModule(t, "validate-good") - state := states.NewState() - root := state.EnsureModule(addrs.RootModuleInstance) - testSetResourceInstanceTainted(root, "aws_instance.foo", `{"id":"bar"}`, `provider["registry.terraform.io/hashicorp/aws"]`) - c := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - State: state, }) p.ValidateResourceConfigFn = func(req providers.ValidateResourceConfigRequest) providers.ValidateResourceConfigResponse { @@ -1015,7 +966,7 @@ func TestContext2Validate_tainted(t *testing.T) { } } - diags := c.Validate() + diags := c.Validate(m) if diags.HasErrors() { t.Fatalf("unexpected error: %s", diags.Err()) } @@ -1044,23 +995,15 @@ func TestContext2Validate_targetedDestroy(t *testing.T) { testSetResourceInstanceCurrent(root, "aws_instance.bar", `{"id":"i-abc123"}`, `provider["registry.terraform.io/hashicorp/aws"]`) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, Provisioners: map[string]provisioners.Factory{ "shell": testProvisionerFuncFixed(pr), }, - State: state, - Targets: []addrs.Targetable{ - addrs.RootModuleInstance.Resource( - addrs.ManagedResourceMode, "aws_instance", "foo", - ), - }, - PlanMode: plans.DestroyMode, }) - diags := ctx.Validate() + diags := ctx.Validate(m) if diags.HasErrors() { t.Fatalf("unexpected error: %s", diags.Err()) } @@ -1081,16 +1024,9 @@ func TestContext2Validate_varRefUnknown(t *testing.T) { }, } c := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - Variables: InputValues{ - "foo": &InputValue{ - Value: cty.StringVal("bar"), - SourceType: ValueFromCaller, - }, - }, }) var value cty.Value @@ -1099,7 +1035,7 @@ func TestContext2Validate_varRefUnknown(t *testing.T) { return providers.ValidateResourceConfigResponse{} } - c.Validate() + c.Validate(m) // Input variables are always unknown during the validate walk, because // we're checking for validity of all possible input values. 
Validity @@ -1129,14 +1065,13 @@ func TestContext2Validate_interpolateVar(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("template"): testProviderFuncFixed(p), }, UIInput: input, }) - diags := ctx.Validate() + diags := ctx.Validate(m) if diags.HasErrors() { t.Fatalf("unexpected error: %s", diags.Err()) } @@ -1162,14 +1097,13 @@ func TestContext2Validate_interpolateComputedModuleVarDef(t *testing.T) { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, UIInput: input, }) - diags := ctx.Validate() + diags := ctx.Validate(m) if diags.HasErrors() { t.Fatalf("unexpected error: %s", diags.Err()) } @@ -1183,14 +1117,13 @@ func TestContext2Validate_interpolateMap(t *testing.T) { p := testProvider("template") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("template"): testProviderFuncFixed(p), }, UIInput: input, }) - diags := ctx.Validate() + diags := ctx.Validate(m) if diags.HasErrors() { t.Fatalf("unexpected error: %s", diags.Err()) } @@ -1235,19 +1168,12 @@ resource "aws_instance" "foo" { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - Variables: InputValues{ - "bar": &InputValue{ - Value: cty.StringVal("boop"), - SourceType: ValueFromCaller, - }, - }, }) - diags := ctx.Validate() + diags := ctx.Validate(m) if diags.HasErrors() { t.Fatal(diags.Err()) } @@ -1265,47 +1191,26 @@ resource "aws_instance" "foo" { func TestContext2Validate_PlanGraphBuilder(t *testing.T) { fixture := contextFixtureApplyVars(t) opts := fixture.ContextOpts() - opts.Variables = InputValues{ - "foo": &InputValue{ - Value: cty.StringVal("us-east-1"), - SourceType: ValueFromCaller, - }, - "test_list": &InputValue{ - Value: cty.ListVal([]cty.Value{ - cty.StringVal("Hello"), - cty.StringVal("World"), - }), - SourceType: ValueFromCaller, - }, - "test_map": &InputValue{ - Value: cty.MapVal(map[string]cty.Value{ - "Hello": cty.StringVal("World"), - "Foo": cty.StringVal("Bar"), - "Baz": cty.StringVal("Foo"), - }), - SourceType: ValueFromCaller, - }, - "amis": &InputValue{ - Value: cty.MapVal(map[string]cty.Value{ - "us-east-1": cty.StringVal("override"), - }), - SourceType: ValueFromCaller, - }, - } c := testContext2(t, opts) - graph, diags := (&PlanGraphBuilder{ - Config: c.config, + state := states.NewState() + schemas, diags := c.Schemas(fixture.Config, state) + assertNoDiagnostics(t, diags) + + graph, diags := ValidateGraphBuilder(&PlanGraphBuilder{ + Config: fixture.Config, State: states.NewState(), Components: c.components, - Schemas: c.schemas, - Targets: c.targets, + Schemas: schemas, }).Build(addrs.RootModuleInstance) if diags.HasErrors() { t.Fatalf("errors from PlanGraphBuilder: %s", diags.Err()) } defer c.acquireRun("validate-test")() - walker, diags := c.walk(graph, walkValidate, &graphWalkOpts{}) + walker, diags := c.walk(graph, walkValidate, &graphWalkOpts{ + Config: fixture.Config, + Schemas: schemas, + }) if diags.HasErrors() { t.Fatal(diags.Err()) } @@ -1326,13 +1231,12 @@ output "out" { p := testProvider("aws") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - diags := ctx.Validate() + diags := 
ctx.Validate(m) if !diags.HasErrors() { t.Fatal("succeeded; want errors") } @@ -1363,13 +1267,12 @@ resource "aws_instance" "foo" { p := testProvider("aws") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - diags := ctx.Validate() + diags := ctx.Validate(m) if !diags.HasErrors() { t.Fatal("succeeded; want errors") } @@ -1402,11 +1305,9 @@ output "root" { }`, }) - ctx := testContext2(t, &ContextOpts{ - Config: m, - }) + ctx := testContext2(t, &ContextOpts{}) - diags := ctx.Validate() + diags := ctx.Validate(m) if diags.HasErrors() { t.Fatal(diags.Err()) } @@ -1424,13 +1325,12 @@ output "out" { p := testProvider("aws") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - diags := ctx.Validate() + diags := ctx.Validate(m) if !diags.HasErrors() { t.Fatal("succeeded; want errors") } @@ -1455,13 +1355,12 @@ output "out" { p := testProvider("aws") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - diags := ctx.Validate() + diags := ctx.Validate(m) if !diags.HasErrors() { t.Fatal("succeeded; want errors") } @@ -1486,13 +1385,12 @@ output "out" { p := testProvider("aws") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - diags := ctx.Validate() + diags := ctx.Validate(m) if !diags.HasErrors() { t.Fatal("succeeded; want errors") } @@ -1516,13 +1414,12 @@ resource "test_instance" "bar" { p := testProvider("test") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - diags := ctx.Validate() + diags := ctx.Validate(m) if !diags.HasErrors() { t.Fatal("succeeded; want errors") } @@ -1549,13 +1446,12 @@ resource "test_instance" "bar" { p := testProvider("test") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - diags := ctx.Validate() + diags := ctx.Validate(m) if !diags.HasErrors() { t.Fatal("succeeded; want errors") } @@ -1574,13 +1470,12 @@ func TestContext2Validate_variableCustomValidationsFail(t *testing.T) { p := testProvider("test") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - diags := ctx.Validate() + diags := ctx.Validate(m) if !diags.HasErrors() { t.Fatal("succeeded; want errors") } @@ -1609,19 +1504,12 @@ variable "test" { p := testProvider("test") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, - Variables: InputValues{ - "test": &InputValue{ - Value: cty.UnknownVal(cty.String), - SourceType: ValueFromCLIArg, - }, - }, }) - diags := ctx.Validate() + diags := ctx.Validate(m) if diags.HasErrors() { t.Fatalf("unexpected error\ngot: %s", diags.Err().Error()) } @@ -1677,13 +1565,12 @@ resource "aws_instance" "foo" { p := testProvider("aws") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): 
testProviderFuncFixed(p), }, }) - diags := ctx.Validate() + diags := ctx.Validate(m) if diags.HasErrors() { t.Fatal(diags.ErrWithWarnings()) } @@ -1705,13 +1592,12 @@ resource "aws_instance" "foo" { p := testProvider("aws") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - diags := ctx.Validate() + diags := ctx.Validate(m) if !diags.HasErrors() { t.Fatal("succeeded; want errors") } @@ -1736,13 +1622,12 @@ resource "aws_instance" "foo" { p := testProvider("aws") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - diags := ctx.Validate() + diags := ctx.Validate(m) if !diags.HasErrors() { t.Fatal("succeeded; want errors") } @@ -1818,13 +1703,12 @@ output "out" { p := testProvider("aws") ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, }) - diags := ctx.Validate() + diags := ctx.Validate(m) if diags.HasErrors() { t.Fatal(diags.ErrWithWarnings()) } @@ -1851,9 +1735,7 @@ output "out" { `, }) - diags := testContext2(t, &ContextOpts{ - Config: m, - }).Validate() + diags := testContext2(t, &ContextOpts{}).Validate(m) if !diags.HasErrors() { t.Fatal("succeeded; want errors") } @@ -1891,9 +1773,7 @@ output "out" { `, }) - diags := testContext2(t, &ContextOpts{ - Config: m, - }).Validate() + diags := testContext2(t, &ContextOpts{}).Validate(m) if !diags.HasErrors() { t.Fatal("succeeded; want errors") } @@ -1937,12 +1817,11 @@ resource "test_instance" "a" { } ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - diags := ctx.Validate() + diags := ctx.Validate(m) if diags.HasErrors() { t.Fatal(diags.Err()) } @@ -1977,7 +1856,6 @@ func TestContext2Validate_sensitiveProvisionerConfig(t *testing.T) { pr := simpleMockProvisioner() c := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, @@ -1993,7 +1871,7 @@ func TestContext2Validate_sensitiveProvisionerConfig(t *testing.T) { return pr.ValidateProvisionerConfigResponse } - diags := c.Validate() + diags := c.Validate(m) if diags.HasErrors() { t.Fatalf("unexpected error: %s", diags.Err()) } @@ -2082,13 +1960,12 @@ resource "test_instance" "c" { `}) ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - diags := ctx.Validate() + diags := ctx.Validate(m) if diags.HasErrors() { t.Fatal(diags.ErrWithWarnings()) } @@ -2150,13 +2027,12 @@ resource "test_object" "t" { p := simpleMockProvider() ctx := testContext2(t, &ContextOpts{ - Config: m, Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), }, }) - diags := ctx.Validate() + diags := ctx.Validate(m) if diags.HasErrors() { t.Fatal(diags.ErrWithWarnings()) } diff --git a/internal/terraform/context_walk.go b/internal/terraform/context_walk.go new file mode 100644 index 000000000000..b44a5910c5d5 --- /dev/null +++ b/internal/terraform/context_walk.go @@ -0,0 +1,122 @@ +package terraform + +import ( + "log" + + "github.com/hashicorp/terraform/internal/addrs" + 
"github.com/hashicorp/terraform/internal/configs" + "github.com/hashicorp/terraform/internal/instances" + "github.com/hashicorp/terraform/internal/plans" + "github.com/hashicorp/terraform/internal/refactoring" + "github.com/hashicorp/terraform/internal/states" + "github.com/hashicorp/terraform/internal/tfdiags" +) + +// graphWalkOpts captures some transient values we use (and possibly mutate) +// during a graph walk. +// +// The way these options get used unfortunately varies between the different +// walkOperation types. This is a historical design wart that dates back to +// us using the same graph structure for all operations; hopefully we'll +// make the necessary differences between the walk types more explicit someday. +type graphWalkOpts struct { + InputState *states.State + Changes *plans.Changes + Config *configs.Config + Schemas *Schemas + + RootVariableValues InputValues + MoveResults map[addrs.UniqueKey]refactoring.MoveResult +} + +func (c *Context) walk(graph *Graph, operation walkOperation, opts *graphWalkOpts) (*ContextGraphWalker, tfdiags.Diagnostics) { + log.Printf("[DEBUG] Starting graph walk: %s", operation.String()) + + walker := c.graphWalker(operation, opts) + + // Watch for a stop so we can call the provider Stop() API. + watchStop, watchWait := c.watchStop(walker) + + // Walk the real graph, this will block until it completes + diags := graph.Walk(walker) + + // Close the channel so the watcher stops, and wait for it to return. + close(watchStop) + <-watchWait + + return walker, diags +} + +func (c *Context) graphWalker(operation walkOperation, opts *graphWalkOpts) *ContextGraphWalker { + var state *states.SyncState + var refreshState *states.SyncState + var prevRunState *states.SyncState + + // NOTE: None of the SyncState objects must directly wrap opts.InputState, + // because we use those to mutate the state object and opts.InputState + // belongs to our caller and thus we must treat it as immutable. + // + // To account for that, most of our SyncState values created below end up + // wrapping a _deep copy_ of opts.InputState instead. + inputState := opts.InputState + if inputState == nil { + // Lots of callers use nil to represent the "empty" case where we've + // not run Apply yet, so we tolerate that. + inputState = states.NewState() + } + + switch operation { + case walkValidate: + // validate should not use any state + state = states.NewState().SyncWrapper() + + // validate currently uses the plan graph, so we have to populate the + // refreshState and the prevRunState. + refreshState = states.NewState().SyncWrapper() + prevRunState = states.NewState().SyncWrapper() + + case walkPlan, walkPlanDestroy: + state = inputState.DeepCopy().SyncWrapper() + refreshState = inputState.DeepCopy().SyncWrapper() + prevRunState = inputState.DeepCopy().SyncWrapper() + + default: + state = inputState.DeepCopy().SyncWrapper() + // Only plan-like walks use refreshState and prevRunState + } + + changes := opts.Changes + if changes == nil { + // Several of our non-plan walks end up sharing codepaths with the + // plan walk and thus expect to generate planned changes even though + // we don't care about them. To avoid those crashing, we'll just + // insert a placeholder changes object which'll get discarded + // afterwards. + changes = plans.NewChanges() + } + + if opts.Schemas == nil { + // Should never happen: caller must always set this one. + // (We catch this here, rather than later, to get a more intelligible + // stack trace when it _does_ panic.) 
+ panic("Context.graphWalker call without Schemas") + } + if opts.Config == nil { + panic("Context.graphWalker call without Config") + } + + return &ContextGraphWalker{ + Context: c, + State: state, + Config: opts.Config, + Schemas: opts.Schemas, + RefreshState: refreshState, + PrevRunState: prevRunState, + Changes: changes.SyncWrapper(), + InstanceExpander: instances.NewExpander(), + MoveResults: opts.MoveResults, + Operation: operation, + StopContext: c.runContext, + RootVariableValues: opts.RootVariableValues, + } +} diff --git a/internal/terraform/graph.go b/internal/terraform/graph.go index 65a3c2003e6e..3d09f329b1c7 100644 --- a/internal/terraform/graph.go +++ b/internal/terraform/graph.go @@ -42,7 +42,17 @@ func (g *Graph) walk(walker GraphWalker) tfdiags.Diagnostics { log.Printf("[TRACE] vertex %q: starting visit (%T)", dag.VertexName(v), v) defer func() { - log.Printf("[TRACE] vertex %q: visit complete", dag.VertexName(v)) + if diags.HasErrors() { + for _, diag := range diags { + if diag.Severity() == tfdiags.Error { + desc := diag.Description() + log.Printf("[ERROR] vertex %q error: %s", dag.VertexName(v), desc.Summary) + } + } + log.Printf("[TRACE] vertex %q: visit complete, with errors", dag.VertexName(v)) + } else { + log.Printf("[TRACE] vertex %q: visit complete", dag.VertexName(v)) + } }() // vertexCtx is the context that we use when evaluating. This diff --git a/internal/terraform/graph_builder_apply_test.go b/internal/terraform/graph_builder_apply_test.go index 9dd66c539d43..b96149bac65d 100644 --- a/internal/terraform/graph_builder_apply_test.go +++ b/internal/terraform/graph_builder_apply_test.go @@ -522,7 +522,7 @@ func TestApplyGraphBuilder_updateFromOrphan(t *testing.T) { Status: states.ObjectReady, AttrsJSON: []byte(`{"id":"b_id","test_string":"a_id"}`), Dependencies: []addrs.ConfigResource{ - addrs.ConfigResource{ + { Resource: addrs.Resource{ Mode: addrs.ManagedResourceMode, Type: "test_object", @@ -626,7 +626,7 @@ func TestApplyGraphBuilder_updateFromCBDOrphan(t *testing.T) { Status: states.ObjectReady, AttrsJSON: []byte(`{"id":"b_id","test_string":"a_id"}`), Dependencies: []addrs.ConfigResource{ - addrs.ConfigResource{ + { Resource: addrs.Resource{ Mode: addrs.ManagedResourceMode, Type: "test_object", diff --git a/internal/terraform/graph_walk_context.go b/internal/terraform/graph_walk_context.go index fcda4fa73442..164a2ba2a58c 100644 --- a/internal/terraform/graph_walk_context.go +++ b/internal/terraform/graph_walk_context.go @@ -7,6 +7,7 @@ import ( "github.com/zclconf/go-cty/cty" "github.com/hashicorp/terraform/internal/addrs" + "github.com/hashicorp/terraform/internal/configs" "github.com/hashicorp/terraform/internal/configs/configschema" "github.com/hashicorp/terraform/internal/instances" "github.com/hashicorp/terraform/internal/plans" @@ -33,6 +34,8 @@ type ContextGraphWalker struct { Operation walkOperation StopContext context.Context RootVariableValues InputValues + Schemas *Schemas + Config *configs.Config // This is an output. Do not set this, nor read it while a graph walk // is in progress. @@ -74,11 +77,11 @@ func (w *ContextGraphWalker) EvalContext() EvalContext { // different modules. 
evaluator := &Evaluator{ Meta: w.Context.meta, - Config: w.Context.config, + Config: w.Config, Operation: w.Operation, State: w.State, Changes: w.Changes, - Schemas: w.Context.schemas, + Schemas: w.Schemas, VariableValues: w.variableValues, VariableValuesLock: &w.variableValuesLock, } @@ -89,7 +92,7 @@ func (w *ContextGraphWalker) EvalContext() EvalContext { InputValue: w.Context.uiInput, InstanceExpanderValue: w.InstanceExpander, Components: w.Context.components, - Schemas: w.Context.schemas, + Schemas: w.Schemas, MoveResultsValue: w.MoveResults, ProviderCache: w.providerCache, ProviderInputConfig: w.Context.providerInputConfig, diff --git a/internal/terraform/graphtype_string.go b/internal/terraform/graphtype_string.go deleted file mode 100644 index ac605ccab3a6..000000000000 --- a/internal/terraform/graphtype_string.go +++ /dev/null @@ -1,29 +0,0 @@ -// Code generated by "stringer -type=GraphType context_graph_type.go"; DO NOT EDIT. - -package terraform - -import "strconv" - -func _() { - // An "invalid array index" compiler error signifies that the constant values have changed. - // Re-run the stringer command to generate them again. - var x [1]struct{} - _ = x[GraphTypeInvalid-0] - _ = x[GraphTypePlan-1] - _ = x[GraphTypePlanDestroy-2] - _ = x[GraphTypePlanRefreshOnly-3] - _ = x[GraphTypeApply-4] - _ = x[GraphTypeValidate-5] - _ = x[GraphTypeEval-6] -} - -const _GraphType_name = "GraphTypeInvalidGraphTypePlanGraphTypePlanDestroyGraphTypePlanRefreshOnlyGraphTypeApplyGraphTypeValidateGraphTypeEval" - -var _GraphType_index = [...]uint8{0, 16, 29, 49, 73, 87, 104, 117} - -func (i GraphType) String() string { - if i >= GraphType(len(_GraphType_index)-1) { - return "GraphType(" + strconv.FormatInt(int64(i), 10) + ")" - } - return _GraphType_name[_GraphType_index[i]:_GraphType_index[i+1]] -} diff --git a/internal/terraform/node_provider.go b/internal/terraform/node_provider.go index fa4c47f32010..33cad198abf8 100644 --- a/internal/terraform/node_provider.go +++ b/internal/terraform/node_provider.go @@ -35,10 +35,13 @@ func (n *NodeApplyableProvider) Execute(ctx EvalContext, op walkOperation) (diag switch op { case walkValidate: + log.Printf("[TRACE] NodeApplyableProvider: validating configuration for %s", n.Addr) return diags.Append(n.ValidateProvider(ctx, provider)) case walkPlan, walkApply, walkDestroy: + log.Printf("[TRACE] NodeApplyableProvider: configuring %s", n.Addr) return diags.Append(n.ConfigureProvider(ctx, provider, false)) case walkImport: + log.Printf("[TRACE] NodeApplyableProvider: configuring %s (requiring that configuration is wholly known)", n.Addr) return diags.Append(n.ConfigureProvider(ctx, provider, true)) } return diags diff --git a/internal/terraform/node_resource_abstract_instance.go b/internal/terraform/node_resource_abstract_instance.go index 89275809bbb5..9f07068e3901 100644 --- a/internal/terraform/node_resource_abstract_instance.go +++ b/internal/terraform/node_resource_abstract_instance.go @@ -150,6 +150,7 @@ func (n *NodeAbstractResourceInstance) AttachResourceState(s *states.Resource) { log.Printf("[WARN] attaching nil state to %s", n.Addr) return } + log.Printf("[TRACE] NodeAbstractResourceInstance.AttachResourceState for %s", n.Addr) n.instanceState = s.Instance(n.Addr.Resource.Key) n.storedProviderConfig = s.ProviderConfig } diff --git a/internal/terraform/variables.go b/internal/terraform/variables.go index f8fd03af29d9..fca392802587 100644 --- a/internal/terraform/variables.go +++ b/internal/terraform/variables.go @@ -227,6 +227,19 @@ func (vv 
InputValues) Identical(other InputValues) bool { return true } +func mergeDefaultInputVariableValues(setVals InputValues, rootVarsConfig map[string]*configs.Variable) InputValues { + var variables InputValues + + // Default variables from the configuration seed our map. + variables = DefaultVariableValues(rootVarsConfig) + + // Variables provided by the caller (from CLI, environment, etc) can + // override the defaults. + variables = variables.Override(setVals) + + return variables +} + // checkInputVariables ensures that variable values supplied at the UI conform // to their corresponding declarations in configuration. // From 7803f69d42c7ccedc962da308a4d1e033cacb0dd Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Fri, 27 Aug 2021 16:52:14 -0700 Subject: [PATCH 022/644] core: Enable TestContext2Plan_movedResourceBasic This is the first test exercising the basic functionality of config-driven move. We previously had it skipped because Terraform's previous design of treating all three of the state artifacts as mutable attributes of terraform.Context meant that it was too late during planning to deal with the move operations, and thus this test was failing. Thanks to the previous commit, which changes the terraform.Context API such that we can defer creating the three state artifacts until we're already doing planning, this test now works and shows Terraform correctly handling a resource that was formerly called "a" and is now called "b", with a "moved" block recording that renaming. --- internal/terraform/context_plan2_test.go | 2 -- 1 file changed, 2 deletions(-) diff --git a/internal/terraform/context_plan2_test.go b/internal/terraform/context_plan2_test.go index 08796a3c1bf2..53053fedadc1 100644 --- a/internal/terraform/context_plan2_test.go +++ b/internal/terraform/context_plan2_test.go @@ -701,8 +701,6 @@ provider "test" { } func TestContext2Plan_movedResourceBasic(t *testing.T) { - t.Skip("Context.Plan doesn't properly propagate moves into the prior state yet") - addrA := mustResourceInstanceAddr("test_object.a") addrB := mustResourceInstanceAddr("test_object.b") m := testModuleInline(t, map[string]string{ From 48859417faec8080ba058638c0dc074df205a625 Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Mon, 30 Aug 2021 14:12:17 -0700 Subject: [PATCH 023/644] Update CHANGELOG.md --- CHANGELOG.md | 1 + 1 file changed, 1 insertion(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index c883b7a0fdbc..2d74909b05d7 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -3,6 +3,7 @@ UPGRADE NOTES: * Terraform on macOS now requires macOS 10.13 High Sierra or later; Older macOS versions are no longer supported. +* The `terraform graph` command no longer supports `-type=validate` and `-type=eval` options. The validate graph is always the same as the plan graph anyway, and the "eval" graph was just an implementation detail of the `terraform console` command. The default behavior of creating a plan graph should be a reasonable replacement for both of the removed graph modes. (Please note that `terraform graph` is not covered by the Terraform v1.0 compatibility promises, because its behavior inherently exposes Terraform Core implementation details, so we recommend it only for interactive debugging tasks and not for use in automation.) 
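For context on the config-driven move patch above (core: Enable TestContext2Plan_movedResourceBasic), the kind of `moved` block that test exercises looks roughly like the sketch below. The resource addresses come from the test (`test_object.a` renamed to `test_object.b`); the snippet itself is illustrative and is not part of any patch in this series.

```hcl
# Illustrative sketch only: recording a rename with a "moved" block,
# as exercised by TestContext2Plan_movedResourceBasic.
resource "test_object" "b" {
}

moved {
  from = test_object.a
  to   = test_object.b
}
```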
NEW FEATURES: From f195ce7fd42975d30d48b7d5758537690aea01b6 Mon Sep 17 00:00:00 2001 From: James Bardin Date: Tue, 31 Aug 2021 11:17:32 -0400 Subject: [PATCH 024/644] remove temp test --- internal/plans/objchange/plan_valid_test.go | 82 --------------------- 1 file changed, 82 deletions(-) diff --git a/internal/plans/objchange/plan_valid_test.go b/internal/plans/objchange/plan_valid_test.go index dedb3958b010..834d1046356d 100644 --- a/internal/plans/objchange/plan_valid_test.go +++ b/internal/plans/objchange/plan_valid_test.go @@ -1486,85 +1486,3 @@ func TestAssertPlanValid(t *testing.T) { }) } } - -func TestAssertPlanValidTEST(t *testing.T) { - tests := map[string]struct { - Schema *configschema.Block - Prior cty.Value - Config cty.Value - Planned cty.Value - WantErrs []string - }{ - "computed in map": { - &configschema.Block{ - Attributes: map[string]*configschema.Attribute{ - "items": { - NestedType: &configschema.Object{ - Nesting: configschema.NestingMap, - Attributes: map[string]*configschema.Attribute{ - "name": { - Type: cty.String, - Computed: true, - Optional: true, - }, - }, - }, - Required: true, - }, - }, - }, - cty.NullVal(cty.Object(map[string]cty.Type{ - "items": cty.Map(cty.Object(map[string]cty.Type{ - "name": cty.String, - })), - })), - cty.ObjectVal(map[string]cty.Value{ - "items": cty.MapVal(map[string]cty.Value{ - "one": cty.ObjectVal(map[string]cty.Value{ - "name": cty.NullVal(cty.String), - //"name": cty.StringVal("computed"), - }), - }), - }), - cty.ObjectVal(map[string]cty.Value{ - "items": cty.MapVal(map[string]cty.Value{ - "one": cty.ObjectVal(map[string]cty.Value{ - "name": cty.StringVal("computed"), - }), - }), - }), - nil, - }, - } - for name, test := range tests { - t.Run(name, func(t *testing.T) { - errs := AssertPlanValid(test.Schema, test.Prior, test.Config, test.Planned) - - wantErrs := make(map[string]struct{}) - gotErrs := make(map[string]struct{}) - for _, err := range errs { - gotErrs[tfdiags.FormatError(err)] = struct{}{} - } - for _, msg := range test.WantErrs { - wantErrs[msg] = struct{}{} - } - - t.Logf( - "\nprior: %sconfig: %splanned: %s", - dump.Value(test.Planned), - dump.Value(test.Config), - dump.Value(test.Planned), - ) - for msg := range wantErrs { - if _, ok := gotErrs[msg]; !ok { - t.Errorf("missing expected error: %s", msg) - } - } - for msg := range gotErrs { - if _, ok := wantErrs[msg]; !ok { - t.Errorf("unexpected extra error: %s", msg) - } - } - }) - } -} From 5eb7170f70e31d8a25e79ae11a19f6adeafa53b9 Mon Sep 17 00:00:00 2001 From: James Bardin Date: Wed, 2 Dec 2020 11:57:29 -0500 Subject: [PATCH 025/644] add staticcheck to tools --- go.mod | 2 ++ go.sum | 2 ++ tools/tools.go | 1 + 3 files changed, 5 insertions(+) diff --git a/go.mod b/go.mod index 89e274d7f7c3..6cbca3b170df 100644 --- a/go.mod +++ b/go.mod @@ -14,6 +14,7 @@ require ( github.com/Azure/go-autorest/logger v0.2.1 // indirect github.com/Azure/go-autorest/tracing v0.6.0 // indirect github.com/Azure/go-ntlmssp v0.0.0-20200615164410-66371956d46c // indirect + github.com/BurntSushi/toml v0.3.1 // indirect github.com/ChrisTrenkamp/goxpath v0.0.0-20190607011252-c5096ec8773d // indirect github.com/Masterminds/goutils v1.1.0 // indirect github.com/Masterminds/semver v1.5.0 // indirect @@ -172,6 +173,7 @@ require ( gopkg.in/inf.v0 v0.9.0 // indirect gopkg.in/ini.v1 v1.42.0 // indirect gopkg.in/yaml.v2 v2.3.0 // indirect + honnef.co/go/tools v0.0.1-2020.1.4 k8s.io/api v0.0.0-20190620084959-7cf5895f2711 k8s.io/apimachinery v0.0.0-20190913080033-27d36303b655 k8s.io/client-go 
v10.0.0+incompatible diff --git a/go.sum b/go.sum index 9b2e43bafe4c..b89a0b92cdb7 100644 --- a/go.sum +++ b/go.sum @@ -74,6 +74,7 @@ github.com/Azure/go-autorest/tracing v0.6.0/go.mod h1:+vhtPC754Xsa23ID7GlGsrdKBp github.com/Azure/go-ntlmssp v0.0.0-20180810175552-4a21cbd618b4/go.mod h1:chxPXzSsl7ZWRAuOIE23GDNzjWuZquvFlgA8xmpunjU= github.com/Azure/go-ntlmssp v0.0.0-20200615164410-66371956d46c h1:/IBSNwUN8+eKzUzbJPqhK839ygXJ82sde8x3ogr6R28= github.com/Azure/go-ntlmssp v0.0.0-20200615164410-66371956d46c/go.mod h1:chxPXzSsl7ZWRAuOIE23GDNzjWuZquvFlgA8xmpunjU= +github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ= github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo= github.com/ChrisTrenkamp/goxpath v0.0.0-20170922090931-c385f95c6022/go.mod h1:nuWgzSkT5PnyOd+272uUmV0dnAnAn42Mk7PiQC5VzN4= @@ -1061,6 +1062,7 @@ honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWh honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg= honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k= +honnef.co/go/tools v0.0.1-2020.1.4 h1:UoveltGrhghAA7ePc+e+QYDHXrBps2PqFZiHkGR/xK8= honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k= k8s.io/api v0.0.0-20190620084959-7cf5895f2711 h1:BblVYz/wE5WtBsD/Gvu54KyBUTJMflolzc5I2DTvh50= k8s.io/api v0.0.0-20190620084959-7cf5895f2711/go.mod h1:TBhBqb1AWbBQbW3XRusr7n7E4v2+5ZY8r8sAMnyFC5A= diff --git a/tools/tools.go b/tools/tools.go index e5e1255bab06..712cd6717c56 100644 --- a/tools/tools.go +++ b/tools/tools.go @@ -10,4 +10,5 @@ import ( _ "golang.org/x/tools/cmd/cover" _ "golang.org/x/tools/cmd/stringer" _ "google.golang.org/grpc/cmd/protoc-gen-go-grpc" + _ "honnef.co/go/tools/cmd/staticcheck" ) From 110d4820336f8363f6d2077aeaf6170dc7d25303 Mon Sep 17 00:00:00 2001 From: Peter Mescalchin Date: Wed, 1 Sep 2021 13:59:08 +1000 Subject: [PATCH 026/644] Update S3 backend documentation - DynamoDB uses Partition keys, not primary keys --- website/docs/language/settings/backends/s3.html.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/language/settings/backends/s3.html.md b/website/docs/language/settings/backends/s3.html.md index 73f668e9468f..f6b9b9b69e0d 100644 --- a/website/docs/language/settings/backends/s3.html.md +++ b/website/docs/language/settings/backends/s3.html.md @@ -200,7 +200,7 @@ The following configuration is optional: The following configuration is optional: * `dynamodb_endpoint` - (Optional) Custom endpoint for the AWS DynamoDB API. This can also be sourced from the `AWS_DYNAMODB_ENDPOINT` environment variable. -* `dynamodb_table` - (Optional) Name of DynamoDB Table to use for state locking and consistency. The table must have a primary key named `LockID` with type of `string`. If not configured, state locking will be disabled. +* `dynamodb_table` - (Optional) Name of DynamoDB Table to use for state locking and consistency. The table must have a partition key named `LockID` with type of `String`. If not configured, state locking will be disabled. 
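As a concrete illustration of the `dynamodb_table` setting documented in the S3 backend patch above, a minimal backend configuration with state locking might look like the following. The bucket, table, and region names are placeholders chosen for the example, not values taken from the patch.

```hcl
terraform {
  backend "s3" {
    bucket         = "example-terraform-state"   # placeholder bucket name
    key            = "global/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"           # table with a partition key "LockID" of type String
  }
}
```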
## Multi-account AWS Architecture From f238e9395ab1e1cb7f311943c3a2506c3909d284 Mon Sep 17 00:00:00 2001 From: Peter Mescalchin Date: Wed, 1 Sep 2021 14:02:28 +1000 Subject: [PATCH 027/644] Quality of life: updated all AWS document links to https:// --- website/docs/language/settings/backends/s3.html.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/website/docs/language/settings/backends/s3.html.md b/website/docs/language/settings/backends/s3.html.md index f6b9b9b69e0d..e043701f445b 100644 --- a/website/docs/language/settings/backends/s3.html.md +++ b/website/docs/language/settings/backends/s3.html.md @@ -240,15 +240,15 @@ gain access to the (usually more privileged) administrative infrastructure. Your administrative AWS account will contain at least the following items: -* One or more [IAM user](http://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html) +* One or more [IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html) for system administrators that will log in to maintain infrastructure in the other accounts. -* Optionally, one or more [IAM groups](http://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups.html) +* Optionally, one or more [IAM groups](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups.html) to differentiate between different groups of users that have different levels of access to the other AWS accounts. * An [S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingBucket.html) that will contain the Terraform state files for each workspace. -* A [DynamoDB table](http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.CoreComponents.html#HowItWorks.CoreComponents.TablesItemsAttributes) +* A [DynamoDB table](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.CoreComponents.html#HowItWorks.CoreComponents.TablesItemsAttributes) that will be used for locking to prevent concurrent operations on a single workspace. @@ -266,7 +266,7 @@ administrative account described above. Your environment accounts will eventually contain your own product-specific infrastructure. Along with this it must contain one or more -[IAM roles](http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) +[IAM roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) that grant sufficient access for Terraform to perform the desired management tasks. @@ -274,7 +274,7 @@ tasks. Each Administrator will run Terraform using credentials for their IAM user in the administrative account. -[IAM Role Delegation](http://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html) +[IAM Role Delegation](https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html) is used to grant these users access to the roles created in each environment account. @@ -369,7 +369,7 @@ tend to require. When running Terraform in an automation tool running on an Amazon EC2 instance, consider running this instance in the administrative account and using an -[instance profile](http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html) +[instance profile](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html) in place of the various administrator IAM users suggested above. An IAM instance profile can also be granted cross-account delegation access via an IAM policy, giving this instance the access it needs to run Terraform. 
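The multi-account guidance touched by the link updates above relies on IAM role delegation. A minimal sketch of how an environment-account role could be assumed through the S3 backend is shown below; the role ARN, account ID, and bucket name are hypothetical and only illustrate the pattern the documentation describes.

```hcl
terraform {
  backend "s3" {
    bucket   = "example-terraform-state"
    key      = "environments/production/terraform.tfstate"
    region   = "us-east-1"
    # Hypothetical role in the environment account, reached via IAM role delegation
    # from an administrator's IAM user in the administrative account.
    role_arn = "arn:aws:iam::123456789012:role/TerraformAccess"
  }
}
```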
From a3fb07d00818513c902bffd41ea0e740d9c8b768 Mon Sep 17 00:00:00 2001 From: James Bardin Date: Wed, 2 Dec 2020 12:02:33 -0500 Subject: [PATCH 028/644] add staticcheck make target cleanup the old fmtcheck script while we're in here --- Makefile | 5 ++++- scripts/gofmtcheck.sh | 8 ++++---- scripts/staticcheck.sh | 16 ++++++++++++++++ 3 files changed, 24 insertions(+), 5 deletions(-) create mode 100755 scripts/staticcheck.sh diff --git a/Makefile b/Makefile index ede1c5cd6441..16e2abaa5b2c 100644 --- a/Makefile +++ b/Makefile @@ -20,6 +20,9 @@ protobuf: fmtcheck: @sh -c "'$(CURDIR)/scripts/gofmtcheck.sh'" +staticcheck: + @sh -c "'$(CURDIR)/scripts/staticcheck.sh'" + website: ifeq (,$(wildcard $(GOPATH)/src/$(WEBSITE_REPO))) echo "$(WEBSITE_REPO) not found in your GOPATH (necessary for layouts and assets), get-ting..." @@ -46,4 +49,4 @@ endif # under parallel conditions. .NOTPARALLEL: -.PHONY: fmtcheck generate protobuf website website-test +.PHONY: fmtcheck generate protobuf website website-test staticcheck diff --git a/scripts/gofmtcheck.sh b/scripts/gofmtcheck.sh index 9a341da94204..00b81a8befde 100755 --- a/scripts/gofmtcheck.sh +++ b/scripts/gofmtcheck.sh @@ -1,12 +1,12 @@ #!/usr/bin/env bash -# Check gofmt -echo "==> Checking that code complies with gofmt requirements..." -gofmt_files=$(gofmt -l `find . -name '*.go' | grep -v vendor`) +# Check go fmt +echo "==> Checking that code complies with go fmt requirements..." +gofmt_files=$(go fmt ./...) if [[ -n ${gofmt_files} ]]; then echo 'gofmt needs running on the following files:' echo "${gofmt_files}" - echo "You can use the command: \`gofmt -w .\` to reformat code." + echo "You can use the command: \`go fmt\` to reformat code." exit 1 fi diff --git a/scripts/staticcheck.sh b/scripts/staticcheck.sh new file mode 100755 index 000000000000..66c47092f9f9 --- /dev/null +++ b/scripts/staticcheck.sh @@ -0,0 +1,16 @@ +#!/usr/bin/env bash + +echo "==> Checking that code complies with static analysis requirements..." +# Skip legacy code which is frozen, and can be removed once we can refactor the +# remote backends to no longer require it. +skip="internal/legacy|backend/remote-state/" + +# Skip generated code for protobufs. +skip=$skip"|internal/planproto|internal/tfplugin5|internal/tfplugin6" + +packages=$(go list ./... | egrep -v ${skip}) + +# We are skipping style-related checks, since terraform intentionally breaks +# some of these. The goal here is to find issues that reduce code clarity, or +# may result in bugs. 
+staticcheck -checks 'all,-ST*' ${packages} From 34325d5b66a8eb4eda5e435c967061c6c7fbfbc4 Mon Sep 17 00:00:00 2001 From: James Bardin Date: Wed, 2 Dec 2020 12:16:14 -0500 Subject: [PATCH 029/644] add staticcheck to circleci --- .circleci/config.yml | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/.circleci/config.yml b/.circleci/config.yml index 85ec770d7708..0113ae17c3f4 100644 --- a/.circleci/config.yml +++ b/.circleci/config.yml @@ -26,7 +26,8 @@ jobs: steps: - checkout - run: go mod verify - - run: make fmtcheck generate + - run: go install honnef.co/go/tools/cmd/staticcheck + - run: make fmtcheck generate staticcheck - run: name: verify no code was generated command: | From 863963e7a6af7a1125c45df523c62d1d92db52e0 Mon Sep 17 00:00:00 2001 From: James Bardin Date: Tue, 31 Aug 2021 17:33:26 -0400 Subject: [PATCH 030/644] de-linting --- internal/addrs/resource.go | 4 -- internal/backend/remote/backend.go | 18 +-------- internal/backend/remote/backend_apply_test.go | 2 +- internal/command/apply.go | 3 +- internal/command/apply_test.go | 2 + internal/command/format/diff.go | 4 +- internal/command/jsonplan/values.go | 25 ------------ internal/command/login.go | 8 +--- internal/command/meta.go | 39 ------------------- internal/command/meta_backend.go | 3 +- internal/command/refresh.go | 3 +- internal/command/state_meta.go | 2 +- internal/command/state_mv_test.go | 6 --- internal/command/views/add.go | 4 +- internal/command/views/json/output_test.go | 3 ++ internal/command/views/output.go | 4 -- internal/communicator/ssh/communicator.go | 3 +- internal/configs/provider_requirements.go | 3 +- internal/dag/marshal.go | 4 +- internal/terraform/context_plan.go | 1 + internal/terraform/graph_test.go | 14 ------- internal/terraform/resource_provider.go | 16 -------- .../terraform/resource_provider_mock_test.go | 27 ------------- tools/protobuf-compile/protobuf-compile.go | 4 +- 24 files changed, 24 insertions(+), 178 deletions(-) delete mode 100644 internal/terraform/resource_provider.go diff --git a/internal/addrs/resource.go b/internal/addrs/resource.go index a26f941c9412..2c69d2f70517 100644 --- a/internal/addrs/resource.go +++ b/internal/addrs/resource.go @@ -194,14 +194,10 @@ func (r AbsResource) absMoveableSigil() { // AbsResource is moveable } -type absResourceKey string - func (r AbsResource) UniqueKey() UniqueKey { return absResourceInstanceKey(r.String()) } -func (rk absResourceKey) uniqueKeySigil() {} - // AbsResourceInstance is an absolute address for a resource instance under a // given module path. type AbsResourceInstance struct { diff --git a/internal/backend/remote/backend.go b/internal/backend/remote/backend.go index 5e11872391be..b4aa115a798b 100644 --- a/internal/backend/remote/backend.go +++ b/internal/backend/remote/backend.go @@ -888,7 +888,7 @@ func (b *Remote) VerifyWorkspaceTerraformVersion(workspaceName string) tfdiags.D // If the workspace has remote operations disabled, the remote Terraform // version is effectively meaningless, so we'll skip version verification. - if workspace.Operations == false { + if !workspace.Operations { return nil } @@ -963,22 +963,6 @@ func (b *Remote) IsLocalOperations() bool { return b.forceLocal } -// Colorize returns the Colorize structure that can be used for colorizing -// output. This is guaranteed to always return a non-nil value and so useful -// as a helper to wrap any potentially colored strings. -// -// TODO SvH: Rename this back to Colorize as soon as we can pass -no-color. 
-func (b *Remote) cliColorize() *colorstring.Colorize { - if b.CLIColor != nil { - return b.CLIColor - } - - return &colorstring.Colorize{ - Colors: colorstring.DefaultColors, - Disable: true, - } -} - func generalError(msg string, err error) error { var diags tfdiags.Diagnostics diff --git a/internal/backend/remote/backend_apply_test.go b/internal/backend/remote/backend_apply_test.go index 8ba2aa271ca7..d54914559717 100644 --- a/internal/backend/remote/backend_apply_test.go +++ b/internal/backend/remote/backend_apply_test.go @@ -1594,7 +1594,7 @@ func TestRemote_applyVersionCheck(t *testing.T) { } // RUN: prepare the apply operation and run it - op, configCleanup, done := testOperationApply(t, "./testdata/apply") + op, configCleanup, _ := testOperationApply(t, "./testdata/apply") defer configCleanup() streams, done := terminal.StreamsForTesting(t) diff --git a/internal/command/apply.go b/internal/command/apply.go index d3fd56066e69..9d481c06b8e1 100644 --- a/internal/command/apply.go +++ b/internal/command/apply.go @@ -45,8 +45,7 @@ func (c *ApplyCommand) Run(rawArgs []string) int { // Instantiate the view, even if there are flag errors, so that we render // diagnostics according to the desired view - var view views.Apply - view = views.NewApply(args.ViewType, c.Destroy, c.View) + view := views.NewApply(args.ViewType, c.Destroy, c.View) if diags.HasErrors() { view.Diagnostics(diags) diff --git a/internal/command/apply_test.go b/internal/command/apply_test.go index c5cc6400fea7..750c6b6bf336 100644 --- a/internal/command/apply_test.go +++ b/internal/command/apply_test.go @@ -1779,6 +1779,7 @@ func TestApply_terraformEnvNonDefault(t *testing.T) { }, } if code := newCmd.Run([]string{"test"}); code != 0 { + t.Fatal("error creating workspace") } } @@ -1792,6 +1793,7 @@ func TestApply_terraformEnvNonDefault(t *testing.T) { }, } if code := selCmd.Run(args); code != 0 { + t.Fatal("error switching workspace") } } diff --git a/internal/command/format/diff.go b/internal/command/format/diff.go index 258c979a5ab7..982b95980bf5 100644 --- a/internal/command/format/diff.go +++ b/internal/command/format/diff.go @@ -70,7 +70,7 @@ func ResourceChange( buf.WriteString(color.Color(fmt.Sprintf("[bold] # %s[reset] will be [bold][red]destroyed", dispAddr))) if change.DeposedKey != states.NotDeposed { // Some extra context about this unusual situation. - buf.WriteString(color.Color(fmt.Sprint("\n # (left over from a partially-failed replacement of this instance)"))) + buf.WriteString(color.Color("\n # (left over from a partially-failed replacement of this instance)")) } default: // should never happen, since the above is exhaustive @@ -755,8 +755,6 @@ func (p *blockBodyDiffPrinter) writeNestedAttrDiff( p.buf.WriteString(" -> (known after apply)") } } - - return } func (p *blockBodyDiffPrinter) writeNestedBlockDiffs(name string, blockS *configschema.NestedBlock, old, new cty.Value, blankBefore bool, indent int, path cty.Path) int { diff --git a/internal/command/jsonplan/values.go b/internal/command/jsonplan/values.go index d79703215fbf..5716aa67e855 100644 --- a/internal/command/jsonplan/values.go +++ b/internal/command/jsonplan/values.go @@ -274,28 +274,3 @@ func marshalPlanModules( return ret, nil } - -// marshalSensitiveValues returns a map of sensitive attributes, with the value -// set to true. It returns nil if the value is nil or if there are no sensitive -// vals. 
-func marshalSensitiveValues(value cty.Value) map[string]bool { - if value.RawEquals(cty.NilVal) || value.IsNull() { - return nil - } - - ret := make(map[string]bool) - - it := value.ElementIterator() - for it.Next() { - k, v := it.Element() - s := jsonstate.SensitiveAsBool(v) - if !s.RawEquals(cty.False) { - ret[k.AsString()] = true - } - } - - if len(ret) == 0 { - return nil - } - return ret -} diff --git a/internal/command/login.go b/internal/command/login.go index 7233ffd096c2..4ede60ddb5c8 100644 --- a/internal/command/login.go +++ b/internal/command/login.go @@ -311,13 +311,9 @@ func (c *LoginCommand) outputDefaultTFELoginSuccess(dispHostname string) { } func (c *LoginCommand) outputDefaultTFCLoginSuccess() { - c.Ui.Output( - fmt.Sprintf( - c.Colorize().Color(strings.TrimSpace(` + c.Ui.Output(c.Colorize().Color(strings.TrimSpace(` [green][bold]Success![reset] [bold]Logged in to Terraform Cloud[reset] -`)), - ) + "\n", - ) +` + "\n"))) } func (c *LoginCommand) logMOTDError(err error) { diff --git a/internal/command/meta.go b/internal/command/meta.go index 70d317779429..072cfad419ef 100644 --- a/internal/command/meta.go +++ b/internal/command/meta.go @@ -15,8 +15,6 @@ import ( "time" plugin "github.com/hashicorp/go-plugin" - "github.com/hashicorp/hcl/v2" - "github.com/hashicorp/hcl/v2/hclsyntax" "github.com/hashicorp/terraform-svchost/disco" "github.com/hashicorp/terraform/internal/addrs" "github.com/hashicorp/terraform/internal/backend" @@ -554,43 +552,6 @@ func (m *Meta) extendedFlagSet(n string) *flag.FlagSet { return f } -// parseTargetFlags must be called for any commands supporting -target -// arguments. This method attempts to parse each -target flag into an -// addrs.Target, storing in the Meta.targets slice. -// -// If any flags cannot be parsed, we rewrap the first error diagnostic with a -// custom title to clarify the source of the error. The normal approach of -// directly returning the diags from HCL or the addrs package results in -// confusing incorrect "source" results when presented. -func (m *Meta) parseTargetFlags() tfdiags.Diagnostics { - var diags tfdiags.Diagnostics - m.targets = nil - for _, tf := range m.targetFlags { - traversal, syntaxDiags := hclsyntax.ParseTraversalAbs([]byte(tf), "", hcl.Pos{Line: 1, Column: 1}) - if syntaxDiags.HasErrors() { - diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, - fmt.Sprintf("Invalid target %q", tf), - syntaxDiags[0].Detail, - )) - continue - } - - target, targetDiags := addrs.ParseTarget(traversal) - if targetDiags.HasErrors() { - diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, - fmt.Sprintf("Invalid target %q", tf), - targetDiags[0].Description().Detail, - )) - continue - } - - m.targets = append(m.targets, target.Subject) - } - return diags -} - // process will process any -no-color entries out of the arguments. This // will potentially modify the args in-place. It will return the resulting // slice, and update the Meta and Ui. 
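The de-linting hunks that follow replace `errwrap.Wrapf` with the standard library's `fmt.Errorf` and the `%w` verb. A minimal sketch of that wrapping pattern is below; the package layout and error names are invented for illustration and do not appear in the patch.

```go
package main

import (
	"errors"
	"fmt"
)

var errInvalidConfig = errors.New("saved backend configuration is invalid")

func decodeConfig() error {
	// Wrapping with %w keeps the underlying error reachable via errors.Is and
	// errors.As, which is the benefit of fmt.Errorf over the errwrap helper.
	return fmt.Errorf("decode failed: %w", errInvalidConfig)
}

func main() {
	err := decodeConfig()
	if errors.Is(err, errInvalidConfig) {
		fmt.Println(err)
	}
}
```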
diff --git a/internal/command/meta_backend.go b/internal/command/meta_backend.go index ab7375107495..71ebfbe26224 100644 --- a/internal/command/meta_backend.go +++ b/internal/command/meta_backend.go @@ -12,7 +12,6 @@ import ( "strconv" "strings" - "github.com/hashicorp/errwrap" "github.com/hashicorp/hcl/v2" "github.com/hashicorp/hcl/v2/hcldec" "github.com/hashicorp/terraform/internal/backend" @@ -244,7 +243,7 @@ func (m *Meta) BackendForPlan(settings plans.Backend) (backend.Enhanced, tfdiags schema := b.ConfigSchema() configVal, err := settings.Config.Decode(schema.ImpliedType()) if err != nil { - diags = diags.Append(errwrap.Wrapf("saved backend configuration is invalid: {{err}}", err)) + diags = diags.Append(fmt.Errorf("saved backend configuration is invalid: %w", err)) return nil, diags } diff --git a/internal/command/refresh.go b/internal/command/refresh.go index 75cfc3458ea7..1bb5d3933c75 100644 --- a/internal/command/refresh.go +++ b/internal/command/refresh.go @@ -26,8 +26,7 @@ func (c *RefreshCommand) Run(rawArgs []string) int { // Instantiate the view, even if there are flag errors, so that we render // diagnostics according to the desired view - var view views.Refresh - view = views.NewRefresh(args.ViewType, c.View) + view := views.NewRefresh(args.ViewType, c.View) if diags.HasErrors() { view.Diagnostics(diags) diff --git a/internal/command/state_meta.go b/internal/command/state_meta.go index 7fd7ce1bf0c8..fa04245a6c37 100644 --- a/internal/command/state_meta.go +++ b/internal/command/state_meta.go @@ -121,7 +121,7 @@ func (c *StateMeta) lookupResourceInstanceAddr(state *states.State, allowMissing } } - if found == false && !allowMissing { + if !found && !allowMissing { diags = diags.Append(tfdiags.Sourceless( tfdiags.Error, "Unknown module", diff --git a/internal/command/state_mv_test.go b/internal/command/state_mv_test.go index f9beefd1d719..1106879f4048 100644 --- a/internal/command/state_mv_test.go +++ b/internal/command/state_mv_test.go @@ -9,17 +9,11 @@ import ( "github.com/google/go-cmp/cmp" "github.com/mitchellh/cli" - "github.com/mitchellh/colorstring" "github.com/hashicorp/terraform/internal/addrs" "github.com/hashicorp/terraform/internal/states" ) -var disabledColorize = &colorstring.Colorize{ - Colors: colorstring.DefaultColors, - Disable: true, -} - func TestStateMv(t *testing.T) { state := states.BuildState(func(s *states.SyncState) { s.SetResourceInstanceCurrent( diff --git a/internal/command/views/add.go b/internal/command/views/add.go index 233ae7f6e4e4..5330bf35ca2a 100644 --- a/internal/command/views/add.go +++ b/internal/command/views/add.go @@ -256,7 +256,7 @@ func (v *addHuman) writeConfigNestedBlock(buf *strings.Builder, name string, sch } func (v *addHuman) writeConfigNestedTypeAttribute(buf *strings.Builder, name string, schema *configschema.Attribute, indent int) error { - if schema.Required == false && v.optional == false { + if !schema.Required && !v.optional { return nil } @@ -521,7 +521,6 @@ func writeAttrTypeConstraint(buf *strings.Builder, schema *configschema.Attribut } else { buf.WriteString(fmt.Sprintf("%s\n", schema.Type.FriendlyName())) } - return } func writeBlockTypeConstraint(buf *strings.Builder, schema *configschema.NestedBlock) { @@ -530,7 +529,6 @@ func writeBlockTypeConstraint(buf *strings.Builder, schema *configschema.NestedB } else { buf.WriteString(" # OPTIONAL block\n") } - return } // copied from command/format/diff diff --git a/internal/command/views/json/output_test.go b/internal/command/views/json/output_test.go index 
f2c220f86c1f..e3e9495b8cf9 100644 --- a/internal/command/views/json/output_test.go +++ b/internal/command/views/json/output_test.go @@ -74,6 +74,9 @@ func TestOutputsFromMap(t *testing.T) { func TestOutputsFromChanges(t *testing.T) { root := addrs.RootModuleInstance num, err := plans.NewDynamicValue(cty.NumberIntVal(1234), cty.Number) + if err != nil { + t.Fatalf("unexpected error creating dynamic value: %v", err) + } str, err := plans.NewDynamicValue(cty.StringVal("1234"), cty.String) if err != nil { t.Fatalf("unexpected error creating dynamic value: %v", err) diff --git a/internal/command/views/output.go b/internal/command/views/output.go index 98686c66494b..6545aaceec9b 100644 --- a/internal/command/views/output.go +++ b/internal/command/views/output.go @@ -105,10 +105,6 @@ func (v *OutputHuman) Diagnostics(diags tfdiags.Diagnostics) { // type of an output value is not important. type OutputRaw struct { view *View - - // Unit tests may set rawPrint to capture the output from the Output - // method, which would normally go to stdout directly. - rawPrint func(string) } var _ Output = (*OutputRaw)(nil) diff --git a/internal/communicator/ssh/communicator.go b/internal/communicator/ssh/communicator.go index 4de6a9040856..6dff03367cc6 100644 --- a/internal/communicator/ssh/communicator.go +++ b/internal/communicator/ssh/communicator.go @@ -19,7 +19,6 @@ import ( "time" "github.com/apparentlymart/go-shquot/shquot" - "github.com/hashicorp/errwrap" "github.com/hashicorp/terraform/internal/communicator/remote" "github.com/hashicorp/terraform/internal/provisioners" "github.com/zclconf/go-cty/cty" @@ -192,7 +191,7 @@ func (c *Communicator) Connect(o provisioners.UIOutput) (err error) { log.Printf("[DEBUG] Connection established. Handshaking for user %v", c.connInfo.User) sshConn, sshChan, req, err := ssh.NewClientConn(c.conn, hostAndPort, c.config.config) if err != nil { - err = errwrap.Wrapf(fmt.Sprintf("SSH authentication failed (%s@%s): {{err}}", c.connInfo.User, hostAndPort), err) + err = fmt.Errorf("SSH authentication failed (%s@%s): %w", c.connInfo.User, hostAndPort, err) // While in theory this should be a fatal error, some hosts may start // the ssh service before it is properly configured, or before user diff --git a/internal/configs/provider_requirements.go b/internal/configs/provider_requirements.go index 6774fbba40ff..c982c1a37cd8 100644 --- a/internal/configs/provider_requirements.go +++ b/internal/configs/provider_requirements.go @@ -86,6 +86,7 @@ func decodeRequiredProvidersBlock(block *hcl.Block) (*RequiredProviders, hcl.Dia continue } + LOOP: for _, kv := range kvs { key, keyDiags := kv.Key.Value(nil) if keyDiags.HasErrors() { @@ -213,7 +214,7 @@ func decodeRequiredProvidersBlock(block *hcl.Block) (*RequiredProviders, hcl.Dia Detail: `required_providers objects can only contain "version", "source" and "configuration_aliases" attributes. To configure a provider, use a "provider" block.`, Subject: kv.Key.Range().Ptr(), }) - break + break LOOP } } diff --git a/internal/dag/marshal.go b/internal/dag/marshal.go index 0ba52152fbf2..a78032ad3607 100644 --- a/internal/dag/marshal.go +++ b/internal/dag/marshal.go @@ -160,7 +160,9 @@ func marshalVertexID(v Vertex) string { case reflect.Chan, reflect.Func, reflect.Map, reflect.Ptr, reflect.Slice, reflect.UnsafePointer: return strconv.Itoa(int(val.Pointer())) case reflect.Interface: - return strconv.Itoa(int(val.InterfaceData()[1])) + // A vertex shouldn't contain another layer of interface, but handle + // this just in case. 
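The `break LOOP` change in `decodeRequiredProvidersBlock` above is the usual fix for a common Go pitfall: a bare `break` inside a `switch` (or an inner loop) exits only that innermost construct, so the surrounding `for` keeps iterating. A minimal, self-contained sketch of the labeled form — the attribute names and the `Items` label are illustrative only, not taken from the patch:

```go
package main

import "fmt"

func main() {
	attrs := []string{"source", "version", "unexpected", "never reached"}

Items:
	for _, name := range attrs {
		switch name {
		case "source", "version":
			fmt.Println("ok:", name)
		default:
			fmt.Println("invalid attribute:", name)
			// A bare break here would only leave the switch, and the loop
			// would continue to "never reached"; break Items exits the for
			// loop entirely, mirroring the break LOOP change above.
			break Items
		}
	}
}
```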
+ return fmt.Sprintf("%#v", val.Interface()) } if v, ok := v.(Hashable); ok { diff --git a/internal/terraform/context_plan.go b/internal/terraform/context_plan.go index be5d30e61376..7499e57d882b 100644 --- a/internal/terraform/context_plan.go +++ b/internal/terraform/context_plan.go @@ -306,6 +306,7 @@ func (c *Context) prePlanFindAndApplyMoves(config *configs.Config, prevRunState break } } + //lint:ignore SA9003 TODO if !matchesTarget { // TODO: Return an error stating that a targeted plan is // only valid if it includes this address that was moved. diff --git a/internal/terraform/graph_test.go b/internal/terraform/graph_test.go index 6676aa8467f9..5e163a021353 100644 --- a/internal/terraform/graph_test.go +++ b/internal/terraform/graph_test.go @@ -6,20 +6,6 @@ import ( "github.com/hashicorp/terraform/internal/dag" ) -// testGraphContains is an assertion helper that tests that a node is -// contained in the graph. -func testGraphContains(t *testing.T, g *Graph, name string) { - for _, v := range g.Vertices() { - if dag.VertexName(v) == name { - return - } - } - - t.Fatalf( - "Expected %q in:\n\n%s", - name, g.String()) -} - // testGraphnotContains is an assertion helper that tests that a node is // NOT contained in the graph. func testGraphNotContains(t *testing.T, g *Graph, name string) { diff --git a/internal/terraform/resource_provider.go b/internal/terraform/resource_provider.go deleted file mode 100644 index d4bdfcb68abe..000000000000 --- a/internal/terraform/resource_provider.go +++ /dev/null @@ -1,16 +0,0 @@ -package terraform - -const errPluginInit = ` -Plugins are external binaries that Terraform uses to access and manipulate -resources. The configuration provided requires plugins which can't be located, -don't satisfy the version constraints, or are otherwise incompatible. - -Terraform automatically discovers provider requirements from your -configuration, including providers used in child modules. To see the -requirements and constraints, run "terraform providers". - -%s - -Plugin reinitialization required. Please address the above error(s) and run -"terraform init". -` diff --git a/internal/terraform/resource_provider_mock_test.go b/internal/terraform/resource_provider_mock_test.go index a077ba09d8f6..6592b0a96011 100644 --- a/internal/terraform/resource_provider_mock_test.go +++ b/internal/terraform/resource_provider_mock_test.go @@ -46,33 +46,6 @@ func mockProviderWithResourceTypeSchema(name string, schema *configschema.Block) } } -// mockProviderWithProviderSchema is a test helper to create a mock provider -// from an existing ProviderSchema. -func mockProviderWithProviderSchema(providerSchema ProviderSchema) *MockProvider { - p := &MockProvider{ - GetProviderSchemaResponse: &providers.GetProviderSchemaResponse{ - Provider: providers.Schema{ - Block: providerSchema.Provider, - }, - ResourceTypes: map[string]providers.Schema{}, - DataSources: map[string]providers.Schema{}, - }, - } - - for name, schema := range providerSchema.ResourceTypes { - p.GetProviderSchemaResponse.ResourceTypes[name] = providers.Schema{ - Block: schema, - Version: int64(providerSchema.ResourceTypeSchemaVersions[name]), - } - } - - for name, schema := range providerSchema.DataSources { - p.GetProviderSchemaResponse.DataSources[name] = providers.Schema{Block: schema} - } - - return p -} - // getProviderSchemaResponseFromProviderSchema is a test helper to convert a // ProviderSchema to a GetProviderSchemaResponse for use when building a mock provider. 
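Several hunks above drop the `github.com/hashicorp/errwrap` dependency in favor of the standard library's `%w` verb in `fmt.Errorf`. A rough standalone sketch of the same pattern, using a hypothetical `loadConfig` helper in place of the backend and SSH call sites, showing that `%w` keeps the wrapped error inspectable with `errors.Is`:

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

// loadConfig is a hypothetical helper standing in for the call sites
// touched in the patches above.
func loadConfig(path string) error {
	if _, err := os.ReadFile(path); err != nil {
		// fmt.Errorf with %w replaces errwrap.Wrapf("...: {{err}}", err)
		// while keeping the original error in the chain.
		return fmt.Errorf("saved backend configuration is invalid: %w", err)
	}
	return nil
}

func main() {
	err := loadConfig("does-not-exist.json")
	fmt.Println(err)
	// Callers can still match the underlying cause through the wrapper.
	fmt.Println(errors.Is(err, os.ErrNotExist)) // true
}
```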
func getProviderSchemaResponseFromProviderSchema(providerSchema *ProviderSchema) *providers.GetProviderSchemaResponse { diff --git a/tools/protobuf-compile/protobuf-compile.go b/tools/protobuf-compile/protobuf-compile.go index 19cca308a295..b1efe07890a3 100644 --- a/tools/protobuf-compile/protobuf-compile.go +++ b/tools/protobuf-compile/protobuf-compile.go @@ -80,7 +80,7 @@ func main() { if err != nil { log.Fatal(err) } - protocGenGoGrpcExec, err := buildProtocGenGoGrpc(workDir) + _, err = buildProtocGenGoGrpc(workDir) if err != nil { log.Fatal(err) } @@ -93,7 +93,7 @@ func main() { if err != nil { log.Fatal(err) } - protocGenGoGrpcExec, err = filepath.Abs(protocGenGoExec) + protocGenGoGrpcExec, err := filepath.Abs(protocGenGoExec) if err != nil { log.Fatal(err) } From b60f201eeda9006d4b53142c95744d924d27ae4a Mon Sep 17 00:00:00 2001 From: Alisdair McDiarmid Date: Wed, 1 Sep 2021 11:17:13 -0400 Subject: [PATCH 031/644] json-output: Release format version 1.0 --- internal/command/jsonplan/plan.go | 2 +- internal/command/jsonprovider/provider.go | 2 +- internal/command/jsonstate/state.go | 2 +- .../testdata/providers-schema/basic/output.json | 2 +- .../testdata/providers-schema/empty/output.json | 4 ++-- .../providers-schema/required/output.json | 2 +- .../testdata/show-json-sensitive/output.json | 4 ++-- .../testdata/show-json-state/basic/output.json | 2 +- .../testdata/show-json-state/empty/output.json | 4 ++-- .../show-json-state/modules/output.json | 2 +- .../sensitive-variables/output.json | 2 +- .../testdata/show-json/basic-create/output.json | 4 ++-- .../testdata/show-json/basic-delete/output.json | 4 ++-- .../testdata/show-json/basic-update/output.json | 4 ++-- .../testdata/show-json/drift/output.json | 4 ++-- .../show-json/module-depends-on/output.json | 2 +- .../testdata/show-json/modules/output.json | 4 ++-- .../show-json/multi-resource-update/output.json | 4 ++-- .../show-json/nested-modules/output.json | 2 +- .../provider-version-no-config/output.json | 4 ++-- .../show-json/provider-version/output.json | 4 ++-- .../show-json/requires-replace/output.json | 4 ++-- .../show-json/sensitive-values/output.json | 4 ++-- .../incorrectmodulename/output.json | 2 +- .../validate-invalid/interpolation/output.json | 2 +- .../missing_defined_var/output.json | 2 +- .../validate-invalid/missing_quote/output.json | 2 +- .../validate-invalid/missing_var/output.json | 2 +- .../multiple_modules/output.json | 2 +- .../multiple_providers/output.json | 2 +- .../multiple_resources/output.json | 2 +- .../testdata/validate-invalid/output.json | 2 +- .../validate-invalid/outputs/output.json | 2 +- .../command/testdata/validate-valid/output.json | 2 +- internal/command/views/validate.go | 2 +- .../docs/cli/commands/providers/schema.html.md | 15 +++++++++++++-- website/docs/cli/commands/validate.html.md | 17 ++++++++++++----- website/docs/internals/json-format.html.md | 15 +++++++++++++-- 38 files changed, 86 insertions(+), 57 deletions(-) diff --git a/internal/command/jsonplan/plan.go b/internal/command/jsonplan/plan.go index 1b90daf2d092..b1d488156fcf 100644 --- a/internal/command/jsonplan/plan.go +++ b/internal/command/jsonplan/plan.go @@ -22,7 +22,7 @@ import ( // FormatVersion represents the version of the json format and will be // incremented for any change to this format that requires changes to a // consuming parser. -const FormatVersion = "0.2" +const FormatVersion = "1.0" // Plan is the top-level representation of the json format of a plan. It includes // the complete config and current state. 
diff --git a/internal/command/jsonprovider/provider.go b/internal/command/jsonprovider/provider.go index b507bc242e9e..4487db4987ae 100644 --- a/internal/command/jsonprovider/provider.go +++ b/internal/command/jsonprovider/provider.go @@ -9,7 +9,7 @@ import ( // FormatVersion represents the version of the json format and will be // incremented for any change to this format that requires changes to a // consuming parser. -const FormatVersion = "0.2" +const FormatVersion = "1.0" // providers is the top-level object returned when exporting provider schemas type providers struct { diff --git a/internal/command/jsonstate/state.go b/internal/command/jsonstate/state.go index 341040d2d1c9..46532875c334 100644 --- a/internal/command/jsonstate/state.go +++ b/internal/command/jsonstate/state.go @@ -18,7 +18,7 @@ import ( // FormatVersion represents the version of the json format and will be // incremented for any change to this format that requires changes to a // consuming parser. -const FormatVersion = "0.2" +const FormatVersion = "1.0" // state is the top-level representation of the json format of a terraform // state. diff --git a/internal/command/testdata/providers-schema/basic/output.json b/internal/command/testdata/providers-schema/basic/output.json index f14786c3e31e..dfac55b38c35 100644 --- a/internal/command/testdata/providers-schema/basic/output.json +++ b/internal/command/testdata/providers-schema/basic/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.2", + "format_version": "1.0", "provider_schemas": { "registry.terraform.io/hashicorp/test": { "provider": { diff --git a/internal/command/testdata/providers-schema/empty/output.json b/internal/command/testdata/providers-schema/empty/output.json index 12d30d201356..381450cade5c 100644 --- a/internal/command/testdata/providers-schema/empty/output.json +++ b/internal/command/testdata/providers-schema/empty/output.json @@ -1,3 +1,3 @@ { - "format_version": "0.2" -} \ No newline at end of file + "format_version": "1.0" +} diff --git a/internal/command/testdata/providers-schema/required/output.json b/internal/command/testdata/providers-schema/required/output.json index f14786c3e31e..dfac55b38c35 100644 --- a/internal/command/testdata/providers-schema/required/output.json +++ b/internal/command/testdata/providers-schema/required/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.2", + "format_version": "1.0", "provider_schemas": { "registry.terraform.io/hashicorp/test": { "provider": { diff --git a/internal/command/testdata/show-json-sensitive/output.json b/internal/command/testdata/show-json-sensitive/output.json index 5f22c4ccf3a2..206fbb7f6e60 100644 --- a/internal/command/testdata/show-json-sensitive/output.json +++ b/internal/command/testdata/show-json-sensitive/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.2", + "format_version": "1.0", "variables": { "test_var": { "value": "bar" @@ -66,7 +66,7 @@ } }, "prior_state": { - "format_version": "0.2", + "format_version": "1.0", "values": { "outputs": { "test": { diff --git a/internal/command/testdata/show-json-state/basic/output.json b/internal/command/testdata/show-json-state/basic/output.json index 3087ad118050..229fa00e7262 100644 --- a/internal/command/testdata/show-json-state/basic/output.json +++ b/internal/command/testdata/show-json-state/basic/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.2", + "format_version": "1.0", "terraform_version": "0.12.0", "values": { "root_module": { diff --git a/internal/command/testdata/show-json-state/empty/output.json 
b/internal/command/testdata/show-json-state/empty/output.json index 12d30d201356..381450cade5c 100644 --- a/internal/command/testdata/show-json-state/empty/output.json +++ b/internal/command/testdata/show-json-state/empty/output.json @@ -1,3 +1,3 @@ { - "format_version": "0.2" -} \ No newline at end of file + "format_version": "1.0" +} diff --git a/internal/command/testdata/show-json-state/modules/output.json b/internal/command/testdata/show-json-state/modules/output.json index eeee8f6cffbc..eba163bdbb52 100644 --- a/internal/command/testdata/show-json-state/modules/output.json +++ b/internal/command/testdata/show-json-state/modules/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.2", + "format_version": "1.0", "terraform_version": "0.12.0", "values": { "outputs": { diff --git a/internal/command/testdata/show-json-state/sensitive-variables/output.json b/internal/command/testdata/show-json-state/sensitive-variables/output.json index b133aeef13bf..60503cd3ad80 100644 --- a/internal/command/testdata/show-json-state/sensitive-variables/output.json +++ b/internal/command/testdata/show-json-state/sensitive-variables/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.2", + "format_version": "1.0", "terraform_version": "0.14.0", "values": { "root_module": { diff --git a/internal/command/testdata/show-json/basic-create/output.json b/internal/command/testdata/show-json/basic-create/output.json index 3474443ed386..d1b8aae5361b 100644 --- a/internal/command/testdata/show-json/basic-create/output.json +++ b/internal/command/testdata/show-json/basic-create/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.2", + "format_version": "1.0", "variables": { "test_var": { "value": "bar" @@ -57,7 +57,7 @@ } }, "prior_state": { - "format_version": "0.2", + "format_version": "1.0", "values": { "outputs": { "test": { diff --git a/internal/command/testdata/show-json/basic-delete/output.json b/internal/command/testdata/show-json/basic-delete/output.json index 9ebea2058f78..8a0018cd5bff 100644 --- a/internal/command/testdata/show-json/basic-delete/output.json +++ b/internal/command/testdata/show-json/basic-delete/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.2", + "format_version": "1.0", "variables": { "test_var": { "value": "bar" @@ -88,7 +88,7 @@ } }, "prior_state": { - "format_version": "0.2", + "format_version": "1.0", "values": { "outputs": { "test": { diff --git a/internal/command/testdata/show-json/basic-update/output.json b/internal/command/testdata/show-json/basic-update/output.json index 2b8bc25e3034..e4b4731426a1 100644 --- a/internal/command/testdata/show-json/basic-update/output.json +++ b/internal/command/testdata/show-json/basic-update/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.2", + "format_version": "1.0", "variables": { "test_var": { "value": "bar" @@ -68,7 +68,7 @@ } }, "prior_state": { - "format_version": "0.2", + "format_version": "1.0", "values": { "outputs": { "test": { diff --git a/internal/command/testdata/show-json/drift/output.json b/internal/command/testdata/show-json/drift/output.json index 7badb45e5edd..55c9e3f71cd3 100644 --- a/internal/command/testdata/show-json/drift/output.json +++ b/internal/command/testdata/show-json/drift/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.2", + "format_version": "1.0", "planned_values": { "root_module": { "resources": [ @@ -105,7 +105,7 @@ } ], "prior_state": { - "format_version": "0.2", + "format_version": "1.0", "values": { "root_module": { "resources": [ diff --git 
a/internal/command/testdata/show-json/module-depends-on/output.json b/internal/command/testdata/show-json/module-depends-on/output.json index cc7ed679f0cc..d02efaa22f0d 100644 --- a/internal/command/testdata/show-json/module-depends-on/output.json +++ b/internal/command/testdata/show-json/module-depends-on/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.2", + "format_version": "1.0", "terraform_version": "0.13.1-dev", "planned_values": { "root_module": { diff --git a/internal/command/testdata/show-json/modules/output.json b/internal/command/testdata/show-json/modules/output.json index 440bebbff891..4ed0ea45d692 100644 --- a/internal/command/testdata/show-json/modules/output.json +++ b/internal/command/testdata/show-json/modules/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.2", + "format_version": "1.0", "planned_values": { "outputs": { "test": { @@ -74,7 +74,7 @@ } }, "prior_state": { - "format_version": "0.2", + "format_version": "1.0", "values": { "outputs": { "test": { diff --git a/internal/command/testdata/show-json/multi-resource-update/output.json b/internal/command/testdata/show-json/multi-resource-update/output.json index d84bc5b08789..ba557de69c8c 100644 --- a/internal/command/testdata/show-json/multi-resource-update/output.json +++ b/internal/command/testdata/show-json/multi-resource-update/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.2", + "format_version": "1.0", "terraform_version": "0.13.0", "variables": { "test_var": { @@ -127,7 +127,7 @@ } }, "prior_state": { - "format_version": "0.2", + "format_version": "1.0", "terraform_version": "0.13.0", "values": { "outputs": { diff --git a/internal/command/testdata/show-json/nested-modules/output.json b/internal/command/testdata/show-json/nested-modules/output.json index 80e7ae3588e9..359ea9ae181c 100644 --- a/internal/command/testdata/show-json/nested-modules/output.json +++ b/internal/command/testdata/show-json/nested-modules/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.2", + "format_version": "1.0", "planned_values": { "root_module": { "child_modules": [ diff --git a/internal/command/testdata/show-json/provider-version-no-config/output.json b/internal/command/testdata/show-json/provider-version-no-config/output.json index 64b93ec751c0..6a8b1f451dc0 100644 --- a/internal/command/testdata/show-json/provider-version-no-config/output.json +++ b/internal/command/testdata/show-json/provider-version-no-config/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.2", + "format_version": "1.0", "variables": { "test_var": { "value": "bar" @@ -57,7 +57,7 @@ } }, "prior_state": { - "format_version": "0.2", + "format_version": "1.0", "values": { "outputs": { "test": { diff --git a/internal/command/testdata/show-json/provider-version/output.json b/internal/command/testdata/show-json/provider-version/output.json index b5369806e933..11fd3bd64c15 100644 --- a/internal/command/testdata/show-json/provider-version/output.json +++ b/internal/command/testdata/show-json/provider-version/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.2", + "format_version": "1.0", "variables": { "test_var": { "value": "bar" @@ -57,7 +57,7 @@ } }, "prior_state": { - "format_version": "0.2", + "format_version": "1.0", "values": { "outputs": { "test": { diff --git a/internal/command/testdata/show-json/requires-replace/output.json b/internal/command/testdata/show-json/requires-replace/output.json index 077d900b13b0..e71df784f4f7 100644 --- a/internal/command/testdata/show-json/requires-replace/output.json +++ 
b/internal/command/testdata/show-json/requires-replace/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.2", + "format_version": "1.0", "planned_values": { "root_module": { "resources": [ @@ -48,7 +48,7 @@ } ], "prior_state": { - "format_version": "0.2", + "format_version": "1.0", "values": { "root_module": { "resources": [ diff --git a/internal/command/testdata/show-json/sensitive-values/output.json b/internal/command/testdata/show-json/sensitive-values/output.json index 7cbc9ccf0e75..d7e4719c71f5 100644 --- a/internal/command/testdata/show-json/sensitive-values/output.json +++ b/internal/command/testdata/show-json/sensitive-values/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.2", + "format_version": "1.0", "variables": { "test_var": { "value": "boop" @@ -69,7 +69,7 @@ } }, "prior_state": { - "format_version": "0.2", + "format_version": "1.0", "values": { "outputs": { "test": { diff --git a/internal/command/testdata/validate-invalid/incorrectmodulename/output.json b/internal/command/testdata/validate-invalid/incorrectmodulename/output.json index 0c2ce68abd37..f144313fa455 100644 --- a/internal/command/testdata/validate-invalid/incorrectmodulename/output.json +++ b/internal/command/testdata/validate-invalid/incorrectmodulename/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.1", + "format_version": "1.0", "valid": false, "error_count": 4, "warning_count": 0, diff --git a/internal/command/testdata/validate-invalid/interpolation/output.json b/internal/command/testdata/validate-invalid/interpolation/output.json index 7845ec0f4e81..2843b19121fc 100644 --- a/internal/command/testdata/validate-invalid/interpolation/output.json +++ b/internal/command/testdata/validate-invalid/interpolation/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.1", + "format_version": "1.0", "valid": false, "error_count": 2, "warning_count": 0, diff --git a/internal/command/testdata/validate-invalid/missing_defined_var/output.json b/internal/command/testdata/validate-invalid/missing_defined_var/output.json index c2a57c5e6a98..40258a98cd27 100644 --- a/internal/command/testdata/validate-invalid/missing_defined_var/output.json +++ b/internal/command/testdata/validate-invalid/missing_defined_var/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.1", + "format_version": "1.0", "valid": true, "error_count": 0, "warning_count": 0, diff --git a/internal/command/testdata/validate-invalid/missing_quote/output.json b/internal/command/testdata/validate-invalid/missing_quote/output.json index cdf99d8b2a2b..87aeca8b7817 100644 --- a/internal/command/testdata/validate-invalid/missing_quote/output.json +++ b/internal/command/testdata/validate-invalid/missing_quote/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.1", + "format_version": "1.0", "valid": false, "error_count": 1, "warning_count": 0, diff --git a/internal/command/testdata/validate-invalid/missing_var/output.json b/internal/command/testdata/validate-invalid/missing_var/output.json index 2a4e0be71ebd..6f0b9d5d4c8d 100644 --- a/internal/command/testdata/validate-invalid/missing_var/output.json +++ b/internal/command/testdata/validate-invalid/missing_var/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.1", + "format_version": "1.0", "valid": false, "error_count": 1, "warning_count": 0, diff --git a/internal/command/testdata/validate-invalid/multiple_modules/output.json b/internal/command/testdata/validate-invalid/multiple_modules/output.json index 4cd6dfb9f0ad..1aeaf929a913 100644 --- 
a/internal/command/testdata/validate-invalid/multiple_modules/output.json +++ b/internal/command/testdata/validate-invalid/multiple_modules/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.1", + "format_version": "1.0", "valid": false, "error_count": 1, "warning_count": 0, diff --git a/internal/command/testdata/validate-invalid/multiple_providers/output.json b/internal/command/testdata/validate-invalid/multiple_providers/output.json index 63eb2d193820..309cf0ea7c34 100644 --- a/internal/command/testdata/validate-invalid/multiple_providers/output.json +++ b/internal/command/testdata/validate-invalid/multiple_providers/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.1", + "format_version": "1.0", "valid": false, "error_count": 1, "warning_count": 0, diff --git a/internal/command/testdata/validate-invalid/multiple_resources/output.json b/internal/command/testdata/validate-invalid/multiple_resources/output.json index 33d5052284e9..ded584e6846c 100644 --- a/internal/command/testdata/validate-invalid/multiple_resources/output.json +++ b/internal/command/testdata/validate-invalid/multiple_resources/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.1", + "format_version": "1.0", "valid": false, "error_count": 1, "warning_count": 0, diff --git a/internal/command/testdata/validate-invalid/output.json b/internal/command/testdata/validate-invalid/output.json index 663fe0153071..73254853932f 100644 --- a/internal/command/testdata/validate-invalid/output.json +++ b/internal/command/testdata/validate-invalid/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.1", + "format_version": "1.0", "valid": false, "error_count": 1, "warning_count": 0, diff --git a/internal/command/testdata/validate-invalid/outputs/output.json b/internal/command/testdata/validate-invalid/outputs/output.json index d05ed4b77173..f774b458be4c 100644 --- a/internal/command/testdata/validate-invalid/outputs/output.json +++ b/internal/command/testdata/validate-invalid/outputs/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.1", + "format_version": "1.0", "valid": false, "error_count": 2, "warning_count": 0, diff --git a/internal/command/testdata/validate-valid/output.json b/internal/command/testdata/validate-valid/output.json index c2a57c5e6a98..40258a98cd27 100644 --- a/internal/command/testdata/validate-valid/output.json +++ b/internal/command/testdata/validate-valid/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.1", + "format_version": "1.0", "valid": true, "error_count": 0, "warning_count": 0, diff --git a/internal/command/views/validate.go b/internal/command/views/validate.go index 1e597277a478..08ce913f82ce 100644 --- a/internal/command/views/validate.go +++ b/internal/command/views/validate.go @@ -81,7 +81,7 @@ func (v *ValidateJSON) Results(diags tfdiags.Diagnostics) int { // FormatVersion represents the version of the json format and will be // incremented for any change to this format that requires changes to a // consuming parser. - const FormatVersion = "0.1" + const FormatVersion = "1.0" type Output struct { FormatVersion string `json:"format_version"` diff --git a/website/docs/cli/commands/providers/schema.html.md b/website/docs/cli/commands/providers/schema.html.md index e97e50f2305e..2a3dddc1338e 100644 --- a/website/docs/cli/commands/providers/schema.html.md +++ b/website/docs/cli/commands/providers/schema.html.md @@ -23,7 +23,18 @@ The list of available flags are: Please note that, at this time, the `-json` flag is a _required_ option. 
In future releases, this command will be extended to allow for additional options. --> **Note:** The output includes a `format_version` key, which currently has major version zero to indicate that the format is experimental and subject to change. A future version will assign a non-zero major version and make stronger promises about compatibility. We do not anticipate any significant breaking changes to the format before its first major version, however. +The output includes a `format_version` key, which as of Terraform 1.1.0 has +value `"1.0"`. The semantics of this version are: + +- We will increment the minor version, e.g. `"1.1"`, for backward-compatible + changes or additions. Ignore any object properties with unrecognized names to + remain forward-compatible with future minor versions. +- We will increment the major version, e.g. `"2.0"`, for changes that are not + backward-compatible. Reject any input which reports an unsupported major + version. + +We will introduce new major versions only within the bounds of +[the Terraform 1.0 Compatibility Promises](https://www.terraform.io/docs/language/v1-compatibility-promises.html). ## Format Summary @@ -41,7 +52,7 @@ The JSON output format consists of the following objects and sub-objects: ```javascript { - "format_version": "0.1", + "format_version": "1.0", // "provider_schemas" describes the provider schemas for all // providers throughout the configuration tree. diff --git a/website/docs/cli/commands/validate.html.md b/website/docs/cli/commands/validate.html.md index 583186e3d069..e81da01b27a4 100644 --- a/website/docs/cli/commands/validate.html.md +++ b/website/docs/cli/commands/validate.html.md @@ -57,11 +57,18 @@ to the JSON output setting. For that reason, external software consuming Terraform's output should be prepared to find data on stdout that _isn't_ valid JSON, which it should then treat as a generic error case. -**Note:** The output includes a `format_version` key, which currently has major -version zero to indicate that the format is experimental and subject to change. -A future version will assign a non-zero major version and make stronger -promises about compatibility. We do not anticipate any significant breaking -changes to the format before its first major version, however. +The output includes a `format_version` key, which as of Terraform 1.1.0 has +value `"1.0"`. The semantics of this version are: + +- We will increment the minor version, e.g. `"1.1"`, for backward-compatible + changes or additions. Ignore any object properties with unrecognized names to + remain forward-compatible with future minor versions. +- We will increment the major version, e.g. `"2.0"`, for changes that are not + backward-compatible. Reject any input which reports an unsupported major + version. + +We will introduce new major versions only within the bounds of +[the Terraform 1.0 Compatibility Promises](https://www.terraform.io/docs/language/v1-compatibility-promises.html). In the normal case, Terraform will print a JSON object to the standard output stream. The top-level JSON object will have the following properties: diff --git a/website/docs/internals/json-format.html.md b/website/docs/internals/json-format.html.md index 9a3efeff5d46..b5f8daab35c3 100644 --- a/website/docs/internals/json-format.html.md +++ b/website/docs/internals/json-format.html.md @@ -16,7 +16,18 @@ Since the format of plan files isn't suited for use with external tools (and lik Use `terraform show -json ` to generate a JSON representation of a plan or state file. 
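The compatibility rules documented above (accept newer minor versions, reject unknown major versions) imply a small check on the consumer side. A rough sketch — not part of these patches, all names hypothetical — of how an external tool might gate on `format_version` before parsing the rest of the JSON output:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// supportedMajor is the highest major format version this hypothetical
// consumer knows how to parse.
const supportedMajor = "1"

func checkFormatVersion(raw []byte) error {
	var doc struct {
		FormatVersion string `json:"format_version"`
	}
	if err := json.Unmarshal(raw, &doc); err != nil {
		return fmt.Errorf("parsing JSON output: %w", err)
	}
	major := strings.SplitN(doc.FormatVersion, ".", 2)[0]
	if major != supportedMajor {
		// Unknown major versions must be rejected; minor bumps (e.g. "1.1")
		// are documented as backward-compatible and pass this check.
		return fmt.Errorf("unsupported format_version %q", doc.FormatVersion)
	}
	return nil
}

func main() {
	fmt.Println(checkFormatVersion([]byte(`{"format_version":"1.0","valid":true}`))) // <nil>
	fmt.Println(checkFormatVersion([]byte(`{"format_version":"2.0"}`)))              // error
}
```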
See [the `terraform show` documentation](/docs/cli/commands/show.html) for more details. --> **Note:** The output includes a `format_version` key, which currently has major version zero to indicate that the format is experimental and subject to change. A future version will assign a non-zero major version and make stronger promises about compatibility. We do not anticipate any significant breaking changes to the format before its first major version, however. +The output includes a `format_version` key, which as of Terraform 1.1.0 has +value `"1.0"`. The semantics of this version are: + +- We will increment the minor version, e.g. `"1.1"`, for backward-compatible + changes or additions. Ignore any object properties with unrecognized names to + remain forward-compatible with future minor versions. +- We will increment the major version, e.g. `"2.0"`, for changes that are not + backward-compatible. Reject any input which reports an unsupported major + version. + +We will introduce new major versions only within the bounds of +[the Terraform 1.0 Compatibility Promises](https://www.terraform.io/docs/language/v1-compatibility-promises.html). ## Format Summary @@ -60,7 +71,7 @@ For ease of consumption by callers, the plan representation includes a partial r ```javascript { - "format_version": "0.2", + "format_version": "1.0", // "prior_state" is a representation of the state that the configuration is // being applied to, using the state representation described above. From 9e71da61eb683f461a339cb6244e5a02130a7f1f Mon Sep 17 00:00:00 2001 From: Chris Arcand Date: Wed, 1 Sep 2021 12:54:48 -0500 Subject: [PATCH 032/644] Remove several ignore rules The main purpose of this change is to avoid a problem where new golden files added to certain directories for test purposes (like .log) shouldn't be ignored. Cleaning up a bit more and broadening the definition, this removes ignore rules for artifacts of Terraform itself (state files, plans). It's generally not recommended to be using this codebase as your Terraform working directory anyway; build here, test elsewhere. --- .gitignore | 10 ---------- 1 file changed, 10 deletions(-) diff --git a/.gitignore b/.gitignore index 92764de852e3..e4881f29a717 100644 --- a/.gitignore +++ b/.gitignore @@ -1,9 +1,6 @@ *.dll *.exe .DS_Store -example.tf -terraform.tfplan -terraform.tfstate bin/ modules-dev/ /pkg/ @@ -13,9 +10,6 @@ website/build website/node_modules .vagrant/ *.backup -./*.tfstate -.terraform/ -*.log *.bak *~ .*.swp @@ -27,9 +21,5 @@ website/node_modules website/vendor vendor/ -# Test exclusions -!command/testdata/**/*.tfstate -!command/testdata/**/.terraform/ - # Coverage coverage.txt From fa4c590f515a9a5ae40d9122b325a803be975f0c Mon Sep 17 00:00:00 2001 From: Yves Peter Date: Fri, 3 Sep 2021 08:23:17 +0200 Subject: [PATCH 033/644] Apply suggestions from code review Co-authored-by: Laura Pacilio <83350965+laurapacilio@users.noreply.github.com> --- website/docs/language/meta-arguments/lifecycle.html.md | 3 +-- website/docs/language/resources/provisioners/syntax.html.md | 6 ++---- 2 files changed, 3 insertions(+), 6 deletions(-) diff --git a/website/docs/language/meta-arguments/lifecycle.html.md b/website/docs/language/meta-arguments/lifecycle.html.md index afa726cee081..9dec5a5cc8d8 100644 --- a/website/docs/language/meta-arguments/lifecycle.html.md +++ b/website/docs/language/meta-arguments/lifecycle.html.md @@ -46,8 +46,7 @@ The following arguments can be used within a `lifecycle` block: type before using `create_before_destroy` with it. 
Destroy provisioners of this resource will not run if `create_before_destroy` - is used. This limitation may be addressed in the future, see - [GitHub issue](https://github.com/hashicorp/terraform/issues/13549) for details. + is set to `true`. We may address this in the future, and this [GitHub issue](https://github.com/hashicorp/terraform/issues/13549) contains more details. * `prevent_destroy` (bool) - This meta-argument, when set to `true`, will cause Terraform to reject with an error any plan that would destroy the diff --git a/website/docs/language/resources/provisioners/syntax.html.md b/website/docs/language/resources/provisioners/syntax.html.md index 171f3fe13cdb..9bbb75979695 100644 --- a/website/docs/language/resources/provisioners/syntax.html.md +++ b/website/docs/language/resources/provisioners/syntax.html.md @@ -237,10 +237,8 @@ fail, Terraform will error and rerun the provisioners again on the next `terraform apply`. Due to this behavior, care should be taken for destroy provisioners to be safe to run multiple times. -Destroy provisioners will not run if the lifecycle Meta-Argument -[`create_before_destroy`](/docs/language/meta-arguments/lifecycle.html) is used -in the resource. This limitation may be addressed in the future, see -[GitHub issue](https://github.com/hashicorp/terraform/issues/13549) for details. + Destroy provisioners of this resource will not run if `create_before_destroy` + is set to `true`. We may address this in the future, and this [GitHub issue](https://github.com/hashicorp/terraform/issues/13549) contains more details. Destroy-time provisioners can only run if they remain in the configuration at the time a resource is destroyed. If a resource block with a destroy-time From e3b6c6403c3057f5be12046740556fd3b1dc0363 Mon Sep 17 00:00:00 2001 From: drewmullen Date: Thu, 2 Sep 2021 08:29:43 -0400 Subject: [PATCH 034/644] include sha example and git docs --- website/docs/language/modules/sources.html.md | 12 ++++++++---- 1 file changed, 8 insertions(+), 4 deletions(-) diff --git a/website/docs/language/modules/sources.html.md b/website/docs/language/modules/sources.html.md index aa637aa1186b..b9db802faf3e 100644 --- a/website/docs/language/modules/sources.html.md +++ b/website/docs/language/modules/sources.html.md @@ -237,16 +237,20 @@ only SSH key authentication is supported, and By default, Terraform will clone and use the default branch (referenced by `HEAD`) in the selected repository. You can override this using the -`ref` argument: +`ref` argument. The value of the `ref` argument can be any reference that would be accepted +by the `git checkout` command, such as branch, SHA-1 hash (short or full), or tag names. The [Git documentation](https://git-scm.com/book/en/v2/Git-Tools-Revision-Selection#_single_revisions) contains a complete list. ```hcl +# referencing a specific release module "vpc" { source = "git::https://example.com/vpc.git?ref=v1.2.0" } -``` -The value of the `ref` argument can be any reference that would be accepted -by the `git checkout` command, including branch and tag names. 
+# referencing a specific commit SHA-1 hash +module "storage" { + source = "git::https://example.com/storage.git?ref=51d462976d84fdea54b47d80dcabbf680badcdb8" +} +``` ### "scp-like" address syntax From 06956773dc6611e59364883fdce8be7f0d5e7a50 Mon Sep 17 00:00:00 2001 From: Alisdair McDiarmid Date: Fri, 3 Sep 2021 11:58:15 -0400 Subject: [PATCH 035/644] configs: Disable experiment warnings at link time This package level variable can be overridden at link time to allow temporarily disabling the UI warning when experimental features are enabled. This makes it easier to understand how UI will render when the feature is no longer experimental. This change is only for those developing Terraform. --- internal/configs/experiments.go | 34 ++++++++++++++++++++++----------- 1 file changed, 23 insertions(+), 11 deletions(-) diff --git a/internal/configs/experiments.go b/internal/configs/experiments.go index d6f200079e6e..8a7e7cb667d1 100644 --- a/internal/configs/experiments.go +++ b/internal/configs/experiments.go @@ -9,6 +9,16 @@ import ( "github.com/zclconf/go-cty/cty" ) +// When developing UI for experimental features, you can temporarily disable +// the experiment warning by setting this package-level variable to a non-empty +// value using a link-time flag: +// +// go install -ldflags="-X 'github.com/hashicorp/terraform/internal/configs.disableExperimentWarnings=yes'" +// +// This functionality is for development purposes only and is not a feature we +// are committing to supporting for end users. +var disableExperimentWarnings = "" + // sniffActiveExperiments does minimal parsing of the given body for // "terraform" blocks with "experiments" attributes, returning the // experiments found. @@ -126,17 +136,19 @@ func decodeExperimentsAttr(attr *hcl.Attribute) (experiments.Set, hcl.Diagnostic // No error at all means it's valid and current. ret.Add(exp) - // However, experimental features are subject to breaking changes - // in future releases, so we'll warn about them to help make sure - // folks aren't inadvertently using them in places where that'd be - // inappropriate, particularly if the experiment is active in a - // shared module they depend on. - diags = diags.Append(&hcl.Diagnostic{ - Severity: hcl.DiagWarning, - Summary: fmt.Sprintf("Experimental feature %q is active", exp.Keyword()), - Detail: "Experimental features are subject to breaking changes in future minor or patch releases, based on feedback.\n\nIf you have feedback on the design of this feature, please open a GitHub issue to discuss it.", - Subject: expr.Range().Ptr(), - }) + if disableExperimentWarnings == "" { + // However, experimental features are subject to breaking changes + // in future releases, so we'll warn about them to help make sure + // folks aren't inadvertently using them in places where that'd be + // inappropriate, particularly if the experiment is active in a + // shared module they depend on. 
+ diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagWarning, + Summary: fmt.Sprintf("Experimental feature %q is active", exp.Keyword()), + Detail: "Experimental features are subject to breaking changes in future minor or patch releases, based on feedback.\n\nIf you have feedback on the design of this feature, please open a GitHub issue to discuss it.", + Subject: expr.Range().Ptr(), + }) + } default: // This should never happen, because GetCurrent is not documented From ac2a870ea018f4721dd67ed7cb4e624ee8ab78b3 Mon Sep 17 00:00:00 2001 From: James Bardin Date: Fri, 3 Sep 2021 13:53:52 -0400 Subject: [PATCH 036/644] allow json output to marshal ConfigModeAttr blocks In order to marshal config blocks using ConfigModeAttr, we need to insert the fixup body to map hcl blocks to the attribute in the schema. --- internal/command/jsonconfig/expression.go | 4 +++ .../command/jsonconfig/expression_test.go | 28 +++++++++++++++++++ 2 files changed, 32 insertions(+) diff --git a/internal/command/jsonconfig/expression.go b/internal/command/jsonconfig/expression.go index 0244d73d0b36..fa443fc3ea50 100644 --- a/internal/command/jsonconfig/expression.go +++ b/internal/command/jsonconfig/expression.go @@ -10,6 +10,7 @@ import ( "github.com/hashicorp/terraform/internal/addrs" "github.com/hashicorp/terraform/internal/configs/configschema" "github.com/hashicorp/terraform/internal/lang" + "github.com/hashicorp/terraform/internal/lang/blocktoattr" "github.com/zclconf/go-cty/cty" ctyjson "github.com/zclconf/go-cty/cty/json" ) @@ -96,6 +97,9 @@ func marshalExpressions(body hcl.Body, schema *configschema.Block) expressions { // (lowSchema is an hcl.BodySchema: // https://godoc.org/github.com/hashicorp/hcl/v2/hcl#BodySchema ) + // fix any ConfigModeAttr blocks present from legacy providers + body = blocktoattr.FixUpBlockAttrs(body, schema) + // Use the low-level schema with the body to decode one level We'll just // ignore any additional content that's not covered by the schema, which // will effectively ignore "dynamic" blocks, and may also ignore other diff --git a/internal/command/jsonconfig/expression_test.go b/internal/command/jsonconfig/expression_test.go index 971fb78d4d31..58af11dda53e 100644 --- a/internal/command/jsonconfig/expression_test.go +++ b/internal/command/jsonconfig/expression_test.go @@ -80,6 +80,29 @@ func TestMarshalExpressions(t *testing.T) { }, }, }, + { + hcltest.MockBody(&hcl.BodyContent{ + Blocks: hcl.Blocks{ + { + Type: "block_to_attr", + Body: hcltest.MockBody(&hcl.BodyContent{ + + Attributes: hcl.Attributes{ + "foo": { + Name: "foo", + Expr: hcltest.MockExprTraversalSrc(`module.foo.bar`), + }, + }, + }), + }, + }, + }), + expressions{ + "block_to_attr": expression{ + References: []string{"module.foo.bar", "module.foo"}, + }, + }, + }, } for _, test := range tests { @@ -89,6 +112,11 @@ func TestMarshalExpressions(t *testing.T) { Type: cty.String, Optional: true, }, + "block_to_attr": { + Type: cty.List(cty.Object(map[string]cty.Type{ + "foo": cty.String, + })), + }, }, } From 29999c1d6f0bc58b4d8ce550a33046ce834370fb Mon Sep 17 00:00:00 2001 From: Alisdair McDiarmid Date: Fri, 3 Sep 2021 12:10:16 -0400 Subject: [PATCH 037/644] command: Render "moved" annotations in plan UI For resources which are planned to move, render the previous run address as additional information in the plan UI. For the case of a move-only resource (which otherwise is unchanged), we also render that as a planned change, but without any corresponding action symbol. 
If all changes in the plan are moves without changes, the plan is no longer considered "empty". In this case, we skip rendering the action symbols in the UI. --- internal/command/format/diff.go | 16 +++- internal/command/format/diff_test.go | 96 ++++++++++++++++++++-- internal/command/views/plan.go | 20 +++-- internal/command/views/plan_test.go | 13 +-- internal/plans/changes.go | 2 +- internal/plans/changes_src.go | 4 + internal/plans/changes_test.go | 117 +++++++++++++++++++++++++++ 7 files changed, 248 insertions(+), 20 deletions(-) diff --git a/internal/command/format/diff.go b/internal/command/format/diff.go index 982b95980bf5..e111485e0268 100644 --- a/internal/command/format/diff.go +++ b/internal/command/format/diff.go @@ -72,13 +72,27 @@ func ResourceChange( // Some extra context about this unusual situation. buf.WriteString(color.Color("\n # (left over from a partially-failed replacement of this instance)")) } + case plans.NoOp: + if change.Moved() { + buf.WriteString(color.Color(fmt.Sprintf("[bold] # %s[reset] has moved to [bold]%s[reset]", change.PrevRunAddr.String(), dispAddr))) + break + } + fallthrough default: // should never happen, since the above is exhaustive buf.WriteString(fmt.Sprintf("%s has an action the plan renderer doesn't support (this is a bug)", dispAddr)) } buf.WriteString(color.Color("[reset]\n")) - buf.WriteString(color.Color(DiffActionSymbol(change.Action)) + " ") + if change.Moved() && change.Action != plans.NoOp { + buf.WriteString(color.Color(fmt.Sprintf("[bold] # [reset]([bold]%s[reset] has moved to [bold]%s[reset])\n", change.PrevRunAddr.String(), dispAddr))) + } + + if change.Moved() && change.Action == plans.NoOp { + buf.WriteString(" ") + } else { + buf.WriteString(color.Color(DiffActionSymbol(change.Action)) + " ") + } switch addr.Resource.Resource.Mode { case addrs.ManagedResourceMode: diff --git a/internal/command/format/diff_test.go b/internal/command/format/diff_test.go index 31cb62d7d7c5..7ca289c79ad5 100644 --- a/internal/command/format/diff_test.go +++ b/internal/command/format/diff_test.go @@ -4448,6 +4448,79 @@ func TestResourceChange_sensitiveVariable(t *testing.T) { runTestCases(t, testCases) } +func TestResourceChange_moved(t *testing.T) { + prevRunAddr := addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "test_instance", + Name: "previous", + }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance) + + testCases := map[string]testCase{ + "moved and updated": { + PrevRunAddr: prevRunAddr, + Action: plans.Update, + Mode: addrs.ManagedResourceMode, + Before: cty.ObjectVal(map[string]cty.Value{ + "id": cty.StringVal("12345"), + "foo": cty.StringVal("hello"), + "bar": cty.StringVal("baz"), + }), + After: cty.ObjectVal(map[string]cty.Value{ + "id": cty.StringVal("12345"), + "foo": cty.StringVal("hello"), + "bar": cty.StringVal("boop"), + }), + Schema: &configschema.Block{ + Attributes: map[string]*configschema.Attribute{ + "id": {Type: cty.String, Computed: true}, + "foo": {Type: cty.String, Optional: true}, + "bar": {Type: cty.String, Optional: true}, + }, + }, + RequiredReplace: cty.NewPathSet(), + ExpectedOutput: ` # test_instance.example will be updated in-place + # (test_instance.previous has moved to test_instance.example) + ~ resource "test_instance" "example" { + ~ bar = "baz" -> "boop" + id = "12345" + # (1 unchanged attribute hidden) + } +`, + }, + "moved without changes": { + PrevRunAddr: prevRunAddr, + Action: plans.NoOp, + Mode: addrs.ManagedResourceMode, + Before: cty.ObjectVal(map[string]cty.Value{ + "id": 
cty.StringVal("12345"), + "foo": cty.StringVal("hello"), + "bar": cty.StringVal("baz"), + }), + After: cty.ObjectVal(map[string]cty.Value{ + "id": cty.StringVal("12345"), + "foo": cty.StringVal("hello"), + "bar": cty.StringVal("baz"), + }), + Schema: &configschema.Block{ + Attributes: map[string]*configschema.Attribute{ + "id": {Type: cty.String, Computed: true}, + "foo": {Type: cty.String, Optional: true}, + "bar": {Type: cty.String, Optional: true}, + }, + }, + RequiredReplace: cty.NewPathSet(), + ExpectedOutput: ` # test_instance.previous has moved to test_instance.example + resource "test_instance" "example" { + id = "12345" + # (2 unchanged attributes hidden) + } +`, + }, + } + + runTestCases(t, testCases) +} + type testCase struct { Action plans.Action ActionReason plans.ResourceInstanceChangeActionReason @@ -4460,6 +4533,7 @@ type testCase struct { Schema *configschema.Block RequiredReplace cty.PathSet ExpectedOutput string + PrevRunAddr addrs.AbsResourceInstance } func runTestCases(t *testing.T, testCases map[string]testCase) { @@ -4493,13 +4567,23 @@ func runTestCases(t *testing.T, testCases map[string]testCase) { t.Fatal(err) } + addr := addrs.Resource{ + Mode: tc.Mode, + Type: "test_instance", + Name: "example", + }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance) + + prevRunAddr := tc.PrevRunAddr + // If no previous run address is given, reuse the current address + // to make initialization easier + if prevRunAddr.Resource.Resource.Type == "" { + prevRunAddr = addr + } + change := &plans.ResourceInstanceChangeSrc{ - Addr: addrs.Resource{ - Mode: tc.Mode, - Type: "test_instance", - Name: "example", - }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), - DeposedKey: tc.DeposedKey, + Addr: addr, + PrevRunAddr: prevRunAddr, + DeposedKey: tc.DeposedKey, ProviderAddr: addrs.AbsProviderConfig{ Provider: addrs.NewDefaultProvider("test"), Module: addrs.RootModule, diff --git a/internal/command/views/plan.go b/internal/command/views/plan.go index 5d1798d2fbd2..ff794933b420 100644 --- a/internal/command/views/plan.go +++ b/internal/command/views/plan.go @@ -116,7 +116,7 @@ func renderPlan(plan *plans.Plan, schemas *terraform.Schemas, view *View) { counts := map[plans.Action]int{} var rChanges []*plans.ResourceInstanceChangeSrc for _, change := range plan.Changes.Resources { - if change.Action == plans.NoOp { + if change.Action == plans.NoOp && !change.Moved() { continue // We don't show anything for no-op changes } if change.Action == plans.Delete && change.Addr.Resource.Resource.Mode == addrs.DataResourceMode { @@ -125,7 +125,11 @@ func renderPlan(plan *plans.Plan, schemas *terraform.Schemas, view *View) { } rChanges = append(rChanges, change) - counts[change.Action]++ + + // Don't count move-only changes + if change.Action != plans.NoOp { + counts[change.Action]++ + } } var changedRootModuleOutputs []*plans.OutputChangeSrc for _, output := range plan.Changes.Outputs { @@ -138,7 +142,7 @@ func renderPlan(plan *plans.Plan, schemas *terraform.Schemas, view *View) { changedRootModuleOutputs = append(changedRootModuleOutputs, output) } - if len(counts) == 0 && len(changedRootModuleOutputs) == 0 { + if len(rChanges) == 0 && len(changedRootModuleOutputs) == 0 { // If we didn't find any changes to report at all then this is a // "No changes" plan. 
How we'll present this depends on whether // the plan is "applyable" and, if so, whether it had refresh changes @@ -225,7 +229,7 @@ func renderPlan(plan *plans.Plan, schemas *terraform.Schemas, view *View) { view.streams.Println("") } - if len(counts) != 0 { + if len(counts) > 0 { headerBuf := &bytes.Buffer{} fmt.Fprintf(headerBuf, "\n%s\n", strings.TrimSpace(format.WordWrap(planHeaderIntro, view.outputColumns()))) if counts[plans.Create] > 0 { @@ -247,9 +251,11 @@ func renderPlan(plan *plans.Plan, schemas *terraform.Schemas, view *View) { fmt.Fprintf(headerBuf, "%s read (data resources)\n", format.DiffActionSymbol(plans.Read)) } - view.streams.Println(view.colorize.Color(headerBuf.String())) + view.streams.Print(view.colorize.Color(headerBuf.String())) + } - view.streams.Printf("Terraform will perform the following actions:\n\n") + if len(rChanges) > 0 { + view.streams.Printf("\nTerraform will perform the following actions:\n\n") // Note: we're modifying the backing slice of this plan object in-place // here. The ordering of resource changes in a plan is not significant, @@ -265,7 +271,7 @@ func renderPlan(plan *plans.Plan, schemas *terraform.Schemas, view *View) { }) for _, rcs := range rChanges { - if rcs.Action == plans.NoOp { + if rcs.Action == plans.NoOp && !rcs.Moved() { continue } diff --git a/internal/command/views/plan_test.go b/internal/command/views/plan_test.go index 13cd3c4c3f03..757c7162ddee 100644 --- a/internal/command/views/plan_test.go +++ b/internal/command/views/plan_test.go @@ -63,12 +63,15 @@ func testPlan(t *testing.T) *plans.Plan { } changes := plans.NewChanges() + addr := addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "test_resource", + Name: "foo", + }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance) + changes.SyncWrapper().AppendResourceInstanceChange(&plans.ResourceInstanceChangeSrc{ - Addr: addrs.Resource{ - Mode: addrs.ManagedResourceMode, - Type: "test_resource", - Name: "foo", - }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), + Addr: addr, + PrevRunAddr: addr, ProviderAddr: addrs.AbsProviderConfig{ Provider: addrs.NewDefaultProvider("test"), Module: addrs.RootModule, diff --git a/internal/plans/changes.go b/internal/plans/changes.go index ba06244cb92c..c9aa38fd3863 100644 --- a/internal/plans/changes.go +++ b/internal/plans/changes.go @@ -33,7 +33,7 @@ func NewChanges() *Changes { func (c *Changes) Empty() bool { for _, res := range c.Resources { - if res.Action != NoOp { + if res.Action != NoOp || res.Moved() { return false } } diff --git a/internal/plans/changes_src.go b/internal/plans/changes_src.go index 69330a21d897..396493956771 100644 --- a/internal/plans/changes_src.go +++ b/internal/plans/changes_src.go @@ -125,6 +125,10 @@ func (rcs *ResourceInstanceChangeSrc) DeepCopy() *ResourceInstanceChangeSrc { return &ret } +func (rcs *ResourceInstanceChangeSrc) Moved() bool { + return !rcs.Addr.Equal(rcs.PrevRunAddr) +} + // OutputChangeSrc describes a change to an output value. 
type OutputChangeSrc struct { // Addr is the absolute address of the output value that the change diff --git a/internal/plans/changes_test.go b/internal/plans/changes_test.go index 16062429b853..5dbe10f08a93 100644 --- a/internal/plans/changes_test.go +++ b/internal/plans/changes_test.go @@ -4,10 +4,127 @@ import ( "fmt" "testing" + "github.com/hashicorp/terraform/internal/addrs" "github.com/hashicorp/terraform/internal/lang/marks" "github.com/zclconf/go-cty/cty" ) +func TestChangesEmpty(t *testing.T) { + testCases := map[string]struct { + changes *Changes + want bool + }{ + "no changes": { + &Changes{}, + true, + }, + "resource change": { + &Changes{ + Resources: []*ResourceInstanceChangeSrc{ + { + Addr: addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "test_thing", + Name: "woot", + }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), + PrevRunAddr: addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "test_thing", + Name: "woot", + }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), + ChangeSrc: ChangeSrc{ + Action: Update, + }, + }, + }, + }, + false, + }, + "resource change with no-op action": { + &Changes{ + Resources: []*ResourceInstanceChangeSrc{ + { + Addr: addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "test_thing", + Name: "woot", + }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), + PrevRunAddr: addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "test_thing", + Name: "woot", + }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), + ChangeSrc: ChangeSrc{ + Action: NoOp, + }, + }, + }, + }, + true, + }, + "resource moved with no-op change": { + &Changes{ + Resources: []*ResourceInstanceChangeSrc{ + { + Addr: addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "test_thing", + Name: "woot", + }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), + PrevRunAddr: addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "test_thing", + Name: "toot", + }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), + ChangeSrc: ChangeSrc{ + Action: NoOp, + }, + }, + }, + }, + false, + }, + "output change": { + &Changes{ + Outputs: []*OutputChangeSrc{ + { + Addr: addrs.OutputValue{ + Name: "result", + }.Absolute(addrs.RootModuleInstance), + ChangeSrc: ChangeSrc{ + Action: Update, + }, + }, + }, + }, + false, + }, + "output change no-op": { + &Changes{ + Outputs: []*OutputChangeSrc{ + { + Addr: addrs.OutputValue{ + Name: "result", + }.Absolute(addrs.RootModuleInstance), + ChangeSrc: ChangeSrc{ + Action: NoOp, + }, + }, + }, + }, + true, + }, + } + + for name, tc := range testCases { + t.Run(name, func(t *testing.T) { + if got, want := tc.changes.Empty(), tc.want; got != want { + t.Fatalf("unexpected result: got %v, want %v", got, want) + } + }) + } +} + func TestChangeEncodeSensitive(t *testing.T) { testVals := []cty.Value{ cty.ObjectVal(map[string]cty.Value{ From 5e8806f397eacab529beb22536b5242a4c68eeaf Mon Sep 17 00:00:00 2001 From: James Bardin Date: Tue, 7 Sep 2021 11:34:25 -0400 Subject: [PATCH 038/644] update CHANGELOG.md --- CHANGELOG.md | 1 + 1 file changed, 1 insertion(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index 2d74909b05d7..41523b54e837 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -17,6 +17,7 @@ ENHANCEMENTS: BUG FIXES: * core: Fixed an issue where provider configuration input variables were not properly merging with values in configuration ([#29000](https://github.com/hashicorp/terraform/issues/29000)) +* cli: Blocks using SchemaConfigModeAttr in the provider SDK can now represented in the 
plan json output [GH-29522] ## Previous Releases From ad634f60a5acbaade1eb8c225564e17ad2267f00 Mon Sep 17 00:00:00 2001 From: Paul Hinze Date: Tue, 7 Sep 2021 21:04:13 -0500 Subject: [PATCH 039/644] Add clarification to message about community PR review --- .github/CONTRIBUTING.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md index 3080a1d3bea8..6a22f15cf285 100644 --- a/.github/CONTRIBUTING.md +++ b/.github/CONTRIBUTING.md @@ -6,6 +6,8 @@ This repository contains only Terraform core, which includes the command line in **Note:** Due to current low staffing on the Terraform Core team at HashiCorp, **we are not routinely reviewing and merging community-submitted pull requests**. We do hope to begin processing them again soon once we're back up to full staffing again, but for the moment we need to ask for patience. Thanks! +**Additional note:** The intent of the prior comment was to provide clarity for the community around what to expect for a small part of the work related to Terraform. This does not affect other PR reviews, such as those for Terraform providers. We expect that the relevant team will be appropriately staffed within the coming weeks, which should allow us to get back to normal community PR review practices. For the broader context and information on HashiCorp’s continued commitment to and investment in Terraform, see [this blog post](https://www.hashicorp.com/blog/terraform-community-contributions). + --- **All communication on GitHub, the community forum, and other HashiCorp-provided communication channels is subject to [the HashiCorp community guidelines](https://www.hashicorp.com/community-guidelines).** From cb5b1592281d264b5326069677738115a70e3357 Mon Sep 17 00:00:00 2001 From: Omar Ismail Date: Wed, 8 Sep 2021 10:44:46 -0400 Subject: [PATCH 040/644] Update terraform_remote_state data source docs: (#29534) * Add docs for tfe_outputs data source * Add docs for Terraform Cloud usage --- website/docs/language/state/remote-state-data.html.md | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/website/docs/language/state/remote-state-data.html.md b/website/docs/language/state/remote-state-data.html.md index 717d8fb30804..10e51460068a 100644 --- a/website/docs/language/state/remote-state-data.html.md +++ b/website/docs/language/state/remote-state-data.html.md @@ -48,6 +48,7 @@ limited to) the following: | Google Cloud DNS
(for IP addresses and hostnames) | [`google_dns_record_set` resource type](https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/dns_record_set) | Normal DNS lookups, or [the `dns` provider](https://registry.terraform.io/providers/hashicorp/dns/latest/docs) |
| Google Cloud Storage | [`google_storage_bucket_object` resource type](https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/storage_bucket_object) | [`google_storage_bucket_object` data source](https://registry.terraform.io/providers/hashicorp/google/latest/docs/data-sources/storage_bucket_object) and [`http` data source](https://registry.terraform.io/providers/hashicorp/http/latest/docs/data-sources/http) |
| HashiCorp Consul | [`consul_key_prefix` resource type](https://registry.terraform.io/providers/hashicorp/consul/latest/docs/resources/key_prefix) | [`consul_key_prefix` data source](https://registry.terraform.io/providers/hashicorp/consul/latest/docs/data-sources/key_prefix) |
+| HashiCorp Terraform Cloud | Normal `outputs` terraform block | [`tfe_outputs` data source](https://registry.terraform.io/providers/hashicorp/tfe/latest/docs/data-sources/outputs) |
| Kubernetes | [`kubernetes_config_map` resource type](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/config_map) | [`kubernetes_config_map` data source](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/data-sources/config_map) |
| OCI Object Storage | [`oci_objectstorage_bucket` resource type](https://registry.terraform.io/providers/hashicorp/oci/latest/docs/resources/objectstorage_object) | [`oci_objectstorage_bucket` data source](https://registry.terraform.io/providers/hashicorp/oci/latest/docs/data-sources/objectstorage_object) |
@@ -94,6 +95,12 @@ post-processing such as JSON decoding.
You can then change that module later if you switch to a different strategy for sharing data between multiple Terraform configurations.
+## Usage with Terraform Cloud/Enterprise
+
+When trying to access remote state outputs in Terraform Cloud/Enterprise, it is recommended to use the `tfe_outputs` data source in the [Terraform Cloud/Enterprise Provider](https://registry.terraform.io/providers/hashicorp/tfe/latest/docs) instead of relying on the `terraform_remote_state` data source.
+
+See the [full documentation](https://registry.terraform.io/providers/hashicorp/tfe/latest/docs/data-sources/outputs) for the `tfe_outputs` data source for more details.
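A rough sketch of the `tfe_outputs` usage described above, for reference; the organization, workspace, and output names here are placeholders rather than values taken from this change:

```hcl
# Read the output values of another Terraform Cloud/Enterprise workspace.
data "tfe_outputs" "vpc" {
  organization = "example-org"
  workspace    = "networking-prod"
}

output "vpc_id" {
  # The data source exposes the source workspace's outputs through its
  # `values` attribute, which is treated as sensitive.
  value     = data.tfe_outputs.vpc.values.vpc_id
  sensitive = true
}
```

Because `values` is treated as sensitive, anything derived from it (such as the output above) generally needs to be marked `sensitive` as well.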
+ ## Example Usage (`remote` Backend) ```hcl From 4a0b8d1e099c7cde1bdc68f3693f4034022ebd4b Mon Sep 17 00:00:00 2001 From: hc-github-team-tf-core Date: Wed, 8 Sep 2021 19:05:57 +0000 Subject: [PATCH 041/644] Release v1.1.0-alpha20210908 --- CHANGELOG.md | 2 +- version/version.go | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 41523b54e837..809e5795bfc3 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -17,7 +17,7 @@ ENHANCEMENTS: BUG FIXES: * core: Fixed an issue where provider configuration input variables were not properly merging with values in configuration ([#29000](https://github.com/hashicorp/terraform/issues/29000)) -* cli: Blocks using SchemaConfigModeAttr in the provider SDK can now represented in the plan json output [GH-29522] +* cli: Blocks using SchemaConfigModeAttr in the provider SDK can now represented in the plan json output ([#29522](https://github.com/hashicorp/terraform/issues/29522)) ## Previous Releases diff --git a/version/version.go b/version/version.go index 86f22153dde3..6521dfbcce39 100644 --- a/version/version.go +++ b/version/version.go @@ -16,7 +16,7 @@ var Version = "1.1.0" // A pre-release marker for the version. If this is "" (empty string) // then it means that it is a final release. Otherwise, this is a pre-release // such as "dev" (in development), "beta", "rc1", etc. -var Prerelease = "dev" +var Prerelease = "alpha20210908" // SemVer is an instance of version.Version. This has the secondary // benefit of verifying during tests and init time that our version is a From 1cd6c9fae21ecdb6f84e190c50b42c640871ef76 Mon Sep 17 00:00:00 2001 From: hc-github-team-tf-core Date: Wed, 8 Sep 2021 19:21:43 +0000 Subject: [PATCH 042/644] Cleanup after v1.1.0-alpha20210908 release --- version/version.go | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/version/version.go b/version/version.go index 6521dfbcce39..86f22153dde3 100644 --- a/version/version.go +++ b/version/version.go @@ -16,7 +16,7 @@ var Version = "1.1.0" // A pre-release marker for the version. If this is "" (empty string) // then it means that it is a final release. Otherwise, this is a pre-release // such as "dev" (in development), "beta", "rc1", etc. -var Prerelease = "alpha20210908" +var Prerelease = "dev" // SemVer is an instance of version.Version. 
This has the secondary // benefit of verifying during tests and init time that our version is a From cd29c3e5fdd4791362ac8a06906bdf987ea3b355 Mon Sep 17 00:00:00 2001 From: Alisdair McDiarmid Date: Thu, 9 Sep 2021 11:25:35 -0400 Subject: [PATCH 043/644] Revert "json-output: Release format version 1.0" --- internal/command/jsonplan/plan.go | 2 +- internal/command/jsonprovider/provider.go | 2 +- internal/command/jsonstate/state.go | 2 +- .../testdata/providers-schema/basic/output.json | 2 +- .../testdata/providers-schema/empty/output.json | 4 ++-- .../providers-schema/required/output.json | 2 +- .../testdata/show-json-sensitive/output.json | 4 ++-- .../testdata/show-json-state/basic/output.json | 2 +- .../testdata/show-json-state/empty/output.json | 4 ++-- .../show-json-state/modules/output.json | 2 +- .../sensitive-variables/output.json | 2 +- .../testdata/show-json/basic-create/output.json | 4 ++-- .../testdata/show-json/basic-delete/output.json | 4 ++-- .../testdata/show-json/basic-update/output.json | 4 ++-- .../testdata/show-json/drift/output.json | 4 ++-- .../show-json/module-depends-on/output.json | 2 +- .../testdata/show-json/modules/output.json | 4 ++-- .../show-json/multi-resource-update/output.json | 4 ++-- .../show-json/nested-modules/output.json | 2 +- .../provider-version-no-config/output.json | 4 ++-- .../show-json/provider-version/output.json | 4 ++-- .../show-json/requires-replace/output.json | 4 ++-- .../show-json/sensitive-values/output.json | 4 ++-- .../incorrectmodulename/output.json | 2 +- .../validate-invalid/interpolation/output.json | 2 +- .../missing_defined_var/output.json | 2 +- .../validate-invalid/missing_quote/output.json | 2 +- .../validate-invalid/missing_var/output.json | 2 +- .../multiple_modules/output.json | 2 +- .../multiple_providers/output.json | 2 +- .../multiple_resources/output.json | 2 +- .../testdata/validate-invalid/output.json | 2 +- .../validate-invalid/outputs/output.json | 2 +- .../command/testdata/validate-valid/output.json | 2 +- internal/command/views/validate.go | 2 +- .../docs/cli/commands/providers/schema.html.md | 15 ++------------- website/docs/cli/commands/validate.html.md | 17 +++++------------ website/docs/internals/json-format.html.md | 15 ++------------- 38 files changed, 57 insertions(+), 86 deletions(-) diff --git a/internal/command/jsonplan/plan.go b/internal/command/jsonplan/plan.go index b1d488156fcf..1b90daf2d092 100644 --- a/internal/command/jsonplan/plan.go +++ b/internal/command/jsonplan/plan.go @@ -22,7 +22,7 @@ import ( // FormatVersion represents the version of the json format and will be // incremented for any change to this format that requires changes to a // consuming parser. -const FormatVersion = "1.0" +const FormatVersion = "0.2" // Plan is the top-level representation of the json format of a plan. It includes // the complete config and current state. diff --git a/internal/command/jsonprovider/provider.go b/internal/command/jsonprovider/provider.go index 4487db4987ae..b507bc242e9e 100644 --- a/internal/command/jsonprovider/provider.go +++ b/internal/command/jsonprovider/provider.go @@ -9,7 +9,7 @@ import ( // FormatVersion represents the version of the json format and will be // incremented for any change to this format that requires changes to a // consuming parser. 
-const FormatVersion = "1.0" +const FormatVersion = "0.2" // providers is the top-level object returned when exporting provider schemas type providers struct { diff --git a/internal/command/jsonstate/state.go b/internal/command/jsonstate/state.go index 46532875c334..341040d2d1c9 100644 --- a/internal/command/jsonstate/state.go +++ b/internal/command/jsonstate/state.go @@ -18,7 +18,7 @@ import ( // FormatVersion represents the version of the json format and will be // incremented for any change to this format that requires changes to a // consuming parser. -const FormatVersion = "1.0" +const FormatVersion = "0.2" // state is the top-level representation of the json format of a terraform // state. diff --git a/internal/command/testdata/providers-schema/basic/output.json b/internal/command/testdata/providers-schema/basic/output.json index dfac55b38c35..f14786c3e31e 100644 --- a/internal/command/testdata/providers-schema/basic/output.json +++ b/internal/command/testdata/providers-schema/basic/output.json @@ -1,5 +1,5 @@ { - "format_version": "1.0", + "format_version": "0.2", "provider_schemas": { "registry.terraform.io/hashicorp/test": { "provider": { diff --git a/internal/command/testdata/providers-schema/empty/output.json b/internal/command/testdata/providers-schema/empty/output.json index 381450cade5c..12d30d201356 100644 --- a/internal/command/testdata/providers-schema/empty/output.json +++ b/internal/command/testdata/providers-schema/empty/output.json @@ -1,3 +1,3 @@ { - "format_version": "1.0" -} + "format_version": "0.2" +} \ No newline at end of file diff --git a/internal/command/testdata/providers-schema/required/output.json b/internal/command/testdata/providers-schema/required/output.json index dfac55b38c35..f14786c3e31e 100644 --- a/internal/command/testdata/providers-schema/required/output.json +++ b/internal/command/testdata/providers-schema/required/output.json @@ -1,5 +1,5 @@ { - "format_version": "1.0", + "format_version": "0.2", "provider_schemas": { "registry.terraform.io/hashicorp/test": { "provider": { diff --git a/internal/command/testdata/show-json-sensitive/output.json b/internal/command/testdata/show-json-sensitive/output.json index 206fbb7f6e60..5f22c4ccf3a2 100644 --- a/internal/command/testdata/show-json-sensitive/output.json +++ b/internal/command/testdata/show-json-sensitive/output.json @@ -1,5 +1,5 @@ { - "format_version": "1.0", + "format_version": "0.2", "variables": { "test_var": { "value": "bar" @@ -66,7 +66,7 @@ } }, "prior_state": { - "format_version": "1.0", + "format_version": "0.2", "values": { "outputs": { "test": { diff --git a/internal/command/testdata/show-json-state/basic/output.json b/internal/command/testdata/show-json-state/basic/output.json index 229fa00e7262..3087ad118050 100644 --- a/internal/command/testdata/show-json-state/basic/output.json +++ b/internal/command/testdata/show-json-state/basic/output.json @@ -1,5 +1,5 @@ { - "format_version": "1.0", + "format_version": "0.2", "terraform_version": "0.12.0", "values": { "root_module": { diff --git a/internal/command/testdata/show-json-state/empty/output.json b/internal/command/testdata/show-json-state/empty/output.json index 381450cade5c..12d30d201356 100644 --- a/internal/command/testdata/show-json-state/empty/output.json +++ b/internal/command/testdata/show-json-state/empty/output.json @@ -1,3 +1,3 @@ { - "format_version": "1.0" -} + "format_version": "0.2" +} \ No newline at end of file diff --git a/internal/command/testdata/show-json-state/modules/output.json 
b/internal/command/testdata/show-json-state/modules/output.json index eba163bdbb52..eeee8f6cffbc 100644 --- a/internal/command/testdata/show-json-state/modules/output.json +++ b/internal/command/testdata/show-json-state/modules/output.json @@ -1,5 +1,5 @@ { - "format_version": "1.0", + "format_version": "0.2", "terraform_version": "0.12.0", "values": { "outputs": { diff --git a/internal/command/testdata/show-json-state/sensitive-variables/output.json b/internal/command/testdata/show-json-state/sensitive-variables/output.json index 60503cd3ad80..b133aeef13bf 100644 --- a/internal/command/testdata/show-json-state/sensitive-variables/output.json +++ b/internal/command/testdata/show-json-state/sensitive-variables/output.json @@ -1,5 +1,5 @@ { - "format_version": "1.0", + "format_version": "0.2", "terraform_version": "0.14.0", "values": { "root_module": { diff --git a/internal/command/testdata/show-json/basic-create/output.json b/internal/command/testdata/show-json/basic-create/output.json index d1b8aae5361b..3474443ed386 100644 --- a/internal/command/testdata/show-json/basic-create/output.json +++ b/internal/command/testdata/show-json/basic-create/output.json @@ -1,5 +1,5 @@ { - "format_version": "1.0", + "format_version": "0.2", "variables": { "test_var": { "value": "bar" @@ -57,7 +57,7 @@ } }, "prior_state": { - "format_version": "1.0", + "format_version": "0.2", "values": { "outputs": { "test": { diff --git a/internal/command/testdata/show-json/basic-delete/output.json b/internal/command/testdata/show-json/basic-delete/output.json index 8a0018cd5bff..9ebea2058f78 100644 --- a/internal/command/testdata/show-json/basic-delete/output.json +++ b/internal/command/testdata/show-json/basic-delete/output.json @@ -1,5 +1,5 @@ { - "format_version": "1.0", + "format_version": "0.2", "variables": { "test_var": { "value": "bar" @@ -88,7 +88,7 @@ } }, "prior_state": { - "format_version": "1.0", + "format_version": "0.2", "values": { "outputs": { "test": { diff --git a/internal/command/testdata/show-json/basic-update/output.json b/internal/command/testdata/show-json/basic-update/output.json index e4b4731426a1..2b8bc25e3034 100644 --- a/internal/command/testdata/show-json/basic-update/output.json +++ b/internal/command/testdata/show-json/basic-update/output.json @@ -1,5 +1,5 @@ { - "format_version": "1.0", + "format_version": "0.2", "variables": { "test_var": { "value": "bar" @@ -68,7 +68,7 @@ } }, "prior_state": { - "format_version": "1.0", + "format_version": "0.2", "values": { "outputs": { "test": { diff --git a/internal/command/testdata/show-json/drift/output.json b/internal/command/testdata/show-json/drift/output.json index 55c9e3f71cd3..7badb45e5edd 100644 --- a/internal/command/testdata/show-json/drift/output.json +++ b/internal/command/testdata/show-json/drift/output.json @@ -1,5 +1,5 @@ { - "format_version": "1.0", + "format_version": "0.2", "planned_values": { "root_module": { "resources": [ @@ -105,7 +105,7 @@ } ], "prior_state": { - "format_version": "1.0", + "format_version": "0.2", "values": { "root_module": { "resources": [ diff --git a/internal/command/testdata/show-json/module-depends-on/output.json b/internal/command/testdata/show-json/module-depends-on/output.json index d02efaa22f0d..cc7ed679f0cc 100644 --- a/internal/command/testdata/show-json/module-depends-on/output.json +++ b/internal/command/testdata/show-json/module-depends-on/output.json @@ -1,5 +1,5 @@ { - "format_version": "1.0", + "format_version": "0.2", "terraform_version": "0.13.1-dev", "planned_values": { "root_module": { 
diff --git a/internal/command/testdata/show-json/modules/output.json b/internal/command/testdata/show-json/modules/output.json index 4ed0ea45d692..440bebbff891 100644 --- a/internal/command/testdata/show-json/modules/output.json +++ b/internal/command/testdata/show-json/modules/output.json @@ -1,5 +1,5 @@ { - "format_version": "1.0", + "format_version": "0.2", "planned_values": { "outputs": { "test": { @@ -74,7 +74,7 @@ } }, "prior_state": { - "format_version": "1.0", + "format_version": "0.2", "values": { "outputs": { "test": { diff --git a/internal/command/testdata/show-json/multi-resource-update/output.json b/internal/command/testdata/show-json/multi-resource-update/output.json index ba557de69c8c..d84bc5b08789 100644 --- a/internal/command/testdata/show-json/multi-resource-update/output.json +++ b/internal/command/testdata/show-json/multi-resource-update/output.json @@ -1,5 +1,5 @@ { - "format_version": "1.0", + "format_version": "0.2", "terraform_version": "0.13.0", "variables": { "test_var": { @@ -127,7 +127,7 @@ } }, "prior_state": { - "format_version": "1.0", + "format_version": "0.2", "terraform_version": "0.13.0", "values": { "outputs": { diff --git a/internal/command/testdata/show-json/nested-modules/output.json b/internal/command/testdata/show-json/nested-modules/output.json index 359ea9ae181c..80e7ae3588e9 100644 --- a/internal/command/testdata/show-json/nested-modules/output.json +++ b/internal/command/testdata/show-json/nested-modules/output.json @@ -1,5 +1,5 @@ { - "format_version": "1.0", + "format_version": "0.2", "planned_values": { "root_module": { "child_modules": [ diff --git a/internal/command/testdata/show-json/provider-version-no-config/output.json b/internal/command/testdata/show-json/provider-version-no-config/output.json index 6a8b1f451dc0..64b93ec751c0 100644 --- a/internal/command/testdata/show-json/provider-version-no-config/output.json +++ b/internal/command/testdata/show-json/provider-version-no-config/output.json @@ -1,5 +1,5 @@ { - "format_version": "1.0", + "format_version": "0.2", "variables": { "test_var": { "value": "bar" @@ -57,7 +57,7 @@ } }, "prior_state": { - "format_version": "1.0", + "format_version": "0.2", "values": { "outputs": { "test": { diff --git a/internal/command/testdata/show-json/provider-version/output.json b/internal/command/testdata/show-json/provider-version/output.json index 11fd3bd64c15..b5369806e933 100644 --- a/internal/command/testdata/show-json/provider-version/output.json +++ b/internal/command/testdata/show-json/provider-version/output.json @@ -1,5 +1,5 @@ { - "format_version": "1.0", + "format_version": "0.2", "variables": { "test_var": { "value": "bar" @@ -57,7 +57,7 @@ } }, "prior_state": { - "format_version": "1.0", + "format_version": "0.2", "values": { "outputs": { "test": { diff --git a/internal/command/testdata/show-json/requires-replace/output.json b/internal/command/testdata/show-json/requires-replace/output.json index e71df784f4f7..077d900b13b0 100644 --- a/internal/command/testdata/show-json/requires-replace/output.json +++ b/internal/command/testdata/show-json/requires-replace/output.json @@ -1,5 +1,5 @@ { - "format_version": "1.0", + "format_version": "0.2", "planned_values": { "root_module": { "resources": [ @@ -48,7 +48,7 @@ } ], "prior_state": { - "format_version": "1.0", + "format_version": "0.2", "values": { "root_module": { "resources": [ diff --git a/internal/command/testdata/show-json/sensitive-values/output.json b/internal/command/testdata/show-json/sensitive-values/output.json index 
d7e4719c71f5..7cbc9ccf0e75 100644 --- a/internal/command/testdata/show-json/sensitive-values/output.json +++ b/internal/command/testdata/show-json/sensitive-values/output.json @@ -1,5 +1,5 @@ { - "format_version": "1.0", + "format_version": "0.2", "variables": { "test_var": { "value": "boop" @@ -69,7 +69,7 @@ } }, "prior_state": { - "format_version": "1.0", + "format_version": "0.2", "values": { "outputs": { "test": { diff --git a/internal/command/testdata/validate-invalid/incorrectmodulename/output.json b/internal/command/testdata/validate-invalid/incorrectmodulename/output.json index f144313fa455..0c2ce68abd37 100644 --- a/internal/command/testdata/validate-invalid/incorrectmodulename/output.json +++ b/internal/command/testdata/validate-invalid/incorrectmodulename/output.json @@ -1,5 +1,5 @@ { - "format_version": "1.0", + "format_version": "0.1", "valid": false, "error_count": 4, "warning_count": 0, diff --git a/internal/command/testdata/validate-invalid/interpolation/output.json b/internal/command/testdata/validate-invalid/interpolation/output.json index 2843b19121fc..7845ec0f4e81 100644 --- a/internal/command/testdata/validate-invalid/interpolation/output.json +++ b/internal/command/testdata/validate-invalid/interpolation/output.json @@ -1,5 +1,5 @@ { - "format_version": "1.0", + "format_version": "0.1", "valid": false, "error_count": 2, "warning_count": 0, diff --git a/internal/command/testdata/validate-invalid/missing_defined_var/output.json b/internal/command/testdata/validate-invalid/missing_defined_var/output.json index 40258a98cd27..c2a57c5e6a98 100644 --- a/internal/command/testdata/validate-invalid/missing_defined_var/output.json +++ b/internal/command/testdata/validate-invalid/missing_defined_var/output.json @@ -1,5 +1,5 @@ { - "format_version": "1.0", + "format_version": "0.1", "valid": true, "error_count": 0, "warning_count": 0, diff --git a/internal/command/testdata/validate-invalid/missing_quote/output.json b/internal/command/testdata/validate-invalid/missing_quote/output.json index 87aeca8b7817..cdf99d8b2a2b 100644 --- a/internal/command/testdata/validate-invalid/missing_quote/output.json +++ b/internal/command/testdata/validate-invalid/missing_quote/output.json @@ -1,5 +1,5 @@ { - "format_version": "1.0", + "format_version": "0.1", "valid": false, "error_count": 1, "warning_count": 0, diff --git a/internal/command/testdata/validate-invalid/missing_var/output.json b/internal/command/testdata/validate-invalid/missing_var/output.json index 6f0b9d5d4c8d..2a4e0be71ebd 100644 --- a/internal/command/testdata/validate-invalid/missing_var/output.json +++ b/internal/command/testdata/validate-invalid/missing_var/output.json @@ -1,5 +1,5 @@ { - "format_version": "1.0", + "format_version": "0.1", "valid": false, "error_count": 1, "warning_count": 0, diff --git a/internal/command/testdata/validate-invalid/multiple_modules/output.json b/internal/command/testdata/validate-invalid/multiple_modules/output.json index 1aeaf929a913..4cd6dfb9f0ad 100644 --- a/internal/command/testdata/validate-invalid/multiple_modules/output.json +++ b/internal/command/testdata/validate-invalid/multiple_modules/output.json @@ -1,5 +1,5 @@ { - "format_version": "1.0", + "format_version": "0.1", "valid": false, "error_count": 1, "warning_count": 0, diff --git a/internal/command/testdata/validate-invalid/multiple_providers/output.json b/internal/command/testdata/validate-invalid/multiple_providers/output.json index 309cf0ea7c34..63eb2d193820 100644 --- 
a/internal/command/testdata/validate-invalid/multiple_providers/output.json +++ b/internal/command/testdata/validate-invalid/multiple_providers/output.json @@ -1,5 +1,5 @@ { - "format_version": "1.0", + "format_version": "0.1", "valid": false, "error_count": 1, "warning_count": 0, diff --git a/internal/command/testdata/validate-invalid/multiple_resources/output.json b/internal/command/testdata/validate-invalid/multiple_resources/output.json index ded584e6846c..33d5052284e9 100644 --- a/internal/command/testdata/validate-invalid/multiple_resources/output.json +++ b/internal/command/testdata/validate-invalid/multiple_resources/output.json @@ -1,5 +1,5 @@ { - "format_version": "1.0", + "format_version": "0.1", "valid": false, "error_count": 1, "warning_count": 0, diff --git a/internal/command/testdata/validate-invalid/output.json b/internal/command/testdata/validate-invalid/output.json index 73254853932f..663fe0153071 100644 --- a/internal/command/testdata/validate-invalid/output.json +++ b/internal/command/testdata/validate-invalid/output.json @@ -1,5 +1,5 @@ { - "format_version": "1.0", + "format_version": "0.1", "valid": false, "error_count": 1, "warning_count": 0, diff --git a/internal/command/testdata/validate-invalid/outputs/output.json b/internal/command/testdata/validate-invalid/outputs/output.json index f774b458be4c..d05ed4b77173 100644 --- a/internal/command/testdata/validate-invalid/outputs/output.json +++ b/internal/command/testdata/validate-invalid/outputs/output.json @@ -1,5 +1,5 @@ { - "format_version": "1.0", + "format_version": "0.1", "valid": false, "error_count": 2, "warning_count": 0, diff --git a/internal/command/testdata/validate-valid/output.json b/internal/command/testdata/validate-valid/output.json index 40258a98cd27..c2a57c5e6a98 100644 --- a/internal/command/testdata/validate-valid/output.json +++ b/internal/command/testdata/validate-valid/output.json @@ -1,5 +1,5 @@ { - "format_version": "1.0", + "format_version": "0.1", "valid": true, "error_count": 0, "warning_count": 0, diff --git a/internal/command/views/validate.go b/internal/command/views/validate.go index 08ce913f82ce..1e597277a478 100644 --- a/internal/command/views/validate.go +++ b/internal/command/views/validate.go @@ -81,7 +81,7 @@ func (v *ValidateJSON) Results(diags tfdiags.Diagnostics) int { // FormatVersion represents the version of the json format and will be // incremented for any change to this format that requires changes to a // consuming parser. - const FormatVersion = "1.0" + const FormatVersion = "0.1" type Output struct { FormatVersion string `json:"format_version"` diff --git a/website/docs/cli/commands/providers/schema.html.md b/website/docs/cli/commands/providers/schema.html.md index 2a3dddc1338e..e97e50f2305e 100644 --- a/website/docs/cli/commands/providers/schema.html.md +++ b/website/docs/cli/commands/providers/schema.html.md @@ -23,18 +23,7 @@ The list of available flags are: Please note that, at this time, the `-json` flag is a _required_ option. In future releases, this command will be extended to allow for additional options. -The output includes a `format_version` key, which as of Terraform 1.1.0 has -value `"1.0"`. The semantics of this version are: - -- We will increment the minor version, e.g. `"1.1"`, for backward-compatible - changes or additions. Ignore any object properties with unrecognized names to - remain forward-compatible with future minor versions. -- We will increment the major version, e.g. `"2.0"`, for changes that are not - backward-compatible. 
Reject any input which reports an unsupported major - version. - -We will introduce new major versions only within the bounds of -[the Terraform 1.0 Compatibility Promises](https://www.terraform.io/docs/language/v1-compatibility-promises.html). +-> **Note:** The output includes a `format_version` key, which currently has major version zero to indicate that the format is experimental and subject to change. A future version will assign a non-zero major version and make stronger promises about compatibility. We do not anticipate any significant breaking changes to the format before its first major version, however. ## Format Summary @@ -52,7 +41,7 @@ The JSON output format consists of the following objects and sub-objects: ```javascript { - "format_version": "1.0", + "format_version": "0.1", // "provider_schemas" describes the provider schemas for all // providers throughout the configuration tree. diff --git a/website/docs/cli/commands/validate.html.md b/website/docs/cli/commands/validate.html.md index e81da01b27a4..583186e3d069 100644 --- a/website/docs/cli/commands/validate.html.md +++ b/website/docs/cli/commands/validate.html.md @@ -57,18 +57,11 @@ to the JSON output setting. For that reason, external software consuming Terraform's output should be prepared to find data on stdout that _isn't_ valid JSON, which it should then treat as a generic error case. -The output includes a `format_version` key, which as of Terraform 1.1.0 has -value `"1.0"`. The semantics of this version are: - -- We will increment the minor version, e.g. `"1.1"`, for backward-compatible - changes or additions. Ignore any object properties with unrecognized names to - remain forward-compatible with future minor versions. -- We will increment the major version, e.g. `"2.0"`, for changes that are not - backward-compatible. Reject any input which reports an unsupported major - version. - -We will introduce new major versions only within the bounds of -[the Terraform 1.0 Compatibility Promises](https://www.terraform.io/docs/language/v1-compatibility-promises.html). +**Note:** The output includes a `format_version` key, which currently has major +version zero to indicate that the format is experimental and subject to change. +A future version will assign a non-zero major version and make stronger +promises about compatibility. We do not anticipate any significant breaking +changes to the format before its first major version, however. In the normal case, Terraform will print a JSON object to the standard output stream. The top-level JSON object will have the following properties: diff --git a/website/docs/internals/json-format.html.md b/website/docs/internals/json-format.html.md index b5f8daab35c3..9a3efeff5d46 100644 --- a/website/docs/internals/json-format.html.md +++ b/website/docs/internals/json-format.html.md @@ -16,18 +16,7 @@ Since the format of plan files isn't suited for use with external tools (and lik Use `terraform show -json ` to generate a JSON representation of a plan or state file. See [the `terraform show` documentation](/docs/cli/commands/show.html) for more details. -The output includes a `format_version` key, which as of Terraform 1.1.0 has -value `"1.0"`. The semantics of this version are: - -- We will increment the minor version, e.g. `"1.1"`, for backward-compatible - changes or additions. Ignore any object properties with unrecognized names to - remain forward-compatible with future minor versions. -- We will increment the major version, e.g. `"2.0"`, for changes that are not - backward-compatible. 
Reject any input which reports an unsupported major - version. - -We will introduce new major versions only within the bounds of -[the Terraform 1.0 Compatibility Promises](https://www.terraform.io/docs/language/v1-compatibility-promises.html). +-> **Note:** The output includes a `format_version` key, which currently has major version zero to indicate that the format is experimental and subject to change. A future version will assign a non-zero major version and make stronger promises about compatibility. We do not anticipate any significant breaking changes to the format before its first major version, however. ## Format Summary @@ -71,7 +60,7 @@ For ease of consumption by callers, the plan representation includes a partial r ```javascript { - "format_version": "1.0", + "format_version": "0.2", // "prior_state" is a representation of the state that the configuration is // being applied to, using the state representation described above. From 64a95fc29f679fba074866bfcd207a4f408c2476 Mon Sep 17 00:00:00 2001 From: Paddy Date: Thu, 9 Sep 2021 11:33:07 -0700 Subject: [PATCH 044/644] Add docs on how to release a new major protocol version. (#29552) Releasing a new major protocol version requires coordination between a few different projects and codebases, and it's easy to forget a step. This commit introduces a doc to keep track of these steps, making it less likely one will be omitted or forgotten. --- docs/plugin-protocol/releasing-new-version.md | 53 +++++++++++++++++++ 1 file changed, 53 insertions(+) create mode 100644 docs/plugin-protocol/releasing-new-version.md diff --git a/docs/plugin-protocol/releasing-new-version.md b/docs/plugin-protocol/releasing-new-version.md new file mode 100644 index 000000000000..2449b5c3d604 --- /dev/null +++ b/docs/plugin-protocol/releasing-new-version.md @@ -0,0 +1,53 @@ +# Releasing a New Version of the Protocol + +Terraform's plugin protocol is the contract between Terraform's plugins and +Terraform, and as such releasing a new version requires some coordination +between those pieces. This document is intended to be a checklist to consult +when adding a new major version of the protocol (X in X.Y) to ensure that +everything that needs to be is aware of it. + +## New Protobuf File + +The protocol is defined in protobuf files that live in the hashicorp/terraform +repository. Adding a new version of the protocol involves creating a new +`.proto` file in that directory. It is recommended that you copy the latest +protocol file, and modify it accordingly. + +## New terraform-plugin-go Package + +The +[hashicorp/terraform-plugin-go](https://github.com/hashicorp/terraform-plugin-go) +repository serves as the foundation for Terraform's plugin ecosystem. It needs +to know about the new major protocol version. Either open an issue in that repo +to have the Plugin SDK team add the new package, or if you would like to +contribute it yourself, open a PR. It is recommended that you copy the package +for the latest protocol version and modify it accordingly. + +## Update the Registry's List of Allowed Versions + +The Terraform Registry validates the protocol versions a provider advertises +support for when ingesting providers. Providers will not be able to advertise +support for the new protocol version until it is added to that list. + +## Update Terraform's Version Constraints + +Terraform only downloads providers that speak protocol versions it is +compatible with from the Registry during `terraform init`. 
When adding support
+for a new protocol, you need to tell Terraform it knows that protocol version.
+Modify the `SupportedPluginProtocols` variable in hashicorp/terraform's
+`internal/getproviders/registry_client.go` file to include the new protocol.
+
+## Test Running a Provider With the Test Framework
+
+Use the provider test framework to test a provider written with the new
+protocol. This end-to-end test ensures that providers written with the new
+protocol work correctly with the test framework, especially in communicating
+the protocol version between the test framework and Terraform.
+
+## Test Retrieving and Running a Provider From the Registry
+
+Publish a provider, either to the public registry or to the staging registry,
+and test running `terraform init` and `terraform apply`, along with exercising
+any of the new functionality the protocol version introduces. This end-to-end
+test ensures that all the pieces needing to be updated before practitioners can
+use providers built with the new protocol have been updated.
From 782132da33c45ff257d4cbe43e65be1f6fd0dbd2 Mon Sep 17 00:00:00 2001
From: James Bardin
Date: Fri, 10 Sep 2021 14:44:03 -0400
Subject: [PATCH 045/644] remove incorrect computed check

The config is already validated, and does not need to be checked in AssertPlanValid.

Add some more coverage for plan validation.
---
 internal/plans/objchange/plan_valid.go | 5 -
 internal/plans/objchange/plan_valid_test.go | 194 +++++++++++++++++++-
 2 files changed, 185 insertions(+), 14 deletions(-)

diff --git a/internal/plans/objchange/plan_valid.go b/internal/plans/objchange/plan_valid.go
index 2706a6c52e60..4f35f53478f1 100644
--- a/internal/plans/objchange/plan_valid.go
+++ b/internal/plans/objchange/plan_valid.go
@@ -286,11 +286,6 @@ func assertPlannedValueValid(attrS *configschema.Attribute, priorV, configV, pla
 } return errs }
- } else {
- if attrS.Computed {
- errs = append(errs, path.NewErrorf("configuration present for computed attribute"))
- return errs
- }
 }
 // If this attribute has a NestedType, validate the nested object
diff --git a/internal/plans/objchange/plan_valid_test.go b/internal/plans/objchange/plan_valid_test.go
index 834d1046356d..5c054cb907e6 100644
--- a/internal/plans/objchange/plan_valid_test.go
+++ b/internal/plans/objchange/plan_valid_test.go
@@ -1387,6 +1387,7 @@ func TestAssertPlanValid(t *testing.T) { }, }, },
+ Optional: true,
 Computed: true, }, "single": {
&configschema.Block{ + Attributes: map[string]*configschema.Attribute{ + "map": { + NestedType: &configschema.Object{ + Nesting: configschema.NestingMap, + Attributes: map[string]*configschema.Attribute{ + "name": { + Type: cty.String, + Computed: true, + }, + }, + }, + }, + // When an object has dynamic attrs, the map may be + // handled as an object. + "map_as_obj": { + NestedType: &configschema.Object{ + Nesting: configschema.NestingMap, + Attributes: map[string]*configschema.Attribute{ + "name": { + Type: cty.String, + Optional: true, + Computed: true, + }, + }, + }, + }, + "list": { + NestedType: &configschema.Object{ + Nesting: configschema.NestingList, + Attributes: map[string]*configschema.Attribute{ + "name": { + Type: cty.String, + Optional: true, + Computed: true, + }, + }, + }, + }, + "set": { + NestedType: &configschema.Object{ + Nesting: configschema.NestingSet, + Attributes: map[string]*configschema.Attribute{ + "name": { + Type: cty.String, + Optional: true, + Computed: true, + }, + }, + }, + }, + "single": { + NestedType: &configschema.Object{ + Nesting: configschema.NestingSingle, + Attributes: map[string]*configschema.Attribute{ + "name": { + Type: cty.DynamicPseudoType, + Optional: true, + Computed: true, + }, + }, + }, + }, + }, + }, + cty.NullVal(cty.Object(map[string]cty.Type{ + "map": cty.Map(cty.Object(map[string]cty.Type{ + "name": cty.String, + })), + "map_as_obj": cty.Map(cty.Object(map[string]cty.Type{ + "name": cty.DynamicPseudoType, + })), + "list": cty.List(cty.Object(map[string]cty.Type{ + "name": cty.String, + })), + "set": cty.Set(cty.Object(map[string]cty.Type{ + "name": cty.String, + })), + "single": cty.Object(map[string]cty.Type{ + "name": cty.String, + }), + })), + cty.ObjectVal(map[string]cty.Value{ + "map": cty.MapVal(map[string]cty.Value{ + "one": cty.ObjectVal(map[string]cty.Value{ + "name": cty.StringVal("from_config"), + }), + }), + "map_as_obj": cty.MapVal(map[string]cty.Value{ + "one": cty.ObjectVal(map[string]cty.Value{ + "name": cty.NullVal(cty.DynamicPseudoType), + }), + }), + "list": cty.ListVal([]cty.Value{ + cty.ObjectVal(map[string]cty.Value{ + "name": cty.NullVal(cty.String), + }), + }), + "set": cty.SetVal([]cty.Value{ + cty.ObjectVal(map[string]cty.Value{ + "name": cty.NullVal(cty.String), + }), + }), + "single": cty.ObjectVal(map[string]cty.Value{ + "name": cty.StringVal("from_config"), + }), + }), + cty.ObjectVal(map[string]cty.Value{ + "map": cty.MapVal(map[string]cty.Value{ + "one": cty.ObjectVal(map[string]cty.Value{ + "name": cty.StringVal("from_config"), + }), + }), + "map_as_obj": cty.ObjectVal(map[string]cty.Value{ + "one": cty.ObjectVal(map[string]cty.Value{ + "name": cty.StringVal("computed"), + }), + }), + "list": cty.ListVal([]cty.Value{ + cty.ObjectVal(map[string]cty.Value{ + "name": cty.StringVal("computed"), + }), + }), + "set": cty.SetVal([]cty.Value{ + cty.ObjectVal(map[string]cty.Value{ + "name": cty.NullVal(cty.String), + }), + }), + "single": cty.ObjectVal(map[string]cty.Value{ + "name": cty.StringVal("from_config"), + }), + }), + nil, + }, + "cannot replace config nested attr": { + &configschema.Block{ + Attributes: map[string]*configschema.Attribute{ + "map": { + NestedType: &configschema.Object{ + Nesting: configschema.NestingMap, + Attributes: map[string]*configschema.Attribute{ + "name": { + Type: cty.String, + Computed: true, + Optional: true, + }, + }, + }, + }, + }, + }, + cty.NullVal(cty.Object(map[string]cty.Type{ + "map": cty.Map(cty.Object(map[string]cty.Type{ + "name": cty.String, + })), + })), + 
cty.ObjectVal(map[string]cty.Value{ + "map": cty.MapVal(map[string]cty.Value{ + "one": cty.ObjectVal(map[string]cty.Value{ + "name": cty.StringVal("from_config"), + }), + }), + }), + cty.ObjectVal(map[string]cty.Value{ + "map": cty.MapVal(map[string]cty.Value{ + "one": cty.ObjectVal(map[string]cty.Value{ + "name": cty.StringVal("from_provider"), + }), + }), + }), + []string{`.map.one.name: planned value cty.StringVal("from_provider") does not match config value cty.StringVal("from_config")`}, + }, } for name, test := range tests { From feb219a9c44652eac0ad02ab9272d6246de1443c Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Tue, 31 Aug 2021 09:37:29 -0700 Subject: [PATCH 046/644] core: Un-export the LoadSchemas function The public interface for loading schemas is Context.Schemas, which can take into account the context's records of which plugin versions and checksums we're expecting. loadSchemas is an implementation detail of that, representing the part we run only after we've verified all of the plugins. --- internal/terraform/context.go | 2 +- internal/terraform/schemas.go | 4 ++-- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/internal/terraform/context.go b/internal/terraform/context.go index abb20528a448..907b68762343 100644 --- a/internal/terraform/context.go +++ b/internal/terraform/context.go @@ -221,7 +221,7 @@ func (c *Context) Schemas(config *configs.Config, state *states.State) (*Schemas } } - ret, err := LoadSchemas(config, state, c.components) + ret, err := loadSchemas(config, state, c.components) if err != nil { diags = diags.Append(tfdiags.Sourceless( tfdiags.Error, diff --git a/internal/terraform/schemas.go b/internal/terraform/schemas.go index 8531d118486d..974cbb2eb441 100644 --- a/internal/terraform/schemas.go +++ b/internal/terraform/schemas.go @@ -64,7 +64,7 @@ func (ss *Schemas) ProvisionerConfig(name string) *configschema.Block { return ss.Provisioners[name] } -// LoadSchemas searches the given configuration, state and plan (any of which +// loadSchemas searches the given configuration, state and plan (any of which // may be nil) for constructs that have an associated schema, requests the // necessary schemas from the given component factory (which must _not_ be nil), // and returns a single object representing all of the necessary schemas. @@ -74,7 +74,7 @@ func (ss *Schemas) ProvisionerConfig(name string) *configschema.Block { // either misbehavior on the part of one of the providers or of the provider // protocol itself. When returned with errors, the returned schemas object is // still valid but may be incomplete. -func LoadSchemas(config *configs.Config, state *states.State, components contextComponentFactory) (*Schemas, error) { +func loadSchemas(config *configs.Config, state *states.State, components contextComponentFactory) (*Schemas, error) { schemas := &Schemas{ Providers: map[addrs.Provider]*ProviderSchema{}, Provisioners: map[string]*configschema.Block{}, From d51921f08557644bcb61acf3c46bbd78a3ea3c11 Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Tue, 31 Aug 2021 10:16:44 -0700 Subject: [PATCH 047/644] core: Provider transformers don't use the set of all available providers In earlier incarnations of these transformers we used the set of all available providers for tasks such as generating implied provider configuration nodes. However, in modern Terraform we can extract all of the information we need from the configuration itself, and so these transformers weren't actually using this set of provider addresses. 
These also ended up getting left behind as sets of string rather than sets of addrs.Provider in our earlier refactoring work, which didn't really matter because the result wasn't used anywhere anyway. Rather than updating these to use addrs.Provider instead, I've just removed the unused arguments entirely in the hope of making it easier to see what inputs these transformers use to make their decisions. --- internal/terraform/graph_builder_apply.go | 2 +- .../terraform/graph_builder_destroy_plan.go | 2 +- internal/terraform/graph_builder_eval.go | 2 +- internal/terraform/graph_builder_import.go | 2 +- internal/terraform/graph_builder_plan.go | 2 +- internal/terraform/transform_provider.go | 18 +++++--------- internal/terraform/transform_provider_test.go | 24 +++++++++---------- internal/terraform/transform_root_test.go | 4 +--- 8 files changed, 24 insertions(+), 32 deletions(-) diff --git a/internal/terraform/graph_builder_apply.go b/internal/terraform/graph_builder_apply.go index 683353cf9801..bf213d1d0620 100644 --- a/internal/terraform/graph_builder_apply.go +++ b/internal/terraform/graph_builder_apply.go @@ -116,7 +116,7 @@ func (b *ApplyGraphBuilder) Steps() []GraphTransformer { &AttachResourceConfigTransformer{Config: b.Config}, // add providers - TransformProviders(b.Components.ResourceProviders(), concreteProvider, b.Config), + transformProviders(concreteProvider, b.Config), // Remove modules no longer present in the config &RemovedModuleTransformer{Config: b.Config, State: b.State}, diff --git a/internal/terraform/graph_builder_destroy_plan.go b/internal/terraform/graph_builder_destroy_plan.go index 9962442af834..1973237b4bea 100644 --- a/internal/terraform/graph_builder_destroy_plan.go +++ b/internal/terraform/graph_builder_destroy_plan.go @@ -94,7 +94,7 @@ func (b *DestroyPlanGraphBuilder) Steps() []GraphTransformer { // Attach the configuration to any resources &AttachResourceConfigTransformer{Config: b.Config}, - TransformProviders(b.Components.ResourceProviders(), concreteProvider, b.Config), + transformProviders(concreteProvider, b.Config), // Destruction ordering. We require this only so that // targeting below will prune the correct things. diff --git a/internal/terraform/graph_builder_eval.go b/internal/terraform/graph_builder_eval.go index 18b6d5199719..65e663550652 100644 --- a/internal/terraform/graph_builder_eval.go +++ b/internal/terraform/graph_builder_eval.go @@ -75,7 +75,7 @@ func (b *EvalGraphBuilder) Steps() []GraphTransformer { // Attach the state &AttachStateTransformer{State: b.State}, - TransformProviders(b.Components.ResourceProviders(), concreteProvider, b.Config), + transformProviders(concreteProvider, b.Config), // Must attach schemas before ReferenceTransformer so that we can // analyze the configuration to find references. diff --git a/internal/terraform/graph_builder_import.go b/internal/terraform/graph_builder_import.go index af5df1403052..bbef67713f22 100644 --- a/internal/terraform/graph_builder_import.go +++ b/internal/terraform/graph_builder_import.go @@ -67,7 +67,7 @@ func (b *ImportGraphBuilder) Steps() []GraphTransformer { // Add the import steps &ImportStateTransformer{Targets: b.ImportTargets, Config: b.Config}, - TransformProviders(b.Components.ResourceProviders(), concreteProvider, config), + transformProviders(concreteProvider, config), // Must attach schemas before ReferenceTransformer so that we can // analyze the configuration to find references. 
diff --git a/internal/terraform/graph_builder_plan.go b/internal/terraform/graph_builder_plan.go index 84884c3d4ea0..8d680162492e 100644 --- a/internal/terraform/graph_builder_plan.go +++ b/internal/terraform/graph_builder_plan.go @@ -130,7 +130,7 @@ func (b *PlanGraphBuilder) Steps() []GraphTransformer { &AttachResourceConfigTransformer{Config: b.Config}, // add providers - TransformProviders(b.Components.ResourceProviders(), b.ConcreteProvider, b.Config), + transformProviders(b.ConcreteProvider, b.Config), // Remove modules no longer present in the config &RemovedModuleTransformer{Config: b.Config, State: b.State}, diff --git a/internal/terraform/transform_provider.go b/internal/terraform/transform_provider.go index ee5087eafa75..3e459e7a7b11 100644 --- a/internal/terraform/transform_provider.go +++ b/internal/terraform/transform_provider.go @@ -11,19 +11,17 @@ import ( "github.com/hashicorp/terraform/internal/tfdiags" ) -func TransformProviders(providers []string, concrete ConcreteProviderNodeFunc, config *configs.Config) GraphTransformer { +func transformProviders(concrete ConcreteProviderNodeFunc, config *configs.Config) GraphTransformer { return GraphTransformMulti( // Add providers from the config &ProviderConfigTransformer{ - Config: config, - Providers: providers, - Concrete: concrete, + Config: config, + Concrete: concrete, }, // Add any remaining missing providers &MissingProviderTransformer{ - Config: config, - Providers: providers, - Concrete: concrete, + Config: config, + Concrete: concrete, }, // Connect the providers &ProviderTransformer{ @@ -298,9 +296,6 @@ func (t *CloseProviderTransformer) Transform(g *Graph) error { // PruneProviderTransformer can then remove these once ProviderTransformer // has resolved all of the inheritence, etc. type MissingProviderTransformer struct { - // Providers is the list of providers we support. - Providers []string - // MissingProviderTransformer needs the config to rule out _implied_ default providers Config *configs.Config @@ -478,8 +473,7 @@ func (n *graphNodeProxyProvider) Target() GraphNodeProvider { // ProviderConfigTransformer adds all provider nodes from the configuration and // attaches the configs. type ProviderConfigTransformer struct { - Providers []string - Concrete ConcreteProviderNodeFunc + Concrete ConcreteProviderNodeFunc // each provider node is stored here so that the proxy nodes can look up // their targets by name. 
diff --git a/internal/terraform/transform_provider_test.go b/internal/terraform/transform_provider_test.go index 596a3973549c..0436fc03248f 100644 --- a/internal/terraform/transform_provider_test.go +++ b/internal/terraform/transform_provider_test.go @@ -31,7 +31,7 @@ func TestProviderTransformer(t *testing.T) { g := testProviderTransformerGraph(t, mod) { - transform := &MissingProviderTransformer{Providers: []string{"aws"}} + transform := &MissingProviderTransformer{} if err := transform.Transform(g); err != nil { t.Fatalf("err: %s", err) } @@ -79,7 +79,7 @@ func TestProviderTransformer_ImportModuleChild(t *testing.T) { } { - tf := &MissingProviderTransformer{Providers: []string{"foo", "bar"}} + tf := &MissingProviderTransformer{} if err := tf.Transform(g); err != nil { t.Fatalf("err: %s", err) } @@ -108,7 +108,7 @@ func TestProviderTransformer_fqns(t *testing.T) { g := testProviderTransformerGraph(t, mod) { - transform := &MissingProviderTransformer{Providers: []string{"aws"}, Config: mod} + transform := &MissingProviderTransformer{Config: mod} if err := transform.Transform(g); err != nil { t.Fatalf("err: %s", err) } @@ -132,7 +132,7 @@ func TestCloseProviderTransformer(t *testing.T) { g := testProviderTransformerGraph(t, mod) { - transform := &MissingProviderTransformer{Providers: []string{"aws"}} + transform := &MissingProviderTransformer{} if err := transform.Transform(g); err != nil { t.Fatalf("err: %s", err) } @@ -164,7 +164,7 @@ func TestCloseProviderTransformer_withTargets(t *testing.T) { g := testProviderTransformerGraph(t, mod) transforms := []GraphTransformer{ - &MissingProviderTransformer{Providers: []string{"aws"}}, + &MissingProviderTransformer{}, &ProviderTransformer{}, &CloseProviderTransformer{}, &TargetsTransformer{ @@ -194,7 +194,7 @@ func TestMissingProviderTransformer(t *testing.T) { g := testProviderTransformerGraph(t, mod) { - transform := &MissingProviderTransformer{Providers: []string{"aws", "foo", "bar"}} + transform := &MissingProviderTransformer{} if err := transform.Transform(g); err != nil { t.Fatalf("err: %s", err) } @@ -228,7 +228,7 @@ func TestMissingProviderTransformer_grandchildMissing(t *testing.T) { g := testProviderTransformerGraph(t, mod) { - transform := TransformProviders([]string{"aws", "foo", "bar"}, concrete, mod) + transform := transformProviders(concrete, mod) if err := transform.Transform(g); err != nil { t.Fatalf("err: %s", err) } @@ -252,7 +252,7 @@ func TestPruneProviderTransformer(t *testing.T) { g := testProviderTransformerGraph(t, mod) { - transform := &MissingProviderTransformer{Providers: []string{"foo"}} + transform := &MissingProviderTransformer{} if err := transform.Transform(g); err != nil { t.Fatalf("err: %s", err) } @@ -293,7 +293,7 @@ func TestProviderConfigTransformer_parentProviders(t *testing.T) { g := testProviderTransformerGraph(t, mod) { - tf := TransformProviders([]string{"aws"}, concrete, mod) + tf := transformProviders(concrete, mod) if err := tf.Transform(g); err != nil { t.Fatalf("err: %s", err) } @@ -313,7 +313,7 @@ func TestProviderConfigTransformer_grandparentProviders(t *testing.T) { g := testProviderTransformerGraph(t, mod) { - tf := TransformProviders([]string{"aws"}, concrete, mod) + tf := transformProviders(concrete, mod) if err := tf.Transform(g); err != nil { t.Fatalf("err: %s", err) } @@ -347,7 +347,7 @@ resource "test_object" "a" { g := testProviderTransformerGraph(t, mod) { - tf := TransformProviders([]string{"registry.terraform.io/hashicorp/test"}, concrete, mod) + tf := transformProviders(concrete, 
mod) if err := tf.Transform(g); err != nil { t.Fatalf("err: %s", err) } @@ -425,7 +425,7 @@ resource "test_object" "a" { g := testProviderTransformerGraph(t, mod) { - tf := TransformProviders([]string{"registry.terraform.io/hashicorp/test"}, concrete, mod) + tf := transformProviders(concrete, mod) if err := tf.Transform(g); err != nil { t.Fatalf("err: %s", err) } diff --git a/internal/terraform/transform_root_test.go b/internal/terraform/transform_root_test.go index 04b8659ded98..4a426b5e7cc2 100644 --- a/internal/terraform/transform_root_test.go +++ b/internal/terraform/transform_root_test.go @@ -19,9 +19,7 @@ func TestRootTransformer(t *testing.T) { } { - transform := &MissingProviderTransformer{ - Providers: []string{"aws", "do"}, - } + transform := &MissingProviderTransformer{} if err := transform.Transform(&g); err != nil { t.Fatalf("err: %s", err) } From dcfa077adf68efdcf0083280dff554720f7ba468 Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Tue, 31 Aug 2021 10:22:25 -0700 Subject: [PATCH 048/644] core: contextComponentFactory doesn't need to enumerate components In earlier Terraform versions we used the set of all available plugins of each type to make graph-building decisions, but in modern Terraform we make those decisions based entirely on the configuration. Consequently, we no longer need the methods which can enumerate all of the known plugin components of a given type. Instead, we just try to instantiate each of the plugins that the configuration refers to and then handle the error when that fails, which typically means that the user needs to run "terraform init" to install some new plugins. --- internal/terraform/context_components.go | 19 ------------------- 1 file changed, 19 deletions(-) diff --git a/internal/terraform/context_components.go b/internal/terraform/context_components.go index 8532cfd4a04f..66f5dc664b64 100644 --- a/internal/terraform/context_components.go +++ b/internal/terraform/context_components.go @@ -15,12 +15,10 @@ import ( type contextComponentFactory interface { // ResourceProvider creates a new ResourceProvider with the given type. ResourceProvider(typ addrs.Provider) (providers.Interface, error) - ResourceProviders() []string // ResourceProvisioner creates a new ResourceProvisioner with the given // type. ResourceProvisioner(typ string) (provisioners.Interface, error) - ResourceProvisioners() []string } // basicComponentFactory just calls a factory from a map directly. @@ -29,23 +27,6 @@ type basicComponentFactory struct { provisioners map[string]provisioners.Factory } -func (c *basicComponentFactory) ResourceProviders() []string { - var result []string - for k := range c.providers { - result = append(result, k.String()) - } - return result -} - -func (c *basicComponentFactory) ResourceProvisioners() []string { - var result []string - for k := range c.provisioners { - result = append(result, k) - } - - return result -} - func (c *basicComponentFactory) ResourceProvider(typ addrs.Provider) (providers.Interface, error) { f, ok := c.providers[typ] if !ok { From 80b3fcf93e98678baf223bee31077ca18170fc4b Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Tue, 31 Aug 2021 10:58:05 -0700 Subject: [PATCH 049/644] core: Replace contextComponentFactory with contextPlugins In the v0.12 timeframe we made contextComponentFactory an interface with the expectation that we'd write mocks of it for tests, but in practice we ended up just always using the same "basicComponentFactory" implementation throughout. 
In the interests of simplification then, here we replace that interface and its sole implementation with a new concrete struct type contextPlugins. Along with the general benefit that this removes an unneeded indirection, this also means that we can add additional methods to the struct type without the usual restriction that interface types prefer to be small. In particular, in a future commit I'm planning to add methods for loading provider and provisioner schemas, working with the currently-unused new fields this commit has included in contextPlugins, as compared to its predecessor basicComponentFactory. --- internal/command/import_test.go | 2 +- internal/terraform/context.go | 11 +-- internal/terraform/context_apply.go | 2 +- internal/terraform/context_components.go | 46 --------- internal/terraform/context_eval.go | 8 +- internal/terraform/context_import.go | 2 +- internal/terraform/context_plan.go | 6 +- internal/terraform/context_plugins.go | 62 ++++++++++++ ...onents_test.go => context_plugins_test.go} | 13 +-- internal/terraform/context_validate.go | 10 +- internal/terraform/context_validate_test.go | 8 +- internal/terraform/eval_context_builtin.go | 6 +- .../terraform/eval_context_builtin_test.go | 8 +- internal/terraform/graph_builder_apply.go | 4 +- .../terraform/graph_builder_apply_test.go | 94 +++++++++---------- .../terraform/graph_builder_destroy_plan.go | 4 +- internal/terraform/graph_builder_eval.go | 4 +- internal/terraform/graph_builder_import.go | 5 +- internal/terraform/graph_builder_plan.go | 4 +- internal/terraform/graph_builder_plan_test.go | 56 +++++------ internal/terraform/graph_walk_context.go | 2 +- internal/terraform/schemas.go | 16 ++-- .../terraform/transform_destroy_cbd_test.go | 10 +- 23 files changed, 194 insertions(+), 189 deletions(-) delete mode 100644 internal/terraform/context_components.go create mode 100644 internal/terraform/context_plugins.go rename internal/terraform/{context_components_test.go => context_plugins_test.go} (87%) diff --git a/internal/command/import_test.go b/internal/command/import_test.go index bb56751e72ee..1469ea81d224 100644 --- a/internal/command/import_test.go +++ b/internal/command/import_test.go @@ -332,7 +332,7 @@ func TestImport_initializationErrorShouldUnlock(t *testing.T) { // specifically, it should fail due to a missing provider msg := strings.ReplaceAll(ui.ErrorWriter.String(), "\n", " ") - if want := `unknown provider "registry.terraform.io/hashicorp/unknown"`; !strings.Contains(msg, want) { + if want := `unavailable provider "registry.terraform.io/hashicorp/unknown"`; !strings.Contains(msg, want) { t.Errorf("incorrect message\nwant substring: %s\ngot:\n%s", want, msg) } diff --git a/internal/terraform/context.go b/internal/terraform/context.go index 907b68762343..e05200e1e527 100644 --- a/internal/terraform/context.go +++ b/internal/terraform/context.go @@ -88,7 +88,7 @@ type Context struct { // operations. 
meta *ContextMeta - components contextComponentFactory + plugins *contextPlugins dependencyLocks *depsfile.Locks providersInDevelopment map[addrs.Provider]struct{} @@ -144,10 +144,7 @@ func NewContext(opts *ContextOpts) (*Context, tfdiags.Diagnostics) { par = 10 } - components := &basicComponentFactory{ - providers: opts.Providers, - provisioners: opts.Provisioners, - } + plugins := newContextPlugins(opts.Providers, opts.Provisioners) log.Printf("[TRACE] terraform.NewContext: complete") @@ -156,7 +153,7 @@ func NewContext(opts *ContextOpts) (*Context, tfdiags.Diagnostics) { meta: opts.Meta, uiInput: opts.UIInput, - components: components, + plugins: plugins, dependencyLocks: opts.LockedDependencies, providersInDevelopment: opts.ProvidersInDevelopment, @@ -221,7 +218,7 @@ func (c *Context) Schemas(config *configs.Config, state *states.State) (*Schemas } } - ret, err := loadSchemas(config, state, c.components) + ret, err := loadSchemas(config, state, c.plugins) if err != nil { diags = diags.Append(tfdiags.Sourceless( tfdiags.Error, diff --git a/internal/terraform/context_apply.go b/internal/terraform/context_apply.go index 4ba7e8dc0eb9..4d5dc890f754 100644 --- a/internal/terraform/context_apply.go +++ b/internal/terraform/context_apply.go @@ -95,7 +95,7 @@ func (c *Context) applyGraph(plan *plans.Plan, config *configs.Config, schemas * Config: config, Changes: plan.Changes, State: plan.PriorState, - Components: c.components, + Plugins: c.plugins, Schemas: schemas, Targets: plan.TargetAddrs, ForceReplace: plan.ForceReplaceAddrs, diff --git a/internal/terraform/context_components.go b/internal/terraform/context_components.go deleted file mode 100644 index 66f5dc664b64..000000000000 --- a/internal/terraform/context_components.go +++ /dev/null @@ -1,46 +0,0 @@ -package terraform - -import ( - "fmt" - - "github.com/hashicorp/terraform/internal/addrs" - "github.com/hashicorp/terraform/internal/providers" - "github.com/hashicorp/terraform/internal/provisioners" -) - -// contextComponentFactory is the interface that Context uses -// to initialize various components such as providers and provisioners. -// This factory gets more information than the raw maps using to initialize -// a Context. This information is used for debugging. -type contextComponentFactory interface { - // ResourceProvider creates a new ResourceProvider with the given type. - ResourceProvider(typ addrs.Provider) (providers.Interface, error) - - // ResourceProvisioner creates a new ResourceProvisioner with the given - // type. - ResourceProvisioner(typ string) (provisioners.Interface, error) -} - -// basicComponentFactory just calls a factory from a map directly. 
-type basicComponentFactory struct { - providers map[addrs.Provider]providers.Factory - provisioners map[string]provisioners.Factory -} - -func (c *basicComponentFactory) ResourceProvider(typ addrs.Provider) (providers.Interface, error) { - f, ok := c.providers[typ] - if !ok { - return nil, fmt.Errorf("unknown provider %q", typ.String()) - } - - return f() -} - -func (c *basicComponentFactory) ResourceProvisioner(typ string) (provisioners.Interface, error) { - f, ok := c.provisioners[typ] - if !ok { - return nil, fmt.Errorf("unknown provisioner %q", typ) - } - - return f() -} diff --git a/internal/terraform/context_eval.go b/internal/terraform/context_eval.go index 8be9b9367846..ad3deeee1d9d 100644 --- a/internal/terraform/context_eval.go +++ b/internal/terraform/context_eval.go @@ -66,10 +66,10 @@ func (c *Context) Eval(config *configs.Config, state *states.State, moduleAddr a log.Printf("[DEBUG] Building and walking 'eval' graph") graph, moreDiags := (&EvalGraphBuilder{ - Config: config, - State: state, - Components: c.components, - Schemas: schemas, + Config: config, + State: state, + Plugins: c.plugins, + Schemas: schemas, }).Build(addrs.RootModuleInstance) diags = diags.Append(moreDiags) if moreDiags.HasErrors() { diff --git a/internal/terraform/context_import.go b/internal/terraform/context_import.go index 48a5858a3df1..8675e06b5f56 100644 --- a/internal/terraform/context_import.go +++ b/internal/terraform/context_import.go @@ -63,7 +63,7 @@ func (c *Context) Import(config *configs.Config, prevRunState *states.State, opt builder := &ImportGraphBuilder{ ImportTargets: opts.Targets, Config: config, - Components: c.components, + Plugins: c.plugins, Schemas: schemas, } diff --git a/internal/terraform/context_plan.go b/internal/terraform/context_plan.go index 7499e57d882b..9f9170c8aafc 100644 --- a/internal/terraform/context_plan.go +++ b/internal/terraform/context_plan.go @@ -371,7 +371,7 @@ func (c *Context) planGraph(config *configs.Config, prevRunState *states.State, graph, diags := (&PlanGraphBuilder{ Config: config, State: prevRunState, - Components: c.components, + Plugins: c.plugins, Schemas: schemas, Targets: opts.Targets, ForceReplace: opts.ForceReplace, @@ -383,7 +383,7 @@ func (c *Context) planGraph(config *configs.Config, prevRunState *states.State, graph, diags := (&PlanGraphBuilder{ Config: config, State: prevRunState, - Components: c.components, + Plugins: c.plugins, Schemas: schemas, Targets: opts.Targets, Validate: validate, @@ -395,7 +395,7 @@ func (c *Context) planGraph(config *configs.Config, prevRunState *states.State, graph, diags := (&DestroyPlanGraphBuilder{ Config: config, State: prevRunState, - Components: c.components, + Plugins: c.plugins, Schemas: schemas, Targets: opts.Targets, Validate: validate, diff --git a/internal/terraform/context_plugins.go b/internal/terraform/context_plugins.go new file mode 100644 index 000000000000..2bd1964d3a66 --- /dev/null +++ b/internal/terraform/context_plugins.go @@ -0,0 +1,62 @@ +package terraform + +import ( + "fmt" + "sync" + + "github.com/hashicorp/terraform/internal/addrs" + "github.com/hashicorp/terraform/internal/configs/configschema" + "github.com/hashicorp/terraform/internal/providers" + "github.com/hashicorp/terraform/internal/provisioners" +) + +// contextPlugins represents a library of available plugins (providers and +// provisioners) which we assume will all be used with the same +// terraform.Context, and thus it'll be safe to cache certain information +// about the providers for performance reasons. 
+type contextPlugins struct { + providerFactories map[addrs.Provider]providers.Factory + provisionerFactories map[string]provisioners.Factory + + // We memoize the schemas we've previously loaded in here, to avoid + // repeatedly paying the cost of activating the same plugins to access + // their schemas in various different spots. We use schemas for many + // purposes in Terraform, so there isn't a single choke point where + // it makes sense to preload all of them. + providerSchemas map[addrs.Provider]*ProviderSchema + provisionerSchemas map[string]*configschema.Block + schemasLock *sync.Mutex +} + +func newContextPlugins(providerFactories map[addrs.Provider]providers.Factory, provisionerFactories map[string]provisioners.Factory) *contextPlugins { + ret := &contextPlugins{ + providerFactories: providerFactories, + provisionerFactories: provisionerFactories, + } + ret.init() + return ret +} + +func (cp *contextPlugins) init() { + cp.providerSchemas = make(map[addrs.Provider]*ProviderSchema, len(cp.providerFactories)) + cp.provisionerSchemas = make(map[string]*configschema.Block, len(cp.provisionerFactories)) +} + +func (cp *contextPlugins) NewProviderInstance(addr addrs.Provider) (providers.Interface, error) { + f, ok := cp.providerFactories[addr] + if !ok { + return nil, fmt.Errorf("unavailable provider %q", addr.String()) + } + + return f() + +} + +func (cp *contextPlugins) NewProvisionerInstance(typ string) (provisioners.Interface, error) { + f, ok := cp.provisionerFactories[typ] + if !ok { + return nil, fmt.Errorf("unavailable provisioner %q", typ) + } + + return f() +} diff --git a/internal/terraform/context_components_test.go b/internal/terraform/context_plugins_test.go similarity index 87% rename from internal/terraform/context_components_test.go rename to internal/terraform/context_plugins_test.go index f92b41b6ac9e..130de5b52221 100644 --- a/internal/terraform/context_components_test.go +++ b/internal/terraform/context_plugins_test.go @@ -9,7 +9,7 @@ import ( "github.com/hashicorp/terraform/internal/provisioners" ) -// simpleMockComponentFactory returns a component factory pre-configured with +// simpleMockPluginLibrary returns a plugin library pre-configured with // one provider and one provisioner, both called "test". // // The provider is built with simpleMockProvider and the provisioner with @@ -19,26 +19,27 @@ import ( // Each call to this function produces an entirely-separate set of objects, // so the caller can feel free to modify the returned value to further // customize the mocks contained within. -func simpleMockComponentFactory() *basicComponentFactory { +func simpleMockPluginLibrary() *contextPlugins { // We create these out here, rather than in the factory functions below, // because we want each call to the factory to return the _same_ instance, // so that test code can customize it before passing this component // factory into real code under test. 
provider := simpleMockProvider() provisioner := simpleMockProvisioner() - return &basicComponentFactory{ - providers: map[addrs.Provider]providers.Factory{ + ret := &contextPlugins{ + providerFactories: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("test"): func() (providers.Interface, error) { return provider, nil }, }, - provisioners: map[string]provisioners.Factory{ + provisionerFactories: map[string]provisioners.Factory{ "test": func() (provisioners.Interface, error) { return provisioner, nil }, }, } - + ret.init() // prepare the internal cache data structures + return ret } // simpleTestSchema returns a block schema that contains a few optional diff --git a/internal/terraform/context_validate.go b/internal/terraform/context_validate.go index bda38633ab16..f95127895231 100644 --- a/internal/terraform/context_validate.go +++ b/internal/terraform/context_validate.go @@ -44,11 +44,11 @@ func (c *Context) Validate(config *configs.Config) tfdiags.Diagnostics { log.Printf("[DEBUG] Building and walking validate graph") graph, moreDiags := ValidateGraphBuilder(&PlanGraphBuilder{ - Config: config, - Components: c.components, - Schemas: schemas, - Validate: true, - State: states.NewState(), + Config: config, + Plugins: c.plugins, + Schemas: schemas, + Validate: true, + State: states.NewState(), }).Build(addrs.RootModuleInstance) diags = diags.Append(moreDiags) if moreDiags.HasErrors() { diff --git a/internal/terraform/context_validate_test.go b/internal/terraform/context_validate_test.go index b25bed35e9ee..e4eee13b4303 100644 --- a/internal/terraform/context_validate_test.go +++ b/internal/terraform/context_validate_test.go @@ -1198,10 +1198,10 @@ func TestContext2Validate_PlanGraphBuilder(t *testing.T) { assertNoDiagnostics(t, diags) graph, diags := ValidateGraphBuilder(&PlanGraphBuilder{ - Config: fixture.Config, - State: states.NewState(), - Components: c.components, - Schemas: schemas, + Config: fixture.Config, + State: states.NewState(), + Plugins: c.plugins, + Schemas: schemas, }).Build(addrs.RootModuleInstance) if diags.HasErrors() { t.Fatalf("errors from PlanGraphBuilder: %s", diags.Err()) diff --git a/internal/terraform/eval_context_builtin.go b/internal/terraform/eval_context_builtin.go index 1b971fd6b19c..6bcf91663023 100644 --- a/internal/terraform/eval_context_builtin.go +++ b/internal/terraform/eval_context_builtin.go @@ -62,7 +62,7 @@ type BuiltinEvalContext struct { VariableValues map[string]map[string]cty.Value VariableValuesLock *sync.Mutex - Components contextComponentFactory + Plugins *contextPlugins Hooks []Hook InputValue UIInput ProviderCache map[string]providers.Interface @@ -134,7 +134,7 @@ func (ctx *BuiltinEvalContext) InitProvider(addr addrs.AbsProviderConfig) (provi key := addr.String() - p, err := ctx.Components.ResourceProvider(addr.Provider) + p, err := ctx.Plugins.NewProviderInstance(addr.Provider) if err != nil { return nil, err } @@ -238,7 +238,7 @@ func (ctx *BuiltinEvalContext) Provisioner(n string) (provisioners.Interface, er p, ok := ctx.ProvisionerCache[n] if !ok { var err error - p, err = ctx.Components.ResourceProvisioner(n) + p, err = ctx.Plugins.NewProvisionerInstance(n) if err != nil { return nil, err } diff --git a/internal/terraform/eval_context_builtin_test.go b/internal/terraform/eval_context_builtin_test.go index 0521930b587a..0db0096a75ed 100644 --- a/internal/terraform/eval_context_builtin_test.go +++ b/internal/terraform/eval_context_builtin_test.go @@ -59,11 +59,9 @@ func TestBuildingEvalContextInitProvider(t *testing.T) { 
ctx = ctx.WithPath(addrs.RootModuleInstance).(*BuiltinEvalContext) ctx.ProviderLock = &lock ctx.ProviderCache = make(map[string]providers.Interface) - ctx.Components = &basicComponentFactory{ - providers: map[addrs.Provider]providers.Factory{ - addrs.NewDefaultProvider("test"): providers.FactoryFixed(testP), - }, - } + ctx.Plugins = newContextPlugins(map[addrs.Provider]providers.Factory{ + addrs.NewDefaultProvider("test"): providers.FactoryFixed(testP), + }, nil) providerAddrDefault := addrs.AbsProviderConfig{ Module: addrs.RootModule, diff --git a/internal/terraform/graph_builder_apply.go b/internal/terraform/graph_builder_apply.go index bf213d1d0620..ca8e5777fa0c 100644 --- a/internal/terraform/graph_builder_apply.go +++ b/internal/terraform/graph_builder_apply.go @@ -26,9 +26,9 @@ type ApplyGraphBuilder struct { // State is the current state State *states.State - // Components is a factory for the plug-in components (providers and + // Plugins is a library of the plug-in components (providers and // provisioners) available for use. - Components contextComponentFactory + Plugins *contextPlugins // Schemas is the repository of schemas we will draw from to analyse // the configuration. diff --git a/internal/terraform/graph_builder_apply_test.go b/internal/terraform/graph_builder_apply_test.go index b96149bac65d..9aaa1650d859 100644 --- a/internal/terraform/graph_builder_apply_test.go +++ b/internal/terraform/graph_builder_apply_test.go @@ -46,10 +46,10 @@ func TestApplyGraphBuilder(t *testing.T) { } b := &ApplyGraphBuilder{ - Config: testModule(t, "graph-builder-apply-basic"), - Changes: changes, - Components: simpleMockComponentFactory(), - Schemas: simpleTestSchemas(), + Config: testModule(t, "graph-builder-apply-basic"), + Changes: changes, + Plugins: simpleMockPluginLibrary(), + Schemas: simpleTestSchemas(), } g, err := b.Build(addrs.RootModuleInstance) @@ -110,11 +110,11 @@ func TestApplyGraphBuilder_depCbd(t *testing.T) { ) b := &ApplyGraphBuilder{ - Config: testModule(t, "graph-builder-apply-dep-cbd"), - Changes: changes, - Components: simpleMockComponentFactory(), - Schemas: simpleTestSchemas(), - State: state, + Config: testModule(t, "graph-builder-apply-dep-cbd"), + Changes: changes, + Plugins: simpleMockPluginLibrary(), + Schemas: simpleTestSchemas(), + State: state, } g, err := b.Build(addrs.RootModuleInstance) @@ -184,10 +184,10 @@ func TestApplyGraphBuilder_doubleCBD(t *testing.T) { } b := &ApplyGraphBuilder{ - Config: testModule(t, "graph-builder-apply-double-cbd"), - Changes: changes, - Components: simpleMockComponentFactory(), - Schemas: simpleTestSchemas(), + Config: testModule(t, "graph-builder-apply-double-cbd"), + Changes: changes, + Plugins: simpleMockPluginLibrary(), + Schemas: simpleTestSchemas(), } g, err := b.Build(addrs.RootModuleInstance) @@ -278,11 +278,11 @@ func TestApplyGraphBuilder_destroyStateOnly(t *testing.T) { ) b := &ApplyGraphBuilder{ - Config: testModule(t, "empty"), - Changes: changes, - State: state, - Components: simpleMockComponentFactory(), - Schemas: simpleTestSchemas(), + Config: testModule(t, "empty"), + Changes: changes, + State: state, + Plugins: simpleMockPluginLibrary(), + Schemas: simpleTestSchemas(), } g, diags := b.Build(addrs.RootModuleInstance) @@ -341,11 +341,11 @@ func TestApplyGraphBuilder_destroyCount(t *testing.T) { ) b := &ApplyGraphBuilder{ - Config: testModule(t, "graph-builder-apply-count"), - Changes: changes, - Components: simpleMockComponentFactory(), - Schemas: simpleTestSchemas(), - State: state, + Config: testModule(t, 
"graph-builder-apply-count"), + Changes: changes, + Plugins: simpleMockPluginLibrary(), + Schemas: simpleTestSchemas(), + State: state, } g, err := b.Build(addrs.RootModuleInstance) @@ -404,11 +404,11 @@ func TestApplyGraphBuilder_moduleDestroy(t *testing.T) { ) b := &ApplyGraphBuilder{ - Config: testModule(t, "graph-builder-apply-module-destroy"), - Changes: changes, - Components: simpleMockComponentFactory(), - Schemas: simpleTestSchemas(), - State: state, + Config: testModule(t, "graph-builder-apply-module-destroy"), + Changes: changes, + Plugins: simpleMockPluginLibrary(), + Schemas: simpleTestSchemas(), + State: state, } g, err := b.Build(addrs.RootModuleInstance) @@ -442,10 +442,10 @@ func TestApplyGraphBuilder_targetModule(t *testing.T) { } b := &ApplyGraphBuilder{ - Config: testModule(t, "graph-builder-apply-target-module"), - Changes: changes, - Components: simpleMockComponentFactory(), - Schemas: simpleTestSchemas(), + Config: testModule(t, "graph-builder-apply-target-module"), + Changes: changes, + Plugins: simpleMockPluginLibrary(), + Schemas: simpleTestSchemas(), Targets: []addrs.Targetable{ addrs.RootModuleInstance.Child("child2", addrs.NoKey), }, @@ -539,11 +539,11 @@ func TestApplyGraphBuilder_updateFromOrphan(t *testing.T) { ) b := &ApplyGraphBuilder{ - Config: testModule(t, "graph-builder-apply-orphan-update"), - Changes: changes, - Components: simpleMockComponentFactory(), - Schemas: schemas, - State: state, + Config: testModule(t, "graph-builder-apply-orphan-update"), + Changes: changes, + Plugins: simpleMockPluginLibrary(), + Schemas: schemas, + State: state, } g, err := b.Build(addrs.RootModuleInstance) @@ -640,11 +640,11 @@ func TestApplyGraphBuilder_updateFromCBDOrphan(t *testing.T) { ) b := &ApplyGraphBuilder{ - Config: testModule(t, "graph-builder-apply-orphan-update"), - Changes: changes, - Components: simpleMockComponentFactory(), - Schemas: schemas, - State: state, + Config: testModule(t, "graph-builder-apply-orphan-update"), + Changes: changes, + Plugins: simpleMockPluginLibrary(), + Schemas: schemas, + State: state, } g, err := b.Build(addrs.RootModuleInstance) @@ -691,11 +691,11 @@ func TestApplyGraphBuilder_orphanedWithProvider(t *testing.T) { ) b := &ApplyGraphBuilder{ - Config: testModule(t, "graph-builder-orphan-alias"), - Changes: changes, - Components: simpleMockComponentFactory(), - Schemas: simpleTestSchemas(), - State: state, + Config: testModule(t, "graph-builder-orphan-alias"), + Changes: changes, + Plugins: simpleMockPluginLibrary(), + Schemas: simpleTestSchemas(), + State: state, } g, err := b.Build(addrs.RootModuleInstance) diff --git a/internal/terraform/graph_builder_destroy_plan.go b/internal/terraform/graph_builder_destroy_plan.go index 1973237b4bea..55bae82fdd35 100644 --- a/internal/terraform/graph_builder_destroy_plan.go +++ b/internal/terraform/graph_builder_destroy_plan.go @@ -23,9 +23,9 @@ type DestroyPlanGraphBuilder struct { // State is the current state State *states.State - // Components is a factory for the plug-in components (providers and + // Plugins is a library of plug-in components (providers and // provisioners) available for use. - Components contextComponentFactory + Plugins *contextPlugins // Schemas is the repository of schemas we will draw from to analyse // the configuration. 
diff --git a/internal/terraform/graph_builder_eval.go b/internal/terraform/graph_builder_eval.go index 65e663550652..f413dd7fea52 100644 --- a/internal/terraform/graph_builder_eval.go +++ b/internal/terraform/graph_builder_eval.go @@ -30,9 +30,9 @@ type EvalGraphBuilder struct { // State is the current state State *states.State - // Components is a factory for the plug-in components (providers and + // Plugins is a library of plug-in components (providers and // provisioners) available for use. - Components contextComponentFactory + Plugins *contextPlugins // Schemas is the repository of schemas we will draw from to analyse // the configuration. diff --git a/internal/terraform/graph_builder_import.go b/internal/terraform/graph_builder_import.go index bbef67713f22..d502b131ec69 100644 --- a/internal/terraform/graph_builder_import.go +++ b/internal/terraform/graph_builder_import.go @@ -17,8 +17,9 @@ type ImportGraphBuilder struct { // Module is a configuration to build the graph from. See ImportOpts.Config. Config *configs.Config - // Components is the factory for our available plugin components. - Components contextComponentFactory + // Plugins is a library of plug-in components (providers and + // provisioners) available for use. + Plugins *contextPlugins // Schemas is the repository of schemas we will draw from to analyse // the configuration. diff --git a/internal/terraform/graph_builder_plan.go b/internal/terraform/graph_builder_plan.go index 8d680162492e..e0a355d47f3c 100644 --- a/internal/terraform/graph_builder_plan.go +++ b/internal/terraform/graph_builder_plan.go @@ -28,9 +28,9 @@ type PlanGraphBuilder struct { // State is the current state State *states.State - // Components is a factory for the plug-in components (providers and + // Plugins is a library of plug-in components (providers and // provisioners) available for use. - Components contextComponentFactory + Plugins *contextPlugins // Schemas is the repository of schemas we will draw from to analyse // the configuration. 
diff --git a/internal/terraform/graph_builder_plan_test.go b/internal/terraform/graph_builder_plan_test.go index 6225e192e6c9..c5aa9a455914 100644 --- a/internal/terraform/graph_builder_plan_test.go +++ b/internal/terraform/graph_builder_plan_test.go @@ -26,16 +26,14 @@ func TestPlanGraphBuilder(t *testing.T) { }, } openstackProvider := mockProviderWithResourceTypeSchema("openstack_floating_ip", simpleTestSchema()) - components := &basicComponentFactory{ - providers: map[addrs.Provider]providers.Factory{ - addrs.NewDefaultProvider("aws"): providers.FactoryFixed(awsProvider), - addrs.NewDefaultProvider("openstack"): providers.FactoryFixed(openstackProvider), - }, - } + plugins := newContextPlugins(map[addrs.Provider]providers.Factory{ + addrs.NewDefaultProvider("aws"): providers.FactoryFixed(awsProvider), + addrs.NewDefaultProvider("openstack"): providers.FactoryFixed(openstackProvider), + }, nil) b := &PlanGraphBuilder{ - Config: testModule(t, "graph-builder-plan-basic"), - Components: components, + Config: testModule(t, "graph-builder-plan-basic"), + Plugins: plugins, Schemas: &Schemas{ Providers: map[addrs.Provider]*ProviderSchema{ addrs.NewDefaultProvider("aws"): awsProvider.ProviderSchema(), @@ -77,15 +75,13 @@ func TestPlanGraphBuilder_dynamicBlock(t *testing.T) { }, }, }) - components := &basicComponentFactory{ - providers: map[addrs.Provider]providers.Factory{ - addrs.NewDefaultProvider("test"): providers.FactoryFixed(provider), - }, - } + plugins := newContextPlugins(map[addrs.Provider]providers.Factory{ + addrs.NewDefaultProvider("test"): providers.FactoryFixed(provider), + }, nil) b := &PlanGraphBuilder{ - Config: testModule(t, "graph-builder-plan-dynblock"), - Components: components, + Config: testModule(t, "graph-builder-plan-dynblock"), + Plugins: plugins, Schemas: &Schemas{ Providers: map[addrs.Provider]*ProviderSchema{ addrs.NewDefaultProvider("test"): provider.ProviderSchema(), @@ -142,15 +138,13 @@ func TestPlanGraphBuilder_attrAsBlocks(t *testing.T) { }, }, }) - components := &basicComponentFactory{ - providers: map[addrs.Provider]providers.Factory{ - addrs.NewDefaultProvider("test"): providers.FactoryFixed(provider), - }, - } + plugins := newContextPlugins(map[addrs.Provider]providers.Factory{ + addrs.NewDefaultProvider("test"): providers.FactoryFixed(provider), + }, nil) b := &PlanGraphBuilder{ - Config: testModule(t, "graph-builder-plan-attr-as-blocks"), - Components: components, + Config: testModule(t, "graph-builder-plan-attr-as-blocks"), + Plugins: plugins, Schemas: &Schemas{ Providers: map[addrs.Provider]*ProviderSchema{ addrs.NewDefaultProvider("test"): provider.ProviderSchema(), @@ -194,9 +188,9 @@ test_thing.b (expand) func TestPlanGraphBuilder_targetModule(t *testing.T) { b := &PlanGraphBuilder{ - Config: testModule(t, "graph-builder-plan-target-module-provider"), - Components: simpleMockComponentFactory(), - Schemas: simpleTestSchemas(), + Config: testModule(t, "graph-builder-plan-target-module-provider"), + Plugins: simpleMockPluginLibrary(), + Schemas: simpleTestSchemas(), Targets: []addrs.Targetable{ addrs.RootModuleInstance.Child("child2", addrs.NoKey), }, @@ -216,15 +210,13 @@ func TestPlanGraphBuilder_targetModule(t *testing.T) { func TestPlanGraphBuilder_forEach(t *testing.T) { awsProvider := mockProviderWithResourceTypeSchema("aws_instance", simpleTestSchema()) - components := &basicComponentFactory{ - providers: map[addrs.Provider]providers.Factory{ - addrs.NewDefaultProvider("aws"): providers.FactoryFixed(awsProvider), - }, - } + plugins := 
newContextPlugins(map[addrs.Provider]providers.Factory{ + addrs.NewDefaultProvider("aws"): providers.FactoryFixed(awsProvider), + }, nil) b := &PlanGraphBuilder{ - Config: testModule(t, "plan-for-each"), - Components: components, + Config: testModule(t, "plan-for-each"), + Plugins: plugins, Schemas: &Schemas{ Providers: map[addrs.Provider]*ProviderSchema{ addrs.NewDefaultProvider("aws"): awsProvider.ProviderSchema(), diff --git a/internal/terraform/graph_walk_context.go b/internal/terraform/graph_walk_context.go index 164a2ba2a58c..3561ab983b55 100644 --- a/internal/terraform/graph_walk_context.go +++ b/internal/terraform/graph_walk_context.go @@ -91,7 +91,7 @@ func (w *ContextGraphWalker) EvalContext() EvalContext { Hooks: w.Context.hooks, InputValue: w.Context.uiInput, InstanceExpanderValue: w.InstanceExpander, - Components: w.Context.components, + Plugins: w.Context.plugins, Schemas: w.Schemas, MoveResultsValue: w.MoveResults, ProviderCache: w.providerCache, diff --git a/internal/terraform/schemas.go b/internal/terraform/schemas.go index 974cbb2eb441..7b7177dbd001 100644 --- a/internal/terraform/schemas.go +++ b/internal/terraform/schemas.go @@ -74,22 +74,22 @@ func (ss *Schemas) ProvisionerConfig(name string) *configschema.Block { // either misbehavior on the part of one of the providers or of the provider // protocol itself. When returned with errors, the returned schemas object is // still valid but may be incomplete. -func loadSchemas(config *configs.Config, state *states.State, components contextComponentFactory) (*Schemas, error) { +func loadSchemas(config *configs.Config, state *states.State, plugins *contextPlugins) (*Schemas, error) { schemas := &Schemas{ Providers: map[addrs.Provider]*ProviderSchema{}, Provisioners: map[string]*configschema.Block{}, } var diags tfdiags.Diagnostics - newDiags := loadProviderSchemas(schemas.Providers, config, state, components) + newDiags := loadProviderSchemas(schemas.Providers, config, state, plugins) diags = diags.Append(newDiags) - newDiags = loadProvisionerSchemas(schemas.Provisioners, config, components) + newDiags = loadProvisionerSchemas(schemas.Provisioners, config, plugins) diags = diags.Append(newDiags) return schemas, diags.Err() } -func loadProviderSchemas(schemas map[addrs.Provider]*ProviderSchema, config *configs.Config, state *states.State, components contextComponentFactory) tfdiags.Diagnostics { +func loadProviderSchemas(schemas map[addrs.Provider]*ProviderSchema, config *configs.Config, state *states.State, plugins *contextPlugins) tfdiags.Diagnostics { var diags tfdiags.Diagnostics ensure := func(fqn addrs.Provider) { @@ -100,7 +100,7 @@ func loadProviderSchemas(schemas map[addrs.Provider]*ProviderSchema, config *con } log.Printf("[TRACE] LoadSchemas: retrieving schema for provider type %q", name) - provider, err := components.ResourceProvider(fqn) + provider, err := plugins.NewProviderInstance(fqn) if err != nil { // We'll put a stub in the map so we won't re-attempt this on // future calls. 
@@ -191,7 +191,7 @@ func loadProviderSchemas(schemas map[addrs.Provider]*ProviderSchema, config *con return diags } -func loadProvisionerSchemas(schemas map[string]*configschema.Block, config *configs.Config, components contextComponentFactory) tfdiags.Diagnostics { +func loadProvisionerSchemas(schemas map[string]*configschema.Block, config *configs.Config, plugins *contextPlugins) tfdiags.Diagnostics { var diags tfdiags.Diagnostics ensure := func(name string) { @@ -200,7 +200,7 @@ func loadProvisionerSchemas(schemas map[string]*configschema.Block, config *conf } log.Printf("[TRACE] LoadSchemas: retrieving schema for provisioner %q", name) - provisioner, err := components.ResourceProvisioner(name) + provisioner, err := plugins.NewProvisionerInstance(name) if err != nil { // We'll put a stub in the map so we won't re-attempt this on // future calls. @@ -237,7 +237,7 @@ func loadProvisionerSchemas(schemas map[string]*configschema.Block, config *conf // Must also visit our child modules, recursively. for _, cc := range config.Children { - childDiags := loadProvisionerSchemas(schemas, cc, components) + childDiags := loadProvisionerSchemas(schemas, cc, plugins) diags = diags.Append(childDiags) } } diff --git a/internal/terraform/transform_destroy_cbd_test.go b/internal/terraform/transform_destroy_cbd_test.go index 4e4409623dd1..a66243f540b3 100644 --- a/internal/terraform/transform_destroy_cbd_test.go +++ b/internal/terraform/transform_destroy_cbd_test.go @@ -14,11 +14,11 @@ func cbdTestGraph(t *testing.T, mod string, changes *plans.Changes, state *state module := testModule(t, mod) applyBuilder := &ApplyGraphBuilder{ - Config: module, - Changes: changes, - Components: simpleMockComponentFactory(), - Schemas: simpleTestSchemas(), - State: state, + Config: module, + Changes: changes, + Plugins: simpleMockPluginLibrary(), + Schemas: simpleTestSchemas(), + State: state, } g, err := (&BasicGraphBuilder{ Steps: cbdTestSteps(applyBuilder.Steps()), From 2bf1de1f5dfa09bbe3daf9dd6e73016e7eaf791b Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Tue, 31 Aug 2021 11:27:07 -0700 Subject: [PATCH 050/644] core: Context.Schemas in terms of contextPlugins methods The responsibility for actually instantiating a single plugin and reading out its schema now belongs to the contextPlugins type, which memoizes the results by each plugin's unique identifier so that we can avoid retrieving the same schemas multiple times when working with the same context. This doesn't change the API of Context.Schemas but it does restore the spirit of an earlier version of terraform.Context which did all of the schema loading proactively inside terraform.NewContext. In an earlier commit we reduced the scope of terraform.NewContext, making schema loading a separate step, but in the process of doing that removed the effective memoization of the schema results that terraform.NewContext was providing. The memoization here will play out in a different way than before, because we'll be treating each plugin call as separate rather than proactively loading them all up front, but this is effectively the same because all of our operation methods on Context call Context.Schemas early in their work and thus end up forcing all of the necessary schemas to load up front nonetheless. 
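For illustration, a minimal sketch of the memoization behavior from a caller's point of view. This helper is not part of the change; it assumes it lives alongside contextPlugins in the internal/terraform package and that a suitable factories map is available:

    // exampleProviderSchemaMemoization is a hypothetical helper showing that
    // repeated lookups for the same provider address are served from the
    // in-memory cache, so the provider plugin is started (and closed) only once.
    func exampleProviderSchemaMemoization(factories map[addrs.Provider]providers.Factory) error {
    	plugins := newContextPlugins(factories, nil)
    	addr := addrs.NewDefaultProvider("aws")

    	// First call instantiates the provider, reads its schema, and caches it.
    	first, err := plugins.ProviderSchema(addr)
    	if err != nil {
    		// Typically this means the provider isn't installed yet; running
    		// "terraform init" is the usual fix.
    		return err
    	}

    	// Second call returns the cached *ProviderSchema without starting
    	// another plugin process.
    	second, err := plugins.ProviderSchema(addr)
    	if err != nil {
    		return err
    	}

    	if first != second {
    		return fmt.Errorf("expected the memoized schema to be returned")
    	}
    	return nil
    }
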
--- internal/terraform/context_plugins.go | 105 ++++++++++++++++++++++++- internal/terraform/schemas.go | 109 +++++--------------------- 2 files changed, 122 insertions(+), 92 deletions(-) diff --git a/internal/terraform/context_plugins.go b/internal/terraform/context_plugins.go index 2bd1964d3a66..5711badbd758 100644 --- a/internal/terraform/context_plugins.go +++ b/internal/terraform/context_plugins.go @@ -2,6 +2,7 @@ package terraform import ( "fmt" + "log" "sync" "github.com/hashicorp/terraform/internal/addrs" @@ -25,7 +26,7 @@ type contextPlugins struct { // it makes sense to preload all of them. providerSchemas map[addrs.Provider]*ProviderSchema provisionerSchemas map[string]*configschema.Block - schemasLock *sync.Mutex + schemasLock sync.Mutex } func newContextPlugins(providerFactories map[addrs.Provider]providers.Factory, provisionerFactories map[string]provisioners.Factory) *contextPlugins { @@ -60,3 +61,105 @@ func (cp *contextPlugins) NewProvisionerInstance(typ string) (provisioners.Inter return f() } + +// ProviderSchema uses a temporary instance of the provider with the given +// address to obtain the full schema for all aspects of that provider. +// +// ProviderSchema memoizes results by unique provider address, so it's fine +// to repeatedly call this method with the same address if various different +// parts of Terraform all need the same schema information. +func (cp *contextPlugins) ProviderSchema(addr addrs.Provider) (*ProviderSchema, error) { + cp.schemasLock.Lock() + defer cp.schemasLock.Unlock() + + if schema, ok := cp.providerSchemas[addr]; ok { + return schema, nil + } + + log.Printf("[TRACE] terraform.contextPlugins: Initializing provider %q to read its schema", addr) + + provider, err := cp.NewProviderInstance(addr) + if err != nil { + return nil, fmt.Errorf("failed to instantiate provider %q to obtain schema: %s", addr, err) + } + defer provider.Close() + + resp := provider.GetProviderSchema() + if resp.Diagnostics.HasErrors() { + return nil, fmt.Errorf("failed to retrieve schema from provider %q: %s", addr, resp.Diagnostics.Err()) + } + + s := &ProviderSchema{ + Provider: resp.Provider.Block, + ResourceTypes: make(map[string]*configschema.Block), + DataSources: make(map[string]*configschema.Block), + + ResourceTypeSchemaVersions: make(map[string]uint64), + } + + if resp.Provider.Version < 0 { + // We're not using the version numbers here yet, but we'll check + // for validity anyway in case we start using them in future. + return nil, fmt.Errorf("provider %s has invalid negative schema version for its configuration blocks,which is a bug in the provider ", addr) + } + + for t, r := range resp.ResourceTypes { + if err := r.Block.InternalValidate(); err != nil { + return nil, fmt.Errorf("provider %s has invalid schema for managed resource type %q, which is a bug in the provider: %q", addr, t, err) + } + s.ResourceTypes[t] = r.Block + s.ResourceTypeSchemaVersions[t] = uint64(r.Version) + if r.Version < 0 { + return nil, fmt.Errorf("provider %s has invalid negative schema version for managed resource type %q, which is a bug in the provider", addr, t) + } + } + + for t, d := range resp.DataSources { + if err := d.Block.InternalValidate(); err != nil { + return nil, fmt.Errorf("provider %s has invalid schema for data resource type %q, which is a bug in the provider: %q", addr, t, err) + } + s.DataSources[t] = d.Block + if d.Version < 0 { + // We're not using the version numbers here yet, but we'll check + // for validity anyway in case we start using them in future. 
+ return nil, fmt.Errorf("provider %s has invalid negative schema version for data resource type %q, which is a bug in the provider", addr, t) + } + } + + if resp.ProviderMeta.Block != nil { + s.ProviderMeta = resp.ProviderMeta.Block + } + + cp.providerSchemas[addr] = s + return s, nil +} + +// ProvisionerSchema uses a temporary instance of the provisioner with the +// given type name to obtain the schema for that provisioner's configuration. +// +// ProvisionerSchema memoizes results by provisioner type name, so it's fine +// to repeatedly call this method with the same name if various different +// parts of Terraform all need the same schema information. +func (cp *contextPlugins) ProvisionerSchema(typ string) (*configschema.Block, error) { + cp.schemasLock.Lock() + defer cp.schemasLock.Unlock() + + if schema, ok := cp.provisionerSchemas[typ]; ok { + return schema, nil + } + + log.Printf("[TRACE] terraform.contextPlugins: Initializing provisioner %q to read its schema", typ) + provisioner, err := cp.NewProvisionerInstance(typ) + if err != nil { + return nil, fmt.Errorf("failed to instantiate provisioner %q to obtain schema: %s", typ, err) + } + defer provisioner.Close() + + resp := provisioner.GetSchema() + if resp.Diagnostics.HasErrors() { + return nil, fmt.Errorf("failed to retrieve schema from provisioner %q: %s", typ, resp.Diagnostics.Err()) + } + + cp.provisionerSchemas[typ] = resp.Provisioner + return resp.Provisioner, nil +} diff --git a/internal/terraform/schemas.go b/internal/terraform/schemas.go index 7b7177dbd001..d09cc2cb2533 100644 --- a/internal/terraform/schemas.go +++ b/internal/terraform/schemas.go @@ -100,79 +100,23 @@ func loadProviderSchemas(schemas map[addrs.Provider]*ProviderSchema, config *con } log.Printf("[TRACE] LoadSchemas: retrieving schema for provider type %q", name) - provider, err := plugins.NewProviderInstance(fqn) + schema, err := plugins.ProviderSchema(fqn) if err != nil { // We'll put a stub in the map so we won't re-attempt this on - // future calls. + // future calls, which would then repeat the same error message + // multiple times. schemas[fqn] = &ProviderSchema{} diags = diags.Append( - fmt.Errorf("failed to instantiate provider %q to obtain schema: %s", name, err), + tfdiags.Sourceless( + tfdiags.Error, + "Failed to obtain provider schema", + fmt.Sprintf("Could not load the schema for provider %s: %s.", fqn, err), + ), ) return } - defer func() { - provider.Close() - }() - resp := provider.GetProviderSchema() - if resp.Diagnostics.HasErrors() { - // We'll put a stub in the map so we won't re-attempt this on - // future calls. - schemas[fqn] = &ProviderSchema{} - diags = diags.Append( - fmt.Errorf("failed to retrieve schema from provider %q: %s", name, resp.Diagnostics.Err()), - ) - return - } - - s := &ProviderSchema{ - Provider: resp.Provider.Block, - ResourceTypes: make(map[string]*configschema.Block), - DataSources: make(map[string]*configschema.Block), - - ResourceTypeSchemaVersions: make(map[string]uint64), - } - - if resp.Provider.Version < 0 { - // We're not using the version numbers here yet, but we'll check - // for validity anyway in case we start using them in future. 
- diags = diags.Append( - fmt.Errorf("invalid negative schema version provider configuration for provider %q", name), - ) - } - - for t, r := range resp.ResourceTypes { - if err := r.Block.InternalValidate(); err != nil { - diags = diags.Append(fmt.Errorf(errProviderSchemaInvalid, name, "resource", t, err)) - } - s.ResourceTypes[t] = r.Block - s.ResourceTypeSchemaVersions[t] = uint64(r.Version) - if r.Version < 0 { - diags = diags.Append( - fmt.Errorf("invalid negative schema version for resource type %s in provider %q", t, name), - ) - } - } - - for t, d := range resp.DataSources { - if err := d.Block.InternalValidate(); err != nil { - diags = diags.Append(fmt.Errorf(errProviderSchemaInvalid, name, "data source", t, err)) - } - s.DataSources[t] = d.Block - if d.Version < 0 { - // We're not using the version numbers here yet, but we'll check - // for validity anyway in case we start using them in future. - diags = diags.Append( - fmt.Errorf("invalid negative schema version for data source %s in provider %q", t, name), - ) - } - } - - schemas[fqn] = s - - if resp.ProviderMeta.Block != nil { - s.ProviderMeta = resp.ProviderMeta.Block - } + schemas[fqn] = schema } if config != nil { @@ -200,32 +144,23 @@ func loadProvisionerSchemas(schemas map[string]*configschema.Block, config *conf } log.Printf("[TRACE] LoadSchemas: retrieving schema for provisioner %q", name) - provisioner, err := plugins.NewProvisionerInstance(name) + schema, err := plugins.ProvisionerSchema(name) if err != nil { // We'll put a stub in the map so we won't re-attempt this on - // future calls. - schemas[name] = &configschema.Block{} - diags = diags.Append( - fmt.Errorf("failed to instantiate provisioner %q to obtain schema: %s", name, err), - ) - return - } - defer func() { - provisioner.Close() - }() - - resp := provisioner.GetSchema() - if resp.Diagnostics.HasErrors() { - // We'll put a stub in the map so we won't re-attempt this on - // future calls. + // future calls, which would then repeat the same error message + // multiple times. schemas[name] = &configschema.Block{} diags = diags.Append( - fmt.Errorf("failed to retrieve schema from provisioner %q: %s", name, resp.Diagnostics.Err()), + tfdiags.Sourceless( + tfdiags.Error, + "Failed to obtain provisioner schema", + fmt.Sprintf("Could not load the schema for provisioner %q: %s.", name, err), + ), ) return } - schemas[name] = resp.Provisioner + schemas[name] = schema } if config != nil { @@ -280,11 +215,3 @@ func (ps *ProviderSchema) SchemaForResourceType(mode addrs.ResourceMode, typeNam func (ps *ProviderSchema) SchemaForResourceAddr(addr addrs.Resource) (schema *configschema.Block, version uint64) { return ps.SchemaForResourceType(addr.Mode, addr.Type) } - -const errProviderSchemaInvalid = ` -Internal validation of the provider failed! This is always a bug with the -provider itself, and not a user issue. Please report this bug to the -maintainers of the %q provider: - -%s %s: %s -` From a59c2fe1b9c3b5c1742df4699708e1027cdba09e Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Tue, 31 Aug 2021 15:44:07 -0700 Subject: [PATCH 051/644] core: EvalContextBuiltin no longer has a "Schemas" By tolerating ProviderSchema and ProvisionerSchema potentially returning errors, we can slightly simplify EvalContextBuiltin by having it retrieve individual schemas when needed directly from the "Plugins" object. EvalContextBuiltin already needs to be holding a contextPlugins instance for other reasons anyway, so this allows us to get the same result with fewer moving parts. 
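To illustrate the resulting calling convention (a sketch only, not code from this change; the helper name is made up), callers of EvalContext.ProviderSchema now handle a possible error instead of relying on a pre-verified schema repository:

    // lookupProviderSchemaExample shows the error-handling pattern callers
    // follow now that ProviderSchema can fail: report the failure as a
    // diagnostic rather than assuming the schema was preloaded successfully.
    func lookupProviderSchemaExample(ctx EvalContext, addr addrs.AbsProviderConfig) (*ProviderSchema, tfdiags.Diagnostics) {
    	var diags tfdiags.Diagnostics

    	schema, err := ctx.ProviderSchema(addr)
    	if err != nil {
    		diags = diags.Append(fmt.Errorf("failed to read schema for %s: %w", addr, err))
    		return nil, diags
    	}
    	if schema == nil {
    		// Not all callers need a schema, but those that do treat a missing
    		// one as an error, as ConfigureProvider does in the diff below.
    		diags = diags.Append(fmt.Errorf("schema for %s is not available", addr))
    		return nil, diags
    	}
    	return schema, diags
    }
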
--- internal/terraform/eval_context.go | 4 +-- internal/terraform/eval_context_builtin.go | 28 +++++++++---------- internal/terraform/eval_context_mock.go | 10 ++++--- internal/terraform/eval_provider.go | 5 +++- internal/terraform/graph_walk_context.go | 1 - .../node_resource_abstract_instance.go | 9 +++++- internal/terraform/node_resource_validate.go | 7 +++-- 7 files changed, 38 insertions(+), 26 deletions(-) diff --git a/internal/terraform/eval_context.go b/internal/terraform/eval_context.go index 2cee1b0711d9..61b4f2448f88 100644 --- a/internal/terraform/eval_context.go +++ b/internal/terraform/eval_context.go @@ -53,7 +53,7 @@ type EvalContext interface { // // This method expects an _absolute_ provider configuration address, since // resources in one module are able to use providers from other modules. - ProviderSchema(addrs.AbsProviderConfig) *ProviderSchema + ProviderSchema(addrs.AbsProviderConfig) (*ProviderSchema, error) // CloseProvider closes provider connections that aren't needed anymore. // @@ -84,7 +84,7 @@ type EvalContext interface { // ProvisionerSchema retrieves the main configuration schema for a // particular provisioner, which must have already been initialized with // InitProvisioner. - ProvisionerSchema(string) *configschema.Block + ProvisionerSchema(string) (*configschema.Block, error) // CloseProvisioner closes all provisioner plugins. CloseProvisioners() error diff --git a/internal/terraform/eval_context_builtin.go b/internal/terraform/eval_context_builtin.go index 6bcf91663023..ea83e82799b3 100644 --- a/internal/terraform/eval_context_builtin.go +++ b/internal/terraform/eval_context_builtin.go @@ -44,15 +44,6 @@ type BuiltinEvalContext struct { // eval context. Evaluator *Evaluator - // Schemas is a repository of all of the schemas we should need to - // decode configuration blocks and expressions. This must be constructed by - // the caller to include schemas for all of the providers, resource types, - // data sources and provisioners used by the given configuration and - // state. - // - // This must not be mutated during evaluation. - Schemas *Schemas - // VariableValues contains the variable values across all modules. This // structure is shared across the entire containing context, and so it // may be accessed only when holding VariableValuesLock. @@ -62,7 +53,10 @@ type BuiltinEvalContext struct { VariableValues map[string]map[string]cty.Value VariableValuesLock *sync.Mutex - Plugins *contextPlugins + // Plugins is a library of plugin components (providers and provisioners) + // available for use during a graph walk. + Plugins *contextPlugins + Hooks []Hook InputValue UIInput ProviderCache map[string]providers.Interface @@ -152,8 +146,8 @@ func (ctx *BuiltinEvalContext) Provider(addr addrs.AbsProviderConfig) providers. 
return ctx.ProviderCache[addr.String()] } -func (ctx *BuiltinEvalContext) ProviderSchema(addr addrs.AbsProviderConfig) *ProviderSchema { - return ctx.Schemas.ProviderSchema(addr.Provider) +func (ctx *BuiltinEvalContext) ProviderSchema(addr addrs.AbsProviderConfig) (*ProviderSchema, error) { + return ctx.Plugins.ProviderSchema(addr.Provider) } func (ctx *BuiltinEvalContext) CloseProvider(addr addrs.AbsProviderConfig) error { @@ -184,7 +178,11 @@ func (ctx *BuiltinEvalContext) ConfigureProvider(addr addrs.AbsProviderConfig, c return diags } - providerSchema := ctx.ProviderSchema(addr) + providerSchema, err := ctx.ProviderSchema(addr) + if err != nil { + diags = diags.Append(fmt.Errorf("failed to read schema for %s: %s", addr, err)) + return diags + } if providerSchema == nil { diags = diags.Append(fmt.Errorf("schema for %s is not available", addr)) return diags @@ -249,8 +247,8 @@ func (ctx *BuiltinEvalContext) Provisioner(n string) (provisioners.Interface, er return p, nil } -func (ctx *BuiltinEvalContext) ProvisionerSchema(n string) *configschema.Block { - return ctx.Schemas.ProvisionerConfig(n) +func (ctx *BuiltinEvalContext) ProvisionerSchema(n string) (*configschema.Block, error) { + return ctx.Plugins.ProvisionerSchema(n) } func (ctx *BuiltinEvalContext) CloseProvisioners() error { diff --git a/internal/terraform/eval_context_mock.go b/internal/terraform/eval_context_mock.go index 0e4fd32d6903..52a06c3ee719 100644 --- a/internal/terraform/eval_context_mock.go +++ b/internal/terraform/eval_context_mock.go @@ -43,6 +43,7 @@ type MockEvalContext struct { ProviderSchemaCalled bool ProviderSchemaAddr addrs.AbsProviderConfig ProviderSchemaSchema *ProviderSchema + ProviderSchemaError error CloseProviderCalled bool CloseProviderAddr addrs.AbsProviderConfig @@ -71,6 +72,7 @@ type MockEvalContext struct { ProvisionerSchemaCalled bool ProvisionerSchemaName string ProvisionerSchemaSchema *configschema.Block + ProvisionerSchemaError error CloseProvisionersCalled bool @@ -173,10 +175,10 @@ func (c *MockEvalContext) Provider(addr addrs.AbsProviderConfig) providers.Inter return c.ProviderProvider } -func (c *MockEvalContext) ProviderSchema(addr addrs.AbsProviderConfig) *ProviderSchema { +func (c *MockEvalContext) ProviderSchema(addr addrs.AbsProviderConfig) (*ProviderSchema, error) { c.ProviderSchemaCalled = true c.ProviderSchemaAddr = addr - return c.ProviderSchemaSchema + return c.ProviderSchemaSchema, c.ProviderSchemaError } func (c *MockEvalContext) CloseProvider(addr addrs.AbsProviderConfig) error { @@ -214,10 +216,10 @@ func (c *MockEvalContext) Provisioner(n string) (provisioners.Interface, error) return c.ProvisionerProvisioner, nil } -func (c *MockEvalContext) ProvisionerSchema(n string) *configschema.Block { +func (c *MockEvalContext) ProvisionerSchema(n string) (*configschema.Block, error) { c.ProvisionerSchemaCalled = true c.ProvisionerSchemaName = n - return c.ProvisionerSchemaSchema + return c.ProvisionerSchemaSchema, c.ProvisionerSchemaError } func (c *MockEvalContext) CloseProvisioners() error { diff --git a/internal/terraform/eval_provider.go b/internal/terraform/eval_provider.go index 31f7b0453a09..a97f347e404f 100644 --- a/internal/terraform/eval_provider.go +++ b/internal/terraform/eval_provider.go @@ -51,6 +51,9 @@ func getProvider(ctx EvalContext, addr addrs.AbsProviderConfig) (providers.Inter } // Not all callers require a schema, so we will leave checking for a nil // schema to the callers. 
- schema := ctx.ProviderSchema(addr) + schema, err := ctx.ProviderSchema(addr) + if err != nil { + return nil, &ProviderSchema{}, fmt.Errorf("failed to read schema for provider %s: %w", addr, err) + } return provider, schema, nil } diff --git a/internal/terraform/graph_walk_context.go b/internal/terraform/graph_walk_context.go index 3561ab983b55..6fd790e92f3b 100644 --- a/internal/terraform/graph_walk_context.go +++ b/internal/terraform/graph_walk_context.go @@ -92,7 +92,6 @@ func (w *ContextGraphWalker) EvalContext() EvalContext { InputValue: w.Context.uiInput, InstanceExpanderValue: w.InstanceExpander, Plugins: w.Context.plugins, - Schemas: w.Schemas, MoveResultsValue: w.MoveResults, ProviderCache: w.providerCache, ProviderInputConfig: w.Context.providerInputConfig, diff --git a/internal/terraform/node_resource_abstract_instance.go b/internal/terraform/node_resource_abstract_instance.go index 9f07068e3901..e73939a8e32d 100644 --- a/internal/terraform/node_resource_abstract_instance.go +++ b/internal/terraform/node_resource_abstract_instance.go @@ -1809,7 +1809,14 @@ func (n *NodeAbstractResourceInstance) applyProvisioners(ctx EvalContext, state return diags.Append(err) } - schema := ctx.ProvisionerSchema(prov.Type) + schema, err := ctx.ProvisionerSchema(prov.Type) + if err != nil { + // This error probably won't be a great diagnostic, but in practice + // we typically catch this problem long before we get here, so + // it should be rare to return via this codepath. + diags = diags.Append(err) + return diags + } config, configDiags := evalScope(ctx, prov.Config, self, schema) diags = diags.Append(configDiags) diff --git a/internal/terraform/node_resource_validate.go b/internal/terraform/node_resource_validate.go index ebf597c066ef..cd4ec91fe80b 100644 --- a/internal/terraform/node_resource_validate.go +++ b/internal/terraform/node_resource_validate.go @@ -77,9 +77,12 @@ func (n *NodeValidatableResource) validateProvisioner(ctx EvalContext, p *config if provisioner == nil { return diags.Append(fmt.Errorf("provisioner %s not initialized", p.Type)) } - provisionerSchema := ctx.ProvisionerSchema(p.Type) + provisionerSchema, err := ctx.ProvisionerSchema(p.Type) + if err != nil { + return diags.Append(fmt.Errorf("failed to read schema for provisioner %s: %s", p.Type, err)) + } if provisionerSchema == nil { - return diags.Append(fmt.Errorf("provisioner %s not initialized", p.Type)) + return diags.Append(fmt.Errorf("provisioner %s has no schema", p.Type)) } // Validate the provisioner's own config first From 38ec730b0e35c5ddd63d64690e123ef6d26371c3 Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Tue, 31 Aug 2021 16:36:27 -0700 Subject: [PATCH 052/644] core: Opportunistic schema loading during graph construction Previously the graph builders all expected to be given a full manifest of all of the plugin component schemas that they could need during their analysis work. That made sense when terraform.NewContext would always proactively load all of the schemas before doing any other work, but we now have a load-as-needed strategy for schemas. We'll now have the graph builders use the contextPlugins object they each already hold to retrieve individual schemas when needed. This avoids the need to prepare a redundant data structure to pass alongside the contextPlugins object, and leans on the memoization behavior inside contextPlugins to preserve the old behavior of loading each provider's schema only once. 
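As a sketch of the load-as-needed pattern this enables (illustrative only; the function below is hypothetical, though contextPlugins.ResourceTypeSchema is added later in this patch), a graph-building step can ask for exactly the schema it needs at the moment it needs it:

    // resourceSchemaOnDemandExample shows a graph-construction step looking up
    // a single resource type schema through contextPlugins instead of consulting
    // a pre-assembled Schemas manifest. The lookup is memoized, so asking for
    // the same provider's schema from many nodes still loads it only once.
    func resourceSchemaOnDemandExample(plugins *contextPlugins, providerAddr addrs.Provider, resource addrs.Resource) (*configschema.Block, uint64, error) {
    	schema, version, err := plugins.ResourceTypeSchema(providerAddr, resource.Mode, resource.Type)
    	if err != nil {
    		return nil, 0, fmt.Errorf("failed to read schema for %s from provider %s: %w", resource, providerAddr, err)
    	}
    	if schema == nil {
    		return nil, 0, fmt.Errorf("provider %s does not support resource type %q", providerAddr, resource.Type)
    	}
    	return schema, version, nil
    }
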
--- internal/terraform/context_apply.go | 13 ++---- internal/terraform/context_eval.go | 1 - internal/terraform/context_import.go | 1 - internal/terraform/context_plan.go | 3 -- internal/terraform/context_plugins.go | 34 ++++++++++++++ internal/terraform/context_validate.go | 1 - internal/terraform/context_validate_test.go | 1 - internal/terraform/graph_builder_apply.go | 16 +++---- .../terraform/graph_builder_apply_test.go | 10 ----- .../terraform/graph_builder_destroy_plan.go | 9 +--- internal/terraform/graph_builder_eval.go | 6 +-- internal/terraform/graph_builder_import.go | 6 +-- internal/terraform/graph_builder_plan.go | 6 +-- internal/terraform/graph_builder_plan_test.go | 22 --------- internal/terraform/schemas_test.go | 45 +++++++++++++++++++ internal/terraform/transform_attach_schema.go | 22 ++++++--- internal/terraform/transform_destroy_cbd.go | 5 --- .../terraform/transform_destroy_cbd_test.go | 1 - internal/terraform/transform_destroy_edge.go | 5 --- .../terraform/transform_destroy_edge_test.go | 18 +++----- .../transform_transitive_reduction_test.go | 28 ++++++------ 21 files changed, 126 insertions(+), 127 deletions(-) diff --git a/internal/terraform/context_apply.go b/internal/terraform/context_apply.go index 4d5dc890f754..ff32074ec8b2 100644 --- a/internal/terraform/context_apply.go +++ b/internal/terraform/context_apply.go @@ -32,7 +32,7 @@ func (c *Context) Apply(plan *plans.Plan, config *configs.Config) (*states.State log.Printf("[DEBUG] Building and walking apply graph for %s plan", plan.UIMode) - graph, operation, moreDiags := c.applyGraph(plan, config, schemas, true) + graph, operation, moreDiags := c.applyGraph(plan, config, true) if moreDiags.HasErrors() { return nil, diags } @@ -90,13 +90,12 @@ Note that the -target option is not suitable for routine use, and is provided on return newState, diags } -func (c *Context) applyGraph(plan *plans.Plan, config *configs.Config, schemas *Schemas, validate bool) (*Graph, walkOperation, tfdiags.Diagnostics) { +func (c *Context) applyGraph(plan *plans.Plan, config *configs.Config, validate bool) (*Graph, walkOperation, tfdiags.Diagnostics) { graph, diags := (&ApplyGraphBuilder{ Config: config, Changes: plan.Changes, State: plan.PriorState, Plugins: c.plugins, - Schemas: schemas, Targets: plan.TargetAddrs, ForceReplace: plan.ForceReplaceAddrs, Validate: validate, @@ -130,13 +129,7 @@ func (c *Context) ApplyGraphForUI(plan *plans.Plan, config *configs.Config) (*Gr var diags tfdiags.Diagnostics - schemas, moreDiags := c.Schemas(config, plan.PriorState) - diags = diags.Append(moreDiags) - if diags.HasErrors() { - return nil, diags - } - - graph, _, moreDiags := c.applyGraph(plan, config, schemas, false) + graph, _, moreDiags := c.applyGraph(plan, config, false) diags = diags.Append(moreDiags) return graph, diags } diff --git a/internal/terraform/context_eval.go b/internal/terraform/context_eval.go index ad3deeee1d9d..8201db4d6153 100644 --- a/internal/terraform/context_eval.go +++ b/internal/terraform/context_eval.go @@ -69,7 +69,6 @@ func (c *Context) Eval(config *configs.Config, state *states.State, moduleAddr a Config: config, State: state, Plugins: c.plugins, - Schemas: schemas, }).Build(addrs.RootModuleInstance) diags = diags.Append(moreDiags) if moreDiags.HasErrors() { diff --git a/internal/terraform/context_import.go b/internal/terraform/context_import.go index 8675e06b5f56..f4bc6d809b83 100644 --- a/internal/terraform/context_import.go +++ b/internal/terraform/context_import.go @@ -64,7 +64,6 @@ func (c *Context) 
Import(config *configs.Config, prevRunState *states.State, opt ImportTargets: opts.Targets, Config: config, Plugins: c.plugins, - Schemas: schemas, } // Build the graph diff --git a/internal/terraform/context_plan.go b/internal/terraform/context_plan.go index 9f9170c8aafc..208ee771c7c4 100644 --- a/internal/terraform/context_plan.go +++ b/internal/terraform/context_plan.go @@ -372,7 +372,6 @@ func (c *Context) planGraph(config *configs.Config, prevRunState *states.State, Config: config, State: prevRunState, Plugins: c.plugins, - Schemas: schemas, Targets: opts.Targets, ForceReplace: opts.ForceReplace, Validate: validate, @@ -384,7 +383,6 @@ func (c *Context) planGraph(config *configs.Config, prevRunState *states.State, Config: config, State: prevRunState, Plugins: c.plugins, - Schemas: schemas, Targets: opts.Targets, Validate: validate, skipRefresh: opts.SkipRefresh, @@ -396,7 +394,6 @@ func (c *Context) planGraph(config *configs.Config, prevRunState *states.State, Config: config, State: prevRunState, Plugins: c.plugins, - Schemas: schemas, Targets: opts.Targets, Validate: validate, skipRefresh: opts.SkipRefresh, diff --git a/internal/terraform/context_plugins.go b/internal/terraform/context_plugins.go index 5711badbd758..4fbdf84d0a44 100644 --- a/internal/terraform/context_plugins.go +++ b/internal/terraform/context_plugins.go @@ -134,6 +134,40 @@ func (cp *contextPlugins) ProviderSchema(addr addrs.Provider) (*ProviderSchema, return s, nil } +// ProviderConfigSchema is a helper wrapper around ProviderSchema which first +// reads the full schema of the given provider and then extracts just the +// provider's configuration schema, which defines what's expected in a +// "provider" block in the configuration when configuring this provider. +func (cp *contextPlugins) ProviderConfigSchema(providerAddr addrs.Provider) (*configschema.Block, error) { + providerSchema, err := cp.ProviderSchema(providerAddr) + if err != nil { + return nil, err + } + + return providerSchema.Provider, nil +} + +// ResourceTypeSchema is a helper wrapper around ProviderSchema which first +// reads the schema of the given provider and then tries to find the schema +// for the resource type of the given resource mode in that provider. +// +// ResourceTypeSchema will return an error if the provider schema lookup +// fails, but will return nil if the provider schema lookup succeeds but then +// the provider doesn't have a resource of the requested type. +// +// Managed resource types have versioned schemas, so the second return value +// is the current schema version number for the requested resource. The version +// is irrelevant for other resource modes. +func (cp *contextPlugins) ResourceTypeSchema(providerAddr addrs.Provider, resourceMode addrs.ResourceMode, resourceType string) (*configschema.Block, uint64, error) { + providerSchema, err := cp.ProviderSchema(providerAddr) + if err != nil { + return nil, 0, err + } + + schema, version := providerSchema.SchemaForResourceType(resourceMode, resourceType) + return schema, version, nil +} + // ProvisionerSchema uses a temporary instance of the provisioner with the // given type name to obtain the schema for that provisioner's configuration. 
// diff --git a/internal/terraform/context_validate.go b/internal/terraform/context_validate.go index f95127895231..f5aa8540c26c 100644 --- a/internal/terraform/context_validate.go +++ b/internal/terraform/context_validate.go @@ -46,7 +46,6 @@ func (c *Context) Validate(config *configs.Config) tfdiags.Diagnostics { graph, moreDiags := ValidateGraphBuilder(&PlanGraphBuilder{ Config: config, Plugins: c.plugins, - Schemas: schemas, Validate: true, State: states.NewState(), }).Build(addrs.RootModuleInstance) diff --git a/internal/terraform/context_validate_test.go b/internal/terraform/context_validate_test.go index e4eee13b4303..0c5a2a3a5379 100644 --- a/internal/terraform/context_validate_test.go +++ b/internal/terraform/context_validate_test.go @@ -1201,7 +1201,6 @@ func TestContext2Validate_PlanGraphBuilder(t *testing.T) { Config: fixture.Config, State: states.NewState(), Plugins: c.plugins, - Schemas: schemas, }).Build(addrs.RootModuleInstance) if diags.HasErrors() { t.Fatalf("errors from PlanGraphBuilder: %s", diags.Err()) diff --git a/internal/terraform/graph_builder_apply.go b/internal/terraform/graph_builder_apply.go index ca8e5777fa0c..94f1e7699fb5 100644 --- a/internal/terraform/graph_builder_apply.go +++ b/internal/terraform/graph_builder_apply.go @@ -30,10 +30,6 @@ type ApplyGraphBuilder struct { // provisioners) available for use. Plugins *contextPlugins - // Schemas is the repository of schemas we will draw from to analyse - // the configuration. - Schemas *Schemas - // Targets are resources to target. This is only required to make sure // unnecessary outputs aren't included in the apply graph. The plan // builder successfully handles targeting resources. In the future, @@ -123,7 +119,7 @@ func (b *ApplyGraphBuilder) Steps() []GraphTransformer { // Must attach schemas before ReferenceTransformer so that we can // analyze the configuration to find references. - &AttachSchemaTransformer{Schemas: b.Schemas, Config: b.Config}, + &AttachSchemaTransformer{Plugins: b.Plugins, Config: b.Config}, // Create expansion nodes for all of the module calls. 
This must // come after all other transformers that create nodes representing @@ -140,14 +136,12 @@ func (b *ApplyGraphBuilder) Steps() []GraphTransformer { // Destruction ordering &DestroyEdgeTransformer{ - Config: b.Config, - State: b.State, - Schemas: b.Schemas, + Config: b.Config, + State: b.State, }, &CBDEdgeTransformer{ - Config: b.Config, - State: b.State, - Schemas: b.Schemas, + Config: b.Config, + State: b.State, }, // We need to remove configuration nodes that are not used at all, as diff --git a/internal/terraform/graph_builder_apply_test.go b/internal/terraform/graph_builder_apply_test.go index 9aaa1650d859..e1aba8d2c136 100644 --- a/internal/terraform/graph_builder_apply_test.go +++ b/internal/terraform/graph_builder_apply_test.go @@ -49,7 +49,6 @@ func TestApplyGraphBuilder(t *testing.T) { Config: testModule(t, "graph-builder-apply-basic"), Changes: changes, Plugins: simpleMockPluginLibrary(), - Schemas: simpleTestSchemas(), } g, err := b.Build(addrs.RootModuleInstance) @@ -113,7 +112,6 @@ func TestApplyGraphBuilder_depCbd(t *testing.T) { Config: testModule(t, "graph-builder-apply-dep-cbd"), Changes: changes, Plugins: simpleMockPluginLibrary(), - Schemas: simpleTestSchemas(), State: state, } @@ -187,7 +185,6 @@ func TestApplyGraphBuilder_doubleCBD(t *testing.T) { Config: testModule(t, "graph-builder-apply-double-cbd"), Changes: changes, Plugins: simpleMockPluginLibrary(), - Schemas: simpleTestSchemas(), } g, err := b.Build(addrs.RootModuleInstance) @@ -282,7 +279,6 @@ func TestApplyGraphBuilder_destroyStateOnly(t *testing.T) { Changes: changes, State: state, Plugins: simpleMockPluginLibrary(), - Schemas: simpleTestSchemas(), } g, diags := b.Build(addrs.RootModuleInstance) @@ -344,7 +340,6 @@ func TestApplyGraphBuilder_destroyCount(t *testing.T) { Config: testModule(t, "graph-builder-apply-count"), Changes: changes, Plugins: simpleMockPluginLibrary(), - Schemas: simpleTestSchemas(), State: state, } @@ -407,7 +402,6 @@ func TestApplyGraphBuilder_moduleDestroy(t *testing.T) { Config: testModule(t, "graph-builder-apply-module-destroy"), Changes: changes, Plugins: simpleMockPluginLibrary(), - Schemas: simpleTestSchemas(), State: state, } @@ -445,7 +439,6 @@ func TestApplyGraphBuilder_targetModule(t *testing.T) { Config: testModule(t, "graph-builder-apply-target-module"), Changes: changes, Plugins: simpleMockPluginLibrary(), - Schemas: simpleTestSchemas(), Targets: []addrs.Targetable{ addrs.RootModuleInstance.Child("child2", addrs.NoKey), }, @@ -542,7 +535,6 @@ func TestApplyGraphBuilder_updateFromOrphan(t *testing.T) { Config: testModule(t, "graph-builder-apply-orphan-update"), Changes: changes, Plugins: simpleMockPluginLibrary(), - Schemas: schemas, State: state, } @@ -643,7 +635,6 @@ func TestApplyGraphBuilder_updateFromCBDOrphan(t *testing.T) { Config: testModule(t, "graph-builder-apply-orphan-update"), Changes: changes, Plugins: simpleMockPluginLibrary(), - Schemas: schemas, State: state, } @@ -694,7 +685,6 @@ func TestApplyGraphBuilder_orphanedWithProvider(t *testing.T) { Config: testModule(t, "graph-builder-orphan-alias"), Changes: changes, Plugins: simpleMockPluginLibrary(), - Schemas: simpleTestSchemas(), State: state, } diff --git a/internal/terraform/graph_builder_destroy_plan.go b/internal/terraform/graph_builder_destroy_plan.go index 55bae82fdd35..0bac6305e08f 100644 --- a/internal/terraform/graph_builder_destroy_plan.go +++ b/internal/terraform/graph_builder_destroy_plan.go @@ -27,10 +27,6 @@ type DestroyPlanGraphBuilder struct { // provisioners) available for use. 
Plugins *contextPlugins - // Schemas is the repository of schemas we will draw from to analyse - // the configuration. - Schemas *Schemas - // Targets are resources to target Targets []addrs.Targetable @@ -99,9 +95,8 @@ func (b *DestroyPlanGraphBuilder) Steps() []GraphTransformer { // Destruction ordering. We require this only so that // targeting below will prune the correct things. &DestroyEdgeTransformer{ - Config: b.Config, - State: b.State, - Schemas: b.Schemas, + Config: b.Config, + State: b.State, }, &TargetsTransformer{Targets: b.Targets}, diff --git a/internal/terraform/graph_builder_eval.go b/internal/terraform/graph_builder_eval.go index f413dd7fea52..ee9d6b8e83de 100644 --- a/internal/terraform/graph_builder_eval.go +++ b/internal/terraform/graph_builder_eval.go @@ -33,10 +33,6 @@ type EvalGraphBuilder struct { // Plugins is a library of plug-in components (providers and // provisioners) available for use. Plugins *contextPlugins - - // Schemas is the repository of schemas we will draw from to analyse - // the configuration. - Schemas *Schemas } // See GraphBuilder @@ -79,7 +75,7 @@ func (b *EvalGraphBuilder) Steps() []GraphTransformer { // Must attach schemas before ReferenceTransformer so that we can // analyze the configuration to find references. - &AttachSchemaTransformer{Schemas: b.Schemas, Config: b.Config}, + &AttachSchemaTransformer{Plugins: b.Plugins, Config: b.Config}, // Create expansion nodes for all of the module calls. This must // come after all other transformers that create nodes representing diff --git a/internal/terraform/graph_builder_import.go b/internal/terraform/graph_builder_import.go index d502b131ec69..9910354cf5f9 100644 --- a/internal/terraform/graph_builder_import.go +++ b/internal/terraform/graph_builder_import.go @@ -20,10 +20,6 @@ type ImportGraphBuilder struct { // Plugins is a library of plug-in components (providers and // provisioners) available for use. Plugins *contextPlugins - - // Schemas is the repository of schemas we will draw from to analyse - // the configuration. - Schemas *Schemas } // Build builds the graph according to the steps returned by Steps. @@ -72,7 +68,7 @@ func (b *ImportGraphBuilder) Steps() []GraphTransformer { // Must attach schemas before ReferenceTransformer so that we can // analyze the configuration to find references. - &AttachSchemaTransformer{Schemas: b.Schemas, Config: b.Config}, + &AttachSchemaTransformer{Plugins: b.Plugins, Config: b.Config}, // Create expansion nodes for all of the module calls. This must // come after all other transformers that create nodes representing diff --git a/internal/terraform/graph_builder_plan.go b/internal/terraform/graph_builder_plan.go index e0a355d47f3c..b267f9c428c7 100644 --- a/internal/terraform/graph_builder_plan.go +++ b/internal/terraform/graph_builder_plan.go @@ -32,10 +32,6 @@ type PlanGraphBuilder struct { // provisioners) available for use. Plugins *contextPlugins - // Schemas is the repository of schemas we will draw from to analyse - // the configuration. - Schemas *Schemas - // Targets are resources to target Targets []addrs.Targetable @@ -137,7 +133,7 @@ func (b *PlanGraphBuilder) Steps() []GraphTransformer { // Must attach schemas before ReferenceTransformer so that we can // analyze the configuration to find references. - &AttachSchemaTransformer{Schemas: b.Schemas, Config: b.Config}, + &AttachSchemaTransformer{Plugins: b.Plugins, Config: b.Config}, // Create expansion nodes for all of the module calls. 
This must // come after all other transformers that create nodes representing diff --git a/internal/terraform/graph_builder_plan_test.go b/internal/terraform/graph_builder_plan_test.go index c5aa9a455914..689f9faff3db 100644 --- a/internal/terraform/graph_builder_plan_test.go +++ b/internal/terraform/graph_builder_plan_test.go @@ -34,12 +34,6 @@ func TestPlanGraphBuilder(t *testing.T) { b := &PlanGraphBuilder{ Config: testModule(t, "graph-builder-plan-basic"), Plugins: plugins, - Schemas: &Schemas{ - Providers: map[addrs.Provider]*ProviderSchema{ - addrs.NewDefaultProvider("aws"): awsProvider.ProviderSchema(), - addrs.NewDefaultProvider("openstack"): openstackProvider.ProviderSchema(), - }, - }, } g, err := b.Build(addrs.RootModuleInstance) @@ -82,11 +76,6 @@ func TestPlanGraphBuilder_dynamicBlock(t *testing.T) { b := &PlanGraphBuilder{ Config: testModule(t, "graph-builder-plan-dynblock"), Plugins: plugins, - Schemas: &Schemas{ - Providers: map[addrs.Provider]*ProviderSchema{ - addrs.NewDefaultProvider("test"): provider.ProviderSchema(), - }, - }, } g, err := b.Build(addrs.RootModuleInstance) @@ -145,11 +134,6 @@ func TestPlanGraphBuilder_attrAsBlocks(t *testing.T) { b := &PlanGraphBuilder{ Config: testModule(t, "graph-builder-plan-attr-as-blocks"), Plugins: plugins, - Schemas: &Schemas{ - Providers: map[addrs.Provider]*ProviderSchema{ - addrs.NewDefaultProvider("test"): provider.ProviderSchema(), - }, - }, } g, err := b.Build(addrs.RootModuleInstance) @@ -190,7 +174,6 @@ func TestPlanGraphBuilder_targetModule(t *testing.T) { b := &PlanGraphBuilder{ Config: testModule(t, "graph-builder-plan-target-module-provider"), Plugins: simpleMockPluginLibrary(), - Schemas: simpleTestSchemas(), Targets: []addrs.Targetable{ addrs.RootModuleInstance.Child("child2", addrs.NoKey), }, @@ -217,11 +200,6 @@ func TestPlanGraphBuilder_forEach(t *testing.T) { b := &PlanGraphBuilder{ Config: testModule(t, "plan-for-each"), Plugins: plugins, - Schemas: &Schemas{ - Providers: map[addrs.Provider]*ProviderSchema{ - addrs.NewDefaultProvider("aws"): awsProvider.ProviderSchema(), - }, - }, } g, err := b.Build(addrs.RootModuleInstance) diff --git a/internal/terraform/schemas_test.go b/internal/terraform/schemas_test.go index 00b6438eebab..044b795a50a9 100644 --- a/internal/terraform/schemas_test.go +++ b/internal/terraform/schemas_test.go @@ -3,6 +3,7 @@ package terraform import ( "github.com/hashicorp/terraform/internal/addrs" "github.com/hashicorp/terraform/internal/configs/configschema" + "github.com/hashicorp/terraform/internal/providers" ) func simpleTestSchemas() *Schemas { @@ -18,3 +19,47 @@ func simpleTestSchemas() *Schemas { }, } } + +// schemaOnlyProvidersForTesting is a testing helper that constructs a +// plugin library that contains a set of providers that only know how to +// return schema, and will exhibit undefined behavior if used for any other +// purpose. +// +// The intended use for this is in testing components that use schemas to +// drive other behavior, such as reference analysis during graph construction, +// but that don't actually need to interact with providers otherwise. 
+func schemaOnlyProvidersForTesting(schemas map[addrs.Provider]*ProviderSchema) *contextPlugins { + factories := make(map[addrs.Provider]providers.Factory, len(schemas)) + + for providerAddr, schema := range schemas { + + resp := &providers.GetProviderSchemaResponse{ + Provider: providers.Schema{ + Block: schema.Provider, + }, + ResourceTypes: make(map[string]providers.Schema), + DataSources: make(map[string]providers.Schema), + } + for t, tSchema := range schema.ResourceTypes { + resp.ResourceTypes[t] = providers.Schema{ + Block: tSchema, + Version: int64(schema.ResourceTypeSchemaVersions[t]), + } + } + for t, tSchema := range schema.DataSources { + resp.DataSources[t] = providers.Schema{ + Block: tSchema, + } + } + + provider := &MockProvider{ + GetProviderSchemaResponse: resp, + } + + factories[providerAddr] = func() (providers.Interface, error) { + return provider, nil + } + } + + return newContextPlugins(factories, nil) +} diff --git a/internal/terraform/transform_attach_schema.go b/internal/terraform/transform_attach_schema.go index b013931f1816..8f7a59083348 100644 --- a/internal/terraform/transform_attach_schema.go +++ b/internal/terraform/transform_attach_schema.go @@ -43,15 +43,15 @@ type GraphNodeAttachProvisionerSchema interface { // GraphNodeAttachProvisionerSchema, looks up the needed schemas for each // and then passes them to a method implemented by the node. type AttachSchemaTransformer struct { - Schemas *Schemas + Plugins *contextPlugins Config *configs.Config } func (t *AttachSchemaTransformer) Transform(g *Graph) error { - if t.Schemas == nil { + if t.Plugins == nil { // Should never happen with a reasonable caller, but we'll return a // proper error here anyway so that we'll fail gracefully. - return fmt.Errorf("AttachSchemaTransformer used with nil Schemas") + return fmt.Errorf("AttachSchemaTransformer used with nil Plugins") } for _, v := range g.Vertices() { @@ -62,7 +62,10 @@ func (t *AttachSchemaTransformer) Transform(g *Graph) error { typeName := addr.Resource.Type providerFqn := tv.Provider() - schema, version := t.Schemas.ResourceTypeConfig(providerFqn, mode, typeName) + schema, version, err := t.Plugins.ResourceTypeSchema(providerFqn, mode, typeName) + if err != nil { + return fmt.Errorf("failed to read schema for %s in %s: %s", addr, providerFqn, err) + } if schema == nil { log.Printf("[ERROR] AttachSchemaTransformer: No resource schema available for %s", addr) continue @@ -73,8 +76,10 @@ func (t *AttachSchemaTransformer) Transform(g *Graph) error { if tv, ok := v.(GraphNodeAttachProviderConfigSchema); ok { providerAddr := tv.ProviderAddr() - schema := t.Schemas.ProviderConfig(providerAddr.Provider) - + schema, err := t.Plugins.ProviderConfigSchema(providerAddr.Provider) + if err != nil { + return fmt.Errorf("failed to read provider configuration schema for %s: %s", providerAddr.Provider, err) + } if schema == nil { log.Printf("[ERROR] AttachSchemaTransformer: No provider config schema available for %s", providerAddr) continue @@ -86,7 +91,10 @@ func (t *AttachSchemaTransformer) Transform(g *Graph) error { if tv, ok := v.(GraphNodeAttachProvisionerSchema); ok { names := tv.ProvisionedBy() for _, name := range names { - schema := t.Schemas.ProvisionerConfig(name) + schema, err := t.Plugins.ProvisionerSchema(name) + if err != nil { + return fmt.Errorf("failed to read provisioner configuration schema for %q: %s", name, err) + } if schema == nil { log.Printf("[ERROR] AttachSchemaTransformer: No schema available for provisioner %q on %q", name, dag.VertexName(v)) 
continue diff --git a/internal/terraform/transform_destroy_cbd.go b/internal/terraform/transform_destroy_cbd.go index 5650d1a92d71..cc6c2c15db0a 100644 --- a/internal/terraform/transform_destroy_cbd.go +++ b/internal/terraform/transform_destroy_cbd.go @@ -115,11 +115,6 @@ type CBDEdgeTransformer struct { // any way possible. Either can be nil if not availabile. Config *configs.Config State *states.State - - // If configuration is present then Schemas is required in order to - // obtain schema information from providers and provisioners so we can - // properly resolve implicit dependencies. - Schemas *Schemas } func (t *CBDEdgeTransformer) Transform(g *Graph) error { diff --git a/internal/terraform/transform_destroy_cbd_test.go b/internal/terraform/transform_destroy_cbd_test.go index a66243f540b3..629ca54778be 100644 --- a/internal/terraform/transform_destroy_cbd_test.go +++ b/internal/terraform/transform_destroy_cbd_test.go @@ -17,7 +17,6 @@ func cbdTestGraph(t *testing.T, mod string, changes *plans.Changes, state *state Config: module, Changes: changes, Plugins: simpleMockPluginLibrary(), - Schemas: simpleTestSchemas(), State: state, } g, err := (&BasicGraphBuilder{ diff --git a/internal/terraform/transform_destroy_edge.go b/internal/terraform/transform_destroy_edge.go index 83ebb075868d..521acced02e7 100644 --- a/internal/terraform/transform_destroy_edge.go +++ b/internal/terraform/transform_destroy_edge.go @@ -45,11 +45,6 @@ type DestroyEdgeTransformer struct { // to determine what a destroy node depends on. Any of these can be nil. Config *configs.Config State *states.State - - // If configuration is present then Schemas is required in order to - // obtain schema information from providers and provisioners in order - // to properly resolve implicit dependencies. 
- Schemas *Schemas } func (t *DestroyEdgeTransformer) Transform(g *Graph) error { diff --git a/internal/terraform/transform_destroy_edge_test.go b/internal/terraform/transform_destroy_edge_test.go index 902baec4f3ae..a5176a3cc7d3 100644 --- a/internal/terraform/transform_destroy_edge_test.go +++ b/internal/terraform/transform_destroy_edge_test.go @@ -38,8 +38,7 @@ func TestDestroyEdgeTransformer_basic(t *testing.T) { } tf := &DestroyEdgeTransformer{ - Config: testModule(t, "transform-destroy-edge-basic"), - Schemas: simpleTestSchemas(), + Config: testModule(t, "transform-destroy-edge-basic"), } if err := tf.Transform(&g); err != nil { t.Fatalf("err: %s", err) @@ -95,8 +94,7 @@ func TestDestroyEdgeTransformer_multi(t *testing.T) { } tf := &DestroyEdgeTransformer{ - Config: testModule(t, "transform-destroy-edge-multi"), - Schemas: simpleTestSchemas(), + Config: testModule(t, "transform-destroy-edge-multi"), } if err := tf.Transform(&g); err != nil { t.Fatalf("err: %s", err) @@ -113,8 +111,7 @@ func TestDestroyEdgeTransformer_selfRef(t *testing.T) { g := Graph{Path: addrs.RootModuleInstance} g.Add(testDestroyNode("test_object.A")) tf := &DestroyEdgeTransformer{ - Config: testModule(t, "transform-destroy-edge-self-ref"), - Schemas: simpleTestSchemas(), + Config: testModule(t, "transform-destroy-edge-self-ref"), } if err := tf.Transform(&g); err != nil { t.Fatalf("err: %s", err) @@ -157,8 +154,7 @@ func TestDestroyEdgeTransformer_module(t *testing.T) { } tf := &DestroyEdgeTransformer{ - Config: testModule(t, "transform-destroy-edge-module"), - Schemas: simpleTestSchemas(), + Config: testModule(t, "transform-destroy-edge-module"), } if err := tf.Transform(&g); err != nil { t.Fatalf("err: %s", err) @@ -219,8 +215,7 @@ func TestDestroyEdgeTransformer_moduleOnly(t *testing.T) { } tf := &DestroyEdgeTransformer{ - Config: testModule(t, "transform-destroy-edge-module-only"), - Schemas: simpleTestSchemas(), + Config: testModule(t, "transform-destroy-edge-module-only"), } if err := tf.Transform(&g); err != nil { t.Fatalf("err: %s", err) @@ -297,8 +292,7 @@ resource "test_instance" "a" { `, }) tf := &DestroyEdgeTransformer{ - Config: m, - Schemas: simpleTestSchemas(), + Config: m, } if err := tf.Transform(&g); err != nil { t.Fatalf("err: %s", err) diff --git a/internal/terraform/transform_transitive_reduction_test.go b/internal/terraform/transform_transitive_reduction_test.go index e1e744b15533..1339d071fec2 100644 --- a/internal/terraform/transform_transitive_reduction_test.go +++ b/internal/terraform/transform_transitive_reduction_test.go @@ -30,26 +30,24 @@ func TestTransitiveReductionTransformer(t *testing.T) { { transform := &AttachSchemaTransformer{ - Schemas: &Schemas{ - Providers: map[addrs.Provider]*ProviderSchema{ - addrs.NewDefaultProvider("aws"): { - ResourceTypes: map[string]*configschema.Block{ - "aws_instance": &configschema.Block{ - Attributes: map[string]*configschema.Attribute{ - "A": { - Type: cty.String, - Optional: true, - }, - "B": { - Type: cty.String, - Optional: true, - }, + Plugins: schemaOnlyProvidersForTesting(map[addrs.Provider]*ProviderSchema{ + addrs.NewDefaultProvider("aws"): { + ResourceTypes: map[string]*configschema.Block{ + "aws_instance": { + Attributes: map[string]*configschema.Attribute{ + "A": { + Type: cty.String, + Optional: true, + }, + "B": { + Type: cty.String, + Optional: true, }, }, }, }, }, - }, + }), } if err := transform.Transform(&g); err != nil { t.Fatalf("err: %s", err) From 343279110a461ce455b30e291768b2522d39bbf8 Mon Sep 17 00:00:00 2001 From: Martin 
Atkins Date: Tue, 31 Aug 2021 17:53:03 -0700 Subject: [PATCH 053/644] core: Graph walk loads plugin schemas opportunistically Previously our graph walker expected to recieve a data structure containing schemas for all of the provider and provisioner plugins used in the configuration and state. That made sense back when terraform.NewContext was responsible for loading all of the schemas before taking any other action, but it no longer has that responsiblity. Instead, we'll now make sure that the "contextPlugins" object reaches all of the locations where we need schema -- many of which already had access to that object anyway -- and then load the needed schemas just in time. The contextPlugins object memoizes schema lookups, so we can safely call it many times with the same provider address or provisioner type name and know that it'll still only load each distinct plugin once per Context object. As of this commit, the Context.Schemas method is now a public interface only and not used by logic in the "terraform" package at all. However, that does leave us in a rather tenuous situation of relying on the fact that all practical users of terraform.Context end up calling "Schemas" at some point in order to verify that we have all of the expected versions of plugins. That's a non-obvious implicit dependency, and so in subsequent commits we'll gradually move all responsibility for verifying plugin versions into the caller of terraform.NewContext, which'll heal a long-standing architectural wart whereby the caller is responsible for installing and locating the plugin executables but not for verifying that what's installed is conforming to the current configuration and dependency lock file. --- internal/backend/local/backend_plan_test.go | 2 +- internal/command/plan_test.go | 2 +- internal/terraform/context_apply.go | 7 -- internal/terraform/context_eval.go | 7 -- internal/terraform/context_import.go | 7 -- internal/terraform/context_plan.go | 19 +--- internal/terraform/context_test.go | 4 + internal/terraform/context_validate.go | 7 -- internal/terraform/context_validate_test.go | 7 +- internal/terraform/context_walk.go | 8 -- internal/terraform/evaluate.go | 21 ++-- internal/terraform/evaluate_test.go | 108 ++++++++++---------- internal/terraform/evaluate_valid.go | 13 ++- internal/terraform/evaluate_valid_test.go | 22 ++-- internal/terraform/graph_walk_context.go | 3 +- 15 files changed, 99 insertions(+), 138 deletions(-) diff --git a/internal/backend/local/backend_plan_test.go b/internal/backend/local/backend_plan_test.go index 5866048f9f0a..73bd78df4d79 100644 --- a/internal/backend/local/backend_plan_test.go +++ b/internal/backend/local/backend_plan_test.go @@ -136,7 +136,7 @@ func TestLocal_plan_context_error(t *testing.T) { // the backend should be unlocked after a run assertBackendStateUnlocked(t, b) - if got, want := done(t).Stderr(), "Error: Failed to load plugin schemas"; !strings.Contains(got, want) { + if got, want := done(t).Stderr(), "failed to read schema for test_instance.foo in registry.terraform.io/hashicorp/test"; !strings.Contains(got, want) { t.Fatalf("unexpected error output:\n%s\nwant: %s", got, want) } } diff --git a/internal/command/plan_test.go b/internal/command/plan_test.go index 5638a9abfd3f..880f2e971e98 100644 --- a/internal/command/plan_test.go +++ b/internal/command/plan_test.go @@ -1051,7 +1051,7 @@ func TestPlan_init_required(t *testing.T) { t.Fatalf("expected error, got success") } got := output.Stderr() - if !strings.Contains(got, `Please run "terraform init".`) 
{ + if !strings.Contains(got, `failed to read schema for test_instance.foo in registry.terraform.io/hashicorp/test`) { t.Fatal("wrong error message in output:", got) } } diff --git a/internal/terraform/context_apply.go b/internal/terraform/context_apply.go index ff32074ec8b2..e94449b7261d 100644 --- a/internal/terraform/context_apply.go +++ b/internal/terraform/context_apply.go @@ -24,12 +24,6 @@ func (c *Context) Apply(plan *plans.Plan, config *configs.Config) (*states.State defer c.acquireRun("apply")() var diags tfdiags.Diagnostics - schemas, moreDiags := c.Schemas(config, plan.PriorState) - diags = diags.Append(moreDiags) - if moreDiags.HasErrors() { - return nil, diags - } - log.Printf("[DEBUG] Building and walking apply graph for %s plan", plan.UIMode) graph, operation, moreDiags := c.applyGraph(plan, config, true) @@ -58,7 +52,6 @@ func (c *Context) Apply(plan *plans.Plan, config *configs.Config) (*states.State workingState := plan.PriorState.DeepCopy() walker, walkDiags := c.walk(graph, operation, &graphWalkOpts{ Config: config, - Schemas: schemas, InputState: workingState, Changes: plan.Changes, RootVariableValues: variables, diff --git a/internal/terraform/context_eval.go b/internal/terraform/context_eval.go index 8201db4d6153..efc24767c205 100644 --- a/internal/terraform/context_eval.go +++ b/internal/terraform/context_eval.go @@ -40,12 +40,6 @@ func (c *Context) Eval(config *configs.Config, state *states.State, moduleAddr a var diags tfdiags.Diagnostics defer c.acquireRun("eval")() - schemas, moreDiags := c.Schemas(config, state) - diags = diags.Append(moreDiags) - if moreDiags.HasErrors() { - return nil, diags - } - // Start with a copy of state so that we don't affect the instance that // the caller is holding. state = state.DeepCopy() @@ -78,7 +72,6 @@ func (c *Context) Eval(config *configs.Config, state *states.State, moduleAddr a walkOpts := &graphWalkOpts{ InputState: state, Config: config, - Schemas: schemas, RootVariableValues: variables, } diff --git a/internal/terraform/context_import.go b/internal/terraform/context_import.go index f4bc6d809b83..af17cbd62dc8 100644 --- a/internal/terraform/context_import.go +++ b/internal/terraform/context_import.go @@ -48,12 +48,6 @@ func (c *Context) Import(config *configs.Config, prevRunState *states.State, opt // Hold a lock since we can modify our own state here defer c.acquireRun("import")() - schemas, moreDiags := c.Schemas(config, prevRunState) - diags = diags.Append(moreDiags) - if moreDiags.HasErrors() { - return nil, diags - } - // Don't modify our caller's state state := prevRunState.DeepCopy() @@ -78,7 +72,6 @@ func (c *Context) Import(config *configs.Config, prevRunState *states.State, opt // Walk it walker, walkDiags := c.walk(graph, walkImport, &graphWalkOpts{ Config: config, - Schemas: schemas, InputState: state, RootVariableValues: variables, }) diff --git a/internal/terraform/context_plan.go b/internal/terraform/context_plan.go index 208ee771c7c4..248673254cf0 100644 --- a/internal/terraform/context_plan.go +++ b/internal/terraform/context_plan.go @@ -324,16 +324,10 @@ func (c *Context) planWalk(config *configs.Config, prevRunState *states.State, r var diags tfdiags.Diagnostics log.Printf("[DEBUG] Building and walking plan graph for %s", opts.Mode) - schemas, moreDiags := c.Schemas(config, prevRunState) - diags = diags.Append(moreDiags) - if diags.HasErrors() { - return nil, diags - } - prevRunState = prevRunState.DeepCopy() // don't modify the caller's object when we process the moves moveStmts, moveResults := 
c.prePlanFindAndApplyMoves(config, prevRunState, opts.Targets) - graph, walkOp, moreDiags := c.planGraph(config, prevRunState, opts, schemas, true) + graph, walkOp, moreDiags := c.planGraph(config, prevRunState, opts, true) diags = diags.Append(moreDiags) if diags.HasErrors() { return nil, diags @@ -344,7 +338,6 @@ func (c *Context) planWalk(config *configs.Config, prevRunState *states.State, r changes := plans.NewChanges() walker, walkDiags := c.walk(graph, walkOp, &graphWalkOpts{ Config: config, - Schemas: schemas, InputState: prevRunState, Changes: changes, MoveResults: moveResults, @@ -365,7 +358,7 @@ func (c *Context) planWalk(config *configs.Config, prevRunState *states.State, r return plan, diags } -func (c *Context) planGraph(config *configs.Config, prevRunState *states.State, opts *PlanOpts, schemas *Schemas, validate bool) (*Graph, walkOperation, tfdiags.Diagnostics) { +func (c *Context) planGraph(config *configs.Config, prevRunState *states.State, opts *PlanOpts, validate bool) (*Graph, walkOperation, tfdiags.Diagnostics) { switch mode := opts.Mode; mode { case plans.NormalMode: graph, diags := (&PlanGraphBuilder{ @@ -421,13 +414,7 @@ func (c *Context) PlanGraphForUI(config *configs.Config, prevRunState *states.St opts := &PlanOpts{Mode: mode} - schemas, moreDiags := c.Schemas(config, prevRunState) - diags = diags.Append(moreDiags) - if diags.HasErrors() { - return nil, diags - } - - graph, _, moreDiags := c.planGraph(config, prevRunState, opts, schemas, false) + graph, _, moreDiags := c.planGraph(config, prevRunState, opts, false) diags = diags.Append(moreDiags) return graph, diags } diff --git a/internal/terraform/context_test.go b/internal/terraform/context_test.go index 47fd32f4bc15..02addb4dbf68 100644 --- a/internal/terraform/context_test.go +++ b/internal/terraform/context_test.go @@ -123,6 +123,10 @@ func TestNewContextRequiredVersion(t *testing.T) { } func TestNewContext_lockedDependencies(t *testing.T) { + // TODO: Remove this test altogether once we've factored out the version + // and checksum verification to be exclusively the caller's responsibility. 
+ t.Skip("only one step away from locked dependencies being the caller's responsibility") + configBeepGreaterThanOne := ` terraform { required_providers { diff --git a/internal/terraform/context_validate.go b/internal/terraform/context_validate.go index f5aa8540c26c..b079477bafee 100644 --- a/internal/terraform/context_validate.go +++ b/internal/terraform/context_validate.go @@ -35,12 +35,6 @@ func (c *Context) Validate(config *configs.Config) tfdiags.Diagnostics { return diags } - schemas, moreDiags := c.Schemas(config, nil) - diags = diags.Append(moreDiags) - if moreDiags.HasErrors() { - return diags - } - log.Printf("[DEBUG] Building and walking validate graph") graph, moreDiags := ValidateGraphBuilder(&PlanGraphBuilder{ @@ -74,7 +68,6 @@ func (c *Context) Validate(config *configs.Config) tfdiags.Diagnostics { walker, walkDiags := c.walk(graph, walkValidate, &graphWalkOpts{ Config: config, - Schemas: schemas, RootVariableValues: varValues, }) diags = diags.Append(walker.NonFatalDiagnostics) diff --git a/internal/terraform/context_validate_test.go b/internal/terraform/context_validate_test.go index 0c5a2a3a5379..4f628945e144 100644 --- a/internal/terraform/context_validate_test.go +++ b/internal/terraform/context_validate_test.go @@ -1193,10 +1193,6 @@ func TestContext2Validate_PlanGraphBuilder(t *testing.T) { opts := fixture.ContextOpts() c := testContext2(t, opts) - state := states.NewState() - schemas, diags := c.Schemas(fixture.Config, state) - assertNoDiagnostics(t, diags) - graph, diags := ValidateGraphBuilder(&PlanGraphBuilder{ Config: fixture.Config, State: states.NewState(), @@ -1207,8 +1203,7 @@ func TestContext2Validate_PlanGraphBuilder(t *testing.T) { } defer c.acquireRun("validate-test")() walker, diags := c.walk(graph, walkValidate, &graphWalkOpts{ - Config: fixture.Config, - Schemas: schemas, + Config: fixture.Config, }) if diags.HasErrors() { t.Fatal(diags.Err()) diff --git a/internal/terraform/context_walk.go b/internal/terraform/context_walk.go index b44a5910c5d5..e041f80b2e43 100644 --- a/internal/terraform/context_walk.go +++ b/internal/terraform/context_walk.go @@ -23,7 +23,6 @@ type graphWalkOpts struct { InputState *states.State Changes *plans.Changes Config *configs.Config - Schemas *Schemas RootVariableValues InputValues MoveResults map[addrs.UniqueKey]refactoring.MoveResult @@ -95,12 +94,6 @@ func (c *Context) graphWalker(operation walkOperation, opts *graphWalkOpts) *Con changes = plans.NewChanges() } - if opts.Schemas == nil { - // Should never happen: caller must always set this one. - // (We catch this here, rather than later, to get a more intelligible - // stack trace when it _does_ panic.) - panic("Context.graphWalker call without Schemas") - } if opts.Config == nil { panic("Context.graphWalker call without Config") } @@ -109,7 +102,6 @@ func (c *Context) graphWalker(operation walkOperation, opts *graphWalkOpts) *Con Context: c, State: state, Config: opts.Config, - Schemas: opts.Schemas, RefreshState: refreshState, PrevRunState: prevRunState, Changes: changes.SyncWrapper(), diff --git a/internal/terraform/evaluate.go b/internal/terraform/evaluate.go index efcc9c1f418a..47889a9a477d 100644 --- a/internal/terraform/evaluate.go +++ b/internal/terraform/evaluate.go @@ -46,13 +46,13 @@ type Evaluator struct { VariableValues map[string]map[string]cty.Value VariableValuesLock *sync.Mutex - // Schemas is a repository of all of the schemas we should need to - // evaluate expressions. 
This must be constructed by the caller to - // include schemas for all of the providers, resource types, data sources - // and provisioners used by the given configuration and state. + // Plugins is the library of available plugin components (providers and + // provisioners) that we have available to help us evaluate expressions + // that interact with plugin-provided objects. // - // This must not be mutated during evaluation. - Schemas *Schemas + // From this we only access the schemas of the plugins, and don't otherwise + // interact with plugin instances. + Plugins *contextPlugins // State is the current state, embedded in a wrapper that ensures that // it can be safely accessed and modified concurrently. @@ -892,8 +892,13 @@ func (d *evaluationStateData) GetResource(addr addrs.Resource, rng tfdiags.Sourc } func (d *evaluationStateData) getResourceSchema(addr addrs.Resource, providerAddr addrs.AbsProviderConfig) *configschema.Block { - schemas := d.Evaluator.Schemas - schema, _ := schemas.ResourceTypeConfig(providerAddr.Provider, addr.Mode, addr.Type) + schema, _, err := d.Evaluator.Plugins.ResourceTypeSchema(providerAddr.Provider, addr.Mode, addr.Type) + if err != nil { + // We have plently other codepaths that will detect and report + // schema lookup errors before we'd reach this point, so we'll just + // treat a failure here the same as having no schema. + return nil + } return schema } diff --git a/internal/terraform/evaluate_test.go b/internal/terraform/evaluate_test.go index f8a46d4fce1e..ffd067f3fe0b 100644 --- a/internal/terraform/evaluate_test.go +++ b/internal/terraform/evaluate_test.go @@ -193,79 +193,77 @@ func TestEvaluatorGetResource(t *testing.T) { }, }, State: stateSync, - Schemas: &Schemas{ - Providers: map[addrs.Provider]*ProviderSchema{ - addrs.NewDefaultProvider("test"): { - Provider: &configschema.Block{}, - ResourceTypes: map[string]*configschema.Block{ - "test_resource": { - Attributes: map[string]*configschema.Attribute{ - "id": { - Type: cty.String, - Computed: true, - }, - "value": { - Type: cty.String, - Computed: true, - Sensitive: true, - }, + Plugins: schemaOnlyProvidersForTesting(map[addrs.Provider]*ProviderSchema{ + addrs.NewDefaultProvider("test"): { + Provider: &configschema.Block{}, + ResourceTypes: map[string]*configschema.Block{ + "test_resource": { + Attributes: map[string]*configschema.Attribute{ + "id": { + Type: cty.String, + Computed: true, }, - BlockTypes: map[string]*configschema.NestedBlock{ - "nesting_list": { - Block: configschema.Block{ - Attributes: map[string]*configschema.Attribute{ - "value": {Type: cty.String, Optional: true}, - "sensitive_value": {Type: cty.String, Optional: true, Sensitive: true}, - }, + "value": { + Type: cty.String, + Computed: true, + Sensitive: true, + }, + }, + BlockTypes: map[string]*configschema.NestedBlock{ + "nesting_list": { + Block: configschema.Block{ + Attributes: map[string]*configschema.Attribute{ + "value": {Type: cty.String, Optional: true}, + "sensitive_value": {Type: cty.String, Optional: true, Sensitive: true}, }, - Nesting: configschema.NestingList, }, - "nesting_map": { - Block: configschema.Block{ - Attributes: map[string]*configschema.Attribute{ - "foo": {Type: cty.String, Optional: true, Sensitive: true}, - }, + Nesting: configschema.NestingList, + }, + "nesting_map": { + Block: configschema.Block{ + Attributes: map[string]*configschema.Attribute{ + "foo": {Type: cty.String, Optional: true, Sensitive: true}, }, - Nesting: configschema.NestingMap, }, - "nesting_set": { - Block: 
configschema.Block{ - Attributes: map[string]*configschema.Attribute{ - "baz": {Type: cty.String, Optional: true, Sensitive: true}, - }, + Nesting: configschema.NestingMap, + }, + "nesting_set": { + Block: configschema.Block{ + Attributes: map[string]*configschema.Attribute{ + "baz": {Type: cty.String, Optional: true, Sensitive: true}, }, - Nesting: configschema.NestingSet, }, - "nesting_single": { - Block: configschema.Block{ - Attributes: map[string]*configschema.Attribute{ - "boop": {Type: cty.String, Optional: true, Sensitive: true}, - }, + Nesting: configschema.NestingSet, + }, + "nesting_single": { + Block: configschema.Block{ + Attributes: map[string]*configschema.Attribute{ + "boop": {Type: cty.String, Optional: true, Sensitive: true}, }, - Nesting: configschema.NestingSingle, }, - "nesting_nesting": { - Block: configschema.Block{ - BlockTypes: map[string]*configschema.NestedBlock{ - "nesting_list": { - Block: configschema.Block{ - Attributes: map[string]*configschema.Attribute{ - "value": {Type: cty.String, Optional: true}, - "sensitive_value": {Type: cty.String, Optional: true, Sensitive: true}, - }, + Nesting: configschema.NestingSingle, + }, + "nesting_nesting": { + Block: configschema.Block{ + BlockTypes: map[string]*configschema.NestedBlock{ + "nesting_list": { + Block: configschema.Block{ + Attributes: map[string]*configschema.Attribute{ + "value": {Type: cty.String, Optional: true}, + "sensitive_value": {Type: cty.String, Optional: true, Sensitive: true}, }, - Nesting: configschema.NestingList, }, + Nesting: configschema.NestingList, }, }, - Nesting: configschema.NestingSingle, }, + Nesting: configschema.NestingSingle, }, }, }, }, }, - }, + }), } data := &evaluationStateData{ @@ -430,7 +428,7 @@ func TestEvaluatorGetResource_changes(t *testing.T) { }, }, State: stateSync, - Schemas: schemas, + Plugins: schemaOnlyProvidersForTesting(schemas.Providers), } data := &evaluationStateData{ diff --git a/internal/terraform/evaluate_valid.go b/internal/terraform/evaluate_valid.go index 68943a84205c..232f6913da77 100644 --- a/internal/terraform/evaluate_valid.go +++ b/internal/terraform/evaluate_valid.go @@ -224,7 +224,18 @@ func (d *evaluationStateData) staticValidateResourceReference(modCfg *configs.Co } providerFqn := modCfg.Module.ProviderForLocalConfig(cfg.ProviderConfigAddr()) - schema, _ := d.Evaluator.Schemas.ResourceTypeConfig(providerFqn, addr.Mode, addr.Type) + schema, _, err := d.Evaluator.Plugins.ResourceTypeSchema(providerFqn, addr.Mode, addr.Type) + if err != nil { + // Prior validation should've taken care of a schema lookup error, + // so we should never get here but we'll handle it here anyway for + // robustness. 
+ diags = diags.Append(&hcl.Diagnostic{ + Severity: hcl.DiagError, + Summary: `Failed provider schema lookup`, + Detail: fmt.Sprintf(`Couldn't load schema for %s resource type %q in %s: %s.`, modeAdjective, addr.Type, providerFqn.String(), err), + Subject: rng.ToHCL().Ptr(), + }) + } if schema == nil { // Prior validation should've taken care of a resource block with an diff --git a/internal/terraform/evaluate_valid_test.go b/internal/terraform/evaluate_valid_test.go index ff8ca4397aec..cfdfdea1f5e1 100644 --- a/internal/terraform/evaluate_valid_test.go +++ b/internal/terraform/evaluate_valid_test.go @@ -69,21 +69,19 @@ For example, to correlate with indices of a referring resource, use: cfg := testModule(t, "static-validate-refs") evaluator := &Evaluator{ Config: cfg, - Schemas: &Schemas{ - Providers: map[addrs.Provider]*ProviderSchema{ - addrs.NewDefaultProvider("aws"): { - ResourceTypes: map[string]*configschema.Block{ - "aws_instance": {}, - }, + Plugins: schemaOnlyProvidersForTesting(map[addrs.Provider]*ProviderSchema{ + addrs.NewDefaultProvider("aws"): { + ResourceTypes: map[string]*configschema.Block{ + "aws_instance": {}, }, - addrs.MustParseProviderSourceString("foobar/beep"): { - ResourceTypes: map[string]*configschema.Block{ - // intentional mismatch between resource type prefix and provider type - "boop_instance": {}, - }, + }, + addrs.MustParseProviderSourceString("foobar/beep"): { + ResourceTypes: map[string]*configschema.Block{ + // intentional mismatch between resource type prefix and provider type + "boop_instance": {}, }, }, - }, + }), } for _, test := range tests { diff --git a/internal/terraform/graph_walk_context.go b/internal/terraform/graph_walk_context.go index 6fd790e92f3b..39a97032c555 100644 --- a/internal/terraform/graph_walk_context.go +++ b/internal/terraform/graph_walk_context.go @@ -34,7 +34,6 @@ type ContextGraphWalker struct { Operation walkOperation StopContext context.Context RootVariableValues InputValues - Schemas *Schemas Config *configs.Config // This is an output. Do not set this, nor read it while a graph walk @@ -81,7 +80,7 @@ func (w *ContextGraphWalker) EvalContext() EvalContext { Operation: w.Operation, State: w.State, Changes: w.Changes, - Schemas: w.Schemas, + Plugins: w.Context.plugins, VariableValues: w.variableValues, VariableValuesLock: &w.variableValuesLock, } From 65e0c448a0e38307d1a08e1af062f03c0d16a12d Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Wed, 1 Sep 2021 17:01:44 -0700 Subject: [PATCH 054/644] workdir: Start of a new package for working directory state management Thus far our various interactions with the bits of state we keep associated with a working directory have all been implemented directly inside the "command" package -- often in the huge command.Meta type -- and not managed collectively via a single component. There's too many little codepaths reading and writing from the working directory and data directory to refactor it all in one step, but this is an attempt at a first step towards a future where everything that reads and writes from the current working directory would do so via an object that encapsulates the implementation details and offers a high-level API to read and write all of these session-persistent settings. 
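To make the intended shape of that high-level API concrete, here is a rough usage sketch based only on calls that appear in this patch's diff (workdir.NewDir, DataDir, ProviderLocalCacheDir, ForcedPluginDirs); the exact signatures and return types are assumptions drawn from how those calls are used below, not a definitive reference:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/terraform/internal/command/workdir"
)

// describeWorkingDir is a sketch only: it exercises the working-directory
// object described above. Method names come from the diff in this patch;
// return types are assumed from their call sites.
func describeWorkingDir(rootModuleDir string) error {
	wd := workdir.NewDir(rootModuleDir)

	fmt.Println("data dir:", wd.DataDir())                     // e.g. ".terraform"
	fmt.Println("provider cache:", wd.ProviderLocalCacheDir()) // lives under the data dir

	// Any -plugin-dir override persisted by "terraform init" for later commands.
	dirs, err := wd.ForcedPluginDirs()
	if err != nil {
		return err
	}
	fmt.Println("forced plugin dirs:", dirs)
	return nil
}

func main() {
	if err := describeWorkingDir("."); err != nil {
		fmt.Println("error:", err)
	}
}
```
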
The design here continues our gradual path towards using a dependency injection style where "package main" is solely responsible for directly interacting with the OS command line, the OS environment, the OS working directory, the stdio streams, and the CLI configuration, and then communicating the resulting information to the rest of Terraform by wiring together objects. It seems likely that eventually we'll have enough wiring code in package main to justify a more explicit organization of that code, but for this commit the new "workdir.Dir" object is just wired directly in place of its predecessors, without any significant change of code organization at that top layer. This first commit focuses on the main files and directories we use to find provider plugins, because a subsequent commit will lightly reorganize the separation of concerns for plugin launching with a similar goal of collecting all of the relevant logic together into one spot. --- commands.go | 9 +- internal/command/command_test.go | 69 ++++++++- internal/command/get_test.go | 31 ++-- internal/command/meta.go | 61 ++++---- internal/command/meta_backend_test.go | 14 +- internal/command/meta_config.go | 23 +-- internal/command/meta_providers.go | 4 +- internal/command/plugins.go | 38 +---- internal/command/workdir/dir.go | 149 +++++++++++++++++++ internal/command/workdir/doc.go | 16 ++ internal/command/workdir/normalize_path.go | 52 +++++++ internal/command/workdir/plugin_dirs.go | 83 +++++++++++ internal/command/workdir/plugin_dirs_test.go | 60 ++++++++ working_dir.go | 12 ++ 14 files changed, 507 insertions(+), 114 deletions(-) create mode 100644 internal/command/workdir/dir.go create mode 100644 internal/command/workdir/doc.go create mode 100644 internal/command/workdir/normalize_path.go create mode 100644 internal/command/workdir/plugin_dirs.go create mode 100644 internal/command/workdir/plugin_dirs_test.go create mode 100644 working_dir.go diff --git a/commands.go b/commands.go index e5d7a3605e08..2c1cb90ee1ae 100644 --- a/commands.go +++ b/commands.go @@ -77,12 +77,12 @@ func initCommands( configDir = "" // No config dir available (e.g. 
looking up a home directory failed) } - dataDir := os.Getenv("TF_DATA_DIR") + wd := WorkingDir(originalWorkingDir, os.Getenv("TF_DATA_DIR")) meta := command.Meta{ - OriginalWorkingDir: originalWorkingDir, - Streams: streams, - View: views.NewView(streams).SetRunningInAutomation(inAutomation), + WorkingDir: wd, + Streams: streams, + View: views.NewView(streams).SetRunningInAutomation(inAutomation), Color: true, GlobalPluginDirs: globalPluginDirs(), @@ -94,7 +94,6 @@ func initCommands( RunningInAutomation: inAutomation, CLIConfigDir: configDir, PluginCacheDir: config.PluginCacheDir, - OverrideDataDir: dataDir, ShutdownCh: makeShutdownCh(), diff --git a/internal/command/command_test.go b/internal/command/command_test.go index 5bc136ffe15a..e182807a0cfe 100644 --- a/internal/command/command_test.go +++ b/internal/command/command_test.go @@ -24,6 +24,7 @@ import ( backendInit "github.com/hashicorp/terraform/internal/backend/init" backendLocal "github.com/hashicorp/terraform/internal/backend/local" "github.com/hashicorp/terraform/internal/command/views" + "github.com/hashicorp/terraform/internal/command/workdir" "github.com/hashicorp/terraform/internal/configs" "github.com/hashicorp/terraform/internal/configs/configload" "github.com/hashicorp/terraform/internal/configs/configschema" @@ -108,6 +109,65 @@ func tempDir(t *testing.T) string { return dir } +// tempWorkingDir constructs a workdir.Dir object referring to a newly-created +// temporary directory, and returns that object along with a cleanup function +// to call once the calling test is complete. +// +// Although workdir.Dir is built to support arbitrary base directories, the +// not-yet-migrated behaviors in command.Meta tend to expect the root module +// directory to be the real process working directory, and so if you intend +// to use the result inside a command.Meta object you must use a pattern +// similar to the following when initializing your test: +// +// wd, cleanup := tempWorkingDir(t) +// defer cleanup() +// defer testChdir(t, wd.RootModuleDir())() +// +// Note that testChdir modifies global state for the test process, and so a +// test using this pattern must never call t.Parallel(). +func tempWorkingDir(t *testing.T) (*workdir.Dir, func() error) { + t.Helper() + + dirPath, err := os.MkdirTemp("", "tf-command-test-") + if err != nil { + t.Fatal(err) + } + done := func() error { + return os.RemoveAll(dirPath) + } + t.Logf("temporary directory %s", dirPath) + + return workdir.NewDir(dirPath), done +} + +// tempWorkingDirFixture is like tempWorkingDir but it also copies the content +// from a fixture directory into the temporary directory before returning it. +// +// The same caveats about working directory apply as for testWorkingDir. See +// the testWorkingDir commentary for an example of how to use this function +// along with testChdir to meet the expectations of command.Meta legacy +// functionality. +func tempWorkingDirFixture(t *testing.T, fixtureName string) (*workdir.Dir, func() error) { + t.Helper() + + dirPath, err := os.MkdirTemp("", "tf-command-test-"+fixtureName) + if err != nil { + t.Fatal(err) + } + done := func() error { + return os.RemoveAll(dirPath) + } + t.Logf("temporary directory %s with fixture %q", dirPath, fixtureName) + + fixturePath := testFixturePath(fixtureName) + testCopyDir(t, fixturePath, dirPath) + // NOTE: Unfortunately because testCopyDir immediately aborts the test + // on failure, a failure to copy will prevent us from cleaning up the + // temporary directory. Oh well. 
:( + + return workdir.NewDir(dirPath), done +} + func testFixturePath(name string) string { return filepath.Join(fixtureDir, name) } @@ -853,8 +913,10 @@ func testLockState(sourceDir, path string) (func(), error) { } // testCopyDir recursively copies a directory tree, attempting to preserve -// permissions. Source directory must exist, destination directory must *not* -// exist. Symlinks are ignored and skipped. +// permissions. Source directory must exist, destination directory may exist +// but will be created if not; it should typically be a temporary directory, +// and thus already created using os.MkdirTemp or similar. +// Symlinks are ignored and skipped. func testCopyDir(t *testing.T, src, dst string) { t.Helper() @@ -873,9 +935,6 @@ func testCopyDir(t *testing.T, src, dst string) { if err != nil && !os.IsNotExist(err) { t.Fatal(err) } - if err == nil { - t.Fatal("destination already exists") - } err = os.MkdirAll(dst, si.Mode()) if err != nil { diff --git a/internal/command/get_test.go b/internal/command/get_test.go index 7d9137425ad8..2e9f04611ad1 100644 --- a/internal/command/get_test.go +++ b/internal/command/get_test.go @@ -1,7 +1,6 @@ package command import ( - "os" "strings" "testing" @@ -9,17 +8,16 @@ import ( ) func TestGet(t *testing.T) { - td := tempDir(t) - testCopyDir(t, testFixturePath("get"), td) - defer os.RemoveAll(td) - defer testChdir(t, td)() + wd, cleanup := tempWorkingDirFixture(t, "get") + defer cleanup() + defer testChdir(t, wd.RootModuleDir())() - ui := new(cli.MockUi) + ui := cli.NewMockUi() c := &GetCommand{ Meta: Meta{ testingOverrides: metaOverridesForProvider(testProvider()), Ui: ui, - dataDir: tempDir(t), + WorkingDir: wd, }, } @@ -35,12 +33,16 @@ func TestGet(t *testing.T) { } func TestGet_multipleArgs(t *testing.T) { - ui := new(cli.MockUi) + wd, cleanup := tempWorkingDir(t) + defer cleanup() + defer testChdir(t, wd.RootModuleDir())() + + ui := cli.NewMockUi() c := &GetCommand{ Meta: Meta{ testingOverrides: metaOverridesForProvider(testProvider()), Ui: ui, - dataDir: tempDir(t), + WorkingDir: wd, }, } @@ -54,17 +56,16 @@ func TestGet_multipleArgs(t *testing.T) { } func TestGet_update(t *testing.T) { - td := tempDir(t) - testCopyDir(t, testFixturePath("get"), td) - defer os.RemoveAll(td) - defer testChdir(t, td)() + wd, cleanup := tempWorkingDirFixture(t, "get") + defer cleanup() + defer testChdir(t, wd.RootModuleDir())() - ui := new(cli.MockUi) + ui := cli.NewMockUi() c := &GetCommand{ Meta: Meta{ testingOverrides: metaOverridesForProvider(testProvider()), Ui: ui, - dataDir: tempDir(t), + WorkingDir: wd, }, } diff --git a/internal/command/meta.go b/internal/command/meta.go index 072cfad419ef..79e1b6f843a2 100644 --- a/internal/command/meta.go +++ b/internal/command/meta.go @@ -16,6 +16,9 @@ import ( plugin "github.com/hashicorp/go-plugin" "github.com/hashicorp/terraform-svchost/disco" + "github.com/mitchellh/cli" + "github.com/mitchellh/colorstring" + "github.com/hashicorp/terraform/internal/addrs" "github.com/hashicorp/terraform/internal/backend" "github.com/hashicorp/terraform/internal/backend/local" @@ -23,17 +26,15 @@ import ( "github.com/hashicorp/terraform/internal/command/format" "github.com/hashicorp/terraform/internal/command/views" "github.com/hashicorp/terraform/internal/command/webbrowser" + "github.com/hashicorp/terraform/internal/command/workdir" "github.com/hashicorp/terraform/internal/configs/configload" "github.com/hashicorp/terraform/internal/getproviders" + legacy "github.com/hashicorp/terraform/internal/legacy/terraform" 
"github.com/hashicorp/terraform/internal/providers" "github.com/hashicorp/terraform/internal/provisioners" "github.com/hashicorp/terraform/internal/terminal" "github.com/hashicorp/terraform/internal/terraform" "github.com/hashicorp/terraform/internal/tfdiags" - "github.com/mitchellh/cli" - "github.com/mitchellh/colorstring" - - legacy "github.com/hashicorp/terraform/internal/legacy/terraform" ) // Meta are the meta-options that are available on all or most commands. @@ -42,16 +43,19 @@ type Meta struct { // command with a Meta field. These are expected to be set externally // (not from within the command itself). - // OriginalWorkingDir, if set, is the actual working directory where - // Terraform was run from. This might not be the _actual_ current working - // directory, because users can add the -chdir=... option to the beginning - // of their command line to ask Terraform to switch. + // WorkingDir is an object representing the "working directory" where we're + // running commands. In the normal case this literally refers to the + // working directory of the Terraform process, though this can take on + // a more symbolic meaning when the user has overridden default behavior + // to specify a different working directory or to override the special + // data directory where we'll persist settings that must survive between + // consecutive commands. // - // Most things should just use the current working directory in order to - // respect the user's override, but we retain this for exceptional - // situations where we need to refer back to the original working directory - // for some reason. - OriginalWorkingDir string + // We're currently gradually migrating the various bits of state that + // must persist between consecutive commands in a session to be encapsulated + // in here, but we're not there yet and so there are also some methods on + // Meta which directly read and modify paths inside the data directory. + WorkingDir *workdir.Dir // Streams tracks the raw Stdout, Stderr, and Stdin handles along with // some basic metadata about them, such as whether each is connected to @@ -102,11 +106,6 @@ type Meta struct { // provider version can be obtained. ProviderSource getproviders.Source - // OverrideDataDir, if non-empty, overrides the return value of the - // DataDir method for situations where the local .terraform/ directory - // is not suitable, e.g. because of a read-only filesystem. - OverrideDataDir string - // BrowserLauncher is used by commands that need to open a URL in a // web browser. BrowserLauncher webbrowser.Launcher @@ -135,10 +134,6 @@ type Meta struct { // Protected: commands can set these //---------------------------------------------------------- - // Modify the data directory location. This should be accessed through the - // DataDir method. - dataDir string - // pluginPath is a user defined set of directories to look for plugins. // This is set during init with the `-plugin-dir` flag, saved to a file in // the data directory. @@ -265,13 +260,25 @@ func (m *Meta) Colorize() *colorstring.Colorize { } } +// fixupMissingWorkingDir is a compensation for various existing tests which +// directly construct incomplete "Meta" objects. Specifically, it deals with +// a test that omits a WorkingDir value by constructing one just-in-time. +// +// We shouldn't ever rely on this in any real codepath, because it doesn't +// take into account the various ways users can override our default +// directory selection behaviors. 
+func (m *Meta) fixupMissingWorkingDir() { + if m.WorkingDir == nil { + log.Printf("[WARN] This 'Meta' object is missing its WorkingDir, so we're creating a default one suitable only for tests") + m.WorkingDir = workdir.NewDir(".") + } +} + // DataDir returns the directory where local data will be stored. // Defaults to DefaultDataDir in the current working directory. func (m *Meta) DataDir() string { - if m.OverrideDataDir != "" { - return m.OverrideDataDir - } - return DefaultDataDir + m.fixupMissingWorkingDir() + return m.WorkingDir.DataDir() } const ( @@ -499,7 +506,7 @@ func (m *Meta) contextOpts() (*terraform.ContextOpts, error) { opts.Meta = &terraform.ContextMeta{ Env: workspace, - OriginalWorkingDir: m.OriginalWorkingDir, + OriginalWorkingDir: m.WorkingDir.OriginalWorkingDir(), } return &opts, nil diff --git a/internal/command/meta_backend_test.go b/internal/command/meta_backend_test.go index c489de08830c..23f021b8245d 100644 --- a/internal/command/meta_backend_test.go +++ b/internal/command/meta_backend_test.go @@ -1855,17 +1855,19 @@ func TestMetaBackend_configToExtra(t *testing.T) { // no config; return inmem backend stored in state func TestBackendFromState(t *testing.T) { - td := tempDir(t) - testCopyDir(t, testFixturePath("backend-from-state"), td) - defer os.RemoveAll(td) - defer testChdir(t, td)() + wd, cleanup := tempWorkingDirFixture(t, "backend-from-state") + defer cleanup() + defer testChdir(t, wd.RootModuleDir())() // Setup the meta m := testMetaBackend(t, nil) + m.WorkingDir = wd // terraform caches a small "state" file that stores the backend config. // This test must override m.dataDir so it loads the "terraform.tfstate" file in the - // test directory as the backend config cache - m.OverrideDataDir = td + // test directory as the backend config cache. This fixture is really a + // fixture for the data dir rather than the module dir, so we'll override + // them to match just for this test. + wd.OverrideDataDir(".") stateBackend, diags := m.backendFromState() if diags.HasErrors() { diff --git a/internal/command/meta_config.go b/internal/command/meta_config.go index 3c91a936494e..439df6b9130b 100644 --- a/internal/command/meta_config.go +++ b/internal/command/meta_config.go @@ -27,27 +27,8 @@ import ( // paths used to load configuration, because we want to prefer recording // relative paths in source code references within the configuration. func (m *Meta) normalizePath(path string) string { - var err error - - // First we will make it absolute so that we have a consistent place - // to start. - path, err = filepath.Abs(path) - if err != nil { - // We'll just accept what we were given, then. 
- return path - } - - cwd, err := os.Getwd() - if err != nil || !filepath.IsAbs(cwd) { - return path - } - - ret, err := filepath.Rel(cwd, path) - if err != nil { - return path - } - - return ret + m.fixupMissingWorkingDir() + return m.WorkingDir.NormalizePath(path) } // loadConfig reads a configuration from the given directory, which should diff --git a/internal/command/meta_providers.go b/internal/command/meta_providers.go index 42a3acc6a9b3..84ccc89d87a4 100644 --- a/internal/command/meta_providers.go +++ b/internal/command/meta_providers.go @@ -6,7 +6,6 @@ import ( "log" "os" "os/exec" - "path/filepath" "strings" "github.com/hashicorp/go-multierror" @@ -109,7 +108,8 @@ func (m *Meta) providerCustomLocalDirectorySource(dirs []string) getproviders.So // Only one object returned from this method should be live at any time, // because objects inside contain caches that must be maintained properly. func (m *Meta) providerLocalCacheDir() *providercache.Dir { - dir := filepath.Join(m.DataDir(), "providers") + m.fixupMissingWorkingDir() + dir := m.WorkingDir.ProviderLocalCacheDir() return providercache.NewDir(dir) } diff --git a/internal/command/plugins.go b/internal/command/plugins.go index dba535137ec1..7467b09db96a 100644 --- a/internal/command/plugins.go +++ b/internal/command/plugins.go @@ -1,11 +1,8 @@ package command import ( - "encoding/json" "fmt" - "io/ioutil" "log" - "os" "os/exec" "path/filepath" "runtime" @@ -36,46 +33,21 @@ func (m *Meta) storePluginPath(pluginPath []string) error { return nil } - path := filepath.Join(m.DataDir(), PluginPathFile) + m.fixupMissingWorkingDir() // remove the plugin dir record if the path was set to an empty string if len(pluginPath) == 1 && (pluginPath[0] == "") { - err := os.Remove(path) - if !os.IsNotExist(err) { - return err - } - return nil - } - - js, err := json.MarshalIndent(pluginPath, "", " ") - if err != nil { - return err + return m.WorkingDir.SetForcedPluginDirs(nil) } - // if this fails, so will WriteFile - os.MkdirAll(m.DataDir(), 0755) - - return ioutil.WriteFile(path, js, 0644) + return m.WorkingDir.SetForcedPluginDirs(pluginPath) } // Load the user-defined plugin search path into Meta.pluginPath if the file // exists. func (m *Meta) loadPluginPath() ([]string, error) { - js, err := ioutil.ReadFile(filepath.Join(m.DataDir(), PluginPathFile)) - if os.IsNotExist(err) { - return nil, nil - } - - if err != nil { - return nil, err - } - - var pluginPath []string - if err := json.Unmarshal(js, &pluginPath); err != nil { - return nil, err - } - - return pluginPath, nil + m.fixupMissingWorkingDir() + return m.WorkingDir.ForcedPluginDirs() } // the default location for automatically installed plugins diff --git a/internal/command/workdir/dir.go b/internal/command/workdir/dir.go new file mode 100644 index 000000000000..1af5b8ed0cce --- /dev/null +++ b/internal/command/workdir/dir.go @@ -0,0 +1,149 @@ +package workdir + +import ( + "fmt" + "os" + "path/filepath" +) + +// Dir represents a single Terraform working directory. +// +// "Working directory" is unfortunately a slight misnomer, because non-default +// options can potentially stretch the definition such that multiple working +// directories end up appearing to share a data directory, or other similar +// anomolies, but we continue to use this terminology both for historical +// reasons and because it reflects the common case without any special +// overrides. 
+// +// The naming convention for methods on this type is that methods whose names +// begin with "Override" affect only characteristics of the particular object +// they're called on, changing where it looks for data, while methods whose +// names begin with "Set" will write settings to disk such that other instances +// referring to the same directories will also see them. Given that, the +// "Override" methods should be used only during the initialization steps +// for a Dir object, typically only inside "package main", so that all +// subsequent work elsewhere will access consistent locations on disk. +// +// We're gradually transitioning to using this type to manage working directory +// settings, and so not everything in the working directory "data dir" is +// encapsulated here yet, but hopefully we'll gradually migrate all of those +// settings here over time. The working directory state not yet managed in here +// is typically managed directly in the "command" package, either directly +// inside commands or in methods of the giant command.Meta type. +type Dir struct { + // mainDir is the path to the directory that we present as the + // "working directory" in the user model, which is typically the + // current working directory when running Terraform CLI, or the + // directory explicitly chosen by the user using the -chdir=... + // global option. + mainDir string + + // originalDir is the path to the working directory that was + // selected when creating the Terraform CLI process, regardless of + // -chdir=... being set. This is only for very limited purposes + // related to backward compatibility; most functionality should + // use mainDir instead. + originalDir string + + // dataDir is the path to the directory where we will store our + // working directory settings and artifacts. This is typically a + // directory named ".terraform" within mainDir, but users may + // override it. + dataDir string +} + +// NewDir constructs a new working directory, anchored at the given path. +// +// In normal use, mainPath should be "." to reflect the current working +// directory, with "package main" having switched the process's current +// working directory if necessary prior to calling this function. However, +// unusual situations in tests may set mainPath to a temporary directory, or +// similar. +// +// WARNING: Although the logic in this package is intended to work regardless +// of whether mainPath is actually the current working directory, we're +// currently in a transitional state where this package shares responsibility +// for the working directory with various command.Meta methods, and those +// often assume that the main path of the working directory will always be +// ".". If you're writing test code that spans across both areas of +// responsibility then you must ensure that the test temporarily changes the +// test process's working directory to the directory returned by RootModuleDir +// before using the result inside a command.Meta. +func NewDir(mainPath string) *Dir { + mainPath = filepath.Clean(mainPath) + return &Dir{ + mainDir: mainPath, + originalDir: mainPath, + dataDir: filepath.Join(mainPath, ".terraform"), + } +} + +// OverrideOriginalWorkingDir records a different path as the +// "original working directory" for the reciever. +// +// Use this only to record the original working directory when Terraform is run +// with the -chdir=... global option. In that case, the directory given in +// -chdir=... 
is the "main path" to pass in to NewDir, while the original +// working directory should be sent to this method. +func (d *Dir) OverrideOriginalWorkingDir(originalPath string) { + d.originalDir = filepath.Clean(originalPath) +} + +// OverrideDataDir chooses a specific alternative directory to read and write +// the persistent working directory settings. +// +// "package main" can call this if it detects that the user has overridden +// the default location by setting the relevant environment variable. Don't +// call this when that environment variable isn't set, in order to preserve +// the default setting of a dot-prefixed directory directly inside the main +// working directory. +func (d *Dir) OverrideDataDir(dataDir string) { + d.dataDir = filepath.Clean(dataDir) +} + +// RootModuleDir returns the directory where we expect to find the root module +// configuration for this working directory. +func (d *Dir) RootModuleDir() string { + // The root module configuration is just directly inside the main directory. + return d.mainDir +} + +// OriginalWorkingDir returns the true, operating-system-originated working +// directory that the current Terraform process was launched from. +// +// This is usually the same as the main working directory, but differs in the +// special case where the user ran Terraform with the global -chdir=... +// option. This is here only for a few backward compatibility affordances +// from before we had the -chdir=... option, so should typically not be used +// for anything new. +func (d *Dir) OriginalWorkingDir() string { + return d.originalDir +} + +// DataDir returns the base path where the reciever keeps all of the settings +// and artifacts that must persist between consecutive commands in a single +// session. +// +// This is exported only to allow the legacy behaviors in command.Meta to +// continue accessing this directory directly. Over time we should replace +// all of those direct accesses with methods on this type, and then remove +// this method. Avoid using this method for new use-cases. +func (d *Dir) DataDir() string { + return d.dataDir +} + +// ensureDataDir creates the data directory and all of the necessary parent +// directories that lead to it, if they don't already exist. +// +// For directories that already exist ensureDataDir will preserve their +// permissions, while it'll create any new directories to be owned by the user +// running Terraform, readable and writable by that user, and readable by +// all other users, or some approximation of that on non-Unix platforms which +// have a different permissions model. +func (d *Dir) ensureDataDir() error { + err := os.MkdirAll(d.dataDir, 0755) + if err != nil { + return fmt.Errorf("failed to prepare working directory: %w", err) + } + return nil +} diff --git a/internal/command/workdir/doc.go b/internal/command/workdir/doc.go new file mode 100644 index 000000000000..d645e4f09d20 --- /dev/null +++ b/internal/command/workdir/doc.go @@ -0,0 +1,16 @@ +// Package workdir models the various local artifacts and state we keep inside +// a Terraform "working directory". +// +// The working directory artifacts and settings are typically initialized or +// modified by "terraform init", after which they persist for use by other +// commands in the same directory, but are not visible to commands run in +// other working directories or on other computers. 
+// +// Although "terraform init" is the main command which modifies a workdir, +// other commands do sometimes make more focused modifications for settings +// which can typically change multiple times during a session, such as the +// currently-selected workspace name. Any command which modifies the working +// directory settings must discard and reload any objects which derived from +// those settings, because otherwise the existing objects will often continue +// to follow the settings that were present when they were created. +package workdir diff --git a/internal/command/workdir/normalize_path.go b/internal/command/workdir/normalize_path.go new file mode 100644 index 000000000000..4bef076882e4 --- /dev/null +++ b/internal/command/workdir/normalize_path.go @@ -0,0 +1,52 @@ +package workdir + +import ( + "path/filepath" +) + +// NormalizePath attempts to transform the given path so that it's relative +// to the working directory, which is our preferred way to present and store +// paths to files and directories within a configuration so that they can +// be portable to operations in other working directories. +// +// It isn't always possible to produce a relative path. For example, on Windows +// the given path might be on a different volume (e.g. drive letter or network +// share) than the working directory. +// +// Note that the result will be relative to the main directory of the receiver, +// which should always be the actual process working directory in normal code, +// but might be some other temporary working directory when in test code. +// If you need to access the file or directory that the result refers to with +// functions that aren't aware of our base directory, you can use something +// like the following, which again should be needed only in test code which +// might need to inspect the filesystem in order to make assertions: +// +// filepath.Join(d.RootModuleDir(), normalizePathResult) +// +// The above is suitable only for situations where the given path is known +// to be beneath the working directory, which is the typical situation for +// temporary working directories created for automated tests. +func (d *Dir) NormalizePath(given string) string { + // We need an absolute version of d.mainDir in order for our "Rel" + // result to be reliable. + absMain, err := filepath.Abs(d.mainDir) + if err != nil { + // Weird, but okay... + return filepath.Clean(given) + } + + if !filepath.IsAbs(given) { + given = filepath.Join(absMain, given) + } + + ret, err := filepath.Rel(absMain, given) + if err != nil { + // It's not always possible to find a relative path. For example, + // the given path might be on an entirely separate volume + // (e.g. drive letter or network share) on a Windows system, which + // always requires an absolute path. + return filepath.Clean(given) + } + + return ret +} diff --git a/internal/command/workdir/plugin_dirs.go b/internal/command/workdir/plugin_dirs.go new file mode 100644 index 000000000000..017b0ffc16da --- /dev/null +++ b/internal/command/workdir/plugin_dirs.go @@ -0,0 +1,83 @@ +package workdir + +import ( + "encoding/json" + "io/ioutil" + "os" + "path/filepath" +) + +const PluginPathFilename = "plugin_path" + +// ProviderLocalCacheDir returns the directory we'll use as the +// working-directory-specific local cache of providers. +// +// The provider installer's job is to make sure that all providers needed for +// a particular working directory are available in this cache directory. 
No +// other component may write here, and in particular a Dir object itself +// never reads or writes into this directory, instead just delegating all of +// that responsibility to other components. +// +// Typically, the caller will ultimately pass the result of this method either +// directly or indirectly into providercache.NewDir, to get an object +// responsible for managing the contents. +func (d *Dir) ProviderLocalCacheDir() string { + return filepath.Join(d.dataDir, "providers") +} + +// ForcedPluginDirs returns a list of directories to use to find plugins, +// instead of the default locations. +// +// Returns an zero-length list and no error in the normal case where there +// are no overridden search directories. If ForcedPluginDirs returns a +// non-empty list with no errors then the result totally replaces the default +// search directories. +func (d *Dir) ForcedPluginDirs() ([]string, error) { + raw, err := ioutil.ReadFile(filepath.Join(d.dataDir, PluginPathFilename)) + if os.IsNotExist(err) { + return nil, nil + } + + if err != nil { + return nil, err + } + + var pluginPath []string + if err := json.Unmarshal(raw, &pluginPath); err != nil { + return nil, err + } + return pluginPath, nil +} + +// SetForcedPluginDirs records an overridden list of directories to search +// to find plugins, instead of the default locations. See ForcePluginDirs +// for more information. +// +// Pass a zero-length list to deactivate forced plugin directories altogether, +// thus allowing the working directory to return to using the default +// search directories. +func (d *Dir) SetForcedPluginDirs(dirs []string) error { + + filePath := filepath.Join(d.dataDir, PluginPathFilename) + switch { + case len(dirs) == 0: + err := os.Remove(filePath) + if !os.IsNotExist(err) { + return err + } + return nil + default: + // We'll ignore errors from this one, because if we fail to create + // the directory then we'll fail to create the file below too, + // and that subsequent error will more directly reflect what we + // are trying to do here. + d.ensureDataDir() + + raw, err := json.MarshalIndent(dirs, "", " ") + if err != nil { + return err + } + + return ioutil.WriteFile(filePath, raw, 0644) + } +} diff --git a/internal/command/workdir/plugin_dirs_test.go b/internal/command/workdir/plugin_dirs_test.go new file mode 100644 index 000000000000..5ed224ab3f36 --- /dev/null +++ b/internal/command/workdir/plugin_dirs_test.go @@ -0,0 +1,60 @@ +package workdir + +import ( + "os" + "path/filepath" + "testing" + + "github.com/google/go-cmp/cmp" +) + +func TestDirForcedPluginDirs(t *testing.T) { + tmpDir, err := os.MkdirTemp("", "terraform-workdir-") + if err != nil { + t.Fatal(err) + } + defer os.RemoveAll(tmpDir) + + dir := NewDir(tmpDir) + // We'll use the default convention of a data dir nested inside the + // working directory, so we don't need to override anything on "dir". 
+ + want := []string(nil) + got, err := dir.ForcedPluginDirs() + if err != nil { + t.Fatal(err) + } + if diff := cmp.Diff(want, got); diff != "" { + t.Errorf("wrong initial settings\n%s", diff) + } + + fakeDir1 := filepath.Join(tmpDir, "boop1") + fakeDir2 := filepath.Join(tmpDir, "boop2") + err = dir.SetForcedPluginDirs([]string{fakeDir1, fakeDir2}) + if err != nil { + t.Fatal(err) + } + + want = []string{fakeDir1, fakeDir2} + got, err = dir.ForcedPluginDirs() + if err != nil { + t.Fatal(err) + } + if diff := cmp.Diff(want, got); diff != "" { + t.Errorf("wrong updated settings\n%s", diff) + } + + err = dir.SetForcedPluginDirs(nil) + if err != nil { + t.Fatal(err) + } + + want = nil + got, err = dir.ForcedPluginDirs() + if err != nil { + t.Fatal(err) + } + if diff := cmp.Diff(want, got); diff != "" { + t.Errorf("wrong final settings, after reverting back to defaults\n%s", diff) + } +} diff --git a/working_dir.go b/working_dir.go new file mode 100644 index 000000000000..6d9945c0c5f5 --- /dev/null +++ b/working_dir.go @@ -0,0 +1,12 @@ +package main + +import "github.com/hashicorp/terraform/internal/command/workdir" + +func WorkingDir(originalDir string, overrideDataDir string) *workdir.Dir { + ret := workdir.NewDir(".") // caller should already have used os.Chdir in "-chdir=..." mode + ret.OverrideOriginalWorkingDir(originalDir) + if overrideDataDir != "" { + ret.OverrideDataDir(overrideDataDir) + } + return ret +} From c8e2be76d2572282d4bde28afdaeb03fdfba8764 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Dra=C5=A1ko=20Radovanovi=C4=87?= Date: Sat, 11 Sep 2021 15:33:14 +0200 Subject: [PATCH 055/644] Fix a documentation typo --- website/docs/language/expressions/function-calls.html.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/website/docs/language/expressions/function-calls.html.md b/website/docs/language/expressions/function-calls.html.md index b55d6a16db02..f01a6563ca9c 100644 --- a/website/docs/language/expressions/function-calls.html.md +++ b/website/docs/language/expressions/function-calls.html.md @@ -98,8 +98,8 @@ dynamically on disk as part of the plan or apply steps. The `timestamp` function returns a representation of the current system time at the point when Terraform calls it, and the `uuid` function returns a random -result which differs on each call. Without any special behavior these would -would both cause the final configuration during the apply step not to match the +result which differs on each call. Without any special behavior, these would +both cause the final configuration during the apply step not to match the actions shown in the plan, which violates the Terraform execution model. For that reason, Terraform arranges for both of those functions to produce From af4f4540a95fcb03144023531c6144bc20900c2c Mon Sep 17 00:00:00 2001 From: James Bardin Date: Fri, 10 Sep 2021 09:27:28 -0400 Subject: [PATCH 056/644] configschema: do not expose optional attributes Objects with optional attributes are only used for the decoding of HCL, and those types should never be exposed elsewhere within terraform. Separate the external ImpliedType method from the cty.Type generated internally for the decoder spec. This unfortunately causes our ImpliedType method to return a different type than the hcldec.ImpliedType function, but the former is only used within terraform for concrete values, while the latter is used to decode HCL. 
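As a rough illustration only (not part of the change itself), here is a minimal sketch using a made-up schema with one nested optional attribute. After this change the decoder-facing type keeps the optional-attribute metadata while the exported ImpliedType strips it, so the two results are expected to no longer compare equal:

    package main

    import (
        "fmt"

        "github.com/hashicorp/hcl/v2/hcldec"
        "github.com/zclconf/go-cty/cty"

        "github.com/hashicorp/terraform/internal/configs/configschema"
    )

    func main() {
        // A made-up schema with one attribute whose nested object type has
        // an optional attribute "b".
        block := &configschema.Block{
            Attributes: map[string]*configschema.Attribute{
                "thing": {
                    Optional: true,
                    NestedType: &configschema.Object{
                        Nesting: configschema.NestingSingle,
                        Attributes: map[string]*configschema.Attribute{
                            "a": {Type: cty.String, Required: true},
                            "b": {Type: cty.Number, Optional: true},
                        },
                    },
                },
            },
        }

        // Type used by hcldec to decode configuration: retains the
        // optional-attribute metadata.
        decodeTy := hcldec.ImpliedType(block.DecoderSpec())

        // Type exposed to the rest of Terraform: optional attributes stripped.
        exposedTy := block.ImpliedType()

        // Expected to print false after this change, since only decodeTy
        // still carries the optional-attribute metadata.
        fmt.Println(decodeTy.Equals(exposedTy))
    }
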
Renaming the ImpliedType methods could be done to further differentiate them, but that does cause fairly large diff in the codebase that does not seem worth the effort at this time. --- internal/configs/configschema/decoder_spec.go | 6 +- internal/configs/configschema/implied_type.go | 24 ++- .../configs/configschema/implied_type_test.go | 195 ++++++++++++++++-- 3 files changed, 207 insertions(+), 18 deletions(-) diff --git a/internal/configs/configschema/decoder_spec.go b/internal/configs/configschema/decoder_spec.go index 19bea49170c7..d127ccd6f4ae 100644 --- a/internal/configs/configschema/decoder_spec.go +++ b/internal/configs/configschema/decoder_spec.go @@ -121,7 +121,7 @@ func (b *Block) DecoderSpec() hcldec.Spec { // implied type more complete, but if there are any // dynamically-typed attributes inside we must use a tuple // instead, at the expense of our type then not being predictable. - if blockS.Block.ImpliedType().HasDynamicTypes() { + if blockS.Block.specType().HasDynamicTypes() { ret[name] = &hcldec.BlockTupleSpec{ TypeName: name, Nested: childSpec, @@ -155,7 +155,7 @@ func (b *Block) DecoderSpec() hcldec.Spec { // implied type more complete, but if there are any // dynamically-typed attributes inside we must use a tuple // instead, at the expense of our type then not being predictable. - if blockS.Block.ImpliedType().HasDynamicTypes() { + if blockS.Block.specType().HasDynamicTypes() { ret[name] = &hcldec.BlockObjectSpec{ TypeName: name, Nested: childSpec, @@ -195,7 +195,7 @@ func (a *Attribute) decoderSpec(name string) hcldec.Spec { panic("Invalid attribute schema: NestedType and Type cannot both be set. This is a bug in the provider.") } - ty := a.NestedType.ImpliedType() + ty := a.NestedType.specType() ret.Type = ty ret.Required = a.Required || a.NestedType.MinItems > 0 return ret diff --git a/internal/configs/configschema/implied_type.go b/internal/configs/configschema/implied_type.go index 58b9951101c4..1a19a8c07a28 100644 --- a/internal/configs/configschema/implied_type.go +++ b/internal/configs/configschema/implied_type.go @@ -8,11 +8,23 @@ import ( // ImpliedType returns the cty.Type that would result from decoding a // configuration block using the receiving block schema. // +// The type returned from Block.ImpliedType differs from the type returned by +// hcldec.ImpliedType in that there will be no objects with optional +// attributes, since this value is not to be used for the decoding of +// configuration. +// // ImpliedType always returns a result, even if the given schema is // inconsistent. Code that creates configschema.Block objects should be // tested using the InternalValidate method to detect any inconsistencies // that would cause this method to fall back on defaults and assumptions. func (b *Block) ImpliedType() cty.Type { + return b.specType().WithoutOptionalAttributesDeep() +} + +// specType returns the cty.Type used for decoding a configuration +// block using the receiving block schema. This is the type used internally by +// hcldec to decode configuration. +func (b *Block) specType() cty.Type { if b == nil { return cty.EmptyObject } @@ -41,14 +53,20 @@ func (b *Block) ContainsSensitive() bool { return false } -// ImpliedType returns the cty.Type that would result from decoding a NestedType -// Attribute using the receiving block schema. +// ImpliedType returns the cty.Type that would result from decoding a +// NestedType Attribute using the receiving block schema. 
// // ImpliedType always returns a result, even if the given schema is // inconsistent. Code that creates configschema.Object objects should be tested // using the InternalValidate method to detect any inconsistencies that would // cause this method to fall back on defaults and assumptions. func (o *Object) ImpliedType() cty.Type { + return o.specType().WithoutOptionalAttributesDeep() +} + +// specType returns the cty.Type used for decoding a NestedType Attribute using +// the receiving block schema. +func (o *Object) specType() cty.Type { if o == nil { return cty.EmptyObject } @@ -56,7 +74,7 @@ func (o *Object) ImpliedType() cty.Type { attrTys := make(map[string]cty.Type, len(o.Attributes)) for name, attrS := range o.Attributes { if attrS.NestedType != nil { - attrTys[name] = attrS.NestedType.ImpliedType() + attrTys[name] = attrS.NestedType.specType() } else { attrTys[name] = attrS.Type } diff --git a/internal/configs/configschema/implied_type_test.go b/internal/configs/configschema/implied_type_test.go index d36239615c48..0fd8b01b5772 100644 --- a/internal/configs/configschema/implied_type_test.go +++ b/internal/configs/configschema/implied_type_test.go @@ -112,6 +112,36 @@ func TestBlockImpliedType(t *testing.T) { }), }), }, + "nested objects with optional attrs": { + &Block{ + Attributes: map[string]*Attribute{ + "map": { + Optional: true, + NestedType: &Object{ + Nesting: NestingMap, + Attributes: map[string]*Attribute{ + "optional": {Type: cty.String, Optional: true}, + "required": {Type: cty.Number, Required: true}, + "computed": {Type: cty.List(cty.Bool), Computed: true}, + "optional_computed": {Type: cty.Map(cty.Bool), Optional: true, Computed: true}, + }, + }, + }, + }, + }, + // The ImpliedType from the type-level block should not contain any + // optional attributes. 
+ cty.Object(map[string]cty.Type{ + "map": cty.Map(cty.Object( + map[string]cty.Type{ + "optional": cty.String, + "required": cty.Number, + "computed": cty.List(cty.Bool), + "optional_computed": cty.Map(cty.Bool), + }, + )), + }), + }, } for name, test := range tests { @@ -147,14 +177,13 @@ func TestObjectImpliedType(t *testing.T) { "optional_computed": {Type: cty.Map(cty.Bool), Optional: true, Computed: true}, }, }, - cty.ObjectWithOptionalAttrs( + cty.Object( map[string]cty.Type{ "optional": cty.String, "required": cty.Number, "computed": cty.List(cty.Bool), "optional_computed": cty.Map(cty.Bool), }, - []string{"optional", "computed", "optional_computed"}, ), }, "nested attributes": { @@ -175,14 +204,14 @@ func TestObjectImpliedType(t *testing.T) { }, }, }, - cty.ObjectWithOptionalAttrs(map[string]cty.Type{ - "nested_type": cty.ObjectWithOptionalAttrs(map[string]cty.Type{ + cty.Object(map[string]cty.Type{ + "nested_type": cty.Object(map[string]cty.Type{ "optional": cty.String, "required": cty.Number, "computed": cty.List(cty.Bool), "optional_computed": cty.Map(cty.Bool), - }, []string{"optional", "computed", "optional_computed"}), - }, []string{"nested_type"}), + }), + }), }, "nested object-type attributes": { &Object{ @@ -208,15 +237,15 @@ func TestObjectImpliedType(t *testing.T) { }, }, }, - cty.ObjectWithOptionalAttrs(map[string]cty.Type{ - "nested_type": cty.ObjectWithOptionalAttrs(map[string]cty.Type{ + cty.Object(map[string]cty.Type{ + "nested_type": cty.Object(map[string]cty.Type{ "optional": cty.String, "required": cty.Number, "computed": cty.List(cty.Bool), "optional_computed": cty.Map(cty.Bool), - "object": cty.ObjectWithOptionalAttrs(map[string]cty.Type{"optional": cty.String, "required": cty.Number}, []string{"optional"}), - }, []string{"optional", "computed", "optional_computed"}), - }, []string{"nested_type"}), + "object": cty.Object(map[string]cty.Type{"optional": cty.String, "required": cty.Number}), + }), + }), }, "NestingList": { &Object{ @@ -225,7 +254,7 @@ func TestObjectImpliedType(t *testing.T) { "foo": {Type: cty.String, Optional: true}, }, }, - cty.List(cty.ObjectWithOptionalAttrs(map[string]cty.Type{"foo": cty.String}, []string{"foo"})), + cty.List(cty.Object(map[string]cty.Type{"foo": cty.String})), }, "NestingMap": { &Object{ @@ -336,3 +365,145 @@ func TestObjectContainsSensitive(t *testing.T) { } } + +// Nested attribute should return optional object attributes for decoding. 
+func TestObjectSpecType(t *testing.T) { + tests := map[string]struct { + Schema *Object + Want cty.Type + }{ + "attributes": { + &Object{ + Nesting: NestingSingle, + Attributes: map[string]*Attribute{ + "optional": {Type: cty.String, Optional: true}, + "required": {Type: cty.Number, Required: true}, + "computed": {Type: cty.List(cty.Bool), Computed: true}, + "optional_computed": {Type: cty.Map(cty.Bool), Optional: true, Computed: true}, + }, + }, + cty.ObjectWithOptionalAttrs( + map[string]cty.Type{ + "optional": cty.String, + "required": cty.Number, + "computed": cty.List(cty.Bool), + "optional_computed": cty.Map(cty.Bool), + }, + []string{"optional", "computed", "optional_computed"}, + ), + }, + "nested attributes": { + &Object{ + Nesting: NestingSingle, + Attributes: map[string]*Attribute{ + "nested_type": { + NestedType: &Object{ + Nesting: NestingSingle, + Attributes: map[string]*Attribute{ + "optional": {Type: cty.String, Optional: true}, + "required": {Type: cty.Number, Required: true}, + "computed": {Type: cty.List(cty.Bool), Computed: true}, + "optional_computed": {Type: cty.Map(cty.Bool), Optional: true, Computed: true}, + }, + }, + Optional: true, + }, + }, + }, + cty.ObjectWithOptionalAttrs(map[string]cty.Type{ + "nested_type": cty.ObjectWithOptionalAttrs(map[string]cty.Type{ + "optional": cty.String, + "required": cty.Number, + "computed": cty.List(cty.Bool), + "optional_computed": cty.Map(cty.Bool), + }, []string{"optional", "computed", "optional_computed"}), + }, []string{"nested_type"}), + }, + "nested object-type attributes": { + &Object{ + Nesting: NestingSingle, + Attributes: map[string]*Attribute{ + "nested_type": { + NestedType: &Object{ + Nesting: NestingSingle, + Attributes: map[string]*Attribute{ + "optional": {Type: cty.String, Optional: true}, + "required": {Type: cty.Number, Required: true}, + "computed": {Type: cty.List(cty.Bool), Computed: true}, + "optional_computed": {Type: cty.Map(cty.Bool), Optional: true, Computed: true}, + "object": { + Type: cty.ObjectWithOptionalAttrs(map[string]cty.Type{ + "optional": cty.String, + "required": cty.Number, + }, []string{"optional"}), + }, + }, + }, + Optional: true, + }, + }, + }, + cty.ObjectWithOptionalAttrs(map[string]cty.Type{ + "nested_type": cty.ObjectWithOptionalAttrs(map[string]cty.Type{ + "optional": cty.String, + "required": cty.Number, + "computed": cty.List(cty.Bool), + "optional_computed": cty.Map(cty.Bool), + "object": cty.ObjectWithOptionalAttrs(map[string]cty.Type{"optional": cty.String, "required": cty.Number}, []string{"optional"}), + }, []string{"optional", "computed", "optional_computed"}), + }, []string{"nested_type"}), + }, + "NestingList": { + &Object{ + Nesting: NestingList, + Attributes: map[string]*Attribute{ + "foo": {Type: cty.String, Optional: true}, + }, + }, + cty.List(cty.ObjectWithOptionalAttrs(map[string]cty.Type{"foo": cty.String}, []string{"foo"})), + }, + "NestingMap": { + &Object{ + Nesting: NestingMap, + Attributes: map[string]*Attribute{ + "foo": {Type: cty.String}, + }, + }, + cty.Map(cty.Object(map[string]cty.Type{"foo": cty.String})), + }, + "NestingSet": { + &Object{ + Nesting: NestingSet, + Attributes: map[string]*Attribute{ + "foo": {Type: cty.String}, + }, + }, + cty.Set(cty.Object(map[string]cty.Type{"foo": cty.String})), + }, + "deeply nested NestingList": { + &Object{ + Nesting: NestingList, + Attributes: map[string]*Attribute{ + "foo": { + NestedType: &Object{ + Nesting: NestingList, + Attributes: map[string]*Attribute{ + "bar": {Type: cty.String}, + }, + }, + }, + }, 
+ }, + cty.List(cty.Object(map[string]cty.Type{"foo": cty.List(cty.Object(map[string]cty.Type{"bar": cty.String}))})), + }, + } + + for name, test := range tests { + t.Run(name, func(t *testing.T) { + got := test.Schema.specType() + if !got.Equals(test.Want) { + t.Errorf("wrong result\ngot: %#v\nwant: %#v", got, test.Want) + } + }) + } +} From 43123d284e45327c5cf686dc2e477e0a919d0333 Mon Sep 17 00:00:00 2001 From: James Bardin Date: Fri, 10 Sep 2021 09:38:59 -0400 Subject: [PATCH 057/644] optional attrs should not be in ProposedNew --- internal/plans/objchange/objchange_test.go | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/internal/plans/objchange/objchange_test.go b/internal/plans/objchange/objchange_test.go index 7facea9e5378..77147989b885 100644 --- a/internal/plans/objchange/objchange_test.go +++ b/internal/plans/objchange/objchange_test.go @@ -1452,12 +1452,12 @@ func TestProposedNew(t *testing.T) { "map": cty.NullVal(cty.Map(cty.Object(map[string]cty.Type{"bar": cty.String}))), "set": cty.NullVal(cty.Set(cty.Object(map[string]cty.Type{"bar": cty.String}))), "nested_map": cty.NullVal(cty.Map(cty.Object(map[string]cty.Type{ - "inner": cty.ObjectWithOptionalAttrs(map[string]cty.Type{ + "inner": cty.Object(map[string]cty.Type{ "optional": cty.String, "computed": cty.String, "optional_computed": cty.String, "required": cty.String, - }, []string{"computed", "optional", "optional_computed"}), + }), }))), }), }, From 53a73a8ab67bfff3d3b5e78359c808dd88979a4d Mon Sep 17 00:00:00 2001 From: James Bardin Date: Fri, 10 Sep 2021 10:58:44 -0400 Subject: [PATCH 058/644] configs: add ConstraintType to config.Variable In order to handle optional attributes, the Variable type needs to keep track of the type constraint for decoding and conversion, as well as the concrete type for creating values and type comparison. Since the Type field is referenced throughout the codebase, and for future refactoring if the handling of optional attributes changes significantly, the constraint is now loaded into an entirely new field called ConstraintType. This prevents types containing ObjectWithOptionalAttrs from escaping the decode/conversion codepaths into the rest of the codebase. 
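For illustration only, a minimal go-cty sketch of the intended split, using a hypothetical constraint equivalent to object({ a = string, b = optional(number) }): incoming values are converted against the constraint type, so the optional attribute may be omitted, while placeholder values are built from the concrete type:

    package main

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
        "github.com/zclconf/go-cty/cty/convert"
    )

    func main() {
        // Hypothetical constraint with one optional attribute ("b").
        constraint := cty.ObjectWithOptionalAttrs(map[string]cty.Type{
            "a": cty.String,
            "b": cty.Number,
        }, []string{"b"})

        // Concrete type used everywhere outside decoding and conversion.
        concrete := constraint.WithoutOptionalAttributesDeep()

        // A caller may omit the optional attribute; conversion against the
        // constraint fills it in as null.
        given := cty.ObjectVal(map[string]cty.Value{"a": cty.StringVal("hi")})
        val, err := convert.Convert(given, constraint)
        if err != nil {
            panic(err)
        }
        fmt.Println(val.GetAttr("b").IsNull()) // true

        // Placeholders (e.g. after a failed conversion) are built from the
        // concrete type, so ObjectWithOptionalAttrs never leaks out of the
        // decode/convert code path.
        placeholder := cty.UnknownVal(concrete)
        fmt.Println(placeholder.Type().Equals(constraint)) // false: no optional-attr metadata
    }
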
--- internal/backend/unparsed_value_test.go | 23 +++++++++++-------- internal/configs/experiments.go | 2 +- internal/configs/module_merge.go | 3 ++- internal/configs/module_merge_test.go | 2 ++ internal/configs/named_values.go | 16 +++++++++---- internal/terraform/evaluate.go | 11 ++++++++- internal/terraform/evaluate_test.go | 16 ++++++++----- internal/terraform/node_module_variable.go | 5 ++-- .../terraform/node_module_variable_test.go | 17 ++++++++++---- internal/terraform/node_root_variable_test.go | 5 +++- internal/terraform/variables.go | 4 +--- 11 files changed, 70 insertions(+), 34 deletions(-) diff --git a/internal/backend/unparsed_value_test.go b/internal/backend/unparsed_value_test.go index 7d392e0cebad..6df7c226a47b 100644 --- a/internal/backend/unparsed_value_test.go +++ b/internal/backend/unparsed_value_test.go @@ -24,9 +24,10 @@ func TestParseVariableValuesUndeclared(t *testing.T) { } decls := map[string]*configs.Variable{ "declared1": { - Name: "declared1", - Type: cty.String, - ParsingMode: configs.VariableParseLiteral, + Name: "declared1", + Type: cty.String, + ConstraintType: cty.String, + ParsingMode: configs.VariableParseLiteral, DeclRange: hcl.Range{ Filename: "fake.tf", Start: hcl.Pos{Line: 2, Column: 1, Byte: 0}, @@ -34,9 +35,10 @@ func TestParseVariableValuesUndeclared(t *testing.T) { }, }, "missing1": { - Name: "missing1", - Type: cty.String, - ParsingMode: configs.VariableParseLiteral, + Name: "missing1", + Type: cty.String, + ConstraintType: cty.String, + ParsingMode: configs.VariableParseLiteral, DeclRange: hcl.Range{ Filename: "fake.tf", Start: hcl.Pos{Line: 3, Column: 1, Byte: 0}, @@ -44,10 +46,11 @@ func TestParseVariableValuesUndeclared(t *testing.T) { }, }, "missing2": { - Name: "missing1", - Type: cty.String, - ParsingMode: configs.VariableParseLiteral, - Default: cty.StringVal("default for missing2"), + Name: "missing1", + Type: cty.String, + ConstraintType: cty.String, + ParsingMode: configs.VariableParseLiteral, + Default: cty.StringVal("default for missing2"), DeclRange: hcl.Range{ Filename: "fake.tf", Start: hcl.Pos{Line: 4, Column: 1, Byte: 0}, diff --git a/internal/configs/experiments.go b/internal/configs/experiments.go index 8a7e7cb667d1..4b8f10c412f1 100644 --- a/internal/configs/experiments.go +++ b/internal/configs/experiments.go @@ -198,7 +198,7 @@ func checkModuleExperiments(m *Module) hcl.Diagnostics { if !m.ActiveExperiments.Has(experiments.ModuleVariableOptionalAttrs) { for _, v := range m.Variables { - if typeConstraintHasOptionalAttrs(v.Type) { + if typeConstraintHasOptionalAttrs(v.ConstraintType) { diags = diags.Append(&hcl.Diagnostic{ Severity: hcl.DiagError, Summary: "Optional object type attributes are experimental", diff --git a/internal/configs/module_merge.go b/internal/configs/module_merge.go index d9b21cacce3a..014d2329cd3b 100644 --- a/internal/configs/module_merge.go +++ b/internal/configs/module_merge.go @@ -51,6 +51,7 @@ func (v *Variable) merge(ov *Variable) hcl.Diagnostics { } if ov.Type != cty.NilType { v.Type = ov.Type + v.ConstraintType = ov.ConstraintType } if ov.ParsingMode != 0 { v.ParsingMode = ov.ParsingMode @@ -67,7 +68,7 @@ func (v *Variable) merge(ov *Variable) hcl.Diagnostics { // constraint but the converted value cannot. In practice, this situation // should be rare since most of our conversions are interchangable. 
if v.Default != cty.NilVal { - val, err := convert.Convert(v.Default, v.Type) + val, err := convert.Convert(v.Default, v.ConstraintType) if err != nil { // What exactly we'll say in the error message here depends on whether // it was Default or Type that was overridden here. diff --git a/internal/configs/module_merge_test.go b/internal/configs/module_merge_test.go index c7db323163bc..0a57e06bc560 100644 --- a/internal/configs/module_merge_test.go +++ b/internal/configs/module_merge_test.go @@ -25,6 +25,7 @@ func TestModuleOverrideVariable(t *testing.T) { DescriptionSet: true, Default: cty.StringVal("b_override"), Type: cty.String, + ConstraintType: cty.String, ParsingMode: VariableParseLiteral, DeclRange: hcl.Range{ Filename: "testdata/valid-modules/override-variable/primary.tf", @@ -46,6 +47,7 @@ func TestModuleOverrideVariable(t *testing.T) { DescriptionSet: true, Default: cty.StringVal("b_override partial"), Type: cty.String, + ConstraintType: cty.String, ParsingMode: VariableParseLiteral, DeclRange: hcl.Range{ Filename: "testdata/valid-modules/override-variable/primary.tf", diff --git a/internal/configs/named_values.go b/internal/configs/named_values.go index 40c45685f052..21abd33c2253 100644 --- a/internal/configs/named_values.go +++ b/internal/configs/named_values.go @@ -22,7 +22,13 @@ type Variable struct { Name string Description string Default cty.Value - Type cty.Type + + // Type is the concrete type of the variable value. + Type cty.Type + // ConstraintType is used for decoding and type conversions, and may + // contain nested ObjectWithOptionalAttr types. + ConstraintType cty.Type + ParsingMode VariableParsingMode Validations []*VariableValidation Sensitive bool @@ -45,6 +51,7 @@ func decodeVariableBlock(block *hcl.Block, override bool) (*Variable, hcl.Diagno // or not they are set when we merge. if !override { v.Type = cty.DynamicPseudoType + v.ConstraintType = cty.DynamicPseudoType v.ParsingMode = VariableParseLiteral } @@ -92,7 +99,8 @@ func decodeVariableBlock(block *hcl.Block, override bool) (*Variable, hcl.Diagno if attr, exists := content.Attributes["type"]; exists { ty, parseMode, tyDiags := decodeVariableType(attr.Expr) diags = append(diags, tyDiags...) - v.Type = ty + v.ConstraintType = ty + v.Type = ty.WithoutOptionalAttributesDeep() v.ParsingMode = parseMode } @@ -112,9 +120,9 @@ func decodeVariableBlock(block *hcl.Block, override bool) (*Variable, hcl.Diagno // attribute above. // However, we can't do this if we're in an override file where // the type might not be set; we'll catch that during merge. - if v.Type != cty.NilType { + if v.ConstraintType != cty.NilType { var err error - val, err = convert.Convert(val, v.Type) + val, err = convert.Convert(val, v.ConstraintType) if err != nil { diags = append(diags, &hcl.Diagnostic{ Severity: hcl.DiagError, diff --git a/internal/terraform/evaluate.go b/internal/terraform/evaluate.go index efcc9c1f418a..b7dbe68f0a10 100644 --- a/internal/terraform/evaluate.go +++ b/internal/terraform/evaluate.go @@ -238,7 +238,16 @@ func (d *evaluationStateData) GetInputVariable(addr addrs.InputVariable, rng tfd return cty.DynamicVal, diags } + // wantType is the concrete value type to be returned. wantType := cty.DynamicPseudoType + + // converstionType is the type used for conversion, which may include + // optional attributes. 
+ conversionType := cty.DynamicPseudoType + + if config.ConstraintType != cty.NilType { + conversionType = config.ConstraintType + } if config.Type != cty.NilType { wantType = config.Type } @@ -282,7 +291,7 @@ func (d *evaluationStateData) GetInputVariable(addr addrs.InputVariable, rng tfd } var err error - val, err = convert.Convert(val, wantType) + val, err = convert.Convert(val, conversionType) if err != nil { // We should never get here because this problem should've been caught // during earlier validation, but we'll do something reasonable anyway. diff --git a/internal/terraform/evaluate_test.go b/internal/terraform/evaluate_test.go index f8a46d4fce1e..922f916c9eb4 100644 --- a/internal/terraform/evaluate_test.go +++ b/internal/terraform/evaluate_test.go @@ -95,15 +95,19 @@ func TestEvaluatorGetInputVariable(t *testing.T) { Module: &configs.Module{ Variables: map[string]*configs.Variable{ "some_var": { - Name: "some_var", - Sensitive: true, - Default: cty.StringVal("foo"), + Name: "some_var", + Sensitive: true, + Default: cty.StringVal("foo"), + Type: cty.String, + ConstraintType: cty.String, }, // Avoid double marking a value "some_other_var": { - Name: "some_other_var", - Sensitive: true, - Default: cty.StringVal("bar"), + Name: "some_other_var", + Sensitive: true, + Default: cty.StringVal("bar"), + Type: cty.String, + ConstraintType: cty.String, }, }, }, diff --git a/internal/terraform/node_module_variable.go b/internal/terraform/node_module_variable.go index 38ac62ac0596..3487f6d08807 100644 --- a/internal/terraform/node_module_variable.go +++ b/internal/terraform/node_module_variable.go @@ -200,7 +200,6 @@ func (n *nodeModuleVariable) DotNode(name string, opts *dag.DotOpts) *dag.DotNod // validation, and we will not have any expansion module instance // repetition data. func (n *nodeModuleVariable) evalModuleCallArgument(ctx EvalContext, validateOnly bool) (map[string]cty.Value, error) { - wantType := n.Config.Type name := n.Addr.Variable.Name expr := n.Expr @@ -238,7 +237,7 @@ func (n *nodeModuleVariable) evalModuleCallArgument(ctx EvalContext, validateOnl // now we can do our own local type conversion and produce an error message // with better context if it fails. var convErr error - val, convErr = convert.Convert(val, wantType) + val, convErr = convert.Convert(val, n.Config.ConstraintType) if convErr != nil { diags = diags.Append(&hcl.Diagnostic{ Severity: hcl.DiagError, @@ -251,7 +250,7 @@ func (n *nodeModuleVariable) evalModuleCallArgument(ctx EvalContext, validateOnl }) // We'll return a placeholder unknown value to avoid producing // redundant downstream errors. 
- val = cty.UnknownVal(wantType) + val = cty.UnknownVal(n.Config.Type) } vals := make(map[string]cty.Value) diff --git a/internal/terraform/node_module_variable_test.go b/internal/terraform/node_module_variable_test.go index f060d3ea2868..e2b458cdbbbe 100644 --- a/internal/terraform/node_module_variable_test.go +++ b/internal/terraform/node_module_variable_test.go @@ -7,6 +7,7 @@ import ( "github.com/go-test/deep" "github.com/hashicorp/hcl/v2" "github.com/hashicorp/hcl/v2/hclsyntax" + "github.com/zclconf/go-cty/cty" "github.com/hashicorp/terraform/internal/addrs" "github.com/hashicorp/terraform/internal/configs" @@ -16,7 +17,9 @@ func TestNodeModuleVariablePath(t *testing.T) { n := &nodeModuleVariable{ Addr: addrs.RootModuleInstance.InputVariable("foo"), Config: &configs.Variable{ - Name: "foo", + Name: "foo", + Type: cty.String, + ConstraintType: cty.String, }, } @@ -31,7 +34,9 @@ func TestNodeModuleVariableReferenceableName(t *testing.T) { n := &nodeExpandModuleVariable{ Addr: addrs.InputVariable{Name: "foo"}, Config: &configs.Variable{ - Name: "foo", + Name: "foo", + Type: cty.String, + ConstraintType: cty.String, }, } @@ -64,7 +69,9 @@ func TestNodeModuleVariableReference(t *testing.T) { Addr: addrs.InputVariable{Name: "foo"}, Module: addrs.RootModule.Child("bar"), Config: &configs.Variable{ - Name: "foo", + Name: "foo", + Type: cty.String, + ConstraintType: cty.String, }, Expr: &hclsyntax.ScopeTraversalExpr{ Traversal: hcl.Traversal{ @@ -90,7 +97,9 @@ func TestNodeModuleVariableReference_grandchild(t *testing.T) { Addr: addrs.InputVariable{Name: "foo"}, Module: addrs.RootModule.Child("bar"), Config: &configs.Variable{ - Name: "foo", + Name: "foo", + Type: cty.String, + ConstraintType: cty.String, }, Expr: &hclsyntax.ScopeTraversalExpr{ Traversal: hcl.Traversal{ diff --git a/internal/terraform/node_root_variable_test.go b/internal/terraform/node_root_variable_test.go index 7a94f4b951ac..bd3d9c2d65c4 100644 --- a/internal/terraform/node_root_variable_test.go +++ b/internal/terraform/node_root_variable_test.go @@ -5,6 +5,7 @@ import ( "github.com/hashicorp/terraform/internal/addrs" "github.com/hashicorp/terraform/internal/configs" + "github.com/zclconf/go-cty/cty" ) func TestNodeRootVariableExecute(t *testing.T) { @@ -13,7 +14,9 @@ func TestNodeRootVariableExecute(t *testing.T) { n := &NodeRootVariable{ Addr: addrs.InputVariable{Name: "foo"}, Config: &configs.Variable{ - Name: "foo", + Name: "foo", + Type: cty.String, + ConstraintType: cty.String, }, } diff --git a/internal/terraform/variables.go b/internal/terraform/variables.go index fca392802587..7a6ace0eee5c 100644 --- a/internal/terraform/variables.go +++ b/internal/terraform/variables.go @@ -262,10 +262,8 @@ func checkInputVariables(vcs map[string]*configs.Variable, vs InputValues) tfdia continue } - wantType := vc.Type - // A given value is valid if it can convert to the desired type. 
- _, err := convert.Convert(val.Value, wantType) + _, err := convert.Convert(val.Value, vc.ConstraintType) if err != nil { switch val.SourceType { case ValueFromConfig, ValueFromAutoFile, ValueFromNamedFile: From 7f26531d4fa7c064e5cca345d6574e0ee9b76400 Mon Sep 17 00:00:00 2001 From: James Bardin Date: Fri, 10 Sep 2021 11:11:06 -0400 Subject: [PATCH 059/644] variable types should always be populated --- internal/terraform/evaluate.go | 27 ++++++--------------------- 1 file changed, 6 insertions(+), 21 deletions(-) diff --git a/internal/terraform/evaluate.go b/internal/terraform/evaluate.go index b7dbe68f0a10..62dbb791d7d4 100644 --- a/internal/terraform/evaluate.go +++ b/internal/terraform/evaluate.go @@ -237,21 +237,6 @@ func (d *evaluationStateData) GetInputVariable(addr addrs.InputVariable, rng tfd }) return cty.DynamicVal, diags } - - // wantType is the concrete value type to be returned. - wantType := cty.DynamicPseudoType - - // converstionType is the type used for conversion, which may include - // optional attributes. - conversionType := cty.DynamicPseudoType - - if config.ConstraintType != cty.NilType { - conversionType = config.ConstraintType - } - if config.Type != cty.NilType { - wantType = config.Type - } - d.Evaluator.VariableValuesLock.Lock() defer d.Evaluator.VariableValuesLock.Unlock() @@ -271,15 +256,15 @@ func (d *evaluationStateData) GetInputVariable(addr addrs.InputVariable, rng tfd if d.Operation == walkValidate { // Ensure variable sensitivity is captured in the validate walk if config.Sensitive { - return cty.UnknownVal(wantType).Mark(marks.Sensitive), diags + return cty.UnknownVal(config.Type).Mark(marks.Sensitive), diags } - return cty.UnknownVal(wantType), diags + return cty.UnknownVal(config.Type), diags } moduleAddrStr := d.ModulePath.String() vals := d.Evaluator.VariableValues[moduleAddrStr] if vals == nil { - return cty.UnknownVal(wantType), diags + return cty.UnknownVal(config.Type), diags } val, isSet := vals[addr.Name] @@ -287,11 +272,11 @@ func (d *evaluationStateData) GetInputVariable(addr addrs.InputVariable, rng tfd if config.Default != cty.NilVal { return config.Default, diags } - return cty.UnknownVal(wantType), diags + return cty.UnknownVal(config.Type), diags } var err error - val, err = convert.Convert(val, conversionType) + val, err = convert.Convert(val, config.ConstraintType) if err != nil { // We should never get here because this problem should've been caught // during earlier validation, but we'll do something reasonable anyway. @@ -303,7 +288,7 @@ func (d *evaluationStateData) GetInputVariable(addr addrs.InputVariable, rng tfd }) // Stub out our return value so that the semantic checker doesn't // produce redundant downstream errors. - val = cty.UnknownVal(wantType) + val = cty.UnknownVal(config.Type) } // Mark if sensitive From d0993b0e80d31acf6a4e8c1bb14c3080b63daa08 Mon Sep 17 00:00:00 2001 From: James Bardin Date: Mon, 13 Sep 2021 13:22:36 -0400 Subject: [PATCH 060/644] fix temp directory handling in some tests Cleanup some more test fixtures to use t.TempDir Use EvalSymlinks with temp dir paths to help with MacOS errors from various terraform components. 
--- internal/backend/local/backend_local_test.go | 2 +- internal/backend/local/backend_plan_test.go | 22 +++++++------------- internal/backend/local/testing.go | 12 +---------- internal/command/command_test.go | 21 +++++-------------- internal/command/get_test.go | 6 ++---- internal/command/meta_backend_test.go | 3 +-- 6 files changed, 18 insertions(+), 48 deletions(-) diff --git a/internal/backend/local/backend_local_test.go b/internal/backend/local/backend_local_test.go index 67314d730c5c..fae2b1ae0953 100644 --- a/internal/backend/local/backend_local_test.go +++ b/internal/backend/local/backend_local_test.go @@ -132,7 +132,7 @@ func TestLocalRun_stalePlan(t *testing.T) { stateFile := statefile.New(plan.PriorState, "boop", 2) // Roundtrip through serialization as expected by the operation - outDir := testTempDir(t) + outDir := t.TempDir() defer os.RemoveAll(outDir) planPath := filepath.Join(outDir, "plan.tfplan") if err := planfile.Create(planPath, configload.NewEmptySnapshot(), prevStateFile, stateFile, plan); err != nil { diff --git a/internal/backend/local/backend_plan_test.go b/internal/backend/local/backend_plan_test.go index 73bd78df4d79..0e7992111124 100644 --- a/internal/backend/local/backend_plan_test.go +++ b/internal/backend/local/backend_plan_test.go @@ -174,7 +174,7 @@ func TestLocal_planOutputsChanged(t *testing.T) { // unknown" situation because that's already common for printing out // resource changes and we already have many tests for that. })) - outDir := testTempDir(t) + outDir := t.TempDir() defer os.RemoveAll(outDir) planPath := filepath.Join(outDir, "plan.tfplan") op, configCleanup, done := testOperationPlan(t, "./testdata/plan-outputs-changed") @@ -232,7 +232,7 @@ func TestLocal_planModuleOutputsChanged(t *testing.T) { OutputValue: addrs.OutputValue{Name: "changed"}, }, cty.StringVal("before"), false) })) - outDir := testTempDir(t) + outDir := t.TempDir() defer os.RemoveAll(outDir) planPath := filepath.Join(outDir, "plan.tfplan") op, configCleanup, done := testOperationPlan(t, "./testdata/plan-module-outputs-changed") @@ -275,8 +275,7 @@ func TestLocal_planTainted(t *testing.T) { defer cleanup() p := TestLocalProvider(t, b, "test", planFixtureSchema()) testStateFile(t, b.StatePath, testPlanState_tainted()) - outDir := testTempDir(t) - defer os.RemoveAll(outDir) + outDir := t.TempDir() planPath := filepath.Join(outDir, "plan.tfplan") op, configCleanup, done := testOperationPlan(t, "./testdata/plan") defer configCleanup() @@ -356,8 +355,7 @@ func TestLocal_planDeposedOnly(t *testing.T) { }, ) })) - outDir := testTempDir(t) - defer os.RemoveAll(outDir) + outDir := t.TempDir() planPath := filepath.Join(outDir, "plan.tfplan") op, configCleanup, done := testOperationPlan(t, "./testdata/plan") defer configCleanup() @@ -448,8 +446,7 @@ func TestLocal_planTainted_createBeforeDestroy(t *testing.T) { defer cleanup() p := TestLocalProvider(t, b, "test", planFixtureSchema()) testStateFile(t, b.StatePath, testPlanState_tainted()) - outDir := testTempDir(t) - defer os.RemoveAll(outDir) + outDir := t.TempDir() planPath := filepath.Join(outDir, "plan.tfplan") op, configCleanup, done := testOperationPlan(t, "./testdata/plan-cbd") defer configCleanup() @@ -540,8 +537,7 @@ func TestLocal_planDestroy(t *testing.T) { TestLocalProvider(t, b, "test", planFixtureSchema()) testStateFile(t, b.StatePath, testPlanState()) - outDir := testTempDir(t) - defer os.RemoveAll(outDir) + outDir := t.TempDir() planPath := filepath.Join(outDir, "plan.tfplan") op, configCleanup, done := 
testOperationPlan(t, "./testdata/plan") @@ -594,8 +590,7 @@ func TestLocal_planDestroy_withDataSources(t *testing.T) { TestLocalProvider(t, b, "test", planFixtureSchema()) testStateFile(t, b.StatePath, testPlanState_withDataSource()) - outDir := testTempDir(t) - defer os.RemoveAll(outDir) + outDir := t.TempDir() planPath := filepath.Join(outDir, "plan.tfplan") op, configCleanup, done := testOperationPlan(t, "./testdata/destroy-with-ds") @@ -670,8 +665,7 @@ func TestLocal_planOutPathNoChange(t *testing.T) { TestLocalProvider(t, b, "test", planFixtureSchema()) testStateFile(t, b.StatePath, testPlanState()) - outDir := testTempDir(t) - defer os.RemoveAll(outDir) + outDir := t.TempDir() planPath := filepath.Join(outDir, "plan.tfplan") op, configCleanup, done := testOperationPlan(t, "./testdata/plan") diff --git a/internal/backend/local/testing.go b/internal/backend/local/testing.go index d8230403b36f..bfff7f003584 100644 --- a/internal/backend/local/testing.go +++ b/internal/backend/local/testing.go @@ -1,7 +1,6 @@ package local import ( - "io/ioutil" "os" "path/filepath" "testing" @@ -24,7 +23,7 @@ import ( // public fields without any locks. func TestLocal(t *testing.T) (*Local, func()) { t.Helper() - tempDir := testTempDir(t) + tempDir := t.TempDir() local := New() local.StatePath = filepath.Join(tempDir, "state.tfstate") @@ -189,15 +188,6 @@ func (b *TestLocalNoDefaultState) StateMgr(name string) (statemgr.Full, error) { return b.Local.StateMgr(name) } -func testTempDir(t *testing.T) string { - d, err := ioutil.TempDir("", "tf") - if err != nil { - t.Fatalf("err: %s", err) - } - - return d -} - func testStateFile(t *testing.T, path string, s *states.State) { stateFile := statemgr.NewFilesystem(path) stateFile.WriteState(s) diff --git a/internal/command/command_test.go b/internal/command/command_test.go index e182807a0cfe..70da240e86ff 100644 --- a/internal/command/command_test.go +++ b/internal/command/command_test.go @@ -147,16 +147,10 @@ func tempWorkingDir(t *testing.T) (*workdir.Dir, func() error) { // the testWorkingDir commentary for an example of how to use this function // along with testChdir to meet the expectations of command.Meta legacy // functionality. -func tempWorkingDirFixture(t *testing.T, fixtureName string) (*workdir.Dir, func() error) { +func tempWorkingDirFixture(t *testing.T, fixtureName string) *workdir.Dir { t.Helper() - dirPath, err := os.MkdirTemp("", "tf-command-test-"+fixtureName) - if err != nil { - t.Fatal(err) - } - done := func() error { - return os.RemoveAll(dirPath) - } + dirPath := testTempDir(t) t.Logf("temporary directory %s with fixture %q", dirPath, fixtureName) fixturePath := testFixturePath(fixtureName) @@ -165,7 +159,7 @@ func tempWorkingDirFixture(t *testing.T, fixtureName string) (*workdir.Dir, func // on failure, a failure to copy will prevent us from cleaning up the // temporary directory. Oh well. 
:( - return workdir.NewDir(dirPath), done + return workdir.NewDir(dirPath) } func testFixturePath(name string) string { @@ -550,13 +544,8 @@ func testTempFile(t *testing.T) string { func testTempDir(t *testing.T) string { t.Helper() - - d, err := ioutil.TempDir(testingDir, "tf") - if err != nil { - t.Fatalf("err: %s", err) - } - - d, err = filepath.EvalSymlinks(d) + d := t.TempDir() + d, err := filepath.EvalSymlinks(d) if err != nil { t.Fatal(err) } diff --git a/internal/command/get_test.go b/internal/command/get_test.go index 2e9f04611ad1..b2a3ea0a75ac 100644 --- a/internal/command/get_test.go +++ b/internal/command/get_test.go @@ -8,8 +8,7 @@ import ( ) func TestGet(t *testing.T) { - wd, cleanup := tempWorkingDirFixture(t, "get") - defer cleanup() + wd := tempWorkingDirFixture(t, "get") defer testChdir(t, wd.RootModuleDir())() ui := cli.NewMockUi() @@ -56,8 +55,7 @@ func TestGet_multipleArgs(t *testing.T) { } func TestGet_update(t *testing.T) { - wd, cleanup := tempWorkingDirFixture(t, "get") - defer cleanup() + wd := tempWorkingDirFixture(t, "get") defer testChdir(t, wd.RootModuleDir())() ui := cli.NewMockUi() diff --git a/internal/command/meta_backend_test.go b/internal/command/meta_backend_test.go index 23f021b8245d..82dbb0355a4b 100644 --- a/internal/command/meta_backend_test.go +++ b/internal/command/meta_backend_test.go @@ -1855,8 +1855,7 @@ func TestMetaBackend_configToExtra(t *testing.T) { // no config; return inmem backend stored in state func TestBackendFromState(t *testing.T) { - wd, cleanup := tempWorkingDirFixture(t, "backend-from-state") - defer cleanup() + wd := tempWorkingDirFixture(t, "backend-from-state") defer testChdir(t, wd.RootModuleDir())() // Setup the meta From 172200808b8e6910e6bfe51010cb3aed04d76e6b Mon Sep 17 00:00:00 2001 From: Chris Arcand Date: Mon, 13 Sep 2021 14:21:26 -0500 Subject: [PATCH 061/644] Correct terraform.env deprecation message typo --- internal/terraform/evaluate.go | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/internal/terraform/evaluate.go b/internal/terraform/evaluate.go index 6fad6192eb31..5e746b6d1c5e 100644 --- a/internal/terraform/evaluate.go +++ b/internal/terraform/evaluate.go @@ -911,7 +911,7 @@ func (d *evaluationStateData) GetTerraformAttr(addr addrs.TerraformAttr, rng tfd diags = diags.Append(&hcl.Diagnostic{ Severity: hcl.DiagError, Summary: `Invalid "terraform" attribute`, - Detail: `The terraform.env attribute was deprecated in v0.10 and removed in v0.12. The "state environment" concept was rename to "workspace" in v0.12, and so the workspace name can now be accessed using the terraform.workspace attribute.`, + Detail: `The terraform.env attribute was deprecated in v0.10 and removed in v0.12. The "state environment" concept was renamed to "workspace" in v0.12, and so the workspace name can now be accessed using the terraform.workspace attribute.`, Subject: rng.ToHCL().Ptr(), }) return cty.DynamicVal, diags From 718fa3895f7364f75d0418595f15a31bb7b99ef7 Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Tue, 14 Sep 2021 09:43:00 -0700 Subject: [PATCH 062/644] backend: Remove Operation.Parallelism field The presence of this field was confusing because in practice the local backend doesn't use it for anything and the remote backend was using it only to return an error if it's set to anything other than the default, under the assumption that it would always match ContextOpts.Parallelism. 
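In other words, ContextOpts is now the single home for this setting. As a condensed sketch of the resulting read/write path, lifted from the updated call sites further down (so b, terraform.ContextOpts, and defaultParallelism are the existing names in this diff, not new API):

    // Callers that need a non-default parallelism write it in exactly one
    // place, the backend's ContextOpts:
    if b.ContextOpts == nil {
        b.ContextOpts = &terraform.ContextOpts{}
    }
    b.ContextOpts.Parallelism = 3

    // ...and the remote backend reads it from that same place when deciding
    // whether to reject a custom parallelism value:
    if b.ContextOpts != nil && b.ContextOpts.Parallelism != defaultParallelism {
        // report "Custom parallelism values are currently not supported"
    }
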
The "command" package is the one actually responsible for handling this option, and it does so by placing it into the partial ContextOpts which it passes into the backend when preparing for a local operation. To make that clearer, here we remove Operation.Parallelism and change the few uses of it to refer to ContextOpts.Parallelism instead, so that everyone is reading and writing this value from the same place. --- internal/backend/backend.go | 1 - internal/backend/remote/backend_apply.go | 2 +- internal/backend/remote/backend_apply_test.go | 6 ++++-- internal/backend/remote/backend_plan.go | 2 +- internal/backend/remote/backend_plan_test.go | 6 ++++-- internal/command/meta_backend.go | 1 - 6 files changed, 10 insertions(+), 8 deletions(-) diff --git a/internal/backend/backend.go b/internal/backend/backend.go index caac42cc6731..d8b28b9435f4 100644 --- a/internal/backend/backend.go +++ b/internal/backend/backend.go @@ -249,7 +249,6 @@ type Operation struct { // behavior of the operation. PlanMode plans.Mode AutoApprove bool - Parallelism int Targets []addrs.Targetable ForceReplace []addrs.AbsResourceInstance Variables map[string]UnparsedVariableValue diff --git a/internal/backend/remote/backend_apply.go b/internal/backend/remote/backend_apply.go index 2ec123d30afa..ef89466a235b 100644 --- a/internal/backend/remote/backend_apply.go +++ b/internal/backend/remote/backend_apply.go @@ -42,7 +42,7 @@ func (b *Remote) opApply(stopCtx, cancelCtx context.Context, op *backend.Operati return nil, diags.Err() } - if op.Parallelism != defaultParallelism { + if b.ContextOpts != nil && b.ContextOpts.Parallelism != defaultParallelism { diags = diags.Append(tfdiags.Sourceless( tfdiags.Error, "Custom parallelism values are currently not supported", diff --git a/internal/backend/remote/backend_apply_test.go b/internal/backend/remote/backend_apply_test.go index d54914559717..4bc2a909f288 100644 --- a/internal/backend/remote/backend_apply_test.go +++ b/internal/backend/remote/backend_apply_test.go @@ -46,7 +46,6 @@ func testOperationApplyWithTimeout(t *testing.T, configDir string, timeout time. 
return &backend.Operation{ ConfigDir: configDir, ConfigLoader: configLoader, - Parallelism: defaultParallelism, PlanRefresh: true, StateLocker: clistate.NewLocker(timeout, stateLockerView), Type: backend.OperationTypeApply, @@ -223,7 +222,10 @@ func TestRemote_applyWithParallelism(t *testing.T) { op, configCleanup, done := testOperationApply(t, "./testdata/apply") defer configCleanup() - op.Parallelism = 3 + if b.ContextOpts == nil { + b.ContextOpts = &terraform.ContextOpts{} + } + b.ContextOpts.Parallelism = 3 op.Workspace = backend.DefaultStateName run, err := b.Operation(context.Background(), op) diff --git a/internal/backend/remote/backend_plan.go b/internal/backend/remote/backend_plan.go index 82f33ec2dc3b..736c040b4d8b 100644 --- a/internal/backend/remote/backend_plan.go +++ b/internal/backend/remote/backend_plan.go @@ -38,7 +38,7 @@ func (b *Remote) opPlan(stopCtx, cancelCtx context.Context, op *backend.Operatio return nil, diags.Err() } - if op.Parallelism != defaultParallelism { + if b.ContextOpts != nil && b.ContextOpts.Parallelism != defaultParallelism { diags = diags.Append(tfdiags.Sourceless( tfdiags.Error, "Custom parallelism values are currently not supported", diff --git a/internal/backend/remote/backend_plan_test.go b/internal/backend/remote/backend_plan_test.go index a231f149ecfa..6d4ced7b87a7 100644 --- a/internal/backend/remote/backend_plan_test.go +++ b/internal/backend/remote/backend_plan_test.go @@ -44,7 +44,6 @@ func testOperationPlanWithTimeout(t *testing.T, configDir string, timeout time.D return &backend.Operation{ ConfigDir: configDir, ConfigLoader: configLoader, - Parallelism: defaultParallelism, PlanRefresh: true, StateLocker: clistate.NewLocker(timeout, stateLockerView), Type: backend.OperationTypePlan, @@ -198,7 +197,10 @@ func TestRemote_planWithParallelism(t *testing.T) { op, configCleanup, done := testOperationPlan(t, "./testdata/plan") defer configCleanup() - op.Parallelism = 3 + if b.ContextOpts == nil { + b.ContextOpts = &terraform.ContextOpts{} + } + b.ContextOpts.Parallelism = 3 op.Workspace = backend.DefaultStateName run, err := b.Operation(context.Background(), op) diff --git a/internal/command/meta_backend.go b/internal/command/meta_backend.go index 71ebfbe26224..82ecd89c7850 100644 --- a/internal/command/meta_backend.go +++ b/internal/command/meta_backend.go @@ -349,7 +349,6 @@ func (m *Meta) Operation(b backend.Backend) *backend.Operation { return &backend.Operation{ PlanOutBackend: planOutBackend, - Parallelism: m.parallelism, Targets: m.targets, UIIn: m.UIInput(), UIOut: m.Ui, From 332ea1f233891e45bd274e714ab8e4e11c79dc95 Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Tue, 14 Sep 2021 09:47:24 -0700 Subject: [PATCH 063/644] backend/local: TestLocal_plan_context_error to fail terraform.NewContext The original intent of this test was to verify that we properly release the state lock if terraform.NewContext fails. This was in response to a bug in an earlier version of Terraform where that wasn't true. In the recent refactoring that made terraform.NewContext no longer responsible for provider constraint/checksum verification, this test began testing a failed plan operation instead, which left the error return path from terraform.NewContext untested. An invalid parallelism value is the one remaining case where terraform.NewContext can return an error, so as a localized fix for this test I've switched it to just intentionally set an invalid parallelism value. 
This is still not ideal because it's still testing an implementation detail, but I've at least left a comment inline to try to be clearer about what the goal is here so that we can respond in a more appropriate way if future changes cause this test to fail again. In the long run I'd like to move this last remaining check out to be the responsibility of the CLI layer, with terraform.NewContext either just assuming the value correct or panicking when it isn't, but the handling of this CLI option is currently rather awkwardly spread across the command and backend packages so we'll save that refactoring for a later date. --- internal/backend/local/backend_plan_test.go | 19 +++++++++++++++++-- 1 file changed, 17 insertions(+), 2 deletions(-) diff --git a/internal/backend/local/backend_plan_test.go b/internal/backend/local/backend_plan_test.go index 0e7992111124..7d5b92b161bb 100644 --- a/internal/backend/local/backend_plan_test.go +++ b/internal/backend/local/backend_plan_test.go @@ -119,9 +119,24 @@ func TestLocal_plan_context_error(t *testing.T) { b, cleanup := TestLocal(t) defer cleanup() + // This is an intentionally-invalid value to make terraform.NewContext fail + // when b.Operation calls it. + // NOTE: This test was originally using a provider initialization failure + // as its forced error condition, but terraform.NewContext is no longer + // responsible for checking that. Invalid parallelism is the last situation + // where terraform.NewContext can return error diagnostics, and arguably + // we should be validating this argument at the UI layer anyway, so perhaps + // in future we'll make terraform.NewContext never return errors and then + // this test will become redundant, because its purpose is specifically + // to test that we properly unlock the state if terraform.NewContext + // returns an error. + if b.ContextOpts == nil { + b.ContextOpts = &terraform.ContextOpts{} + } + b.ContextOpts.Parallelism = -1 + op, configCleanup, done := testOperationPlan(t, "./testdata/plan") defer configCleanup() - op.PlanRefresh = true // we coerce a failure in Context() by omitting the provider schema run, err := b.Operation(context.Background(), op) @@ -136,7 +151,7 @@ func TestLocal_plan_context_error(t *testing.T) { // the backend should be unlocked after a run assertBackendStateUnlocked(t, b) - if got, want := done(t).Stderr(), "failed to read schema for test_instance.foo in registry.terraform.io/hashicorp/test"; !strings.Contains(got, want) { + if got, want := done(t).Stderr(), "Error: Invalid parallelism value"; !strings.Contains(got, want) { t.Fatalf("unexpected error output:\n%s\nwant: %s", got, want) } } From 902987061321e6860329d121178d166c254b8f46 Mon Sep 17 00:00:00 2001 From: James Bardin Date: Tue, 14 Sep 2021 15:55:20 -0400 Subject: [PATCH 064/644] handle NestedTypes in Block.CoerceValue The CoerceValue code was not updated to handle NestedTypes, and while none of the new codepaths make use of this method, there are still some internal uses. 
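As a usage sketch of what this change supports, based on the new test case added below (the configschema package qualifiers are added here only for readability, since the real test lives inside the package, and the usual cty imports are assumed):

    schema := &configschema.Block{
        Attributes: map[string]*configschema.Attribute{
            "foo": {
                Optional: true,
                NestedType: &configschema.Object{
                    Nesting: configschema.NestingList,
                    Attributes: map[string]*configschema.Attribute{
                        "bar": {Type: cty.String, Required: true},
                        "baz": {Type: cty.Map(cty.String), Optional: true},
                    },
                },
            },
        },
    }

    // Coercing an input that omits the optional nested attribute fills it
    // in as a null map:
    got, err := schema.CoerceValue(cty.ObjectVal(map[string]cty.Value{
        "foo": cty.ListVal([]cty.Value{
            cty.ObjectVal(map[string]cty.Value{"bar": cty.StringVal("beep")}),
        }),
    }))
    // err is nil; the element of got's "foo" list has bar = "beep" and a
    // null "baz".
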
--- internal/configs/configschema/coerce_value.go | 48 +++++++-------- .../configs/configschema/coerce_value_test.go | 61 +++++++++++++++++++ 2 files changed, 84 insertions(+), 25 deletions(-) diff --git a/internal/configs/configschema/coerce_value.go b/internal/configs/configschema/coerce_value.go index 41a533745c38..66804c375200 100644 --- a/internal/configs/configschema/coerce_value.go +++ b/internal/configs/configschema/coerce_value.go @@ -27,16 +27,19 @@ func (b *Block) CoerceValue(in cty.Value) (cty.Value, error) { } func (b *Block) coerceValue(in cty.Value, path cty.Path) (cty.Value, error) { + convType := b.specType() + impliedType := convType.WithoutOptionalAttributesDeep() + switch { case in.IsNull(): - return cty.NullVal(b.ImpliedType()), nil + return cty.NullVal(impliedType), nil case !in.IsKnown(): - return cty.UnknownVal(b.ImpliedType()), nil + return cty.UnknownVal(impliedType), nil } ty := in.Type() if !ty.IsObjectType() { - return cty.UnknownVal(b.ImpliedType()), path.NewErrorf("an object is required") + return cty.UnknownVal(impliedType), path.NewErrorf("an object is required") } for name := range ty.AttributeTypes() { @@ -46,29 +49,32 @@ func (b *Block) coerceValue(in cty.Value, path cty.Path) (cty.Value, error) { if _, defined := b.BlockTypes[name]; defined { continue } - return cty.UnknownVal(b.ImpliedType()), path.NewErrorf("unexpected attribute %q", name) + return cty.UnknownVal(impliedType), path.NewErrorf("unexpected attribute %q", name) } attrs := make(map[string]cty.Value) for name, attrS := range b.Attributes { + attrType := impliedType.AttributeType(name) + attrConvType := convType.AttributeType(name) + var val cty.Value switch { case ty.HasAttribute(name): val = in.GetAttr(name) case attrS.Computed || attrS.Optional: - val = cty.NullVal(attrS.Type) + val = cty.NullVal(attrType) default: - return cty.UnknownVal(b.ImpliedType()), path.NewErrorf("attribute %q is required", name) + return cty.UnknownVal(impliedType), path.NewErrorf("attribute %q is required", name) } - val, err := attrS.coerceValue(val, append(path, cty.GetAttrStep{Name: name})) + val, err := convert.Convert(val, attrConvType) if err != nil { - return cty.UnknownVal(b.ImpliedType()), err + return cty.UnknownVal(impliedType), append(path, cty.GetAttrStep{Name: name}).NewError(err) } - attrs[name] = val } + for typeName, blockS := range b.BlockTypes { switch blockS.Nesting { @@ -79,7 +85,7 @@ func (b *Block) coerceValue(in cty.Value, path cty.Path) (cty.Value, error) { val := in.GetAttr(typeName) attrs[typeName], err = blockS.coerceValue(val, append(path, cty.GetAttrStep{Name: typeName})) if err != nil { - return cty.UnknownVal(b.ImpliedType()), err + return cty.UnknownVal(impliedType), err } default: attrs[typeName] = blockS.EmptyValue() @@ -100,7 +106,7 @@ func (b *Block) coerceValue(in cty.Value, path cty.Path) (cty.Value, error) { } if !coll.CanIterateElements() { - return cty.UnknownVal(b.ImpliedType()), path.NewErrorf("must be a list") + return cty.UnknownVal(impliedType), path.NewErrorf("must be a list") } l := coll.LengthInt() @@ -116,7 +122,7 @@ func (b *Block) coerceValue(in cty.Value, path cty.Path) (cty.Value, error) { idx, val := it.Element() val, err = blockS.coerceValue(val, append(path, cty.IndexStep{Key: idx})) if err != nil { - return cty.UnknownVal(b.ImpliedType()), err + return cty.UnknownVal(impliedType), err } elems = append(elems, val) } @@ -141,7 +147,7 @@ func (b *Block) coerceValue(in cty.Value, path cty.Path) (cty.Value, error) { } if !coll.CanIterateElements() { - return 
cty.UnknownVal(b.ImpliedType()), path.NewErrorf("must be a set") + return cty.UnknownVal(impliedType), path.NewErrorf("must be a set") } l := coll.LengthInt() @@ -157,7 +163,7 @@ func (b *Block) coerceValue(in cty.Value, path cty.Path) (cty.Value, error) { idx, val := it.Element() val, err = blockS.coerceValue(val, append(path, cty.IndexStep{Key: idx})) if err != nil { - return cty.UnknownVal(b.ImpliedType()), err + return cty.UnknownVal(impliedType), err } elems = append(elems, val) } @@ -182,7 +188,7 @@ func (b *Block) coerceValue(in cty.Value, path cty.Path) (cty.Value, error) { } if !coll.CanIterateElements() { - return cty.UnknownVal(b.ImpliedType()), path.NewErrorf("must be a map") + return cty.UnknownVal(impliedType), path.NewErrorf("must be a map") } l := coll.LengthInt() if l == 0 { @@ -196,11 +202,11 @@ func (b *Block) coerceValue(in cty.Value, path cty.Path) (cty.Value, error) { var err error key, val := it.Element() if key.Type() != cty.String || key.IsNull() || !key.IsKnown() { - return cty.UnknownVal(b.ImpliedType()), path.NewErrorf("must be a map") + return cty.UnknownVal(impliedType), path.NewErrorf("must be a map") } val, err = blockS.coerceValue(val, append(path, cty.IndexStep{Key: key})) if err != nil { - return cty.UnknownVal(b.ImpliedType()), err + return cty.UnknownVal(impliedType), err } elems[key.AsString()] = val } @@ -240,11 +246,3 @@ func (b *Block) coerceValue(in cty.Value, path cty.Path) (cty.Value, error) { return cty.ObjectVal(attrs), nil } - -func (a *Attribute) coerceValue(in cty.Value, path cty.Path) (cty.Value, error) { - val, err := convert.Convert(in, a.Type) - if err != nil { - return cty.UnknownVal(a.Type), path.NewError(err) - } - return val, nil -} diff --git a/internal/configs/configschema/coerce_value_test.go b/internal/configs/configschema/coerce_value_test.go index 3f57b174bef3..37f81b76986f 100644 --- a/internal/configs/configschema/coerce_value_test.go +++ b/internal/configs/configschema/coerce_value_test.go @@ -538,6 +538,67 @@ func TestCoerceValue(t *testing.T) { }), ``, }, + "nested types": { + // handle NestedTypes + &Block{ + Attributes: map[string]*Attribute{ + "foo": { + NestedType: &Object{ + Nesting: NestingList, + Attributes: map[string]*Attribute{ + "bar": { + Type: cty.String, + Required: true, + }, + "baz": { + Type: cty.Map(cty.String), + Optional: true, + }, + }, + }, + Optional: true, + }, + "fob": { + NestedType: &Object{ + Nesting: NestingSet, + Attributes: map[string]*Attribute{ + "bar": { + Type: cty.String, + Optional: true, + }, + }, + }, + Optional: true, + }, + }, + }, + cty.ObjectVal(map[string]cty.Value{ + "foo": cty.ListVal([]cty.Value{ + cty.ObjectVal(map[string]cty.Value{ + "bar": cty.StringVal("beep"), + }), + cty.ObjectVal(map[string]cty.Value{ + "bar": cty.StringVal("boop"), + }), + }), + }), + cty.ObjectVal(map[string]cty.Value{ + "foo": cty.ListVal([]cty.Value{ + cty.ObjectVal(map[string]cty.Value{ + "bar": cty.StringVal("beep"), + "baz": cty.NullVal(cty.Map(cty.String)), + }), + cty.ObjectVal(map[string]cty.Value{ + "bar": cty.StringVal("boop"), + "baz": cty.NullVal(cty.Map(cty.String)), + }), + }), + "fob": cty.NullVal(cty.Set(cty.Object(map[string]cty.Type{ + "bar": cty.String, + }))), + }), + ``, + }, } for name, test := range tests { From 331dc8b14ccdb75d6ed19e740803bf839c218842 Mon Sep 17 00:00:00 2001 From: James Bardin Date: Tue, 14 Sep 2021 16:36:45 -0400 Subject: [PATCH 065/644] handle empty containers in ProposedNew NestedTypes Empty containers of NestedTypes were not handled in ProposedNew, causing 
plans to be submitted with null values where there was configuration present. --- internal/plans/objchange/objchange.go | 13 ++-- internal/plans/objchange/objchange_test.go | 75 ++++++++++++++++++++++ 2 files changed, 79 insertions(+), 9 deletions(-) diff --git a/internal/plans/objchange/objchange.go b/internal/plans/objchange/objchange.go index 281b9dd13ac0..739d666480bc 100644 --- a/internal/plans/objchange/objchange.go +++ b/internal/plans/objchange/objchange.go @@ -308,7 +308,9 @@ func proposedNewAttributes(attrs map[string]*configschema.Attribute, prior, conf } func proposedNewNestedType(schema *configschema.Object, prior, config cty.Value) cty.Value { - var newV cty.Value + // If the config is null or empty, we will be using this default value. + newV := config + switch schema.Nesting { case configschema.NestingSingle: if !config.IsNull() { @@ -323,6 +325,7 @@ func proposedNewNestedType(schema *configschema.Object, prior, config cty.Value) if config.IsKnown() && !config.IsNull() { configVLen = config.LengthInt() } + if configVLen > 0 { newVals := make([]cty.Value, 0, configVLen) for it := config.ElementIterator(); it.Next(); { @@ -345,8 +348,6 @@ func proposedNewNestedType(schema *configschema.Object, prior, config cty.Value) } else { newV = cty.ListVal(newVals) } - } else { - newV = cty.NullVal(schema.ImpliedType()) } case configschema.NestingMap: @@ -378,8 +379,6 @@ func proposedNewNestedType(schema *configschema.Object, prior, config cty.Value) // object values so that elements might have different types // in case of dynamically-typed attributes. newV = cty.ObjectVal(newVals) - } else { - newV = cty.NullVal(schema.ImpliedType()) } } else { configVLen := 0 @@ -403,8 +402,6 @@ func proposedNewNestedType(schema *configschema.Object, prior, config cty.Value) newVals[k] = cty.ObjectVal(newEV) } newV = cty.MapVal(newVals) - } else { - newV = cty.NullVal(schema.ImpliedType()) } } @@ -446,8 +443,6 @@ func proposedNewNestedType(schema *configschema.Object, prior, config cty.Value) } } newV = cty.SetVal(newVals) - } else { - newV = cty.NullVal(schema.ImpliedType()) } } diff --git a/internal/plans/objchange/objchange_test.go b/internal/plans/objchange/objchange_test.go index 77147989b885..b0021fb14b20 100644 --- a/internal/plans/objchange/objchange_test.go +++ b/internal/plans/objchange/objchange_test.go @@ -1461,6 +1461,81 @@ func TestProposedNew(t *testing.T) { }))), }), }, + "expected empty NestedTypes": { + &configschema.Block{ + Attributes: map[string]*configschema.Attribute{ + "set": { + NestedType: &configschema.Object{ + Nesting: configschema.NestingSet, + Attributes: map[string]*configschema.Attribute{ + "bar": {Type: cty.String}, + }, + }, + Optional: true, + }, + "map": { + NestedType: &configschema.Object{ + Nesting: configschema.NestingMap, + Attributes: map[string]*configschema.Attribute{ + "bar": {Type: cty.String}, + }, + }, + Optional: true, + }, + }, + }, + cty.ObjectVal(map[string]cty.Value{ + "map": cty.MapValEmpty(cty.Object(map[string]cty.Type{"bar": cty.String})), + "set": cty.SetValEmpty(cty.Object(map[string]cty.Type{"bar": cty.String})), + }), + cty.ObjectVal(map[string]cty.Value{ + "map": cty.MapValEmpty(cty.Object(map[string]cty.Type{"bar": cty.String})), + "set": cty.SetValEmpty(cty.Object(map[string]cty.Type{"bar": cty.String})), + }), + cty.ObjectVal(map[string]cty.Value{ + "map": cty.MapValEmpty(cty.Object(map[string]cty.Type{"bar": cty.String})), + "set": cty.SetValEmpty(cty.Object(map[string]cty.Type{"bar": cty.String})), + }), + }, + "optional types set 
replacement": { + &configschema.Block{ + Attributes: map[string]*configschema.Attribute{ + "set": { + NestedType: &configschema.Object{ + Nesting: configschema.NestingSet, + Attributes: map[string]*configschema.Attribute{ + "bar": { + Type: cty.String, + Required: true, + }, + }, + }, + Optional: true, + }, + }, + }, + cty.ObjectVal(map[string]cty.Value{ + "set": cty.SetVal([]cty.Value{ + cty.ObjectVal(map[string]cty.Value{ + "bar": cty.StringVal("old"), + }), + }), + }), + cty.ObjectVal(map[string]cty.Value{ + "set": cty.SetVal([]cty.Value{ + cty.ObjectVal(map[string]cty.Value{ + "bar": cty.StringVal("new"), + }), + }), + }), + cty.ObjectVal(map[string]cty.Value{ + "set": cty.SetVal([]cty.Value{ + cty.ObjectVal(map[string]cty.Value{ + "bar": cty.StringVal("new"), + }), + }), + }), + }, } for name, test := range tests { From b4594551f768589d1c6a36177f3ad2f0cadf6157 Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Tue, 14 Sep 2021 15:19:28 -0700 Subject: [PATCH 066/644] refactoring: TestValidateMoves/cyclic_chain can now pass When originally filling out these test cases we didn't yet have the logic in place to detect chained moves and so this test couldn't succeed in spite of being correct. We now have chain-detection implemented and so consequently we can also detect cyclic chains. This commit largely just enables the original test unchanged, although it does include the text of the final error message for reporting cyclic move chains which wasn't yet finalized when we were stubbing out this test case originally. --- internal/refactoring/move_validate_test.go | 48 +++++++++++----------- 1 file changed, 24 insertions(+), 24 deletions(-) diff --git a/internal/refactoring/move_validate_test.go b/internal/refactoring/move_validate_test.go index 5ab83d3aa283..7ec8ab2f9e28 100644 --- a/internal/refactoring/move_validate_test.go +++ b/internal/refactoring/move_validate_test.go @@ -129,31 +129,31 @@ func TestValidateMoves(t *testing.T) { }, WantError: `Redundant move statement: This statement declares a move from module.a to the same address, which is the same as not declaring this move at all.`, }, - /* - // TODO: This test can't pass until we've implemented - // addrs.MoveEndpointInModule.CanChainFrom, which is what - // detects the chaining condition this is testing for. 
- "cyclic chain": { - Statements: []MoveStatement{ - makeTestMoveStmt(t, - ``, - `module.a`, - `module.b`, - ), - makeTestMoveStmt(t, - ``, - `module.b`, - `module.c`, - ), - makeTestMoveStmt(t, - ``, - `module.c`, - `module.a`, - ), - }, - WantError: `bad cycle`, + "cyclic chain": { + Statements: []MoveStatement{ + makeTestMoveStmt(t, + ``, + `module.a`, + `module.b`, + ), + makeTestMoveStmt(t, + ``, + `module.b`, + `module.c`, + ), + makeTestMoveStmt(t, + ``, + `module.c`, + `module.a`, + ), }, - */ + WantError: `Cyclic dependency in move statements: The following chained move statements form a cycle, and so there is no final location to move objects to: + - test:1,1: module.a[*] → module.b[*] + - test:1,1: module.b[*] → module.c[*] + - test:1,1: module.c[*] → module.a[*] + +A chain of move statements must end with an address that doesn't appear in any other statements, and which typically also refers to an object still declared in the configuration.`, + }, "module.single as a call still exists in configuration": { Statements: []MoveStatement{ makeTestMoveStmt(t, From e6a76d8ba0694a1068e5664f32a6911b3bfb3e65 Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Wed, 15 Sep 2021 15:54:16 -0700 Subject: [PATCH 067/644] core: Fail if a moved resource instance is excluded by -target Because "moved" blocks produce changes that span across more than one resource instance address at the same time, we need to take extra care with them during planning. The -target option allows for restricting Terraform's attention only to a subset of resources when planning, as an escape hatch to recover from bugs and mistakes. However, we need to avoid any situation where only one "side" of a move would be considered in a particular plan, because that'd create a new situation that would be otherwise unreachable and would be difficult to recover from. As a compromise then, we'll reject an attempt to create a targeted plan if the plan involves resolving a pending move and if the source address of that move is not included in the targets. Our error message offers the user two possible resolutions: to create an untargeted plan, thus allowing everything to resolve, or to add additional -target options to include just the existing resource instances that have pending moves to resolve. This compromise recognizes that it is possible -- though hopefully rare -- that a user could potentially both be recovering from a bug or mistake at the same time as processing a move, if e.g. the bug was fixed by upgrading a module and the new version includes a new "moved" block. In that edge case, it might be necessary to just add the one additional address to the targets rather than removing the targets altogether, if creating a normal untargeted plan is impossible due to whatever bug they're trying to recover from. 
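The check itself is small. As a condensed sketch of the rule enforced by the new prePlanVerifyTargetedMoves function in this diff (moveResults and targets are the same values used there):

    // Every pending move must have both of its endpoints covered by the
    // -target selection, otherwise planning is refused:
    for _, result := range moveResults {
        fromTargeted, toTargeted := false, false
        for _, targetAddr := range targets {
            if targetAddr.TargetContains(result.From) {
                fromTargeted = true
            }
            if targetAddr.TargetContains(result.To) {
                toTargeted = true
            }
        }
        if !fromTargeted || !toTargeted {
            // report "Moved resource instances excluded by targeting" and
            // list the additional -target options that would make the plan
            // valid again
        }
    }
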
--- internal/terraform/context_plan.go | 92 +++++++++-- internal/terraform/context_plan2_test.go | 199 +++++++++++++++++++++++ 2 files changed, 277 insertions(+), 14 deletions(-) diff --git a/internal/terraform/context_plan.go b/internal/terraform/context_plan.go index 248673254cf0..7d1353c6a5df 100644 --- a/internal/terraform/context_plan.go +++ b/internal/terraform/context_plan.go @@ -3,6 +3,8 @@ package terraform import ( "fmt" "log" + "sort" + "strings" "github.com/zclconf/go-cty/cty" @@ -113,7 +115,7 @@ func (c *Context) Plan(config *configs.Config, prevRunState *states.State, opts tfdiags.Warning, "Resource targeting is in effect", `You are creating a plan with the -target option, which means that the result of this plan may not represent all of the changes requested by the current configuration. - + The -target option is not for routine use, and is provided only for exceptional situations such as recovering from errors or mistakes, or when Terraform specifically suggests to use it as part of an error message.`, )) } @@ -297,23 +299,75 @@ func (c *Context) destroyPlan(config *configs.Config, prevRunState *states.State func (c *Context) prePlanFindAndApplyMoves(config *configs.Config, prevRunState *states.State, targets []addrs.Targetable) ([]refactoring.MoveStatement, map[addrs.UniqueKey]refactoring.MoveResult) { moveStmts := refactoring.FindMoveStatements(config) moveResults := refactoring.ApplyMoves(moveStmts, prevRunState) - if len(targets) > 0 { - for _, result := range moveResults { - matchesTarget := false - for _, targetAddr := range targets { - if targetAddr.TargetContains(result.From) { - matchesTarget = true - break - } + return moveStmts, moveResults +} + +func (c *Context) prePlanVerifyTargetedMoves(moveResults map[addrs.UniqueKey]refactoring.MoveResult, targets []addrs.Targetable) tfdiags.Diagnostics { + if len(targets) < 1 { + return nil // the following only matters when targeting + } + + var diags tfdiags.Diagnostics + + var excluded []addrs.AbsResourceInstance + for _, result := range moveResults { + fromMatchesTarget := false + toMatchesTarget := false + for _, targetAddr := range targets { + if targetAddr.TargetContains(result.From) { + fromMatchesTarget = true } - //lint:ignore SA9003 TODO - if !matchesTarget { - // TODO: Return an error stating that a targeted plan is - // only valid if it includes this address that was moved. + if targetAddr.TargetContains(result.To) { + toMatchesTarget = true } } + if !fromMatchesTarget { + excluded = append(excluded, result.From) + } + if !toMatchesTarget { + excluded = append(excluded, result.To) + } } - return moveStmts, moveResults + if len(excluded) > 0 { + sort.Slice(excluded, func(i, j int) bool { + return excluded[i].Less(excluded[j]) + }) + + var listBuf strings.Builder + var prevResourceAddr addrs.AbsResource + for _, instAddr := range excluded { + // Targeting generally ends up selecting whole resources rather + // than individual instances, because we don't factor in + // individual instances until DynamicExpand, so we're going to + // always show whole resource addresses here, excluding any + // instance keys. (This also neatly avoids dealing with the + // different quoting styles required for string instance keys + // on different shells, which is handy.) 
+ // + // To avoid showing duplicates when we have multiple instances + // of the same resource, we'll remember the most recent + // resource we rendered in prevResource, which is sufficient + // because we sorted the list of instance addresses above, and + // our sort order always groups together instances of the same + // resource. + resourceAddr := instAddr.ContainingResource() + if resourceAddr.Equal(prevResourceAddr) { + continue + } + fmt.Fprintf(&listBuf, "\n -target=%q", resourceAddr.String()) + prevResourceAddr = resourceAddr + } + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Moved resource instances excluded by targeting", + fmt.Sprintf( + "Resource instances in your current state have moved to new addresses in the latest configuration. Terraform must include those resource instances while planning in order to ensure a correct result, but your -target=... options to not fully cover all of those resource instances.\n\nTo create a valid plan, either remove your -target=... options altogether or add the following additional target options:%s\n\nNote that adding these options may include further additional resource instances in your plan, in order to respect object dependencies.", + listBuf.String(), + ), + )) + } + + return diags } func (c *Context) postPlanValidateMoves(config *configs.Config, stmts []refactoring.MoveStatement, allInsts instances.Set) tfdiags.Diagnostics { @@ -327,6 +381,16 @@ func (c *Context) planWalk(config *configs.Config, prevRunState *states.State, r prevRunState = prevRunState.DeepCopy() // don't modify the caller's object when we process the moves moveStmts, moveResults := c.prePlanFindAndApplyMoves(config, prevRunState, opts.Targets) + // If resource targeting is in effect then it might conflict with the + // move result. + diags = diags.Append(c.prePlanVerifyTargetedMoves(moveResults, opts.Targets)) + if diags.HasErrors() { + // We'll return early here, because if we have any moved resource + // instances excluded by targeting then planning is likely to encounter + // strange problems that may lead to confusing error messages. 
+ return nil, diags + } + graph, walkOp, moreDiags := c.planGraph(config, prevRunState, opts, true) diags = diags.Append(moreDiags) if diags.HasErrors() { diff --git a/internal/terraform/context_plan2_test.go b/internal/terraform/context_plan2_test.go index 53053fedadc1..a1c4e9feb94b 100644 --- a/internal/terraform/context_plan2_test.go +++ b/internal/terraform/context_plan2_test.go @@ -7,12 +7,14 @@ import ( "testing" "github.com/davecgh/go-spew/spew" + "github.com/google/go-cmp/cmp" "github.com/hashicorp/terraform/internal/addrs" "github.com/hashicorp/terraform/internal/configs/configschema" "github.com/hashicorp/terraform/internal/lang/marks" "github.com/hashicorp/terraform/internal/plans" "github.com/hashicorp/terraform/internal/providers" "github.com/hashicorp/terraform/internal/states" + "github.com/hashicorp/terraform/internal/tfdiags" "github.com/zclconf/go-cty/cty" ) @@ -772,6 +774,203 @@ func TestContext2Plan_movedResourceBasic(t *testing.T) { }) } +func TestContext2Plan_movedResourceUntargeted(t *testing.T) { + addrA := mustResourceInstanceAddr("test_object.a") + addrB := mustResourceInstanceAddr("test_object.b") + m := testModuleInline(t, map[string]string{ + "main.tf": ` + resource "test_object" "b" { + } + + moved { + from = test_object.a + to = test_object.b + } + + terraform { + experiments = [config_driven_move] + } + `, + }) + + state := states.BuildState(func(s *states.SyncState) { + // The prior state tracks test_object.a, which we should treat as + // test_object.b because of the "moved" block in the config. + s.SetResourceInstanceCurrent(addrA, &states.ResourceInstanceObjectSrc{ + AttrsJSON: []byte(`{}`), + Status: states.ObjectReady, + }, mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`)) + }) + + p := simpleMockProvider() + ctx := testContext2(t, &ContextOpts{ + Providers: map[addrs.Provider]providers.Factory{ + addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), + }, + }) + + t.Run("without targeting instance A", func(t *testing.T) { + _, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.NormalMode, + Targets: []addrs.Targetable{ + // NOTE: addrA isn't included here, but it's pending move to addrB + // and so this plan request is invalid. + addrB, + }, + }) + diags.Sort() + + // We're semi-abusing "ForRPC" here just to get diagnostics that are + // more easily comparable than the various different diagnostics types + // tfdiags uses internally. The RPC-friendly diagnostics are also + // comparison-friendly, by discarding all of the dynamic type information. + gotDiags := diags.ForRPC() + wantDiags := tfdiags.Diagnostics{ + tfdiags.Sourceless( + tfdiags.Warning, + "Resource targeting is in effect", + `You are creating a plan with the -target option, which means that the result of this plan may not represent all of the changes requested by the current configuration. + +The -target option is not for routine use, and is provided only for exceptional situations such as recovering from errors or mistakes, or when Terraform specifically suggests to use it as part of an error message.`, + ), + tfdiags.Sourceless( + tfdiags.Error, + "Moved resource instances excluded by targeting", + `Resource instances in your current state have moved to new addresses in the latest configuration. Terraform must include those resource instances while planning in order to ensure a correct result, but your -target=... options to not fully cover all of those resource instances. + +To create a valid plan, either remove your -target=... 
options altogether or add the following additional target options: + -target="test_object.a" + +Note that adding these options may include further additional resource instances in your plan, in order to respect object dependencies.`, + ), + }.ForRPC() + + if diff := cmp.Diff(wantDiags, gotDiags); diff != "" { + t.Errorf("wrong diagnostics\n%s", diff) + } + }) + t.Run("without targeting instance B", func(t *testing.T) { + _, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.NormalMode, + Targets: []addrs.Targetable{ + addrA, + // NOTE: addrB isn't included here, but it's pending move from + // addrA and so this plan request is invalid. + }, + }) + diags.Sort() + + // We're semi-abusing "ForRPC" here just to get diagnostics that are + // more easily comparable than the various different diagnostics types + // tfdiags uses internally. The RPC-friendly diagnostics are also + // comparison-friendly, by discarding all of the dynamic type information. + gotDiags := diags.ForRPC() + wantDiags := tfdiags.Diagnostics{ + tfdiags.Sourceless( + tfdiags.Warning, + "Resource targeting is in effect", + `You are creating a plan with the -target option, which means that the result of this plan may not represent all of the changes requested by the current configuration. + +The -target option is not for routine use, and is provided only for exceptional situations such as recovering from errors or mistakes, or when Terraform specifically suggests to use it as part of an error message.`, + ), + tfdiags.Sourceless( + tfdiags.Error, + "Moved resource instances excluded by targeting", + `Resource instances in your current state have moved to new addresses in the latest configuration. Terraform must include those resource instances while planning in order to ensure a correct result, but your -target=... options to not fully cover all of those resource instances. + +To create a valid plan, either remove your -target=... options altogether or add the following additional target options: + -target="test_object.b" + +Note that adding these options may include further additional resource instances in your plan, in order to respect object dependencies.`, + ), + }.ForRPC() + + if diff := cmp.Diff(wantDiags, gotDiags); diff != "" { + t.Errorf("wrong diagnostics\n%s", diff) + } + }) + t.Run("without targeting either instance", func(t *testing.T) { + _, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.NormalMode, + Targets: []addrs.Targetable{ + mustResourceInstanceAddr("test_object.unrelated"), + // NOTE: neither addrA nor addrB are included here, but there's + // a pending move between them and so this is invalid. + }, + }) + diags.Sort() + + // We're semi-abusing "ForRPC" here just to get diagnostics that are + // more easily comparable than the various different diagnostics types + // tfdiags uses internally. The RPC-friendly diagnostics are also + // comparison-friendly, by discarding all of the dynamic type information. + gotDiags := diags.ForRPC() + wantDiags := tfdiags.Diagnostics{ + tfdiags.Sourceless( + tfdiags.Warning, + "Resource targeting is in effect", + `You are creating a plan with the -target option, which means that the result of this plan may not represent all of the changes requested by the current configuration. 
+ +The -target option is not for routine use, and is provided only for exceptional situations such as recovering from errors or mistakes, or when Terraform specifically suggests to use it as part of an error message.`, + ), + tfdiags.Sourceless( + tfdiags.Error, + "Moved resource instances excluded by targeting", + `Resource instances in your current state have moved to new addresses in the latest configuration. Terraform must include those resource instances while planning in order to ensure a correct result, but your -target=... options to not fully cover all of those resource instances. + +To create a valid plan, either remove your -target=... options altogether or add the following additional target options: + -target="test_object.a" + -target="test_object.b" + +Note that adding these options may include further additional resource instances in your plan, in order to respect object dependencies.`, + ), + }.ForRPC() + + if diff := cmp.Diff(wantDiags, gotDiags); diff != "" { + t.Errorf("wrong diagnostics\n%s", diff) + } + }) + t.Run("with both addresses in the target set", func(t *testing.T) { + // The error messages in the other subtests above suggest adding + // addresses to the set of targets. This additional test makes sure that + // following that advice actually leads to a valid result. + + _, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.NormalMode, + Targets: []addrs.Targetable{ + // This time we're including both addresses in the target, + // to get the same effect an end-user would get if following + // the advice in our error message in the other subtests. + addrA, + addrB, + }, + }) + diags.Sort() + + // We're semi-abusing "ForRPC" here just to get diagnostics that are + // more easily comparable than the various different diagnostics types + // tfdiags uses internally. The RPC-friendly diagnostics are also + // comparison-friendly, by discarding all of the dynamic type information. + gotDiags := diags.ForRPC() + wantDiags := tfdiags.Diagnostics{ + // Still get the warning about the -target option... + tfdiags.Sourceless( + tfdiags.Warning, + "Resource targeting is in effect", + `You are creating a plan with the -target option, which means that the result of this plan may not represent all of the changes requested by the current configuration. + +The -target option is not for routine use, and is provided only for exceptional situations such as recovering from errors or mistakes, or when Terraform specifically suggests to use it as part of an error message.`, + ), + // ...but now we have no error about test_object.a + }.ForRPC() + + if diff := cmp.Diff(wantDiags, gotDiags); diff != "" { + t.Errorf("wrong diagnostics\n%s", diff) + } + + }) +} + func TestContext2Plan_refreshOnlyMode(t *testing.T) { addr := mustResourceInstanceAddr("test_object.a") From bebf1ad23a674559c3257fc3ec6b55546e85dc03 Mon Sep 17 00:00:00 2001 From: Alisdair McDiarmid Date: Mon, 13 Sep 2021 16:29:15 -0400 Subject: [PATCH 068/644] core: Compute resource drift after plan walk Rather than delaying resource drift detection until it is ready to be presented, here we perform that computation after the plan walk has completed. The resulting drift is represented like planned resource changes, using a slice of ResourceInstanceChangeSrc values. 
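The comparison is deliberately simple. As a condensed excerpt of the new driftedResources method below (oldVal is decoded from the previous run's state, newVal from the refreshed state):

    if !oldVal.RawEquals(newVal) {
        // We can only detect updates and deletes as drift.
        action := plans.Update
        if newVal.IsNull() {
            action = plans.Delete // the object is gone outside of Terraform
        }
        change := &plans.ResourceInstanceChange{
            Addr:         addr,
            PrevRunAddr:  prevRunAddr,
            ProviderAddr: rs.ProviderConfig,
            Change:       plans.Change{Action: action, Before: oldVal, After: newVal},
        }
        // change.Encode(ty) produces the ResourceInstanceChangeSrc that is
        // appended to plan.DriftedResources
    }
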
--- internal/plans/plan.go | 1 + internal/terraform/context_plan.go | 134 ++++++++++++++++++++- internal/terraform/context_plan2_test.go | 22 ++++ internal/terraform/context_refresh_test.go | 12 ++ 4 files changed, 165 insertions(+), 4 deletions(-) diff --git a/internal/plans/plan.go b/internal/plans/plan.go index 68e60ad98ab6..a96a056480e1 100644 --- a/internal/plans/plan.go +++ b/internal/plans/plan.go @@ -31,6 +31,7 @@ type Plan struct { VariableValues map[string]DynamicValue Changes *Changes + DriftedResources []*ResourceInstanceChangeSrc TargetAddrs []addrs.Targetable ForceReplaceAddrs []addrs.AbsResourceInstance ProviderSHA256s map[string][]byte diff --git a/internal/terraform/context_plan.go b/internal/terraform/context_plan.go index 248673254cf0..0326e083307a 100644 --- a/internal/terraform/context_plan.go +++ b/internal/terraform/context_plan.go @@ -347,11 +347,17 @@ func (c *Context) planWalk(config *configs.Config, prevRunState *states.State, r diags = diags.Append(walkDiags) diags = diags.Append(c.postPlanValidateMoves(config, moveStmts, walker.InstanceExpander.AllInstances())) + prevRunState = walker.PrevRunState.Close() + priorState := walker.RefreshState.Close() + driftedResources, driftDiags := c.driftedResources(config, prevRunState, priorState, moveResults) + diags = diags.Append(driftDiags) + plan := &plans.Plan{ - UIMode: opts.Mode, - Changes: changes, - PriorState: walker.RefreshState.Close(), - PrevRunState: walker.PrevRunState.Close(), + UIMode: opts.Mode, + Changes: changes, + DriftedResources: driftedResources, + PrevRunState: prevRunState, + PriorState: priorState, // Other fields get populated by Context.Plan after we return } @@ -398,6 +404,126 @@ func (c *Context) planGraph(config *configs.Config, prevRunState *states.State, } } +func (c *Context) driftedResources(config *configs.Config, oldState, newState *states.State, moves map[addrs.UniqueKey]refactoring.MoveResult) ([]*plans.ResourceInstanceChangeSrc, tfdiags.Diagnostics) { + var diags tfdiags.Diagnostics + + if newState.ManagedResourcesEqual(oldState) { + // Nothing to do, because we only detect and report drift for managed + // resource instances. 
+ return nil, diags + } + + schemas, schemaDiags := c.Schemas(config, newState) + diags = diags.Append(schemaDiags) + if diags.HasErrors() { + return nil, diags + } + + var drs []*plans.ResourceInstanceChangeSrc + + for _, ms := range oldState.Modules { + for _, rs := range ms.Resources { + if rs.Addr.Resource.Mode != addrs.ManagedResourceMode { + // Drift reporting is only for managed resources + continue + } + + provider := rs.ProviderConfig.Provider + for key, oldIS := range rs.Instances { + if oldIS.Current == nil { + // Not interested in instances that only have deposed objects + continue + } + addr := rs.Addr.Instance(key) + newIS := newState.ResourceInstance(addr) + + schema, _ := schemas.ResourceTypeConfig( + provider, + addr.Resource.Resource.Mode, + addr.Resource.Resource.Type, + ) + if schema == nil { + // This should never happen, but just in case + return nil, diags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Missing resource schema from provider", + fmt.Sprintf("No resource schema found for %s.", addr.Resource.Resource.Type), + )) + } + ty := schema.ImpliedType() + + oldObj, err := oldIS.Current.Decode(ty) + if err != nil { + // This should also never happen + return nil, diags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Failed to decode resource from state", + fmt.Sprintf("Error decoding %q from previous state: %s", addr.String(), err), + )) + } + + var newObj *states.ResourceInstanceObject + if newIS != nil && newIS.Current != nil { + newObj, err = newIS.Current.Decode(ty) + if err != nil { + // This should also never happen + return nil, diags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Failed to decode resource from state", + fmt.Sprintf("Error decoding %q from prior state: %s", addr.String(), err), + )) + } + } + + var oldVal, newVal cty.Value + oldVal = oldObj.Value + if newObj != nil { + newVal = newObj.Value + } else { + newVal = cty.NullVal(ty) + } + + if oldVal.RawEquals(newVal) { + // No drift if the two values are semantically equivalent + continue + } + + // We can only detect updates and deletes as drift. + action := plans.Update + if newVal.IsNull() { + action = plans.Delete + } + + prevRunAddr := addr + if move, ok := moves[addr.UniqueKey()]; ok { + prevRunAddr = move.From + } + + change := &plans.ResourceInstanceChange{ + Addr: addr, + PrevRunAddr: prevRunAddr, + ProviderAddr: rs.ProviderConfig, + Change: plans.Change{ + Action: action, + Before: oldVal, + After: newVal, + }, + } + + changeSrc, err := change.Encode(ty) + if err != nil { + diags = diags.Append(err) + return nil, diags + } + + drs = append(drs, changeSrc) + } + } + } + + return drs, diags +} + // PlanGraphForUI is a last vestage of graphs in the public interface of Context // (as opposed to graphs as an implementation detail) intended only for use // by the "terraform graph" command when asked to render a plan-time graph. diff --git a/internal/terraform/context_plan2_test.go b/internal/terraform/context_plan2_test.go index 53053fedadc1..fc670ce31255 100644 --- a/internal/terraform/context_plan2_test.go +++ b/internal/terraform/context_plan2_test.go @@ -106,6 +106,23 @@ resource "test_object" "a" { } } + // This situation should result in a drifted resource change. 
+ var drifted *plans.ResourceInstanceChangeSrc + for _, dr := range plan.DriftedResources { + if dr.Addr.Equal(addr) { + drifted = dr + break + } + } + + if drifted == nil { + t.Errorf("instance %s is missing from the drifted resource changes", addr) + } else { + if got, want := drifted.Action, plans.Delete; got != want { + t.Errorf("unexpected instance %s drifted resource change action. got: %s, want: %s", addr, got, want) + } + } + // Because the configuration still mentions test_object.a, we should've // planned to recreate it in order to fix the drift. for _, c := range plan.Changes.Resources { @@ -1037,6 +1054,11 @@ func TestContext2Plan_refreshOnlyMode_deposed(t *testing.T) { t.Errorf("wrong value for output value 'out'\ngot: %#v\nwant: %#v", got, want) } } + + // Deposed objects should not be represented in drift. + if len(plan.DriftedResources) > 0 { + t.Errorf("unexpected drifted resources (%d)", len(plan.DriftedResources)) + } } func TestContext2Plan_invalidSensitiveModuleOutput(t *testing.T) { diff --git a/internal/terraform/context_refresh_test.go b/internal/terraform/context_refresh_test.go index dd319254a6f2..49cd02e0ea5a 100644 --- a/internal/terraform/context_refresh_test.go +++ b/internal/terraform/context_refresh_test.go @@ -219,6 +219,10 @@ func TestContext2Refresh_targeted(t *testing.T) { ResourceTypes: map[string]*configschema.Block{ "aws_elb": { Attributes: map[string]*configschema.Attribute{ + "id": { + Type: cty.String, + Computed: true, + }, "instances": { Type: cty.Set(cty.String), Optional: true, @@ -295,6 +299,10 @@ func TestContext2Refresh_targetedCount(t *testing.T) { ResourceTypes: map[string]*configschema.Block{ "aws_elb": { Attributes: map[string]*configschema.Attribute{ + "id": { + Type: cty.String, + Computed: true, + }, "instances": { Type: cty.Set(cty.String), Optional: true, @@ -381,6 +389,10 @@ func TestContext2Refresh_targetedCountIndex(t *testing.T) { ResourceTypes: map[string]*configschema.Block{ "aws_elb": { Attributes: map[string]*configschema.Attribute{ + "id": { + Type: cty.String, + Computed: true, + }, "instances": { Type: cty.Set(cty.String), Optional: true, From c4688345a1c050a6410395fc4f24553df66ecee9 Mon Sep 17 00:00:00 2001 From: Alisdair McDiarmid Date: Mon, 13 Sep 2021 16:49:19 -0400 Subject: [PATCH 069/644] plans: Add resource drift to the plan file format --- .../plans/internal/planproto/planfile.pb.go | 294 +++++++++--------- .../plans/internal/planproto/planfile.proto | 5 + internal/plans/planfile/planfile_test.go | 3 +- internal/plans/planfile/tfplan.go | 20 ++ internal/plans/planfile/tfplan_test.go | 41 +++ 5 files changed, 223 insertions(+), 140 deletions(-) diff --git a/internal/plans/internal/planproto/planfile.pb.go b/internal/plans/internal/planproto/planfile.pb.go index 9f541946c6d5..a612c03a3887 100644 --- a/internal/plans/internal/planproto/planfile.pb.go +++ b/internal/plans/internal/planproto/planfile.pb.go @@ -222,6 +222,10 @@ type Plan struct { // configuration, including any nested modules. Use the address of // each resource to determine which module it belongs to. ResourceChanges []*ResourceInstanceChange `protobuf:"bytes,3,rep,name=resource_changes,json=resourceChanges,proto3" json:"resource_changes,omitempty"` + // An unordered set of detected drift: changes made to resources outside of + // Terraform, computed by comparing the previous run's state to the state + // after refresh. 
+ ResourceDrift []*ResourceInstanceChange `protobuf:"bytes,18,rep,name=resource_drift,json=resourceDrift,proto3" json:"resource_drift,omitempty"` // An unordered set of proposed changes to outputs in the root module // of the configuration. This set also includes "no action" changes for // outputs that are not changing, as context for detecting inconsistencies @@ -306,6 +310,13 @@ func (x *Plan) GetResourceChanges() []*ResourceInstanceChange { return nil } +func (x *Plan) GetResourceDrift() []*ResourceInstanceChange { + if x != nil { + return x.ResourceDrift + } + return nil +} + func (x *Plan) GetOutputChanges() []*OutputChange { if x != nil { return x.OutputChanges @@ -952,7 +963,7 @@ var File_planfile_proto protoreflect.FileDescriptor var file_planfile_proto_rawDesc = []byte{ 0x0a, 0x0e, 0x70, 0x6c, 0x61, 0x6e, 0x66, 0x69, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, - 0x12, 0x06, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x22, 0xa5, 0x05, 0x0a, 0x04, 0x50, 0x6c, 0x61, + 0x12, 0x06, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x22, 0xec, 0x05, 0x0a, 0x04, 0x50, 0x6c, 0x61, 0x6e, 0x12, 0x18, 0x0a, 0x07, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x18, 0x01, 0x20, 0x01, 0x28, 0x04, 0x52, 0x07, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x12, 0x25, 0x0a, 0x07, 0x75, 0x69, 0x5f, 0x6d, 0x6f, 0x64, 0x65, 0x18, 0x11, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x0c, 0x2e, 0x74, @@ -965,124 +976,128 @@ var file_planfile_proto_rawDesc = []byte{ 0x73, 0x18, 0x03, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x1e, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x49, 0x6e, 0x73, 0x74, 0x61, 0x6e, 0x63, 0x65, 0x43, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x52, 0x0f, 0x72, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, - 0x65, 0x43, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x73, 0x12, 0x3b, 0x0a, 0x0e, 0x6f, 0x75, 0x74, 0x70, - 0x75, 0x74, 0x5f, 0x63, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x73, 0x18, 0x04, 0x20, 0x03, 0x28, 0x0b, - 0x32, 0x14, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x4f, 0x75, 0x74, 0x70, 0x75, 0x74, - 0x43, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x52, 0x0d, 0x6f, 0x75, 0x74, 0x70, 0x75, 0x74, 0x43, 0x68, - 0x61, 0x6e, 0x67, 0x65, 0x73, 0x12, 0x21, 0x0a, 0x0c, 0x74, 0x61, 0x72, 0x67, 0x65, 0x74, 0x5f, - 0x61, 0x64, 0x64, 0x72, 0x73, 0x18, 0x05, 0x20, 0x03, 0x28, 0x09, 0x52, 0x0b, 0x74, 0x61, 0x72, - 0x67, 0x65, 0x74, 0x41, 0x64, 0x64, 0x72, 0x73, 0x12, 0x2e, 0x0a, 0x13, 0x66, 0x6f, 0x72, 0x63, - 0x65, 0x5f, 0x72, 0x65, 0x70, 0x6c, 0x61, 0x63, 0x65, 0x5f, 0x61, 0x64, 0x64, 0x72, 0x73, 0x18, - 0x10, 0x20, 0x03, 0x28, 0x09, 0x52, 0x11, 0x66, 0x6f, 0x72, 0x63, 0x65, 0x52, 0x65, 0x70, 0x6c, - 0x61, 0x63, 0x65, 0x41, 0x64, 0x64, 0x72, 0x73, 0x12, 0x2b, 0x0a, 0x11, 0x74, 0x65, 0x72, 0x72, - 0x61, 0x66, 0x6f, 0x72, 0x6d, 0x5f, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x18, 0x0e, 0x20, - 0x01, 0x28, 0x09, 0x52, 0x10, 0x74, 0x65, 0x72, 0x72, 0x61, 0x66, 0x6f, 0x72, 0x6d, 0x56, 0x65, - 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x12, 0x49, 0x0a, 0x0f, 0x70, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, - 0x72, 0x5f, 0x68, 0x61, 0x73, 0x68, 0x65, 0x73, 0x18, 0x0f, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x20, - 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x50, 0x6c, 0x61, 0x6e, 0x2e, 0x50, 0x72, 0x6f, - 0x76, 0x69, 0x64, 0x65, 0x72, 0x48, 0x61, 0x73, 0x68, 0x65, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, - 0x52, 0x0e, 0x70, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x48, 0x61, 0x73, 0x68, 0x65, 0x73, - 0x12, 0x29, 0x0a, 0x07, 0x62, 0x61, 0x63, 0x6b, 0x65, 0x6e, 0x64, 0x18, 0x0d, 0x20, 0x01, 0x28, - 0x0b, 0x32, 0x0f, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 
0x2e, 0x42, 0x61, 0x63, 0x6b, 0x65, - 0x6e, 0x64, 0x52, 0x07, 0x62, 0x61, 0x63, 0x6b, 0x65, 0x6e, 0x64, 0x1a, 0x52, 0x0a, 0x0e, 0x56, - 0x61, 0x72, 0x69, 0x61, 0x62, 0x6c, 0x65, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a, + 0x65, 0x43, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x73, 0x12, 0x45, 0x0a, 0x0e, 0x72, 0x65, 0x73, 0x6f, + 0x75, 0x72, 0x63, 0x65, 0x5f, 0x64, 0x72, 0x69, 0x66, 0x74, 0x18, 0x12, 0x20, 0x03, 0x28, 0x0b, + 0x32, 0x1e, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, + 0x63, 0x65, 0x49, 0x6e, 0x73, 0x74, 0x61, 0x6e, 0x63, 0x65, 0x43, 0x68, 0x61, 0x6e, 0x67, 0x65, + 0x52, 0x0d, 0x72, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x44, 0x72, 0x69, 0x66, 0x74, 0x12, + 0x3b, 0x0a, 0x0e, 0x6f, 0x75, 0x74, 0x70, 0x75, 0x74, 0x5f, 0x63, 0x68, 0x61, 0x6e, 0x67, 0x65, + 0x73, 0x18, 0x04, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x14, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, + 0x2e, 0x4f, 0x75, 0x74, 0x70, 0x75, 0x74, 0x43, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x52, 0x0d, 0x6f, + 0x75, 0x74, 0x70, 0x75, 0x74, 0x43, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x73, 0x12, 0x21, 0x0a, 0x0c, + 0x74, 0x61, 0x72, 0x67, 0x65, 0x74, 0x5f, 0x61, 0x64, 0x64, 0x72, 0x73, 0x18, 0x05, 0x20, 0x03, + 0x28, 0x09, 0x52, 0x0b, 0x74, 0x61, 0x72, 0x67, 0x65, 0x74, 0x41, 0x64, 0x64, 0x72, 0x73, 0x12, + 0x2e, 0x0a, 0x13, 0x66, 0x6f, 0x72, 0x63, 0x65, 0x5f, 0x72, 0x65, 0x70, 0x6c, 0x61, 0x63, 0x65, + 0x5f, 0x61, 0x64, 0x64, 0x72, 0x73, 0x18, 0x10, 0x20, 0x03, 0x28, 0x09, 0x52, 0x11, 0x66, 0x6f, + 0x72, 0x63, 0x65, 0x52, 0x65, 0x70, 0x6c, 0x61, 0x63, 0x65, 0x41, 0x64, 0x64, 0x72, 0x73, 0x12, + 0x2b, 0x0a, 0x11, 0x74, 0x65, 0x72, 0x72, 0x61, 0x66, 0x6f, 0x72, 0x6d, 0x5f, 0x76, 0x65, 0x72, + 0x73, 0x69, 0x6f, 0x6e, 0x18, 0x0e, 0x20, 0x01, 0x28, 0x09, 0x52, 0x10, 0x74, 0x65, 0x72, 0x72, + 0x61, 0x66, 0x6f, 0x72, 0x6d, 0x56, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x12, 0x49, 0x0a, 0x0f, + 0x70, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x5f, 0x68, 0x61, 0x73, 0x68, 0x65, 0x73, 0x18, + 0x0f, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x20, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x50, + 0x6c, 0x61, 0x6e, 0x2e, 0x50, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x48, 0x61, 0x73, 0x68, + 0x65, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x52, 0x0e, 0x70, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, + 0x72, 0x48, 0x61, 0x73, 0x68, 0x65, 0x73, 0x12, 0x29, 0x0a, 0x07, 0x62, 0x61, 0x63, 0x6b, 0x65, + 0x6e, 0x64, 0x18, 0x0d, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x0f, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, + 0x6e, 0x2e, 0x42, 0x61, 0x63, 0x6b, 0x65, 0x6e, 0x64, 0x52, 0x07, 0x62, 0x61, 0x63, 0x6b, 0x65, + 0x6e, 0x64, 0x1a, 0x52, 0x0a, 0x0e, 0x56, 0x61, 0x72, 0x69, 0x61, 0x62, 0x6c, 0x65, 0x73, 0x45, + 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a, 0x03, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, + 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12, 0x2a, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, + 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x14, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x44, + 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x52, 0x05, 0x76, 0x61, 0x6c, + 0x75, 0x65, 0x3a, 0x02, 0x38, 0x01, 0x1a, 0x4f, 0x0a, 0x13, 0x50, 0x72, 0x6f, 0x76, 0x69, 0x64, + 0x65, 0x72, 0x48, 0x61, 0x73, 0x68, 0x65, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a, 0x03, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12, - 0x2a, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x14, - 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x44, 0x79, 0x6e, 0x61, 0x6d, 0x69, 
0x63, 0x56, - 0x61, 0x6c, 0x75, 0x65, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38, 0x01, 0x1a, - 0x4f, 0x0a, 0x13, 0x50, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x48, 0x61, 0x73, 0x68, 0x65, - 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a, 0x03, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, - 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12, 0x22, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, - 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x0c, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, - 0x2e, 0x48, 0x61, 0x73, 0x68, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38, 0x01, - 0x22, 0x69, 0x0a, 0x07, 0x42, 0x61, 0x63, 0x6b, 0x65, 0x6e, 0x64, 0x12, 0x12, 0x0a, 0x04, 0x74, - 0x79, 0x70, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x74, 0x79, 0x70, 0x65, 0x12, - 0x2c, 0x0a, 0x06, 0x63, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, - 0x14, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x44, 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, - 0x56, 0x61, 0x6c, 0x75, 0x65, 0x52, 0x06, 0x63, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x12, 0x1c, 0x0a, - 0x09, 0x77, 0x6f, 0x72, 0x6b, 0x73, 0x70, 0x61, 0x63, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, - 0x52, 0x09, 0x77, 0x6f, 0x72, 0x6b, 0x73, 0x70, 0x61, 0x63, 0x65, 0x22, 0xe4, 0x01, 0x0a, 0x06, - 0x43, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x12, 0x26, 0x0a, 0x06, 0x61, 0x63, 0x74, 0x69, 0x6f, 0x6e, - 0x18, 0x01, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x0e, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, - 0x41, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x06, 0x61, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x2c, - 0x0a, 0x06, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x73, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x14, - 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x44, 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, - 0x61, 0x6c, 0x75, 0x65, 0x52, 0x06, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x73, 0x12, 0x42, 0x0a, 0x16, - 0x62, 0x65, 0x66, 0x6f, 0x72, 0x65, 0x5f, 0x73, 0x65, 0x6e, 0x73, 0x69, 0x74, 0x69, 0x76, 0x65, - 0x5f, 0x70, 0x61, 0x74, 0x68, 0x73, 0x18, 0x03, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x0c, 0x2e, 0x74, - 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x50, 0x61, 0x74, 0x68, 0x52, 0x14, 0x62, 0x65, 0x66, 0x6f, - 0x72, 0x65, 0x53, 0x65, 0x6e, 0x73, 0x69, 0x74, 0x69, 0x76, 0x65, 0x50, 0x61, 0x74, 0x68, 0x73, - 0x12, 0x40, 0x0a, 0x15, 0x61, 0x66, 0x74, 0x65, 0x72, 0x5f, 0x73, 0x65, 0x6e, 0x73, 0x69, 0x74, - 0x69, 0x76, 0x65, 0x5f, 0x70, 0x61, 0x74, 0x68, 0x73, 0x18, 0x04, 0x20, 0x03, 0x28, 0x0b, 0x32, - 0x0c, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x50, 0x61, 0x74, 0x68, 0x52, 0x13, 0x61, - 0x66, 0x74, 0x65, 0x72, 0x53, 0x65, 0x6e, 0x73, 0x69, 0x74, 0x69, 0x76, 0x65, 0x50, 0x61, 0x74, - 0x68, 0x73, 0x22, 0xd3, 0x02, 0x0a, 0x16, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x49, - 0x6e, 0x73, 0x74, 0x61, 0x6e, 0x63, 0x65, 0x43, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x12, 0x12, 0x0a, - 0x04, 0x61, 0x64, 0x64, 0x72, 0x18, 0x0d, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x61, 0x64, 0x64, - 0x72, 0x12, 0x22, 0x0a, 0x0d, 0x70, 0x72, 0x65, 0x76, 0x5f, 0x72, 0x75, 0x6e, 0x5f, 0x61, 0x64, - 0x64, 0x72, 0x18, 0x0e, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b, 0x70, 0x72, 0x65, 0x76, 0x52, 0x75, - 0x6e, 0x41, 0x64, 0x64, 0x72, 0x12, 0x1f, 0x0a, 0x0b, 0x64, 0x65, 0x70, 0x6f, 0x73, 0x65, 0x64, - 0x5f, 0x6b, 0x65, 0x79, 0x18, 0x07, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0a, 0x64, 0x65, 0x70, 0x6f, - 0x73, 0x65, 0x64, 0x4b, 0x65, 0x79, 0x12, 0x1a, 0x0a, 0x08, 0x70, 0x72, 0x6f, 0x76, 0x69, 0x64, - 0x65, 0x72, 0x18, 0x08, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x70, 0x72, 0x6f, 0x76, 0x69, 0x64, - 0x65, 0x72, 
0x12, 0x26, 0x0a, 0x06, 0x63, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x18, 0x09, 0x20, 0x01, - 0x28, 0x0b, 0x32, 0x0e, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x43, 0x68, 0x61, 0x6e, - 0x67, 0x65, 0x52, 0x06, 0x63, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x12, 0x18, 0x0a, 0x07, 0x70, 0x72, - 0x69, 0x76, 0x61, 0x74, 0x65, 0x18, 0x0a, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x07, 0x70, 0x72, 0x69, - 0x76, 0x61, 0x74, 0x65, 0x12, 0x37, 0x0a, 0x10, 0x72, 0x65, 0x71, 0x75, 0x69, 0x72, 0x65, 0x64, - 0x5f, 0x72, 0x65, 0x70, 0x6c, 0x61, 0x63, 0x65, 0x18, 0x0b, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x0c, - 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x50, 0x61, 0x74, 0x68, 0x52, 0x0f, 0x72, 0x65, - 0x71, 0x75, 0x69, 0x72, 0x65, 0x64, 0x52, 0x65, 0x70, 0x6c, 0x61, 0x63, 0x65, 0x12, 0x49, 0x0a, - 0x0d, 0x61, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x5f, 0x72, 0x65, 0x61, 0x73, 0x6f, 0x6e, 0x18, 0x0c, - 0x20, 0x01, 0x28, 0x0e, 0x32, 0x24, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x52, 0x65, - 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x49, 0x6e, 0x73, 0x74, 0x61, 0x6e, 0x63, 0x65, 0x41, 0x63, - 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x65, 0x61, 0x73, 0x6f, 0x6e, 0x52, 0x0c, 0x61, 0x63, 0x74, 0x69, - 0x6f, 0x6e, 0x52, 0x65, 0x61, 0x73, 0x6f, 0x6e, 0x22, 0x68, 0x0a, 0x0c, 0x4f, 0x75, 0x74, 0x70, - 0x75, 0x74, 0x43, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x12, 0x12, 0x0a, 0x04, 0x6e, 0x61, 0x6d, 0x65, - 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x12, 0x26, 0x0a, 0x06, - 0x63, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x0e, 0x2e, 0x74, - 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x43, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x52, 0x06, 0x63, 0x68, - 0x61, 0x6e, 0x67, 0x65, 0x12, 0x1c, 0x0a, 0x09, 0x73, 0x65, 0x6e, 0x73, 0x69, 0x74, 0x69, 0x76, - 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x08, 0x52, 0x09, 0x73, 0x65, 0x6e, 0x73, 0x69, 0x74, 0x69, - 0x76, 0x65, 0x22, 0x28, 0x0a, 0x0c, 0x44, 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, - 0x75, 0x65, 0x12, 0x18, 0x0a, 0x07, 0x6d, 0x73, 0x67, 0x70, 0x61, 0x63, 0x6b, 0x18, 0x01, 0x20, - 0x01, 0x28, 0x0c, 0x52, 0x07, 0x6d, 0x73, 0x67, 0x70, 0x61, 0x63, 0x6b, 0x22, 0x1e, 0x0a, 0x04, - 0x48, 0x61, 0x73, 0x68, 0x12, 0x16, 0x0a, 0x06, 0x73, 0x68, 0x61, 0x32, 0x35, 0x36, 0x18, 0x01, - 0x20, 0x01, 0x28, 0x0c, 0x52, 0x06, 0x73, 0x68, 0x61, 0x32, 0x35, 0x36, 0x22, 0xa5, 0x01, 0x0a, - 0x04, 0x50, 0x61, 0x74, 0x68, 0x12, 0x27, 0x0a, 0x05, 0x73, 0x74, 0x65, 0x70, 0x73, 0x18, 0x01, - 0x20, 0x03, 0x28, 0x0b, 0x32, 0x11, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x50, 0x61, - 0x74, 0x68, 0x2e, 0x53, 0x74, 0x65, 0x70, 0x52, 0x05, 0x73, 0x74, 0x65, 0x70, 0x73, 0x1a, 0x74, - 0x0a, 0x04, 0x53, 0x74, 0x65, 0x70, 0x12, 0x27, 0x0a, 0x0e, 0x61, 0x74, 0x74, 0x72, 0x69, 0x62, - 0x75, 0x74, 0x65, 0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x48, 0x00, - 0x52, 0x0d, 0x61, 0x74, 0x74, 0x72, 0x69, 0x62, 0x75, 0x74, 0x65, 0x4e, 0x61, 0x6d, 0x65, 0x12, - 0x37, 0x0a, 0x0b, 0x65, 0x6c, 0x65, 0x6d, 0x65, 0x6e, 0x74, 0x5f, 0x6b, 0x65, 0x79, 0x18, 0x02, - 0x20, 0x01, 0x28, 0x0b, 0x32, 0x14, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x44, 0x79, - 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x48, 0x00, 0x52, 0x0a, 0x65, 0x6c, - 0x65, 0x6d, 0x65, 0x6e, 0x74, 0x4b, 0x65, 0x79, 0x42, 0x0a, 0x0a, 0x08, 0x73, 0x65, 0x6c, 0x65, - 0x63, 0x74, 0x6f, 0x72, 0x2a, 0x31, 0x0a, 0x04, 0x4d, 0x6f, 0x64, 0x65, 0x12, 0x0a, 0x0a, 0x06, - 0x4e, 0x4f, 0x52, 0x4d, 0x41, 0x4c, 0x10, 0x00, 0x12, 0x0b, 0x0a, 0x07, 0x44, 0x45, 0x53, 0x54, - 0x52, 0x4f, 0x59, 0x10, 0x01, 0x12, 
0x10, 0x0a, 0x0c, 0x52, 0x45, 0x46, 0x52, 0x45, 0x53, 0x48, - 0x5f, 0x4f, 0x4e, 0x4c, 0x59, 0x10, 0x02, 0x2a, 0x70, 0x0a, 0x06, 0x41, 0x63, 0x74, 0x69, 0x6f, - 0x6e, 0x12, 0x08, 0x0a, 0x04, 0x4e, 0x4f, 0x4f, 0x50, 0x10, 0x00, 0x12, 0x0a, 0x0a, 0x06, 0x43, - 0x52, 0x45, 0x41, 0x54, 0x45, 0x10, 0x01, 0x12, 0x08, 0x0a, 0x04, 0x52, 0x45, 0x41, 0x44, 0x10, - 0x02, 0x12, 0x0a, 0x0a, 0x06, 0x55, 0x50, 0x44, 0x41, 0x54, 0x45, 0x10, 0x03, 0x12, 0x0a, 0x0a, - 0x06, 0x44, 0x45, 0x4c, 0x45, 0x54, 0x45, 0x10, 0x05, 0x12, 0x16, 0x0a, 0x12, 0x44, 0x45, 0x4c, - 0x45, 0x54, 0x45, 0x5f, 0x54, 0x48, 0x45, 0x4e, 0x5f, 0x43, 0x52, 0x45, 0x41, 0x54, 0x45, 0x10, - 0x06, 0x12, 0x16, 0x0a, 0x12, 0x43, 0x52, 0x45, 0x41, 0x54, 0x45, 0x5f, 0x54, 0x48, 0x45, 0x4e, - 0x5f, 0x44, 0x45, 0x4c, 0x45, 0x54, 0x45, 0x10, 0x07, 0x2a, 0x80, 0x01, 0x0a, 0x1c, 0x52, 0x65, - 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x49, 0x6e, 0x73, 0x74, 0x61, 0x6e, 0x63, 0x65, 0x41, 0x63, - 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x65, 0x61, 0x73, 0x6f, 0x6e, 0x12, 0x08, 0x0a, 0x04, 0x4e, 0x4f, - 0x4e, 0x45, 0x10, 0x00, 0x12, 0x1b, 0x0a, 0x17, 0x52, 0x45, 0x50, 0x4c, 0x41, 0x43, 0x45, 0x5f, - 0x42, 0x45, 0x43, 0x41, 0x55, 0x53, 0x45, 0x5f, 0x54, 0x41, 0x49, 0x4e, 0x54, 0x45, 0x44, 0x10, - 0x01, 0x12, 0x16, 0x0a, 0x12, 0x52, 0x45, 0x50, 0x4c, 0x41, 0x43, 0x45, 0x5f, 0x42, 0x59, 0x5f, - 0x52, 0x45, 0x51, 0x55, 0x45, 0x53, 0x54, 0x10, 0x02, 0x12, 0x21, 0x0a, 0x1d, 0x52, 0x45, 0x50, - 0x4c, 0x41, 0x43, 0x45, 0x5f, 0x42, 0x45, 0x43, 0x41, 0x55, 0x53, 0x45, 0x5f, 0x43, 0x41, 0x4e, - 0x4e, 0x4f, 0x54, 0x5f, 0x55, 0x50, 0x44, 0x41, 0x54, 0x45, 0x10, 0x03, 0x42, 0x42, 0x5a, 0x40, - 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x68, 0x61, 0x73, 0x68, 0x69, - 0x63, 0x6f, 0x72, 0x70, 0x2f, 0x74, 0x65, 0x72, 0x72, 0x61, 0x66, 0x6f, 0x72, 0x6d, 0x2f, 0x69, - 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x2f, 0x70, 0x6c, 0x61, 0x6e, 0x73, 0x2f, 0x69, 0x6e, - 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x2f, 0x70, 0x6c, 0x61, 0x6e, 0x70, 0x72, 0x6f, 0x74, 0x6f, - 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33, + 0x22, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x0c, + 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x48, 0x61, 0x73, 0x68, 0x52, 0x05, 0x76, 0x61, + 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38, 0x01, 0x22, 0x69, 0x0a, 0x07, 0x42, 0x61, 0x63, 0x6b, 0x65, + 0x6e, 0x64, 0x12, 0x12, 0x0a, 0x04, 0x74, 0x79, 0x70, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, + 0x52, 0x04, 0x74, 0x79, 0x70, 0x65, 0x12, 0x2c, 0x0a, 0x06, 0x63, 0x6f, 0x6e, 0x66, 0x69, 0x67, + 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x14, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, + 0x44, 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x52, 0x06, 0x63, 0x6f, + 0x6e, 0x66, 0x69, 0x67, 0x12, 0x1c, 0x0a, 0x09, 0x77, 0x6f, 0x72, 0x6b, 0x73, 0x70, 0x61, 0x63, + 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x77, 0x6f, 0x72, 0x6b, 0x73, 0x70, 0x61, + 0x63, 0x65, 0x22, 0xe4, 0x01, 0x0a, 0x06, 0x43, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x12, 0x26, 0x0a, + 0x06, 0x61, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x0e, 0x2e, + 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x41, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x06, 0x61, + 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x2c, 0x0a, 0x06, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x73, 0x18, + 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x14, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x44, + 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x52, 0x06, 0x76, 0x61, 0x6c, + 0x75, 0x65, 
0x73, 0x12, 0x42, 0x0a, 0x16, 0x62, 0x65, 0x66, 0x6f, 0x72, 0x65, 0x5f, 0x73, 0x65, + 0x6e, 0x73, 0x69, 0x74, 0x69, 0x76, 0x65, 0x5f, 0x70, 0x61, 0x74, 0x68, 0x73, 0x18, 0x03, 0x20, + 0x03, 0x28, 0x0b, 0x32, 0x0c, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x50, 0x61, 0x74, + 0x68, 0x52, 0x14, 0x62, 0x65, 0x66, 0x6f, 0x72, 0x65, 0x53, 0x65, 0x6e, 0x73, 0x69, 0x74, 0x69, + 0x76, 0x65, 0x50, 0x61, 0x74, 0x68, 0x73, 0x12, 0x40, 0x0a, 0x15, 0x61, 0x66, 0x74, 0x65, 0x72, + 0x5f, 0x73, 0x65, 0x6e, 0x73, 0x69, 0x74, 0x69, 0x76, 0x65, 0x5f, 0x70, 0x61, 0x74, 0x68, 0x73, + 0x18, 0x04, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x0c, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, + 0x50, 0x61, 0x74, 0x68, 0x52, 0x13, 0x61, 0x66, 0x74, 0x65, 0x72, 0x53, 0x65, 0x6e, 0x73, 0x69, + 0x74, 0x69, 0x76, 0x65, 0x50, 0x61, 0x74, 0x68, 0x73, 0x22, 0xd3, 0x02, 0x0a, 0x16, 0x52, 0x65, + 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x49, 0x6e, 0x73, 0x74, 0x61, 0x6e, 0x63, 0x65, 0x43, 0x68, + 0x61, 0x6e, 0x67, 0x65, 0x12, 0x12, 0x0a, 0x04, 0x61, 0x64, 0x64, 0x72, 0x18, 0x0d, 0x20, 0x01, + 0x28, 0x09, 0x52, 0x04, 0x61, 0x64, 0x64, 0x72, 0x12, 0x22, 0x0a, 0x0d, 0x70, 0x72, 0x65, 0x76, + 0x5f, 0x72, 0x75, 0x6e, 0x5f, 0x61, 0x64, 0x64, 0x72, 0x18, 0x0e, 0x20, 0x01, 0x28, 0x09, 0x52, + 0x0b, 0x70, 0x72, 0x65, 0x76, 0x52, 0x75, 0x6e, 0x41, 0x64, 0x64, 0x72, 0x12, 0x1f, 0x0a, 0x0b, + 0x64, 0x65, 0x70, 0x6f, 0x73, 0x65, 0x64, 0x5f, 0x6b, 0x65, 0x79, 0x18, 0x07, 0x20, 0x01, 0x28, + 0x09, 0x52, 0x0a, 0x64, 0x65, 0x70, 0x6f, 0x73, 0x65, 0x64, 0x4b, 0x65, 0x79, 0x12, 0x1a, 0x0a, + 0x08, 0x70, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x18, 0x08, 0x20, 0x01, 0x28, 0x09, 0x52, + 0x08, 0x70, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x12, 0x26, 0x0a, 0x06, 0x63, 0x68, 0x61, + 0x6e, 0x67, 0x65, 0x18, 0x09, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x0e, 0x2e, 0x74, 0x66, 0x70, 0x6c, + 0x61, 0x6e, 0x2e, 0x43, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x52, 0x06, 0x63, 0x68, 0x61, 0x6e, 0x67, + 0x65, 0x12, 0x18, 0x0a, 0x07, 0x70, 0x72, 0x69, 0x76, 0x61, 0x74, 0x65, 0x18, 0x0a, 0x20, 0x01, + 0x28, 0x0c, 0x52, 0x07, 0x70, 0x72, 0x69, 0x76, 0x61, 0x74, 0x65, 0x12, 0x37, 0x0a, 0x10, 0x72, + 0x65, 0x71, 0x75, 0x69, 0x72, 0x65, 0x64, 0x5f, 0x72, 0x65, 0x70, 0x6c, 0x61, 0x63, 0x65, 0x18, + 0x0b, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x0c, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x50, + 0x61, 0x74, 0x68, 0x52, 0x0f, 0x72, 0x65, 0x71, 0x75, 0x69, 0x72, 0x65, 0x64, 0x52, 0x65, 0x70, + 0x6c, 0x61, 0x63, 0x65, 0x12, 0x49, 0x0a, 0x0d, 0x61, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x5f, 0x72, + 0x65, 0x61, 0x73, 0x6f, 0x6e, 0x18, 0x0c, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x24, 0x2e, 0x74, 0x66, + 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x49, 0x6e, 0x73, + 0x74, 0x61, 0x6e, 0x63, 0x65, 0x41, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x65, 0x61, 0x73, 0x6f, + 0x6e, 0x52, 0x0c, 0x61, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x65, 0x61, 0x73, 0x6f, 0x6e, 0x22, + 0x68, 0x0a, 0x0c, 0x4f, 0x75, 0x74, 0x70, 0x75, 0x74, 0x43, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x12, + 0x12, 0x0a, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x6e, + 0x61, 0x6d, 0x65, 0x12, 0x26, 0x0a, 0x06, 0x63, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x18, 0x02, 0x20, + 0x01, 0x28, 0x0b, 0x32, 0x0e, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x43, 0x68, 0x61, + 0x6e, 0x67, 0x65, 0x52, 0x06, 0x63, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x12, 0x1c, 0x0a, 0x09, 0x73, + 0x65, 0x6e, 0x73, 0x69, 0x74, 0x69, 0x76, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x08, 0x52, 0x09, + 0x73, 0x65, 0x6e, 0x73, 0x69, 0x74, 
0x69, 0x76, 0x65, 0x22, 0x28, 0x0a, 0x0c, 0x44, 0x79, 0x6e, + 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x12, 0x18, 0x0a, 0x07, 0x6d, 0x73, 0x67, + 0x70, 0x61, 0x63, 0x6b, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x07, 0x6d, 0x73, 0x67, 0x70, + 0x61, 0x63, 0x6b, 0x22, 0x1e, 0x0a, 0x04, 0x48, 0x61, 0x73, 0x68, 0x12, 0x16, 0x0a, 0x06, 0x73, + 0x68, 0x61, 0x32, 0x35, 0x36, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x06, 0x73, 0x68, 0x61, + 0x32, 0x35, 0x36, 0x22, 0xa5, 0x01, 0x0a, 0x04, 0x50, 0x61, 0x74, 0x68, 0x12, 0x27, 0x0a, 0x05, + 0x73, 0x74, 0x65, 0x70, 0x73, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x11, 0x2e, 0x74, 0x66, + 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x50, 0x61, 0x74, 0x68, 0x2e, 0x53, 0x74, 0x65, 0x70, 0x52, 0x05, + 0x73, 0x74, 0x65, 0x70, 0x73, 0x1a, 0x74, 0x0a, 0x04, 0x53, 0x74, 0x65, 0x70, 0x12, 0x27, 0x0a, + 0x0e, 0x61, 0x74, 0x74, 0x72, 0x69, 0x62, 0x75, 0x74, 0x65, 0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x18, + 0x01, 0x20, 0x01, 0x28, 0x09, 0x48, 0x00, 0x52, 0x0d, 0x61, 0x74, 0x74, 0x72, 0x69, 0x62, 0x75, + 0x74, 0x65, 0x4e, 0x61, 0x6d, 0x65, 0x12, 0x37, 0x0a, 0x0b, 0x65, 0x6c, 0x65, 0x6d, 0x65, 0x6e, + 0x74, 0x5f, 0x6b, 0x65, 0x79, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x14, 0x2e, 0x74, 0x66, + 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x44, 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, + 0x65, 0x48, 0x00, 0x52, 0x0a, 0x65, 0x6c, 0x65, 0x6d, 0x65, 0x6e, 0x74, 0x4b, 0x65, 0x79, 0x42, + 0x0a, 0x0a, 0x08, 0x73, 0x65, 0x6c, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x2a, 0x31, 0x0a, 0x04, 0x4d, + 0x6f, 0x64, 0x65, 0x12, 0x0a, 0x0a, 0x06, 0x4e, 0x4f, 0x52, 0x4d, 0x41, 0x4c, 0x10, 0x00, 0x12, + 0x0b, 0x0a, 0x07, 0x44, 0x45, 0x53, 0x54, 0x52, 0x4f, 0x59, 0x10, 0x01, 0x12, 0x10, 0x0a, 0x0c, + 0x52, 0x45, 0x46, 0x52, 0x45, 0x53, 0x48, 0x5f, 0x4f, 0x4e, 0x4c, 0x59, 0x10, 0x02, 0x2a, 0x70, + 0x0a, 0x06, 0x41, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x08, 0x0a, 0x04, 0x4e, 0x4f, 0x4f, 0x50, + 0x10, 0x00, 0x12, 0x0a, 0x0a, 0x06, 0x43, 0x52, 0x45, 0x41, 0x54, 0x45, 0x10, 0x01, 0x12, 0x08, + 0x0a, 0x04, 0x52, 0x45, 0x41, 0x44, 0x10, 0x02, 0x12, 0x0a, 0x0a, 0x06, 0x55, 0x50, 0x44, 0x41, + 0x54, 0x45, 0x10, 0x03, 0x12, 0x0a, 0x0a, 0x06, 0x44, 0x45, 0x4c, 0x45, 0x54, 0x45, 0x10, 0x05, + 0x12, 0x16, 0x0a, 0x12, 0x44, 0x45, 0x4c, 0x45, 0x54, 0x45, 0x5f, 0x54, 0x48, 0x45, 0x4e, 0x5f, + 0x43, 0x52, 0x45, 0x41, 0x54, 0x45, 0x10, 0x06, 0x12, 0x16, 0x0a, 0x12, 0x43, 0x52, 0x45, 0x41, + 0x54, 0x45, 0x5f, 0x54, 0x48, 0x45, 0x4e, 0x5f, 0x44, 0x45, 0x4c, 0x45, 0x54, 0x45, 0x10, 0x07, + 0x2a, 0x80, 0x01, 0x0a, 0x1c, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x49, 0x6e, 0x73, + 0x74, 0x61, 0x6e, 0x63, 0x65, 0x41, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x65, 0x61, 0x73, 0x6f, + 0x6e, 0x12, 0x08, 0x0a, 0x04, 0x4e, 0x4f, 0x4e, 0x45, 0x10, 0x00, 0x12, 0x1b, 0x0a, 0x17, 0x52, + 0x45, 0x50, 0x4c, 0x41, 0x43, 0x45, 0x5f, 0x42, 0x45, 0x43, 0x41, 0x55, 0x53, 0x45, 0x5f, 0x54, + 0x41, 0x49, 0x4e, 0x54, 0x45, 0x44, 0x10, 0x01, 0x12, 0x16, 0x0a, 0x12, 0x52, 0x45, 0x50, 0x4c, + 0x41, 0x43, 0x45, 0x5f, 0x42, 0x59, 0x5f, 0x52, 0x45, 0x51, 0x55, 0x45, 0x53, 0x54, 0x10, 0x02, + 0x12, 0x21, 0x0a, 0x1d, 0x52, 0x45, 0x50, 0x4c, 0x41, 0x43, 0x45, 0x5f, 0x42, 0x45, 0x43, 0x41, + 0x55, 0x53, 0x45, 0x5f, 0x43, 0x41, 0x4e, 0x4e, 0x4f, 0x54, 0x5f, 0x55, 0x50, 0x44, 0x41, 0x54, + 0x45, 0x10, 0x03, 0x42, 0x42, 0x5a, 0x40, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, + 0x6d, 0x2f, 0x68, 0x61, 0x73, 0x68, 0x69, 0x63, 0x6f, 0x72, 0x70, 0x2f, 0x74, 0x65, 0x72, 0x72, + 0x61, 0x66, 0x6f, 0x72, 0x6d, 0x2f, 0x69, 0x6e, 0x74, 0x65, 
0x72, 0x6e, 0x61, 0x6c, 0x2f, 0x70, + 0x6c, 0x61, 0x6e, 0x73, 0x2f, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x2f, 0x70, 0x6c, + 0x61, 0x6e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33, } var ( @@ -1119,27 +1134,28 @@ var file_planfile_proto_depIdxs = []int32{ 0, // 0: tfplan.Plan.ui_mode:type_name -> tfplan.Mode 11, // 1: tfplan.Plan.variables:type_name -> tfplan.Plan.VariablesEntry 6, // 2: tfplan.Plan.resource_changes:type_name -> tfplan.ResourceInstanceChange - 7, // 3: tfplan.Plan.output_changes:type_name -> tfplan.OutputChange - 12, // 4: tfplan.Plan.provider_hashes:type_name -> tfplan.Plan.ProviderHashesEntry - 4, // 5: tfplan.Plan.backend:type_name -> tfplan.Backend - 8, // 6: tfplan.Backend.config:type_name -> tfplan.DynamicValue - 1, // 7: tfplan.Change.action:type_name -> tfplan.Action - 8, // 8: tfplan.Change.values:type_name -> tfplan.DynamicValue - 10, // 9: tfplan.Change.before_sensitive_paths:type_name -> tfplan.Path - 10, // 10: tfplan.Change.after_sensitive_paths:type_name -> tfplan.Path - 5, // 11: tfplan.ResourceInstanceChange.change:type_name -> tfplan.Change - 10, // 12: tfplan.ResourceInstanceChange.required_replace:type_name -> tfplan.Path - 2, // 13: tfplan.ResourceInstanceChange.action_reason:type_name -> tfplan.ResourceInstanceActionReason - 5, // 14: tfplan.OutputChange.change:type_name -> tfplan.Change - 13, // 15: tfplan.Path.steps:type_name -> tfplan.Path.Step - 8, // 16: tfplan.Plan.VariablesEntry.value:type_name -> tfplan.DynamicValue - 9, // 17: tfplan.Plan.ProviderHashesEntry.value:type_name -> tfplan.Hash - 8, // 18: tfplan.Path.Step.element_key:type_name -> tfplan.DynamicValue - 19, // [19:19] is the sub-list for method output_type - 19, // [19:19] is the sub-list for method input_type - 19, // [19:19] is the sub-list for extension type_name - 19, // [19:19] is the sub-list for extension extendee - 0, // [0:19] is the sub-list for field type_name + 6, // 3: tfplan.Plan.resource_drift:type_name -> tfplan.ResourceInstanceChange + 7, // 4: tfplan.Plan.output_changes:type_name -> tfplan.OutputChange + 12, // 5: tfplan.Plan.provider_hashes:type_name -> tfplan.Plan.ProviderHashesEntry + 4, // 6: tfplan.Plan.backend:type_name -> tfplan.Backend + 8, // 7: tfplan.Backend.config:type_name -> tfplan.DynamicValue + 1, // 8: tfplan.Change.action:type_name -> tfplan.Action + 8, // 9: tfplan.Change.values:type_name -> tfplan.DynamicValue + 10, // 10: tfplan.Change.before_sensitive_paths:type_name -> tfplan.Path + 10, // 11: tfplan.Change.after_sensitive_paths:type_name -> tfplan.Path + 5, // 12: tfplan.ResourceInstanceChange.change:type_name -> tfplan.Change + 10, // 13: tfplan.ResourceInstanceChange.required_replace:type_name -> tfplan.Path + 2, // 14: tfplan.ResourceInstanceChange.action_reason:type_name -> tfplan.ResourceInstanceActionReason + 5, // 15: tfplan.OutputChange.change:type_name -> tfplan.Change + 13, // 16: tfplan.Path.steps:type_name -> tfplan.Path.Step + 8, // 17: tfplan.Plan.VariablesEntry.value:type_name -> tfplan.DynamicValue + 9, // 18: tfplan.Plan.ProviderHashesEntry.value:type_name -> tfplan.Hash + 8, // 19: tfplan.Path.Step.element_key:type_name -> tfplan.DynamicValue + 20, // [20:20] is the sub-list for method output_type + 20, // [20:20] is the sub-list for method input_type + 20, // [20:20] is the sub-list for extension type_name + 20, // [20:20] is the sub-list for extension extendee + 0, // [0:20] is the sub-list for field type_name } func init() { file_planfile_proto_init() } diff --git 
a/internal/plans/internal/planproto/planfile.proto b/internal/plans/internal/planproto/planfile.proto index 6ec4ae402441..d1427bfbea47 100644 --- a/internal/plans/internal/planproto/planfile.proto +++ b/internal/plans/internal/planproto/planfile.proto @@ -33,6 +33,11 @@ message Plan { // each resource to determine which module it belongs to. repeated ResourceInstanceChange resource_changes = 3; + // An unordered set of detected drift: changes made to resources outside of + // Terraform, computed by comparing the previous run's state to the state + // after refresh. + repeated ResourceInstanceChange resource_drift = 18; + // An unordered set of proposed changes to outputs in the root module // of the configuration. This set also includes "no action" changes for // outputs that are not changing, as context for detecting inconsistencies diff --git a/internal/plans/planfile/planfile_test.go b/internal/plans/planfile/planfile_test.go index b0001312d40f..14d23c87a67b 100644 --- a/internal/plans/planfile/planfile_test.go +++ b/internal/plans/planfile/planfile_test.go @@ -51,7 +51,8 @@ func TestRoundtrip(t *testing.T) { Resources: []*plans.ResourceInstanceChangeSrc{}, Outputs: []*plans.OutputChangeSrc{}, }, - ProviderSHA256s: map[string][]byte{}, + DriftedResources: []*plans.ResourceInstanceChangeSrc{}, + ProviderSHA256s: map[string][]byte{}, VariableValues: map[string]plans.DynamicValue{ "foo": plans.DynamicValue([]byte("foo placeholder")), }, diff --git a/internal/plans/planfile/tfplan.go b/internal/plans/planfile/tfplan.go index 7572020b6c41..8cfd3694fb4f 100644 --- a/internal/plans/planfile/tfplan.go +++ b/internal/plans/planfile/tfplan.go @@ -56,6 +56,7 @@ func readTfplan(r io.Reader) (*plans.Plan, error) { Outputs: []*plans.OutputChangeSrc{}, Resources: []*plans.ResourceInstanceChangeSrc{}, }, + DriftedResources: []*plans.ResourceInstanceChangeSrc{}, ProviderSHA256s: map[string][]byte{}, } @@ -98,6 +99,16 @@ func readTfplan(r io.Reader) (*plans.Plan, error) { plan.Changes.Resources = append(plan.Changes.Resources, change) } + for _, rawRC := range rawPlan.ResourceDrift { + change, err := resourceChangeFromTfplan(rawRC) + if err != nil { + // errors from resourceChangeFromTfplan already include context + return nil, err + } + + plan.DriftedResources = append(plan.DriftedResources, change) + } + for _, rawTargetAddr := range rawPlan.TargetAddrs { target, diags := addrs.ParseTargetStr(rawTargetAddr) if diags.HasErrors() { @@ -342,6 +353,7 @@ func writeTfplan(plan *plans.Plan, w io.Writer) error { Variables: map[string]*planproto.DynamicValue{}, OutputChanges: []*planproto.OutputChange{}, ResourceChanges: []*planproto.ResourceInstanceChange{}, + ResourceDrift: []*planproto.ResourceInstanceChange{}, } switch plan.UIMode { @@ -388,6 +400,14 @@ func writeTfplan(plan *plans.Plan, w io.Writer) error { rawPlan.ResourceChanges = append(rawPlan.ResourceChanges, rawRC) } + for _, rc := range plan.DriftedResources { + rawRC, err := resourceChangeToTfplan(rc) + if err != nil { + return err + } + rawPlan.ResourceDrift = append(rawPlan.ResourceDrift, rawRC) + } + for _, targetAddr := range plan.TargetAddrs { rawPlan.TargetAddrs = append(rawPlan.TargetAddrs, targetAddr.String()) } diff --git a/internal/plans/planfile/tfplan_test.go b/internal/plans/planfile/tfplan_test.go index b6c69657e4d3..7ab62de532d7 100644 --- a/internal/plans/planfile/tfplan_test.go +++ b/internal/plans/planfile/tfplan_test.go @@ -118,6 +118,46 @@ func TestTFPlanRoundTrip(t *testing.T) { }, }, }, + DriftedResources: 
[]*plans.ResourceInstanceChangeSrc{ + { + Addr: addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "test_thing", + Name: "woot", + }.Instance(addrs.IntKey(0)).Absolute(addrs.RootModuleInstance), + PrevRunAddr: addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "test_thing", + Name: "woot", + }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), + ProviderAddr: addrs.AbsProviderConfig{ + Provider: addrs.NewDefaultProvider("test"), + Module: addrs.RootModule, + }, + ChangeSrc: plans.ChangeSrc{ + Action: plans.DeleteThenCreate, + Before: mustNewDynamicValue(cty.ObjectVal(map[string]cty.Value{ + "id": cty.StringVal("foo-bar-baz"), + "boop": cty.ListVal([]cty.Value{ + cty.StringVal("beep"), + }), + }), objTy), + After: mustNewDynamicValue(cty.ObjectVal(map[string]cty.Value{ + "id": cty.UnknownVal(cty.String), + "boop": cty.ListVal([]cty.Value{ + cty.StringVal("beep"), + cty.StringVal("bonk"), + }), + }), objTy), + AfterValMarks: []cty.PathValueMarks{ + { + Path: cty.GetAttrPath("boop").IndexInt(1), + Marks: cty.NewValueMarks(marks.Sensitive), + }, + }, + }, + }, + }, TargetAddrs: []addrs.Targetable{ addrs.Resource{ Mode: addrs.ManagedResourceMode, @@ -243,6 +283,7 @@ func TestTFPlanRoundTripDestroy(t *testing.T) { }, }, }, + DriftedResources: []*plans.ResourceInstanceChangeSrc{}, TargetAddrs: []addrs.Targetable{ addrs.Resource{ Mode: addrs.ManagedResourceMode, From f0cf4235f9e8eafe1d13a6a6e0720f0f0bc67e7e Mon Sep 17 00:00:00 2001 From: Alisdair McDiarmid Date: Mon, 13 Sep 2021 17:30:16 -0400 Subject: [PATCH 070/644] cli: Refactor resource drift rendering --- internal/command/format/diff.go | 175 +++------------ internal/command/format/diff_test.go | 2 +- .../command/format/difflanguage_string.go | 29 +++ internal/command/jsonplan/plan.go | 187 +++------------- .../testdata/show-json/drift/output.json | 1 + .../multi-resource-update/output.json | 3 +- internal/command/views/operation_test.go | 209 +++++++----------- internal/command/views/plan.go | 116 ++++------ 8 files changed, 208 insertions(+), 514 deletions(-) create mode 100644 internal/command/format/difflanguage_string.go diff --git a/internal/command/format/diff.go b/internal/command/format/diff.go index e111485e0268..0028d0038cda 100644 --- a/internal/command/format/diff.go +++ b/internal/command/format/diff.go @@ -20,6 +20,21 @@ import ( "github.com/hashicorp/terraform/internal/states" ) +// DiffLanguage controls the description of the resource change reasons. +type DiffLanguage rune + +//go:generate go run golang.org/x/tools/cmd/stringer -type=DiffLanguage diff.go + +const ( + // DiffLanguageProposedChange indicates that the change is one which is + // planned to be applied. + DiffLanguageProposedChange DiffLanguage = 'P' + + // DiffLanguageDetectedDrift indicates that the change is detected drift + // from the configuration. + DiffLanguageDetectedDrift DiffLanguage = 'D' +) + // ResourceChange returns a string representation of a change to a particular // resource, for inclusion in user-facing plan output. 
// @@ -33,6 +48,7 @@ func ResourceChange( change *plans.ResourceInstanceChangeSrc, schema *configschema.Block, color *colorstring.Colorize, + language DiffLanguage, ) string { addr := change.Addr var buf bytes.Buffer @@ -56,7 +72,14 @@ func ResourceChange( case plans.Read: buf.WriteString(color.Color(fmt.Sprintf("[bold] # %s[reset] will be read during apply\n # (config refers to values not yet known)", dispAddr))) case plans.Update: - buf.WriteString(color.Color(fmt.Sprintf("[bold] # %s[reset] will be updated in-place", dispAddr))) + switch language { + case DiffLanguageProposedChange: + buf.WriteString(color.Color(fmt.Sprintf("[bold] # %s[reset] will be updated in-place", dispAddr))) + case DiffLanguageDetectedDrift: + buf.WriteString(color.Color(fmt.Sprintf("[bold] # %s[reset] has changed", dispAddr))) + default: + buf.WriteString(color.Color(fmt.Sprintf("[bold] # %s[reset] update (unknown reason %s)", dispAddr, language))) + } case plans.CreateThenDelete, plans.DeleteThenCreate: switch change.ActionReason { case plans.ResourceInstanceReplaceBecauseTainted: @@ -67,7 +90,14 @@ func ResourceChange( buf.WriteString(color.Color(fmt.Sprintf("[bold] # %s[reset] must be [bold][red]replaced", dispAddr))) } case plans.Delete: - buf.WriteString(color.Color(fmt.Sprintf("[bold] # %s[reset] will be [bold][red]destroyed", dispAddr))) + switch language { + case DiffLanguageProposedChange: + buf.WriteString(color.Color(fmt.Sprintf("[bold] # %s[reset] will be [bold][red]destroyed", dispAddr))) + case DiffLanguageDetectedDrift: + buf.WriteString(color.Color(fmt.Sprintf("[bold] # %s[reset] has been deleted", dispAddr))) + default: + buf.WriteString(color.Color(fmt.Sprintf("[bold] # %s[reset] delete (unknown reason %s)", dispAddr, language))) + } if change.DeposedKey != states.NotDeposed { // Some extra context about this unusual situation. buf.WriteString(color.Color("\n # (left over from a partially-failed replacement of this instance)")) @@ -154,147 +184,6 @@ func ResourceChange( return buf.String() } -// ResourceInstanceDrift returns a string representation of a change to a -// particular resource instance that was made outside of Terraform, for -// reporting a change that has already happened rather than one that is planned. -// -// The the two resource instances have equal current objects then the result -// will be an empty string to indicate that there is no drift to render. -// -// The resource schema must be provided along with the change so that the -// formatted change can reflect the configuration structure for the associated -// resource. -// -// If "color" is non-nil, it will be used to color the result. Otherwise, -// no color codes will be included. -func ResourceInstanceDrift( - addr addrs.AbsResourceInstance, - before, after *states.ResourceInstance, - schema *configschema.Block, - color *colorstring.Colorize, -) string { - var buf bytes.Buffer - - if color == nil { - color = &colorstring.Colorize{ - Colors: colorstring.DefaultColors, - Disable: true, - Reset: false, - } - } - - dispAddr := addr.String() - action := plans.Update - - switch { - case before == nil || before.Current == nil: - // before should never be nil, but before.Current can be if the - // instance was deposed. There is nothing to render for a deposed - // instance, since we intend to remove it. 
- return "" - - case after == nil || after.Current == nil: - // The object was deleted - buf.WriteString(color.Color(fmt.Sprintf("[bold] # %s[reset] has been deleted", dispAddr))) - action = plans.Delete - default: - // The object was changed - buf.WriteString(color.Color(fmt.Sprintf("[bold] # %s[reset] has been changed", dispAddr))) - } - - buf.WriteString(color.Color("[reset]\n")) - - buf.WriteString(color.Color(DiffActionSymbol(action)) + " ") - - switch addr.Resource.Resource.Mode { - case addrs.ManagedResourceMode: - buf.WriteString(fmt.Sprintf( - "resource %q %q", - addr.Resource.Resource.Type, - addr.Resource.Resource.Name, - )) - case addrs.DataResourceMode: - buf.WriteString(fmt.Sprintf( - "data %q %q ", - addr.Resource.Resource.Type, - addr.Resource.Resource.Name, - )) - default: - // should never happen, since the above is exhaustive - buf.WriteString(addr.String()) - } - - buf.WriteString(" {") - - p := blockBodyDiffPrinter{ - buf: &buf, - color: color, - action: action, - } - - // Most commonly-used resources have nested blocks that result in us - // going at least three traversals deep while we recurse here, so we'll - // start with that much capacity and then grow as needed for deeper - // structures. - path := make(cty.Path, 0, 3) - - ty := schema.ImpliedType() - - var err error - var oldObj, newObj *states.ResourceInstanceObject - oldObj, err = before.Current.Decode(ty) - if err != nil { - // We shouldn't encounter errors here because Terraform Core should've - // made sure that the previous run object conforms to the current - // schema by having the provider upgrade it, but we'll be robust here - // in case there are some edges we didn't find yet. - return fmt.Sprintf(" # %s previous run state doesn't conform to current schema; this is a Terraform bug\n # %s\n", addr, err) - } - if after != nil && after.Current != nil { - newObj, err = after.Current.Decode(ty) - if err != nil { - // We shouldn't encounter errors here because Terraform Core should've - // made sure that the prior state object conforms to the current - // schema by having the provider upgrade it, even if we skipped - // refreshing on this run, but we'll be robust here in case there are - // some edges we didn't find yet. - return fmt.Sprintf(" # %s refreshed state doesn't conform to current schema; this is a Terraform bug\n # %s\n", addr, err) - } - } - - oldVal := oldObj.Value - var newVal cty.Value - if newObj != nil { - newVal = newObj.Value - } else { - newVal = cty.NullVal(ty) - } - - if newVal.RawEquals(oldVal) { - // Nothing to show, then. - return "" - } - - // We currently have an opt-out that permits the legacy SDK to return values - // that defy our usual conventions around handling of nesting blocks. To - // avoid the rendering code from needing to handle all of these, we'll - // normalize first. - // (Ideally we'd do this as part of the SDK opt-out implementation in core, - // but we've added it here for now to reduce risk of unexpected impacts - // on other code in core.) - oldVal = objchange.NormalizeObjectFromLegacySDK(oldVal, schema) - newVal = objchange.NormalizeObjectFromLegacySDK(newVal, schema) - - result := p.writeBlockBodyDiff(schema, oldVal, newVal, 6, path) - if result.bodyWritten { - buf.WriteString("\n") - buf.WriteString(strings.Repeat(" ", 4)) - } - buf.WriteString("}\n") - - return buf.String() -} - // OutputChanges returns a string representation of a set of changes to output // values for inclusion in user-facing plan output. 
// diff --git a/internal/command/format/diff_test.go b/internal/command/format/diff_test.go index 7ca289c79ad5..b59a3af56412 100644 --- a/internal/command/format/diff_test.go +++ b/internal/command/format/diff_test.go @@ -4599,7 +4599,7 @@ func runTestCases(t *testing.T, testCases map[string]testCase) { RequiredReplace: tc.RequiredReplace, } - output := ResourceChange(change, tc.Schema, color) + output := ResourceChange(change, tc.Schema, color, DiffLanguageProposedChange) if diff := cmp.Diff(output, tc.ExpectedOutput); diff != "" { t.Errorf("wrong output\n%s", diff) } diff --git a/internal/command/format/difflanguage_string.go b/internal/command/format/difflanguage_string.go new file mode 100644 index 000000000000..8399cddc4645 --- /dev/null +++ b/internal/command/format/difflanguage_string.go @@ -0,0 +1,29 @@ +// Code generated by "stringer -type=DiffLanguage diff.go"; DO NOT EDIT. + +package format + +import "strconv" + +func _() { + // An "invalid array index" compiler error signifies that the constant values have changed. + // Re-run the stringer command to generate them again. + var x [1]struct{} + _ = x[DiffLanguageProposedChange-80] + _ = x[DiffLanguageDetectedDrift-68] +} + +const ( + _DiffLanguage_name_0 = "DiffLanguageDetectedDrift" + _DiffLanguage_name_1 = "DiffLanguageProposedChange" +) + +func (i DiffLanguage) String() string { + switch { + case i == 68: + return _DiffLanguage_name_0 + case i == 80: + return _DiffLanguage_name_1 + default: + return "DiffLanguage(" + strconv.FormatInt(int64(i), 10) + ")" + } +} diff --git a/internal/command/jsonplan/plan.go b/internal/command/jsonplan/plan.go index 1b90daf2d092..1d7eff1ffb37 100644 --- a/internal/command/jsonplan/plan.go +++ b/internal/command/jsonplan/plan.go @@ -130,15 +130,17 @@ func Marshal( } // output.ResourceDrift - err = output.marshalResourceDrift(p.PrevRunState, p.PriorState, schemas) + output.ResourceDrift, err = output.marshalResourceChanges(p.DriftedResources, schemas) if err != nil { - return nil, fmt.Errorf("error in marshalResourceDrift: %s", err) + return nil, fmt.Errorf("error in marshaling resource drift: %s", err) } // output.ResourceChanges - err = output.marshalResourceChanges(p.Changes, schemas) - if err != nil { - return nil, fmt.Errorf("error in marshalResourceChanges: %s", err) + if p.Changes != nil { + output.ResourceChanges, err = output.marshalResourceChanges(p.Changes.Resources, schemas) + if err != nil { + return nil, fmt.Errorf("error in marshaling resource changes: %s", err) + } } // output.OutputChanges @@ -188,149 +190,10 @@ func (p *plan) marshalPlanVariables(vars map[string]plans.DynamicValue, schemas return nil } -func (p *plan) marshalResourceDrift(oldState, newState *states.State, schemas *terraform.Schemas) error { - // Our goal here is to build a data structure of the same shape as we use - // to describe planned resource changes, but in this case we'll be - // taking the old and new values from different state snapshots rather - // than from a real "Changes" object. - // - // In doing this we make an assumption that drift detection can only - // ever show objects as updated or removed, and will never show anything - // as created because we only refresh objects we were already tracking - // after the previous run. This means we can use oldState as our baseline - // for what resource instances we might include, and check for each item - // whether it's present in newState. 
If we ever have some mechanism to - // detect "additive drift" later then we'll need to take a different - // approach here, but we have no plans for that at the time of writing. - // - // We also assume that both states have had all managed resource objects - // upgraded to match the current schemas given in schemas, so we shouldn't - // need to contend with oldState having old-shaped objects even if the - // user changed provider versions since the last run. - - if newState.ManagedResourcesEqual(oldState) { - // Nothing to do, because we only detect and report drift for managed - // resource instances. - return nil - } - for _, ms := range oldState.Modules { - for _, rs := range ms.Resources { - if rs.Addr.Resource.Mode != addrs.ManagedResourceMode { - // Drift reporting is only for managed resources - continue - } - - provider := rs.ProviderConfig.Provider - for key, oldIS := range rs.Instances { - if oldIS.Current == nil { - // Not interested in instances that only have deposed objects - continue - } - addr := rs.Addr.Instance(key) - newIS := newState.ResourceInstance(addr) - - schema, _ := schemas.ResourceTypeConfig( - provider, - addr.Resource.Resource.Mode, - addr.Resource.Resource.Type, - ) - if schema == nil { - return fmt.Errorf("no schema found for %s (in provider %s)", addr, provider) - } - ty := schema.ImpliedType() - - oldObj, err := oldIS.Current.Decode(ty) - if err != nil { - return fmt.Errorf("failed to decode previous run data for %s: %s", addr, err) - } - - var newObj *states.ResourceInstanceObject - if newIS != nil && newIS.Current != nil { - newObj, err = newIS.Current.Decode(ty) - if err != nil { - return fmt.Errorf("failed to decode refreshed data for %s: %s", addr, err) - } - } - - var oldVal, newVal cty.Value - oldVal = oldObj.Value - if newObj != nil { - newVal = newObj.Value - } else { - newVal = cty.NullVal(ty) - } - - if oldVal.RawEquals(newVal) { - // No drift if the two values are semantically equivalent - continue - } - - oldSensitive := jsonstate.SensitiveAsBool(oldVal) - newSensitive := jsonstate.SensitiveAsBool(newVal) - oldVal, _ = oldVal.UnmarkDeep() - newVal, _ = newVal.UnmarkDeep() - - var before, after []byte - var beforeSensitive, afterSensitive []byte - before, err = ctyjson.Marshal(oldVal, oldVal.Type()) - if err != nil { - return fmt.Errorf("failed to encode previous run data for %s as JSON: %s", addr, err) - } - after, err = ctyjson.Marshal(newVal, oldVal.Type()) - if err != nil { - return fmt.Errorf("failed to encode refreshed data for %s as JSON: %s", addr, err) - } - beforeSensitive, err = ctyjson.Marshal(oldSensitive, oldSensitive.Type()) - if err != nil { - return fmt.Errorf("failed to encode previous run data sensitivity for %s as JSON: %s", addr, err) - } - afterSensitive, err = ctyjson.Marshal(newSensitive, newSensitive.Type()) - if err != nil { - return fmt.Errorf("failed to encode refreshed data sensitivity for %s as JSON: %s", addr, err) - } - - // We can only detect updates and deletes as drift. 
- action := plans.Update - if newVal.IsNull() { - action = plans.Delete - } +func (p *plan) marshalResourceChanges(resources []*plans.ResourceInstanceChangeSrc, schemas *terraform.Schemas) ([]resourceChange, error) { + var ret []resourceChange - change := resourceChange{ - Address: addr.String(), - ModuleAddress: addr.Module.String(), - Mode: "managed", // drift reporting is only for managed resources - Name: addr.Resource.Resource.Name, - Type: addr.Resource.Resource.Type, - ProviderName: provider.String(), - - Change: change{ - Actions: actionString(action.String()), - Before: json.RawMessage(before), - BeforeSensitive: json.RawMessage(beforeSensitive), - After: json.RawMessage(after), - AfterSensitive: json.RawMessage(afterSensitive), - // AfterUnknown is never populated here because - // values in a state are always fully known. - }, - } - p.ResourceDrift = append(p.ResourceDrift, change) - } - } - } - - sort.Slice(p.ResourceChanges, func(i, j int) bool { - return p.ResourceChanges[i].Address < p.ResourceChanges[j].Address - }) - - return nil -} - -func (p *plan) marshalResourceChanges(changes *plans.Changes, schemas *terraform.Schemas) error { - if changes == nil { - // Nothing to do! - return nil - } - for _, rc := range changes.Resources { + for _, rc := range resources { var r resourceChange addr := rc.Addr r.Address = addr.String() @@ -349,12 +212,12 @@ func (p *plan) marshalResourceChanges(changes *plans.Changes, schemas *terraform addr.Resource.Resource.Type, ) if schema == nil { - return fmt.Errorf("no schema found for %s (in provider %s)", r.Address, rc.ProviderAddr.Provider) + return nil, fmt.Errorf("no schema found for %s (in provider %s)", r.Address, rc.ProviderAddr.Provider) } changeV, err := rc.Decode(schema.ImpliedType()) if err != nil { - return err + return nil, err } // We drop the marks from the change, as decoding is only an // intermediate step to re-encode the values as json @@ -368,7 +231,7 @@ func (p *plan) marshalResourceChanges(changes *plans.Changes, schemas *terraform if changeV.Before != cty.NilVal { before, err = ctyjson.Marshal(changeV.Before, changeV.Before.Type()) if err != nil { - return err + return nil, err } marks := rc.BeforeValMarks if schema.ContainsSensitive() { @@ -377,14 +240,14 @@ func (p *plan) marshalResourceChanges(changes *plans.Changes, schemas *terraform bs := jsonstate.SensitiveAsBool(changeV.Before.MarkWithPaths(marks)) beforeSensitive, err = ctyjson.Marshal(bs, bs.Type()) if err != nil { - return err + return nil, err } } if changeV.After != cty.NilVal { if changeV.After.IsWhollyKnown() { after, err = ctyjson.Marshal(changeV.After, changeV.After.Type()) if err != nil { - return err + return nil, err } afterUnknown = cty.EmptyObjectVal } else { @@ -394,7 +257,7 @@ func (p *plan) marshalResourceChanges(changes *plans.Changes, schemas *terraform } else { after, err = ctyjson.Marshal(filteredAfter, filteredAfter.Type()) if err != nil { - return err + return nil, err } } afterUnknown = unknownAsBool(changeV.After) @@ -406,17 +269,17 @@ func (p *plan) marshalResourceChanges(changes *plans.Changes, schemas *terraform as := jsonstate.SensitiveAsBool(changeV.After.MarkWithPaths(marks)) afterSensitive, err = ctyjson.Marshal(as, as.Type()) if err != nil { - return err + return nil, err } } a, err := ctyjson.Marshal(afterUnknown, afterUnknown.Type()) if err != nil { - return err + return nil, err } replacePaths, err := encodePaths(rc.RequiredReplace) if err != nil { - return err + return nil, err } r.Change = change{ @@ -444,7 +307,7 @@ func (p 
*plan) marshalResourceChanges(changes *plans.Changes, schemas *terraform case addrs.DataResourceMode: r.Mode = "data" default: - return fmt.Errorf("resource %s has an unsupported mode %s", r.Address, addr.Resource.Resource.Mode.String()) + return nil, fmt.Errorf("resource %s has an unsupported mode %s", r.Address, addr.Resource.Resource.Mode.String()) } r.ModuleAddress = addr.Module.String() r.Name = addr.Resource.Resource.Name @@ -461,18 +324,18 @@ func (p *plan) marshalResourceChanges(changes *plans.Changes, schemas *terraform case plans.ResourceInstanceReplaceByRequest: r.ActionReason = "replace_by_request" default: - return fmt.Errorf("resource %s has an unsupported action reason %s", r.Address, rc.ActionReason) + return nil, fmt.Errorf("resource %s has an unsupported action reason %s", r.Address, rc.ActionReason) } - p.ResourceChanges = append(p.ResourceChanges, r) + ret = append(ret, r) } - sort.Slice(p.ResourceChanges, func(i, j int) bool { - return p.ResourceChanges[i].Address < p.ResourceChanges[j].Address + sort.Slice(ret, func(i, j int) bool { + return ret[i].Address < ret[j].Address }) - return nil + return ret, nil } func (p *plan) marshalOutputChanges(changes *plans.Changes) error { diff --git a/internal/command/testdata/show-json/drift/output.json b/internal/command/testdata/show-json/drift/output.json index 7badb45e5edd..79e702161576 100644 --- a/internal/command/testdata/show-json/drift/output.json +++ b/internal/command/testdata/show-json/drift/output.json @@ -52,6 +52,7 @@ "id": "placeholder" }, "after_sensitive": {}, + "after_unknown": {}, "before_sensitive": {} } } diff --git a/internal/command/testdata/show-json/multi-resource-update/output.json b/internal/command/testdata/show-json/multi-resource-update/output.json index d84bc5b08789..a7a6a3053fac 100644 --- a/internal/command/testdata/show-json/multi-resource-update/output.json +++ b/internal/command/testdata/show-json/multi-resource-update/output.json @@ -62,7 +62,8 @@ }, "after": null, "before_sensitive": {}, - "after_sensitive": false + "after_sensitive": false, + "after_unknown": {} } } ], diff --git a/internal/command/views/operation_test.go b/internal/command/views/operation_test.go index fd56350c5ebe..b2e4f8f8e47a 100644 --- a/internal/command/views/operation_test.go +++ b/internal/command/views/operation_test.go @@ -13,6 +13,7 @@ import ( "github.com/hashicorp/terraform/internal/states/statefile" "github.com/hashicorp/terraform/internal/terminal" "github.com/hashicorp/terraform/internal/terraform" + "github.com/zclconf/go-cty/cty" ) func TestOperation_stopping(t *testing.T) { @@ -82,10 +83,8 @@ func TestOperation_planNoChanges(t *testing.T) { "nothing at all in normal mode": { func(schemas *terraform.Schemas) *plans.Plan { return &plans.Plan{ - UIMode: plans.NormalMode, - Changes: plans.NewChanges(), - PrevRunState: states.NewState(), - PriorState: states.NewState(), + UIMode: plans.NormalMode, + Changes: plans.NewChanges(), } }, "no differences, so no changes are needed.", @@ -93,10 +92,8 @@ func TestOperation_planNoChanges(t *testing.T) { "nothing at all in refresh-only mode": { func(schemas *terraform.Schemas) *plans.Plan { return &plans.Plan{ - UIMode: plans.RefreshOnlyMode, - Changes: plans.NewChanges(), - PrevRunState: states.NewState(), - PriorState: states.NewState(), + UIMode: plans.RefreshOnlyMode, + Changes: plans.NewChanges(), } }, "Terraform has checked that the real remote objects still match", @@ -104,148 +101,90 @@ func TestOperation_planNoChanges(t *testing.T) { "nothing at all in destroy 
mode": { func(schemas *terraform.Schemas) *plans.Plan { return &plans.Plan{ - UIMode: plans.DestroyMode, - Changes: plans.NewChanges(), - PrevRunState: states.NewState(), - PriorState: states.NewState(), - } - }, - "No objects need to be destroyed.", - }, - "no drift to display with only deposed instances": { - // changes in deposed instances will cause a change in state, but - // have nothing to display to the user - func(schemas *terraform.Schemas) *plans.Plan { - return &plans.Plan{ - UIMode: plans.NormalMode, + UIMode: plans.DestroyMode, Changes: plans.NewChanges(), - PrevRunState: states.BuildState(func(state *states.SyncState) { - state.SetResourceInstanceDeposed( - addrs.Resource{ - Mode: addrs.ManagedResourceMode, - Type: "test_resource", - Name: "somewhere", - }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), - states.NewDeposedKey(), - &states.ResourceInstanceObjectSrc{ - Status: states.ObjectReady, - AttrsJSON: []byte(`{"foo": "ok", "bars":[]}`), - }, - addrs.RootModuleInstance.ProviderConfigDefault(addrs.NewDefaultProvider("test")), - ) - }), - PriorState: states.NewState(), } }, - "no differences, so no changes are needed.", + "No objects need to be destroyed.", }, "drift detected in normal mode": { func(schemas *terraform.Schemas) *plans.Plan { - return &plans.Plan{ - UIMode: plans.NormalMode, - Changes: plans.NewChanges(), - PrevRunState: states.BuildState(func(state *states.SyncState) { - state.SetResourceInstanceCurrent( - addrs.Resource{ - Mode: addrs.ManagedResourceMode, - Type: "test_resource", - Name: "somewhere", - }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), - &states.ResourceInstanceObjectSrc{ - Status: states.ObjectReady, - AttrsJSON: []byte(`{}`), - }, - addrs.RootModuleInstance.ProviderConfigDefault(addrs.NewDefaultProvider("test")), - ) - }), - PriorState: states.NewState(), + addr := addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "test_resource", + Name: "somewhere", + }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance) + schema, _ := schemas.ResourceTypeConfig( + addrs.NewDefaultProvider("test"), + addr.Resource.Resource.Mode, + addr.Resource.Resource.Type, + ) + ty := schema.ImpliedType() + rc := &plans.ResourceInstanceChange{ + Addr: addr, + PrevRunAddr: addr, + ProviderAddr: addrs.RootModuleInstance.ProviderConfigDefault( + addrs.NewDefaultProvider("test"), + ), + Change: plans.Change{ + Action: plans.Update, + Before: cty.NullVal(ty), + After: cty.ObjectVal(map[string]cty.Value{ + "id": cty.StringVal("1234"), + "foo": cty.StringVal("bar"), + }), + }, } - }, - "to update the Terraform state to match, create and apply a refresh-only plan", - }, - "drift detected with deposed": { - func(schemas *terraform.Schemas) *plans.Plan { + rcs, err := rc.Encode(ty) + if err != nil { + panic(err) + } + drs := []*plans.ResourceInstanceChangeSrc{rcs} return &plans.Plan{ - UIMode: plans.NormalMode, - Changes: plans.NewChanges(), - PrevRunState: states.BuildState(func(state *states.SyncState) { - state.SetResourceInstanceCurrent( - addrs.Resource{ - Mode: addrs.ManagedResourceMode, - Type: "test_resource", - Name: "changes", - }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), - &states.ResourceInstanceObjectSrc{ - Status: states.ObjectReady, - AttrsJSON: []byte(`{"foo":"b"}`), - }, - addrs.RootModuleInstance.ProviderConfigDefault(addrs.NewDefaultProvider("test")), - ) - state.SetResourceInstanceDeposed( - addrs.Resource{ - Mode: addrs.ManagedResourceMode, - Type: "test_resource", - Name: "broken", - 
}.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), - states.NewDeposedKey(), - &states.ResourceInstanceObjectSrc{ - Status: states.ObjectReady, - AttrsJSON: []byte(`{"foo":"c"}`), - }, - addrs.RootModuleInstance.ProviderConfigDefault(addrs.NewDefaultProvider("test")), - ) - }), - PriorState: states.BuildState(func(state *states.SyncState) { - state.SetResourceInstanceCurrent( - addrs.Resource{ - Mode: addrs.ManagedResourceMode, - Type: "test_resource", - Name: "changed", - }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), - &states.ResourceInstanceObjectSrc{ - Status: states.ObjectReady, - AttrsJSON: []byte(`{"foo":"b"}`), - }, - addrs.RootModuleInstance.ProviderConfigDefault(addrs.NewDefaultProvider("test")), - ) - state.SetResourceInstanceDeposed( - addrs.Resource{ - Mode: addrs.ManagedResourceMode, - Type: "test_resource", - Name: "broken", - }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), - states.NewDeposedKey(), - &states.ResourceInstanceObjectSrc{ - Status: states.ObjectReady, - AttrsJSON: []byte(`{"foo":"d"}`), - }, - addrs.RootModuleInstance.ProviderConfigDefault(addrs.NewDefaultProvider("test")), - ) - }), + UIMode: plans.NormalMode, + Changes: plans.NewChanges(), + DriftedResources: drs, } }, "to update the Terraform state to match, create and apply a refresh-only plan", }, "drift detected in refresh-only mode": { func(schemas *terraform.Schemas) *plans.Plan { + addr := addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "test_resource", + Name: "somewhere", + }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance) + schema, _ := schemas.ResourceTypeConfig( + addrs.NewDefaultProvider("test"), + addr.Resource.Resource.Mode, + addr.Resource.Resource.Type, + ) + ty := schema.ImpliedType() + rc := &plans.ResourceInstanceChange{ + Addr: addr, + PrevRunAddr: addr, + ProviderAddr: addrs.RootModuleInstance.ProviderConfigDefault( + addrs.NewDefaultProvider("test"), + ), + Change: plans.Change{ + Action: plans.Update, + Before: cty.NullVal(ty), + After: cty.ObjectVal(map[string]cty.Value{ + "id": cty.StringVal("1234"), + "foo": cty.StringVal("bar"), + }), + }, + } + rcs, err := rc.Encode(ty) + if err != nil { + panic(err) + } + drs := []*plans.ResourceInstanceChangeSrc{rcs} return &plans.Plan{ - UIMode: plans.RefreshOnlyMode, - Changes: plans.NewChanges(), - PrevRunState: states.BuildState(func(state *states.SyncState) { - state.SetResourceInstanceCurrent( - addrs.Resource{ - Mode: addrs.ManagedResourceMode, - Type: "test_resource", - Name: "somewhere", - }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), - &states.ResourceInstanceObjectSrc{ - Status: states.ObjectReady, - AttrsJSON: []byte(`{}`), - }, - addrs.RootModuleInstance.ProviderConfigDefault(addrs.NewDefaultProvider("test")), - ) - }), - PriorState: states.NewState(), + UIMode: plans.RefreshOnlyMode, + Changes: plans.NewChanges(), + DriftedResources: drs, } }, "If you were expecting these changes then you can apply this plan", diff --git a/internal/command/views/plan.go b/internal/command/views/plan.go index ff794933b420..175c263698a2 100644 --- a/internal/command/views/plan.go +++ b/internal/command/views/plan.go @@ -10,7 +10,6 @@ import ( "github.com/hashicorp/terraform/internal/command/arguments" "github.com/hashicorp/terraform/internal/command/format" "github.com/hashicorp/terraform/internal/plans" - "github.com/hashicorp/terraform/internal/states" "github.com/hashicorp/terraform/internal/terraform" "github.com/hashicorp/terraform/internal/tfdiags" ) @@ -97,8 +96,9 @@ func (v 
*PlanJSON) HelpPrompt() { // The plan renderer is used by the Operation view (for plan and apply // commands) and the Show view (for the show command). func renderPlan(plan *plans.Plan, schemas *terraform.Schemas, view *View) { - haveRefreshChanges := renderChangesDetectedByRefresh(plan.PrevRunState, plan.PriorState, schemas, view) + haveRefreshChanges := len(plan.DriftedResources) > 0 if haveRefreshChanges { + renderChangesDetectedByRefresh(plan.DriftedResources, schemas, view) switch plan.UIMode { case plans.RefreshOnlyMode: view.streams.Println(format.WordWrap( @@ -292,6 +292,7 @@ func renderPlan(plan *plans.Plan, schemas *terraform.Schemas, view *View) { rcs, rSchema, view.colorize, + format.DiffLanguageProposedChange, )) } @@ -344,82 +345,53 @@ func renderPlan(plan *plans.Plan, schemas *terraform.Schemas, view *View) { // renderChangesDetectedByRefresh returns true if it produced at least one // line of output, and guarantees to always produce whole lines terminated // by newline characters. -func renderChangesDetectedByRefresh(before, after *states.State, schemas *terraform.Schemas, view *View) bool { - // ManagedResourceEqual checks that the state is exactly equal for all - // managed resources; but semantically equivalent states, or changes to - // deposed instances may not actually represent changes we need to present - // to the user, so for now this only serves as a short-circuit to skip - // attempting to render the diffs below. - if after.ManagedResourcesEqual(before) { - return false - } - - var diffs []string - - for _, bms := range before.Modules { - for _, brs := range bms.Resources { - if brs.Addr.Resource.Mode != addrs.ManagedResourceMode { - continue // only managed resources can "drift" - } - addr := brs.Addr - prs := after.Resource(brs.Addr) - - provider := brs.ProviderConfig.Provider - providerSchema := schemas.ProviderSchema(provider) - if providerSchema == nil { - // Should never happen - view.streams.Printf("(schema missing for %s)\n", provider) - continue - } - rSchema, _ := providerSchema.SchemaForResourceAddr(addr.Resource) - if rSchema == nil { - // Should never happen - view.streams.Printf("(schema missing for %s)\n", addr) - continue - } +func renderChangesDetectedByRefresh(drs []*plans.ResourceInstanceChangeSrc, schemas *terraform.Schemas, view *View) { + view.streams.Print( + view.colorize.Color("[reset]\n[bold][cyan]Note:[reset][bold] Objects have changed outside of Terraform[reset]\n\n"), + ) + view.streams.Print(format.WordWrap( + "Terraform detected the following changes made outside of Terraform since the last \"terraform apply\":\n\n", + view.outputColumns(), + )) + + // Note: we're modifying the backing slice of this plan object in-place + // here. The ordering of resource changes in a plan is not significant, + // but we can only do this safely here because we can assume that nobody + // is concurrently modifying our changes while we're trying to print it. 
+ sort.Slice(drs, func(i, j int) bool { + iA := drs[i].Addr + jA := drs[j].Addr + if iA.String() == jA.String() { + return drs[i].DeposedKey < drs[j].DeposedKey + } + return iA.Less(jA) + }) - for key, bis := range brs.Instances { - if bis.Current == nil { - // No current instance to render here - continue - } - var pis *states.ResourceInstance - if prs != nil { - pis = prs.Instance(key) - } + for _, rcs := range drs { + if rcs.Action == plans.NoOp && !rcs.Moved() { + continue + } - diff := format.ResourceInstanceDrift( - addr.Instance(key), - bis, pis, - rSchema, - view.colorize, - ) - if diff != "" { - diffs = append(diffs, diff) - } - } + providerSchema := schemas.ProviderSchema(rcs.ProviderAddr.Provider) + if providerSchema == nil { + // Should never happen + view.streams.Printf("(schema missing for %s)\n\n", rcs.ProviderAddr) + continue + } + rSchema, _ := providerSchema.SchemaForResourceAddr(rcs.Addr.Resource.Resource) + if rSchema == nil { + // Should never happen + view.streams.Printf("(schema missing for %s)\n\n", rcs.Addr) + continue } - } - // If we only have changes regarding deposed instances, or the diff - // renderer is suppressing irrelevant changes from the legacy SDK, there - // may not have been anything to display to the user. - if len(diffs) > 0 { - view.streams.Print( - view.colorize.Color("[reset]\n[bold][cyan]Note:[reset][bold] Objects have changed outside of Terraform[reset]\n\n"), - ) - view.streams.Print(format.WordWrap( - "Terraform detected the following changes made outside of Terraform since the last \"terraform apply\":\n\n", - view.outputColumns(), + view.streams.Println(format.ResourceChange( + rcs, + rSchema, + view.colorize, + format.DiffLanguageDetectedDrift, )) - - for _, diff := range diffs { - view.streams.Print(diff) - } - return true } - - return false } const planHeaderIntro = ` From d5b5407ccc66f4808e1b5009bb4e5a93dde67dc2 Mon Sep 17 00:00:00 2001 From: Alisdair McDiarmid Date: Wed, 15 Sep 2021 16:53:58 -0400 Subject: [PATCH 071/644] format: Fix incorrect nesting of Color/Sprintf Colorizing the result of an interpolated string can result in incorrect output, if the values used to generate the string happen to include color codes such as `[red]` or `[bold]`. Instead we should always colorize the format string before calling functions like `Sprintf`. This commit fixes all instances in this file. 
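The nesting problem is easier to see in isolation. As a rough sketch (not part of this patch), assuming the github.com/mitchellh/colorstring package that provides the Color helper used in this file, and with an invented resource address purely for illustration:

```
package main

import (
	"fmt"

	"github.com/mitchellh/colorstring"
)

func main() {
	color := &colorstring.Colorize{Colors: colorstring.DefaultColors, Reset: true}

	// An interpolated value that happens to contain text resembling a color code.
	dispAddr := `test_instance.example["[red]"]`

	// Problematic nesting: Sprintf runs first, so the "[red]" inside the
	// interpolated value is also interpreted as a color code by Color.
	bad := color.Color(fmt.Sprintf("[bold] # %s[reset] will be created", dispAddr))

	// Fixed nesting: only the literal format string is colorized, and the
	// value is then interpolated into the already-colorized result.
	good := fmt.Sprintf(color.Color("[bold] # %s[reset] will be created"), dispAddr)

	fmt.Println(bad)
	fmt.Println(good)
}
```

The second form is the pattern applied throughout diff.go in the diff below.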
--- internal/command/format/diff.go | 44 ++++++++++++++++----------------- 1 file changed, 22 insertions(+), 22 deletions(-) diff --git a/internal/command/format/diff.go b/internal/command/format/diff.go index 0028d0038cda..c70918af4ac2 100644 --- a/internal/command/format/diff.go +++ b/internal/command/format/diff.go @@ -68,35 +68,35 @@ func ResourceChange( switch change.Action { case plans.Create: - buf.WriteString(color.Color(fmt.Sprintf("[bold] # %s[reset] will be created", dispAddr))) + buf.WriteString(fmt.Sprintf(color.Color("[bold] # %s[reset] will be created"), dispAddr)) case plans.Read: - buf.WriteString(color.Color(fmt.Sprintf("[bold] # %s[reset] will be read during apply\n # (config refers to values not yet known)", dispAddr))) + buf.WriteString(fmt.Sprintf(color.Color("[bold] # %s[reset] will be read during apply\n # (config refers to values not yet known)"), dispAddr)) case plans.Update: switch language { case DiffLanguageProposedChange: - buf.WriteString(color.Color(fmt.Sprintf("[bold] # %s[reset] will be updated in-place", dispAddr))) + buf.WriteString(fmt.Sprintf(color.Color("[bold] # %s[reset] will be updated in-place"), dispAddr)) case DiffLanguageDetectedDrift: - buf.WriteString(color.Color(fmt.Sprintf("[bold] # %s[reset] has changed", dispAddr))) + buf.WriteString(fmt.Sprintf(color.Color("[bold] # %s[reset] has changed"), dispAddr)) default: - buf.WriteString(color.Color(fmt.Sprintf("[bold] # %s[reset] update (unknown reason %s)", dispAddr, language))) + buf.WriteString(fmt.Sprintf(color.Color("[bold] # %s[reset] update (unknown reason %s)"), dispAddr, language)) } case plans.CreateThenDelete, plans.DeleteThenCreate: switch change.ActionReason { case plans.ResourceInstanceReplaceBecauseTainted: - buf.WriteString(color.Color(fmt.Sprintf("[bold] # %s[reset] is tainted, so must be [bold][red]replaced", dispAddr))) + buf.WriteString(fmt.Sprintf(color.Color("[bold] # %s[reset] is tainted, so must be [bold][red]replaced"), dispAddr)) case plans.ResourceInstanceReplaceByRequest: - buf.WriteString(color.Color(fmt.Sprintf("[bold] # %s[reset] will be [bold][red]replaced[reset], as requested", dispAddr))) + buf.WriteString(fmt.Sprintf(color.Color("[bold] # %s[reset] will be [bold][red]replaced[reset], as requested"), dispAddr)) default: - buf.WriteString(color.Color(fmt.Sprintf("[bold] # %s[reset] must be [bold][red]replaced", dispAddr))) + buf.WriteString(fmt.Sprintf(color.Color("[bold] # %s[reset] must be [bold][red]replaced"), dispAddr)) } case plans.Delete: switch language { case DiffLanguageProposedChange: - buf.WriteString(color.Color(fmt.Sprintf("[bold] # %s[reset] will be [bold][red]destroyed", dispAddr))) + buf.WriteString(fmt.Sprintf(color.Color("[bold] # %s[reset] will be [bold][red]destroyed"), dispAddr)) case DiffLanguageDetectedDrift: - buf.WriteString(color.Color(fmt.Sprintf("[bold] # %s[reset] has been deleted", dispAddr))) + buf.WriteString(fmt.Sprintf(color.Color("[bold] # %s[reset] has been deleted"), dispAddr)) default: - buf.WriteString(color.Color(fmt.Sprintf("[bold] # %s[reset] delete (unknown reason %s)", dispAddr, language))) + buf.WriteString(fmt.Sprintf(color.Color("[bold] # %s[reset] delete (unknown reason %s)"), dispAddr, language)) } if change.DeposedKey != states.NotDeposed { // Some extra context about this unusual situation. 
@@ -104,7 +104,7 @@ func ResourceChange( } case plans.NoOp: if change.Moved() { - buf.WriteString(color.Color(fmt.Sprintf("[bold] # %s[reset] has moved to [bold]%s[reset]", change.PrevRunAddr.String(), dispAddr))) + buf.WriteString(fmt.Sprintf(color.Color("[bold] # %s[reset] has moved to [bold]%s[reset]"), change.PrevRunAddr.String(), dispAddr)) break } fallthrough @@ -115,7 +115,7 @@ func ResourceChange( buf.WriteString(color.Color("[reset]\n")) if change.Moved() && change.Action != plans.NoOp { - buf.WriteString(color.Color(fmt.Sprintf("[bold] # [reset]([bold]%s[reset] has moved to [bold]%s[reset])\n", change.PrevRunAddr.String(), dispAddr))) + buf.WriteString(fmt.Sprintf(color.Color("[bold] # [reset]([bold]%s[reset] has moved to [bold]%s[reset])\n"), change.PrevRunAddr.String(), dispAddr)) } if change.Moved() && change.Action == plans.NoOp { @@ -290,7 +290,7 @@ func (p *blockBodyDiffPrinter) writeBlockBodyDiff(schema *configschema.Block, ol } p.buf.WriteString("\n") p.buf.WriteString(strings.Repeat(" ", indent+2)) - p.buf.WriteString(p.color.Color(fmt.Sprintf("[dark_gray]# (%d unchanged %s hidden)[reset]", result.skippedBlocks, noun))) + p.buf.WriteString(fmt.Sprintf(p.color.Color("[dark_gray]# (%d unchanged %s hidden)[reset]"), result.skippedBlocks, noun)) } } @@ -1310,7 +1310,7 @@ func (p *blockBodyDiffPrinter) writeValueDiff(old, new cty.Value, indent int, pa if suppressedElements == 1 { noun = "element" } - p.buf.WriteString(p.color.Color(fmt.Sprintf("[dark_gray]# (%d unchanged %s hidden)[reset]", suppressedElements, noun))) + p.buf.WriteString(fmt.Sprintf(p.color.Color("[dark_gray]# (%d unchanged %s hidden)[reset]"), suppressedElements, noun)) p.buf.WriteString("\n") } @@ -1371,7 +1371,7 @@ func (p *blockBodyDiffPrinter) writeValueDiff(old, new cty.Value, indent int, pa if hidden == 1 { noun = "element" } - p.buf.WriteString(p.color.Color(fmt.Sprintf("[dark_gray]# (%d unchanged %s hidden)[reset]", hidden, noun))) + p.buf.WriteString(fmt.Sprintf(p.color.Color("[dark_gray]# (%d unchanged %s hidden)[reset]"), hidden, noun)) p.buf.WriteString("\n") } @@ -1513,7 +1513,7 @@ func (p *blockBodyDiffPrinter) writeValueDiff(old, new cty.Value, indent int, pa if suppressedElements == 1 { noun = "element" } - p.buf.WriteString(p.color.Color(fmt.Sprintf("[dark_gray]# (%d unchanged %s hidden)[reset]", suppressedElements, noun))) + p.buf.WriteString(fmt.Sprintf(p.color.Color("[dark_gray]# (%d unchanged %s hidden)[reset]"), suppressedElements, noun)) p.buf.WriteString("\n") } @@ -1605,7 +1605,7 @@ func (p *blockBodyDiffPrinter) writeValueDiff(old, new cty.Value, indent int, pa if suppressedElements == 1 { noun = "element" } - p.buf.WriteString(p.color.Color(fmt.Sprintf("[dark_gray]# (%d unchanged %s hidden)[reset]", suppressedElements, noun))) + p.buf.WriteString(fmt.Sprintf(p.color.Color("[dark_gray]# (%d unchanged %s hidden)[reset]"), suppressedElements, noun)) p.buf.WriteString("\n") } @@ -1678,7 +1678,7 @@ func (p *blockBodyDiffPrinter) writeSensitivityWarning(old, new cty.Value, inden if new.HasMark(marks.Sensitive) && !old.HasMark(marks.Sensitive) { p.buf.WriteString(strings.Repeat(" ", indent)) - p.buf.WriteString(p.color.Color(fmt.Sprintf("# [yellow]Warning:[reset] this %s will be marked as sensitive and will not\n", diffType))) + p.buf.WriteString(fmt.Sprintf(p.color.Color("# [yellow]Warning:[reset] this %s will be marked as sensitive and will not\n"), diffType)) p.buf.WriteString(strings.Repeat(" ", indent)) p.buf.WriteString(fmt.Sprintf("# display in UI output after applying this 
change.%s\n", valueUnchangedSuffix)) } @@ -1686,7 +1686,7 @@ func (p *blockBodyDiffPrinter) writeSensitivityWarning(old, new cty.Value, inden // Note if changing this attribute will change its sensitivity if old.HasMark(marks.Sensitive) && !new.HasMark(marks.Sensitive) { p.buf.WriteString(strings.Repeat(" ", indent)) - p.buf.WriteString(p.color.Color(fmt.Sprintf("# [yellow]Warning:[reset] this %s will no longer be marked as sensitive\n", diffType))) + p.buf.WriteString(fmt.Sprintf(p.color.Color("# [yellow]Warning:[reset] this %s will no longer be marked as sensitive\n"), diffType)) p.buf.WriteString(strings.Repeat(" ", indent)) p.buf.WriteString(fmt.Sprintf("# after applying this change.%s\n", valueUnchangedSuffix)) } @@ -1948,7 +1948,7 @@ func (p *blockBodyDiffPrinter) writeSkippedAttr(skipped, indent int) { } p.buf.WriteString("\n") p.buf.WriteString(strings.Repeat(" ", indent)) - p.buf.WriteString(p.color.Color(fmt.Sprintf("[dark_gray]# (%d unchanged %s hidden)[reset]", skipped, noun))) + p.buf.WriteString(fmt.Sprintf(p.color.Color("[dark_gray]# (%d unchanged %s hidden)[reset]"), skipped, noun)) } } @@ -1959,7 +1959,7 @@ func (p *blockBodyDiffPrinter) writeSkippedElems(skipped, indent int) { noun = "element" } p.buf.WriteString(strings.Repeat(" ", indent)) - p.buf.WriteString(p.color.Color(fmt.Sprintf("[dark_gray]# (%d unchanged %s hidden)[reset]", skipped, noun))) + p.buf.WriteString(fmt.Sprintf(p.color.Color("[dark_gray]# (%d unchanged %s hidden)[reset]"), skipped, noun)) p.buf.WriteString("\n") } } From 4d1baaceabaa968c1e5009536a70341b1657b7fe Mon Sep 17 00:00:00 2001 From: Laura Pacilio <83350965+laurapacilio@users.noreply.github.com> Date: Thu, 16 Sep 2021 17:06:52 -0400 Subject: [PATCH 072/644] Add Machine-Readable UI to sidebar and add hyphen :-) --- .../internals/machine-readable-ui.html.md | 4 ++-- website/layouts/docs.erb | 23 +++++++++++-------- 2 files changed, 16 insertions(+), 11 deletions(-) diff --git a/website/docs/internals/machine-readable-ui.html.md b/website/docs/internals/machine-readable-ui.html.md index 76f675132071..250481eb7306 100644 --- a/website/docs/internals/machine-readable-ui.html.md +++ b/website/docs/internals/machine-readable-ui.html.md @@ -1,12 +1,12 @@ --- layout: "docs" -page_title: "Internals: Machine Readable UI" +page_title: "Internals: Machine-Readable UI" sidebar_current: "docs-internals-machine-readable-ui" description: |- Terraform provides a machine-readable streaming JSON UI output for plan, apply, and refresh operations. --- -# Machine Readable UI +# Machine-Readable UI -> **Note:** This format is available in Terraform 0.15.3 and later. diff --git a/website/layouts/docs.erb b/website/layouts/docs.erb index 06fcc1ea5ca3..b5b13bb5380c 100644 --- a/website/layouts/docs.erb +++ b/website/layouts/docs.erb @@ -543,40 +543,45 @@
  • - Module Registry Protocol + JSON Output Format
  • - Provider Network Mirror Protocol + Login Protocol
  • - Provider Registry Protocol + Machine-Readable UI
  • - Resource Graph + Module Registry Protocol
  • - Resource Lifecycle + Provider Metadata
  • - Login Protocol + Provider Network Mirror Protocol
  • - JSON Output Format + Provider Registry Protocol
  • - Remote Service Discovery + Resource Graph
  • - Provider Metadata + Resource Lifecycle
  • + +
  • + Remote Service Discovery +
  • + From a8e5b6a4ad1747d95a6f00acde01d68f3fc5b3b8 Mon Sep 17 00:00:00 2001 From: Laura Pacilio <83350965+laurapacilio@users.noreply.github.com> Date: Thu, 16 Sep 2021 17:11:31 -0400 Subject: [PATCH 073/644] Fix alphabetical order of sidebar --- website/layouts/docs.erb | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/website/layouts/docs.erb b/website/layouts/docs.erb index b5b13bb5380c..da4f0f6c9176 100644 --- a/website/layouts/docs.erb +++ b/website/layouts/docs.erb @@ -571,15 +571,15 @@
  • - Resource Graph + Remote Service Discovery
  • - +
  • - Resource Lifecycle + Resource Graph
  • - Remote Service Discovery + Resource Lifecycle
  • From 9a7bbdab6f0707fa07de5b1a57a50f492ab00459 Mon Sep 17 00:00:00 2001 From: Alisdair McDiarmid Date: Fri, 17 Sep 2021 14:46:44 -0400 Subject: [PATCH 074/644] Fix terraform add test failure due to bad merge --- internal/command/add_test.go | 25 +++++++++++++++++++++++-- 1 file changed, 23 insertions(+), 2 deletions(-) diff --git a/internal/command/add_test.go b/internal/command/add_test.go index 7effd2f698ea..7afe5e754bc6 100644 --- a/internal/command/add_test.go +++ b/internal/command/add_test.go @@ -136,9 +136,23 @@ resource "test_instance" "new" { fmt.Println(output.Stderr()) t.Fatalf("wrong exit status. Got %d, want 0", code) } - expected := `resource "test_instance" "new" { + expected := `# NOTE: The "terraform add" command is currently experimental and offers only a +# starting point for your resource configuration, with some limitations. +# +# The behavior of this command may change in future based on feedback, possibly +# in incompatible ways. We don't recommend building automation around this +# command at this time. If you have feedback about this command, please open +# a feature request issue in the Terraform GitHub repository. +resource "test_instance" "new" { value = null # REQUIRED string } +# NOTE: The "terraform add" command is currently experimental and offers only a +# starting point for your resource configuration, with some limitations. +# +# The behavior of this command may change in future based on feedback, possibly +# in incompatible ways. We don't recommend building automation around this +# command at this time. If you have feedback about this command, please open +# a feature request issue in the Terraform GitHub repository. resource "test_instance" "new2" { value = null # REQUIRED string } @@ -261,7 +275,14 @@ resource "test_instance" "new" { fmt.Println(output.Stderr()) t.Fatalf("wrong exit status. Got %d, want 0", code) } - expected := `resource "test_instance" "exists" { + expected := `# NOTE: The "terraform add" command is currently experimental and offers only a +# starting point for your resource configuration, with some limitations. +# +# The behavior of this command may change in future based on feedback, possibly +# in incompatible ways. We don't recommend building automation around this +# command at this time. If you have feedback about this command, please open +# a feature request issue in the Terraform GitHub repository. +resource "test_instance" "exists" { value = null # REQUIRED string } ` From 61b2d8e3fe89d2397ac8286bcba32060ea96c47d Mon Sep 17 00:00:00 2001 From: Alisdair McDiarmid Date: Thu, 16 Sep 2021 12:42:00 -0400 Subject: [PATCH 075/644] core: Plan drift includes move-only changes Previously, drifted resources included only updates and deletes. To correctly display the full changes which would result as part of a refresh-only apply, the drifted resources must also include move-only changes. 
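To make "move-only changes" concrete: the sketch below is condensed from the operation view test added later in this series, with an illustrative function name; it is not part of this patch.

```
// exampleMoveOnlyDrift is illustrative only: it builds the kind of entry that
// now appears in a plan's DriftedResources when a resource was moved but its
// object value did not otherwise change. The Action is NoOp because the value
// is identical, but Addr differs from PrevRunAddr, which is what lets a
// refresh-only plan report the move.
func exampleMoveOnlyDrift() *plans.ResourceInstanceChange {
	addrNew := addrs.Resource{
		Mode: addrs.ManagedResourceMode,
		Type: "test_resource",
		Name: "somewhere",
	}.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance)
	addrOld := addrs.Resource{
		Mode: addrs.ManagedResourceMode,
		Type: "test_resource",
		Name: "anywhere",
	}.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance)

	obj := cty.ObjectVal(map[string]cty.Value{
		"id":  cty.StringVal("1234"),
		"foo": cty.StringVal("bar"),
	})

	return &plans.ResourceInstanceChange{
		Addr:        addrNew, // current address
		PrevRunAddr: addrOld, // address before the move
		Change: plans.Change{
			Action: plans.NoOp, // the value itself is unchanged
			Before: obj,
			After:  obj,
		},
	}
}
```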
--- internal/terraform/context_plan.go | 36 ++++++++++++++++++++++-------- 1 file changed, 27 insertions(+), 9 deletions(-) diff --git a/internal/terraform/context_plan.go b/internal/terraform/context_plan.go index fce564bf3fa3..20124bd88436 100644 --- a/internal/terraform/context_plan.go +++ b/internal/terraform/context_plan.go @@ -499,6 +499,14 @@ func (c *Context) driftedResources(config *configs.Config, oldState, newState *s continue } addr := rs.Addr.Instance(key) + + // Previous run address defaults to the current address, but + // can differ if the resource moved before refreshing + prevRunAddr := addr + if move, ok := moves[addr.UniqueKey()]; ok { + prevRunAddr = move.From + } + newIS := newState.ResourceInstance(addr) schema, _ := schemas.ResourceTypeConfig( @@ -547,20 +555,30 @@ func (c *Context) driftedResources(config *configs.Config, oldState, newState *s newVal = cty.NullVal(ty) } - if oldVal.RawEquals(newVal) { + if oldVal.RawEquals(newVal) && addr.Equal(prevRunAddr) { // No drift if the two values are semantically equivalent + // and no move has happened continue } - // We can only detect updates and deletes as drift. - action := plans.Update - if newVal.IsNull() { + // We can detect three types of changes after refreshing state, + // only two of which are easily understood as "drift": + // + // - Resources which were deleted outside of Terraform; + // - Resources where the object value has changed outside of + // Terraform; + // - Resources which have been moved without other changes. + // + // All of these are returned as drift, to allow refresh-only plans + // to present a full set of changes which will be applied. + var action plans.Action + switch { + case newVal.IsNull(): action = plans.Delete - } - - prevRunAddr := addr - if move, ok := moves[addr.UniqueKey()]; ok { - prevRunAddr = move.From + case !oldVal.RawEquals(newVal): + action = plans.Update + default: + action = plans.NoOp } change := &plans.ResourceInstanceChange{ From 638784b19590c489cf03d7567e77a901d1a5a313 Mon Sep 17 00:00:00 2001 From: Alisdair McDiarmid Date: Thu, 16 Sep 2021 12:44:13 -0400 Subject: [PATCH 076/644] cli: Omit move-only drift, except for refresh-only The set of drifted resources now includes move-only changes, where the object value is identical but a move has been executed. In normal operation, we previously displayed these moves twice: once as part of drift output, and once as part of planned changes. As of this commit we omit move-only changes from drift display, except for refresh-only plans. This fixes the redundant output.
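The display rule described above amounts to a small filter over the drifted resources. The following is only a sketch with an illustrative function name; the real logic lives inline in renderPlan, as shown in the plan.go diff below.

```
// driftToDisplay (illustrative name) returns the drift entries to render:
// move-only (NoOp) entries are shown only for refresh-only plans, because in
// other plan modes the move is already reported with the planned changes.
func driftToDisplay(mode plans.Mode, drifted []*plans.ResourceInstanceChangeSrc) []*plans.ResourceInstanceChangeSrc {
	if mode == plans.RefreshOnlyMode {
		return drifted
	}
	var display []*plans.ResourceInstanceChangeSrc
	for _, dr := range drifted {
		if dr.Action != plans.NoOp {
			display = append(display, dr)
		}
	}
	return display
}
```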
--- internal/command/views/operation_test.go | 49 ++++++++++++++++ internal/command/views/plan.go | 23 ++++++-- internal/terraform/context_plan.go | 2 +- internal/terraform/context_plan2_test.go | 75 ++++++++++++++++++++++++ 4 files changed, 142 insertions(+), 7 deletions(-) diff --git a/internal/command/views/operation_test.go b/internal/command/views/operation_test.go index b2e4f8f8e47a..56ced35779a2 100644 --- a/internal/command/views/operation_test.go +++ b/internal/command/views/operation_test.go @@ -189,6 +189,55 @@ func TestOperation_planNoChanges(t *testing.T) { }, "If you were expecting these changes then you can apply this plan", }, + "move-only changes in refresh-only mode": { + func(schemas *terraform.Schemas) *plans.Plan { + addr := addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "test_resource", + Name: "somewhere", + }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance) + addrPrev := addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "test_resource", + Name: "anywhere", + }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance) + schema, _ := schemas.ResourceTypeConfig( + addrs.NewDefaultProvider("test"), + addr.Resource.Resource.Mode, + addr.Resource.Resource.Type, + ) + ty := schema.ImpliedType() + rc := &plans.ResourceInstanceChange{ + Addr: addr, + PrevRunAddr: addrPrev, + ProviderAddr: addrs.RootModuleInstance.ProviderConfigDefault( + addrs.NewDefaultProvider("test"), + ), + Change: plans.Change{ + Action: plans.NoOp, + Before: cty.ObjectVal(map[string]cty.Value{ + "id": cty.StringVal("1234"), + "foo": cty.StringVal("bar"), + }), + After: cty.ObjectVal(map[string]cty.Value{ + "id": cty.StringVal("1234"), + "foo": cty.StringVal("bar"), + }), + }, + } + rcs, err := rc.Encode(ty) + if err != nil { + panic(err) + } + drs := []*plans.ResourceInstanceChangeSrc{rcs} + return &plans.Plan{ + UIMode: plans.RefreshOnlyMode, + Changes: plans.NewChanges(), + DriftedResources: drs, + } + }, + "test_resource.anywhere has moved to test_resource.somewhere", + }, "drift detected in destroy mode": { func(schemas *terraform.Schemas) *plans.Plan { return &plans.Plan{ diff --git a/internal/command/views/plan.go b/internal/command/views/plan.go index 175c263698a2..b861ddcfdf3f 100644 --- a/internal/command/views/plan.go +++ b/internal/command/views/plan.go @@ -96,9 +96,24 @@ func (v *PlanJSON) HelpPrompt() { // The plan renderer is used by the Operation view (for plan and apply // commands) and the Show view (for the show command). func renderPlan(plan *plans.Plan, schemas *terraform.Schemas, view *View) { - haveRefreshChanges := len(plan.DriftedResources) > 0 + // In refresh-only mode, we show all resources marked as drifted, + // including those which have moved without other changes. In other plan + // modes, move-only changes will be rendered in the planned changes, so + // we skip them here. 
+ var driftedResources []*plans.ResourceInstanceChangeSrc + if plan.UIMode == plans.RefreshOnlyMode { + driftedResources = plan.DriftedResources + } else { + for _, dr := range plan.DriftedResources { + if dr.Action != plans.NoOp { + driftedResources = append(driftedResources, dr) + } + } + } + + haveRefreshChanges := len(driftedResources) > 0 if haveRefreshChanges { - renderChangesDetectedByRefresh(plan.DriftedResources, schemas, view) + renderChangesDetectedByRefresh(driftedResources, schemas, view) switch plan.UIMode { case plans.RefreshOnlyMode: view.streams.Println(format.WordWrap( @@ -368,10 +383,6 @@ func renderChangesDetectedByRefresh(drs []*plans.ResourceInstanceChangeSrc, sche }) for _, rcs := range drs { - if rcs.Action == plans.NoOp && !rcs.Moved() { - continue - } - providerSchema := schemas.ProviderSchema(rcs.ProviderAddr.Provider) if providerSchema == nil { // Should never happen diff --git a/internal/terraform/context_plan.go b/internal/terraform/context_plan.go index 20124bd88436..3735dd91c927 100644 --- a/internal/terraform/context_plan.go +++ b/internal/terraform/context_plan.go @@ -471,7 +471,7 @@ func (c *Context) planGraph(config *configs.Config, prevRunState *states.State, func (c *Context) driftedResources(config *configs.Config, oldState, newState *states.State, moves map[addrs.UniqueKey]refactoring.MoveResult) ([]*plans.ResourceInstanceChangeSrc, tfdiags.Diagnostics) { var diags tfdiags.Diagnostics - if newState.ManagedResourcesEqual(oldState) { + if newState.ManagedResourcesEqual(oldState) && len(moves) == 0 { // Nothing to do, because we only detect and report drift for managed // resource instances. return nil, diags diff --git a/internal/terraform/context_plan2_test.go b/internal/terraform/context_plan2_test.go index 46185ae22e76..1167f234c00d 100644 --- a/internal/terraform/context_plan2_test.go +++ b/internal/terraform/context_plan2_test.go @@ -984,7 +984,82 @@ The -target option is not for routine use, and is provided only for exceptional if diff := cmp.Diff(wantDiags, gotDiags); diff != "" { t.Errorf("wrong diagnostics\n%s", diff) } + }) +} + +func TestContext2Plan_movedResourceRefreshOnly(t *testing.T) { + addrA := mustResourceInstanceAddr("test_object.a") + addrB := mustResourceInstanceAddr("test_object.b") + m := testModuleInline(t, map[string]string{ + "main.tf": ` + resource "test_object" "b" { + } + + moved { + from = test_object.a + to = test_object.b + } + + terraform { + experiments = [config_driven_move] + } + `, + }) + + state := states.BuildState(func(s *states.SyncState) { + // The prior state tracks test_object.a, which we should treat as + // test_object.b because of the "moved" block in the config. 
+ s.SetResourceInstanceCurrent(addrA, &states.ResourceInstanceObjectSrc{ + AttrsJSON: []byte(`{}`), + Status: states.ObjectReady, + }, mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`)) + }) + p := simpleMockProvider() + ctx := testContext2(t, &ContextOpts{ + Providers: map[addrs.Provider]providers.Factory{ + addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), + }, + }) + + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.RefreshOnlyMode, + }) + if diags.HasErrors() { + t.Fatalf("unexpected errors\n%s", diags.Err().Error()) + } + + t.Run(addrA.String(), func(t *testing.T) { + instPlan := plan.Changes.ResourceInstance(addrA) + if instPlan != nil { + t.Fatalf("unexpected plan for %s; should've moved to %s", addrA, addrB) + } + }) + t.Run(addrB.String(), func(t *testing.T) { + instPlan := plan.Changes.ResourceInstance(addrB) + if instPlan != nil { + t.Fatalf("unexpected plan for %s", addrB) + } + }) + t.Run("drift", func(t *testing.T) { + var drifted *plans.ResourceInstanceChangeSrc + for _, dr := range plan.DriftedResources { + if dr.Addr.Equal(addrB) { + drifted = dr + break + } + } + + if drifted == nil { + t.Fatalf("instance %s is missing from the drifted resource changes", addrB) + } + + if got, want := drifted.PrevRunAddr, addrA; !got.Equal(want) { + t.Errorf("wrong previous run address\ngot: %s\nwant: %s", got, want) + } + if got, want := drifted.Action, plans.NoOp; got != want { + t.Errorf("wrong planned action\ngot: %s\nwant: %s", got, want) + } }) } From 7f99a8802e85e09ef4407d8931ddc99b3b5507e1 Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Thu, 16 Sep 2021 17:23:32 -0700 Subject: [PATCH 077/644] addrs: MoveEndpointInModule.SelectsResource This is similar to the existing SelectsModule method, returning true if the reciever selects either a particular resource as a whole or any of the instances of that resource. We don't need this test in the normal case, but we will need it in a subsequent commit when we'll be possibly generating _implied_ move statements between instances of resources, but only if there aren't explicit move statements mentioning those resources already. --- internal/addrs/move_endpoint_module.go | 31 +++++ internal/addrs/move_endpoint_module_test.go | 127 ++++++++++++++++++++ 2 files changed, 158 insertions(+) diff --git a/internal/addrs/move_endpoint_module.go b/internal/addrs/move_endpoint_module.go index 654252ba9f25..5b9b62478c1e 100644 --- a/internal/addrs/move_endpoint_module.go +++ b/internal/addrs/move_endpoint_module.go @@ -233,6 +233,37 @@ func (e *MoveEndpointInModule) SelectsModule(addr ModuleInstance) bool { return true } +// SelectsResource returns true if the receiver directly selects either +// the given resource or one of its instances. +func (e *MoveEndpointInModule) SelectsResource(addr AbsResource) bool { + // Only a subset of subject types can possibly select a resource, so + // we'll take care of those quickly before we do anything more expensive. + switch e.relSubject.(type) { + case AbsResource, AbsResourceInstance: + // okay + default: + return false // can't possibly match + } + + if !e.SelectsModule(addr.Module) { + return false + } + + // If we get here then we know the module part matches, so we only need + // to worry about the relative resource part. 
+ switch relSubject := e.relSubject.(type) { + case AbsResource: + return addr.Resource.Equal(relSubject.Resource) + case AbsResourceInstance: + // We intentionally ignore the instance key, because we consider + // instances to be part of the resource they belong to. + return addr.Resource.Equal(relSubject.Resource.Resource) + default: + // We should've filtered out all other types above + panic(fmt.Sprintf("unsupported relSubject type %T", relSubject)) + } +} + // moduleInstanceCanMatch indicates that modA can match modB taking into // account steps with an anyKey InstanceKey as wildcards. The comparison of // wildcard steps is done symmetrically, because varying portions of either diff --git a/internal/addrs/move_endpoint_module_test.go b/internal/addrs/move_endpoint_module_test.go index 6c85380b6089..bda37ca53974 100644 --- a/internal/addrs/move_endpoint_module_test.go +++ b/internal/addrs/move_endpoint_module_test.go @@ -1457,6 +1457,133 @@ func TestSelectsModule(t *testing.T) { } } +func TestSelectsResource(t *testing.T) { + matchingResource := Resource{ + Mode: ManagedResourceMode, + Type: "foo", + Name: "matching", + } + unmatchingResource := Resource{ + Mode: ManagedResourceMode, + Type: "foo", + Name: "unmatching", + } + childMod := Module{ + "child", + } + childModMatchingInst := ModuleInstance{ + ModuleInstanceStep{Name: "child", InstanceKey: StringKey("matching")}, + } + childModUnmatchingInst := ModuleInstance{ + ModuleInstanceStep{Name: "child", InstanceKey: StringKey("unmatching")}, + } + + tests := []struct { + Endpoint *MoveEndpointInModule + Addr AbsResource + Selects bool + }{ + { + Endpoint: &MoveEndpointInModule{ + relSubject: matchingResource.Absolute(nil), + }, + Addr: matchingResource.Absolute(nil), + Selects: true, // exact match + }, + { + Endpoint: &MoveEndpointInModule{ + relSubject: unmatchingResource.Absolute(nil), + }, + Addr: matchingResource.Absolute(nil), + Selects: false, // wrong resource name + }, + { + Endpoint: &MoveEndpointInModule{ + relSubject: unmatchingResource.Instance(IntKey(1)).Absolute(nil), + }, + Addr: matchingResource.Absolute(nil), + Selects: false, // wrong resource name + }, + { + Endpoint: &MoveEndpointInModule{ + relSubject: matchingResource.Instance(NoKey).Absolute(nil), + }, + Addr: matchingResource.Absolute(nil), + Selects: true, // matches one instance + }, + { + Endpoint: &MoveEndpointInModule{ + relSubject: matchingResource.Instance(IntKey(0)).Absolute(nil), + }, + Addr: matchingResource.Absolute(nil), + Selects: true, // matches one instance + }, + { + Endpoint: &MoveEndpointInModule{ + relSubject: matchingResource.Instance(StringKey("a")).Absolute(nil), + }, + Addr: matchingResource.Absolute(nil), + Selects: true, // matches one instance + }, + { + Endpoint: &MoveEndpointInModule{ + module: childMod, + relSubject: matchingResource.Absolute(nil), + }, + Addr: matchingResource.Absolute(childModMatchingInst), + Selects: true, // in one of the instances of the module where the statement was written + }, + { + Endpoint: &MoveEndpointInModule{ + relSubject: matchingResource.Absolute(childModMatchingInst), + }, + Addr: matchingResource.Absolute(childModMatchingInst), + Selects: true, // exact match + }, + { + Endpoint: &MoveEndpointInModule{ + relSubject: matchingResource.Instance(IntKey(2)).Absolute(childModMatchingInst), + }, + Addr: matchingResource.Absolute(childModMatchingInst), + Selects: true, // matches one instance + }, + { + Endpoint: &MoveEndpointInModule{ + relSubject: matchingResource.Absolute(childModMatchingInst), + }, 
+ Addr: matchingResource.Absolute(childModUnmatchingInst), + Selects: false, // the containing module instance doesn't match + }, + { + Endpoint: &MoveEndpointInModule{ + relSubject: AbsModuleCall{ + Module: mustParseModuleInstanceStr("module.foo[2]"), + Call: ModuleCall{Name: "bar"}, + }, + }, + Addr: matchingResource.Absolute(mustParseModuleInstanceStr("module.foo[2]")), + Selects: false, // a module call can't match a resource + }, + { + Endpoint: &MoveEndpointInModule{ + relSubject: mustParseModuleInstanceStr("module.foo[2]"), + }, + Addr: matchingResource.Absolute(mustParseModuleInstanceStr("module.foo[2]")), + Selects: false, // a module instance can't match a resource + }, + } + + for i, test := range tests { + t.Run(fmt.Sprintf("[%02d]%s SelectsResource(%s)", i, test.Endpoint, test.Addr), + func(t *testing.T) { + if got, want := test.Endpoint.SelectsResource(test.Addr), test.Selects; got != want { + t.Errorf("wrong result\nReceiver: %s\nArgument: %s\ngot: %t\nwant: %t", test.Endpoint, test.Addr, got, want) + } + }, + ) + } +} + func mustParseAbsResourceInstanceStr(s string) AbsResourceInstance { r, diags := ParseAbsResourceInstanceStr(s) if diags.HasErrors() { From ef5a1c9cfe5b0c5c578707c135028528322e4d99 Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Thu, 16 Sep 2021 17:58:06 -0700 Subject: [PATCH 078/644] refactoring: ImpliedMoveStatements function This new function complements the existing function FindMoveStatements by potentially generating additional "implied" move statements that aren't written explicit in the configuration but that we'll infer by comparing the configuration and te previous run state. The goal here is to infer only enough to replicate the effect of the "count boundary fixup" graph node (terraform.NodeCountBoundary) that we currently use to deal with this concern of preserving the zero-instance when switching between "count" and not "count". This is just dead code for now. A subsequent commit will introduce this into the "terraform" package while also removing terraform.NodeCountBoundary, thus achieving the same effect as before but in a way that'll get reported in the UI as a move, using the same language that we'd use for an explicit move statement. --- internal/addrs/move_endpoint_module.go | 40 ++++++ internal/refactoring/move_statement.go | 129 +++++++++++++++++ internal/refactoring/move_statement_test.go | 132 ++++++++++++++++++ .../move-statement-implied.tf | 46 ++++++ 4 files changed, 347 insertions(+) create mode 100644 internal/refactoring/move_statement_test.go create mode 100644 internal/refactoring/testdata/move-statement-implied/move-statement-implied.tf diff --git a/internal/addrs/move_endpoint_module.go b/internal/addrs/move_endpoint_module.go index 5b9b62478c1e..8bbbe83e7d11 100644 --- a/internal/addrs/move_endpoint_module.go +++ b/internal/addrs/move_endpoint_module.go @@ -2,6 +2,7 @@ package addrs import ( "fmt" + "reflect" "strings" "github.com/hashicorp/terraform/internal/tfdiags" @@ -51,6 +52,27 @@ type MoveEndpointInModule struct { relSubject AbsMoveable } +// ImpliedMoveStatementEndpoint is a special constructor for MoveEndpointInModule +// which is suitable only for constructing "implied" move statements, which +// means that we inferred the statement automatically rather than building it +// from an explicit block in the configuration. 
+// +// Implied move endpoints, just as for the statements they are embedded in, +// have somewhat-related-but-imprecise source ranges, typically referring to +// some general configuration construct that implied the statement, because +// by definition there is no explicit move endpoint expression in this case. +func ImpliedMoveStatementEndpoint(addr AbsResourceInstance, rng tfdiags.SourceRange) *MoveEndpointInModule { + // implied move endpoints always belong to the root module, because each + // one refers to a single resource instance inside a specific module + // instance, rather than all instances of the module where the resource + // was declared. + return &MoveEndpointInModule{ + SourceRange: rng, + module: RootModule, + relSubject: addr, + } +} + func (e *MoveEndpointInModule) ObjectKind() MoveEndpointKind { return absMoveableEndpointKind(e.relSubject) } @@ -85,6 +107,24 @@ func (e *MoveEndpointInModule) String() string { return buf.String() } +// Equal returns true if the reciever represents the same matching pattern +// as the other given endpoint, ignoring the source location information. +// +// This is not an optimized function and is here primarily to help with +// writing concise assertions in test code. +func (e *MoveEndpointInModule) Equal(other *MoveEndpointInModule) bool { + if (e == nil) != (other == nil) { + return false + } + if !e.module.Equal(other.module) { + return false + } + // This assumes that all of our possible "movables" are trivially + // comparable with reflect, which is true for all of them at the time + // of writing. + return reflect.DeepEqual(e.relSubject, other.relSubject) +} + // Module returns the address of the module where the receiving address was // declared. func (e *MoveEndpointInModule) Module() Module { diff --git a/internal/refactoring/move_statement.go b/internal/refactoring/move_statement.go index 9edafc7b4c09..baaf9d519b85 100644 --- a/internal/refactoring/move_statement.go +++ b/internal/refactoring/move_statement.go @@ -5,12 +5,26 @@ import ( "github.com/hashicorp/terraform/internal/addrs" "github.com/hashicorp/terraform/internal/configs" + "github.com/hashicorp/terraform/internal/states" "github.com/hashicorp/terraform/internal/tfdiags" ) type MoveStatement struct { From, To *addrs.MoveEndpointInModule DeclRange tfdiags.SourceRange + + // Implied is true for statements produced by ImpliedMoveStatements, and + // false for statements produced by FindMoveStatements. + // + // An "implied" statement is one that has no explicit "moved" block in + // the configuration and was instead generated automatically based on a + // comparison between current configuration and previous run state. + // For implied statements, the DeclRange field contains the source location + // of something in the source code that implied the statement, in which + // case it would probably be confusing to show that source range to the + // user, e.g. in an error message, without clearly mentioning that it's + // related to an implied move statement. 
+ Implied bool } // FindMoveStatements recurses through the modules of the given configuration @@ -34,6 +48,7 @@ func findMoveStatements(cfg *configs.Config, into []MoveStatement) []MoveStateme From: fromAddr, To: toAddr, DeclRange: tfdiags.SourceRangeFromHCL(mc.DeclRange), + Implied: false, }) } @@ -44,6 +59,102 @@ func findMoveStatements(cfg *configs.Config, into []MoveStatement) []MoveStateme return into } +// ImpliedMoveStatements compares addresses in the given state with addresses +// in the given configuration and potentially returns additional MoveStatement +// objects representing moves we infer automatically, even though they aren't +// explicitly recorded in the configuration. +// +// We do this primarily for backward compatibility with behaviors of Terraform +// versions prior to introducing explicit "moved" blocks. Specifically, this +// function aims to achieve the same result as the "NodeCountBoundary" +// heuristic from Terraform v1.0 and earlier, where adding or removing the +// "count" meta-argument from an already-created resource can automatically +// preserve the zeroth or the NoKey instance, depending on the direction of +// the change. We do this only for resources that aren't mentioned already +// in at least one explicit move statement. +// +// As with the previous-version heuristics it replaces, this is a best effort +// and doesn't handle all situations. An explicit move statement is always +// preferred, but our goal here is to match exactly the same cases that the +// old heuristic would've matched, to retain compatibility for existing modules. +// +// We should think very hard before adding any _new_ implication rules for +// moved statements. +func ImpliedMoveStatements(rootCfg *configs.Config, prevRunState *states.State, explicitStmts []MoveStatement) []MoveStatement { + return impliedMoveStatements(rootCfg, prevRunState, explicitStmts, nil) +} + +func impliedMoveStatements(cfg *configs.Config, prevRunState *states.State, explicitStmts []MoveStatement, into []MoveStatement) []MoveStatement { + modAddr := cfg.Path + + // There can be potentially many instances of the module, so we need + // to consider each of them separately. + for _, modState := range prevRunState.ModuleInstances(modAddr) { + // What we're looking for here is either a no-key resource instance + // where the configuration has count set or a zero-key resource + // instance where the configuration _doesn't_ have count set. + // If so, we'll generate a statement replacing no-key with zero-key or + // vice-versa. + for _, rState := range modState.Resources { + rAddr := rState.Addr + rCfg := cfg.Module.ResourceByAddr(rAddr.Resource) + if rCfg == nil { + // If there's no configuration at all then there can't be any + // automatic move fixup to do. + continue + } + approxSrcRange := tfdiags.SourceRangeFromHCL(rCfg.DeclRange) + + // NOTE: We're intentionally not checking to see whether the + // "to" addresses in our implied statements already have + // instances recorded in state, because ApplyMoves should + // deal with such conflicts in a deterministic way for both + // explicit and implicit moves, and we'd rather have that + // handled all in one place. + + var fromKey, toKey addrs.InstanceKey + + switch { + case rCfg.Count != nil: + // If we have a count expression then we'll use _that_ as + // a slightly-more-precise approximate source range. 
+ approxSrcRange = tfdiags.SourceRangeFromHCL(rCfg.Count.Range()) + + if riState := rState.Instances[addrs.NoKey]; riState != nil { + fromKey = addrs.NoKey + toKey = addrs.IntKey(0) + } + default: + if riState := rState.Instances[addrs.IntKey(0)]; riState != nil { + fromKey = addrs.IntKey(0) + toKey = addrs.NoKey + } + } + + if fromKey != toKey { + // We mustn't generate an impied statement if the user already + // wrote an explicit statement referring to this resource, + // because they may wish to select an instance key other than + // zero as the one to retain. + if !haveMoveStatementForResource(rAddr, explicitStmts) { + into = append(into, MoveStatement{ + From: addrs.ImpliedMoveStatementEndpoint(rAddr.Instance(fromKey), approxSrcRange), + To: addrs.ImpliedMoveStatementEndpoint(rAddr.Instance(toKey), approxSrcRange), + DeclRange: approxSrcRange, + Implied: true, + }) + } + } + } + } + + for _, childCfg := range cfg.Children { + into = findMoveStatements(childCfg, into) + } + + return into +} + func (s *MoveStatement) ObjectKind() addrs.MoveEndpointKind { // addrs.UnifyMoveEndpoints guarantees that both of our addresses have // the same kind, so we can just arbitrary use From and assume To will @@ -55,3 +166,21 @@ func (s *MoveStatement) ObjectKind() addrs.MoveEndpointKind { func (s *MoveStatement) Name() string { return fmt.Sprintf("%s->%s", s.From, s.To) } + +func haveMoveStatementForResource(addr addrs.AbsResource, stmts []MoveStatement) bool { + // This is not a particularly optimal way to answer this question, + // particularly since our caller calls this function in a loop already, + // but we expect the total number of explicit statements to be small + // in any reasonable Terraform configuration and so a more complicated + // approach wouldn't be justified here. 
+ + for _, stmt := range stmts { + if stmt.From.SelectsResource(addr) { + return true + } + if stmt.To.SelectsResource(addr) { + return true + } + } + return false +} diff --git a/internal/refactoring/move_statement_test.go b/internal/refactoring/move_statement_test.go new file mode 100644 index 000000000000..93164f94cc5e --- /dev/null +++ b/internal/refactoring/move_statement_test.go @@ -0,0 +1,132 @@ +package refactoring + +import ( + "sort" + "testing" + + "github.com/google/go-cmp/cmp" + "github.com/hashicorp/terraform/internal/addrs" + "github.com/hashicorp/terraform/internal/states" + "github.com/hashicorp/terraform/internal/tfdiags" +) + +func TestImpliedMoveStatements(t *testing.T) { + resourceAddr := func(name string) addrs.AbsResource { + return addrs.Resource{ + Mode: addrs.ManagedResourceMode, + Type: "foo", + Name: name, + }.Absolute(addrs.RootModuleInstance) + } + instObjState := func() *states.ResourceInstanceObjectSrc { + return &states.ResourceInstanceObjectSrc{} + } + providerAddr := addrs.AbsProviderConfig{ + Module: addrs.RootModule, + Provider: addrs.MustParseProviderSourceString("hashicorp/foo"), + } + + rootCfg, _ := loadRefactoringFixture(t, "testdata/move-statement-implied") + prevRunState := states.BuildState(func(s *states.SyncState) { + s.SetResourceInstanceCurrent( + resourceAddr("formerly_count").Instance(addrs.IntKey(0)), + instObjState(), + providerAddr, + ) + s.SetResourceInstanceCurrent( + resourceAddr("formerly_count").Instance(addrs.IntKey(1)), + instObjState(), + providerAddr, + ) + s.SetResourceInstanceCurrent( + resourceAddr("now_count").Instance(addrs.NoKey), + instObjState(), + providerAddr, + ) + s.SetResourceInstanceCurrent( + resourceAddr("formerly_count_explicit").Instance(addrs.IntKey(0)), + instObjState(), + providerAddr, + ) + s.SetResourceInstanceCurrent( + resourceAddr("formerly_count_explicit").Instance(addrs.IntKey(1)), + instObjState(), + providerAddr, + ) + s.SetResourceInstanceCurrent( + resourceAddr("now_count_explicit").Instance(addrs.NoKey), + instObjState(), + providerAddr, + ) + + // This "ambiguous" resource is representing a rare but possible + // situation where we end up having a mixture of different index + // types in the state at the same time. The main way to get into + // this state would be to remove "count = 1" and then have the + // provider fail to destroy the zero-key instance even though we + // already created the no-key instance. Users can also get here + // by using "terraform state mv" in weird ways. 
+ s.SetResourceInstanceCurrent( + resourceAddr("ambiguous").Instance(addrs.NoKey), + instObjState(), + providerAddr, + ) + s.SetResourceInstanceCurrent( + resourceAddr("ambiguous").Instance(addrs.IntKey(0)), + instObjState(), + providerAddr, + ) + }) + + explicitStmts := FindMoveStatements(rootCfg) + got := ImpliedMoveStatements(rootCfg, prevRunState, explicitStmts) + want := []MoveStatement{ + { + From: addrs.ImpliedMoveStatementEndpoint(resourceAddr("formerly_count").Instance(addrs.IntKey(0)), tfdiags.SourceRange{}), + To: addrs.ImpliedMoveStatementEndpoint(resourceAddr("formerly_count").Instance(addrs.NoKey), tfdiags.SourceRange{}), + Implied: true, + DeclRange: tfdiags.SourceRange{ + Filename: "testdata/move-statement-implied/move-statement-implied.tf", + Start: tfdiags.SourcePos{Line: 9, Column: 1, Byte: 232}, + End: tfdiags.SourcePos{Line: 9, Column: 32, Byte: 263}, + }, + }, + { + From: addrs.ImpliedMoveStatementEndpoint(resourceAddr("now_count").Instance(addrs.NoKey), tfdiags.SourceRange{}), + To: addrs.ImpliedMoveStatementEndpoint(resourceAddr("now_count").Instance(addrs.IntKey(0)), tfdiags.SourceRange{}), + Implied: true, + DeclRange: tfdiags.SourceRange{ + Filename: "testdata/move-statement-implied/move-statement-implied.tf", + Start: tfdiags.SourcePos{Line: 14, Column: 11, Byte: 334}, + End: tfdiags.SourcePos{Line: 14, Column: 12, Byte: 335}, + }, + }, + + // We generate foo.ambiguous[0] to foo.ambiguous here, even though + // there's already a foo.ambiguous in the state, because it's the + // responsibility of the later ApplyMoves step to deal with the + // situation where an object wants to move into an address already + // occupied by another object. + { + From: addrs.ImpliedMoveStatementEndpoint(resourceAddr("ambiguous").Instance(addrs.IntKey(0)), tfdiags.SourceRange{}), + To: addrs.ImpliedMoveStatementEndpoint(resourceAddr("ambiguous").Instance(addrs.NoKey), tfdiags.SourceRange{}), + Implied: true, + DeclRange: tfdiags.SourceRange{ + Filename: "testdata/move-statement-implied/move-statement-implied.tf", + Start: tfdiags.SourcePos{Line: 42, Column: 1, Byte: 709}, + End: tfdiags.SourcePos{Line: 42, Column: 27, Byte: 735}, + }, + }, + } + + sort.Slice(got, func(i, j int) bool { + // This is just an arbitrary sort to make the result consistent + // regardless of what order the ImpliedMoveStatements function + // visits the entries in the state/config. + return got[i].DeclRange.Start.Line < got[j].DeclRange.Start.Line + }) + + if diff := cmp.Diff(want, got); diff != "" { + t.Errorf("wrong result\n%s", diff) + } +} diff --git a/internal/refactoring/testdata/move-statement-implied/move-statement-implied.tf b/internal/refactoring/testdata/move-statement-implied/move-statement-implied.tf new file mode 100644 index 000000000000..1de3238bdb34 --- /dev/null +++ b/internal/refactoring/testdata/move-statement-implied/move-statement-implied.tf @@ -0,0 +1,46 @@ +# This fixture is useful only in conjunction with a previous run state that +# conforms to the statements encoded in the resource names. It's for +# TestImpliedMoveStatements only. 
+ +terraform { + experiments = [config_driven_move] +} + +resource "foo" "formerly_count" { + # but not count anymore +} + +resource "foo" "now_count" { + count = 2 +} + +resource "foo" "new_no_count" { +} + +resource "foo" "new_count" { + count = 2 +} + +resource "foo" "formerly_count_explicit" { + # but not count anymore +} + +moved { + from = foo.formerly_count_explicit[1] + to = foo.formerly_count_explicit +} + +resource "foo" "now_count_explicit" { + count = 2 +} + +moved { + from = foo.now_count_explicit + to = foo.now_count_explicit[1] +} + +resource "foo" "ambiguous" { + # this one doesn't have count in the config, but the test should + # set it up to have both no-key and zero-key instances in the + # state. +} From ee9e346039ebe5f60782be24c82dd0c36fc1a783 Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Fri, 17 Sep 2021 15:08:29 -0700 Subject: [PATCH 079/644] refactoring: ApplyMoves skips moving when destination address occupied Per our rule that the content of the state can never make a move statement invalid, our behavior for two objects trying to occupy the same address will be to just ignore that and let the object already at the address take priority. For the moment this is silent from an end-user perspective and appears only in our internal logs. However, I'm hoping that our future planned adjustment to the interface of this function will include some way to allow reporting these collisions in some end-user-visible way, either as a separate warning per collision or as a single warning that collects together all of the collisions into a single message somehow. This situation can arise both because the previous run state already contained an object at the target address of a move and because more than one move ends up trying to target the same location. In the latter case, which one "wins" is decided by our depth-first traversal order, which is in turn derived from our chaining and nesting rules and is therefore arbitrary but deterministic. --- internal/refactoring/move_execute.go | 36 +++++++++ internal/refactoring/move_execute_test.go | 96 +++++++++++++++++++++++ 2 files changed, 132 insertions(+) diff --git a/internal/refactoring/move_execute.go b/internal/refactoring/move_execute.go index 178a336af0b7..66c0d6c0a0ea 100644 --- a/internal/refactoring/move_execute.go +++ b/internal/refactoring/move_execute.go @@ -79,6 +79,18 @@ func ApplyMoves(stmts []MoveStatement, state *states.State) map[addrs.UniqueKey] // directly. if newAddr, matches := modAddr.MoveDestination(stmt.From, stmt.To); matches { log.Printf("[TRACE] refactoring.ApplyMoves: %s has moved to %s", modAddr, newAddr) + + // If we already have a module at the new address then + // we'll skip this move and let the existing object take + // priority. + // TODO: This should probably generate a user-visible + // warning, but we'd need to rethink the signature of this + // function to achieve that. + if ms := state.Module(newAddr); ms != nil { + log.Printf("[WARN] Skipped moving %s to %s, because there's already another module instance at the destination", modAddr, newAddr) + continue + } + // We need to visit all of the resource instances in the // module and record them individually as results. 
for _, rs := range ms.Resources { @@ -105,6 +117,18 @@ func ApplyMoves(stmts []MoveStatement, state *states.State) map[addrs.UniqueKey] rAddr := rs.Addr if newAddr, matches := rAddr.MoveDestination(stmt.From, stmt.To); matches { log.Printf("[TRACE] refactoring.ApplyMoves: resource %s has moved to %s", rAddr, newAddr) + + // If we already have a resource at the new address then + // we'll skip this move and let the existing object take + // priority. + // TODO: This should probably generate a user-visible + // warning, but we'd need to rethink the signature of this + // function to achieve that. + if rs := state.Resource(newAddr); rs != nil { + log.Printf("[WARN] Skipped moving %s to %s, because there's already another resource at the destination", rAddr, newAddr) + continue + } + for key := range rs.Instances { oldInst := rAddr.Instance(key) newInst := newAddr.Instance(key) @@ -122,6 +146,18 @@ func ApplyMoves(stmts []MoveStatement, state *states.State) map[addrs.UniqueKey] iAddr := rAddr.Instance(key) if newAddr, matches := iAddr.MoveDestination(stmt.From, stmt.To); matches { log.Printf("[TRACE] refactoring.ApplyMoves: resource instance %s has moved to %s", iAddr, newAddr) + + // If we already have a resource instance at the new + // address then we'll skip this move and let the existing + // object take priority. + // TODO: This should probably generate a user-visible + // warning, but we'd need to rethink the signature of this + // function to achieve that. + if is := state.ResourceInstance(newAddr); is != nil { + log.Printf("[WARN] Skipped moving %s to %s, because there's already another resource instance at the destination", iAddr, newAddr) + continue + } + result := MoveResult{From: iAddr, To: newAddr} results[iAddr.UniqueKey()] = result results[newAddr.UniqueKey()] = result diff --git a/internal/refactoring/move_execute_test.go b/internal/refactoring/move_execute_test.go index 63c54a7adbca..f3ab1ec1e31f 100644 --- a/internal/refactoring/move_execute_test.go +++ b/internal/refactoring/move_execute_test.go @@ -391,6 +391,102 @@ func TestApplyMoves(t *testing.T) { `module.bar[0].foo.to[0]`, }, }, + + "move module instance to already-existing module instance": { + []MoveStatement{ + testMoveStatement(t, "", "module.bar[0]", "module.boo"), + }, + states.BuildState(func(s *states.SyncState) { + s.SetResourceInstanceCurrent( + instAddrs["module.bar[0].foo.from"], + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{}`), + }, + providerAddr, + ) + s.SetResourceInstanceCurrent( + instAddrs["module.boo.foo.to[0]"], + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{}`), + }, + providerAddr, + ) + }), + map[addrs.UniqueKey]MoveResult{ + // Nothing moved, because the module.b address is already + // occupied by another module. 
+ }, + []string{ + `module.bar[0].foo.from`, + `module.boo.foo.to[0]`, + }, + }, + + "move resource to already-existing resource": { + []MoveStatement{ + testMoveStatement(t, "", "foo.from", "foo.to"), + }, + states.BuildState(func(s *states.SyncState) { + s.SetResourceInstanceCurrent( + instAddrs["foo.from"], + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{}`), + }, + providerAddr, + ) + s.SetResourceInstanceCurrent( + instAddrs["foo.to"], + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{}`), + }, + providerAddr, + ) + }), + map[addrs.UniqueKey]MoveResult{ + // Nothing moved, because the module.b address is already + // occupied by another module. + }, + []string{ + `foo.from`, + `foo.to`, + }, + }, + + "move resource instance to already-existing resource instance": { + []MoveStatement{ + testMoveStatement(t, "", "foo.from", "foo.to[0]"), + }, + states.BuildState(func(s *states.SyncState) { + s.SetResourceInstanceCurrent( + instAddrs["foo.from"], + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{}`), + }, + providerAddr, + ) + s.SetResourceInstanceCurrent( + instAddrs["foo.to[0]"], + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{}`), + }, + providerAddr, + ) + }), + map[addrs.UniqueKey]MoveResult{ + // Nothing moved, because the module.b address is already + // occupied by another module. + }, + []string{ + `foo.from`, + `foo.to[0]`, + }, + }, } for name, test := range tests { From f0034beb339c52521c5c7cd40c99e225d861b2ac Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Fri, 17 Sep 2021 15:32:32 -0700 Subject: [PATCH 080/644] core: refactoring.ImpliedMoveStatements replaces NodeCountBoundary Going back a long time we've had a special magic behavior which tries to recognize a situation where a module author either added or removed the "count" argument from a resource that already has instances, and to silently rename the zeroth or no-key instance so that we don't plan to destroy and recreate the associated object. Now we have a more general idea of "move statements", and specifically the idea of "implied" move statements which replicates the same heuristic we used to use for this behavior, we can treat this magic renaming rule as just another "move statement", special only in that Terraform generates it automatically rather than it being written out explicitly in the configuration. In return for wiring that in, we can now remove altogether the NodeCountBoundary graph node type and its associated graph transformer, CountBoundaryTransformer. We handle moves as a preprocessing step before building the plan graph, so we no longer need to include any special nodes in the graph to deal with that situation. The test updates here are mainly for the graph builders themselves, to acknowledge that indeed we're no longer inserting the NodeCountBoundary vertices. The vertices that NodeCountBoundary previously depended on now become dependencies of the special "root" vertex, although in many cases here we don't see that explicitly because of the transitive reduction algorithm, which notices when there's already an equivalent indirect dependency chain and removes the redundant edge. 
We already have plenty of test coverage for these "count boundary" cases in
the context tests whose names start with TestContext2Plan_count and
TestContext2Apply_resourceCount, all of which continued to pass here without
any modification and so are not visible in the diff. The test functions
particularly relevant to this situation are:
 - TestContext2Plan_countIncreaseFromNotSet
 - TestContext2Plan_countDecreaseToOne
 - TestContext2Plan_countOneIndex
 - TestContext2Apply_countDecreaseToOneCorrupted

The last of those in particular deals with the situation where we have both
a no-key instance _and_ a zero-key instance in the prior state, which is
interesting here because it exercises an intentional interaction between
refactoring.ImpliedMoveStatements and refactoring.ApplyMoves: we intentionally
generate an implied move statement that produces a collision and then expect
ApplyMoves to deal with it in the same way as it deals with all other
collisions, ensuring that explicit and implied collisions are handled
consistently.

This does affect some UI-level tests, because a nice side-effect of this new
treatment of this old feature is that we can now report explicitly in the UI
that we're assigning new addresses to these objects, whereas before we just
said nothing and hoped the user would guess what had happened and why they
therefore weren't seeing a diff.

The backend/local plan tests actually had a pre-existing bug where they were
using a state with a different instance key than the config called for, but
were getting away with it because we previously fixed it up silently. That's
still fixed up, but it's now done with an explicit mention in the UI, so I
made the state consistent with the configuration here so that the tests can
recognize _real_ differences where present, as opposed to the errant
difference caused by that inconsistency.
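As a rough illustration of the behavior being replaced, consider a
hypothetical configuration (the resource type and attribute names below are
made up for the example) whose resource previously had no `count` argument,
so its existing object is bound to the no-key address in state:

```hcl
# Hypothetical example: a prior run created this resource *without* "count",
# so the existing object is tracked in state as test_instance.a (no key).
resource "test_instance" "a" {
  count = 2   # newly added in this run
  ami   = "foo"
}

# The implied move statement generated for this count transition behaves
# like a "moved" block written explicitly in the configuration:
moved {
  from = test_instance.a
  to   = test_instance.a[0]
}
```

Planning a configuration like that now reports the move from
test_instance.a to test_instance.a[0] explicitly, and if test_instance.a[0]
already exists in state the implied statement is skipped by ApplyMoves in
the same way as a collision caused by an explicit moved block.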
--- internal/backend/local/backend_plan_test.go | 2 +- .../multi-resource-update/output.json | 12 ++- internal/states/sync.go | 54 ------------- internal/terraform/context_apply_test.go | 19 ++++- internal/terraform/context_plan.go | 9 ++- internal/terraform/eval_count.go | 31 ------- internal/terraform/graph_builder_apply.go | 5 -- .../terraform/graph_builder_apply_test.go | 29 +++---- internal/terraform/graph_builder_plan.go | 5 -- internal/terraform/graph_builder_plan_test.go | 54 +++++-------- internal/terraform/node_count_boundary.go | 80 ------------------- .../terraform/node_count_boundary_test.go | 72 ----------------- internal/terraform/node_resource_plan.go | 4 - .../terraform/transform_count_boundary.go | 33 -------- 14 files changed, 65 insertions(+), 344 deletions(-) delete mode 100644 internal/terraform/node_count_boundary.go delete mode 100644 internal/terraform/node_count_boundary_test.go delete mode 100644 internal/terraform/transform_count_boundary.go diff --git a/internal/backend/local/backend_plan_test.go b/internal/backend/local/backend_plan_test.go index 7d5b92b161bb..6bae23555f3a 100644 --- a/internal/backend/local/backend_plan_test.go +++ b/internal/backend/local/backend_plan_test.go @@ -746,7 +746,7 @@ func testPlanState() *states.State { Mode: addrs.ManagedResourceMode, Type: "test_instance", Name: "foo", - }.Instance(addrs.IntKey(0)), + }.Instance(addrs.NoKey), &states.ResourceInstanceObjectSrc{ Status: states.ObjectReady, AttrsJSON: []byte(`{ diff --git a/internal/command/testdata/show-json/multi-resource-update/output.json b/internal/command/testdata/show-json/multi-resource-update/output.json index a7a6a3053fac..247749261356 100644 --- a/internal/command/testdata/show-json/multi-resource-update/output.json +++ b/internal/command/testdata/show-json/multi-resource-update/output.json @@ -47,22 +47,26 @@ }, "resource_drift": [ { - "address": "test_instance.test", + "address": "test_instance.test[0]", "mode": "managed", "type": "test_instance", "provider_name": "registry.terraform.io/hashicorp/test", "name": "test", + "index": 0, "change": { "actions": [ - "delete" + "no-op" ], "before": { "ami": "bar", "id": "placeholder" }, - "after": null, + "after": { + "ami": "bar", + "id": "placeholder" + }, "before_sensitive": {}, - "after_sensitive": false, + "after_sensitive": {}, "after_unknown": {} } } diff --git a/internal/states/sync.go b/internal/states/sync.go index c70714f9df49..f21984279509 100644 --- a/internal/states/sync.go +++ b/internal/states/sync.go @@ -248,60 +248,6 @@ func (s *SyncState) RemoveResourceIfEmpty(addr addrs.AbsResource) bool { return true } -// MaybeFixUpResourceInstanceAddressForCount deals with the situation where a -// resource has changed from having "count" set to not set, or vice-versa, and -// so we need to rename the zeroth instance key to no key at all, or vice-versa. -// -// Set countEnabled to true if the resource has count set in its new -// configuration, or false if it does not. -// -// The state is modified in-place if necessary, moving a resource instance -// between the two addresses. The return value is true if a change was made, -// and false otherwise. 
-func (s *SyncState) MaybeFixUpResourceInstanceAddressForCount(addr addrs.ConfigResource, countEnabled bool) bool { - s.lock.Lock() - defer s.lock.Unlock() - - // get all modules instances that may match this state - modules := s.state.ModuleInstances(addr.Module) - if len(modules) == 0 { - return false - } - - changed := false - - for _, ms := range modules { - relAddr := addr.Resource - rs := ms.Resource(relAddr) - if rs == nil { - continue - } - - huntKey := addrs.NoKey - replaceKey := addrs.InstanceKey(addrs.IntKey(0)) - if !countEnabled { - huntKey, replaceKey = replaceKey, huntKey - } - - is, exists := rs.Instances[huntKey] - if !exists { - continue - } - - if _, exists := rs.Instances[replaceKey]; exists { - // If the replacement key also exists then we'll do nothing and keep both. - continue - } - - // If we get here then we need to "rename" from hunt to replace - rs.Instances[replaceKey] = is - delete(rs.Instances, huntKey) - changed = true - } - - return changed -} - // SetResourceInstanceCurrent saves the given instance object as the current // generation of the resource instance with the given address, simultaneously // updating the recorded provider configuration address, dependencies, and diff --git a/internal/terraform/context_apply_test.go b/internal/terraform/context_apply_test.go index 9d155e73e865..babf067a5e69 100644 --- a/internal/terraform/context_apply_test.go +++ b/internal/terraform/context_apply_test.go @@ -2073,9 +2073,22 @@ func TestContext2Apply_countDecreaseToOneX(t *testing.T) { // https://github.com/PeoplePerHour/terraform/pull/11 // -// This tests a case where both a "resource" and "resource.0" are in -// the state file, which apparently is a reasonable backwards compatibility -// concern found in the above 3rd party repo. +// This tests a rare but possible situation where we have both a no-key and +// a zero-key instance of the same resource in the configuration when we +// disable count. +// +// The main way to get here is for a provider to fail to destroy the zero-key +// instance but succeed in creating the no-key instance, since those two +// can typically happen concurrently. There are various other ways to get here +// that might be considered user error, such as using "terraform state mv" +// to create a strange combination of different key types on the same resource. +// +// This test indirectly exercises an intentional interaction between +// refactoring.ImpliedMoveStatements and refactoring.ApplyMoves: we'll first +// generate an implied move statement from aws_instance.foo[0] to +// aws_instance.foo, but then refactoring.ApplyMoves should notice that and +// ignore the statement, in the same way as it would if an explicit move +// statement specified the same situation. 
func TestContext2Apply_countDecreaseToOneCorrupted(t *testing.T) { m := testModule(t, "apply-count-dec-one") p := testProvider("aws") diff --git a/internal/terraform/context_plan.go b/internal/terraform/context_plan.go index 3735dd91c927..f70a8f67b746 100644 --- a/internal/terraform/context_plan.go +++ b/internal/terraform/context_plan.go @@ -297,7 +297,14 @@ func (c *Context) destroyPlan(config *configs.Config, prevRunState *states.State } func (c *Context) prePlanFindAndApplyMoves(config *configs.Config, prevRunState *states.State, targets []addrs.Targetable) ([]refactoring.MoveStatement, map[addrs.UniqueKey]refactoring.MoveResult) { - moveStmts := refactoring.FindMoveStatements(config) + explicitMoveStmts := refactoring.FindMoveStatements(config) + implicitMoveStmts := refactoring.ImpliedMoveStatements(config, prevRunState, explicitMoveStmts) + var moveStmts []refactoring.MoveStatement + if stmtsLen := len(explicitMoveStmts) + len(implicitMoveStmts); stmtsLen > 0 { + moveStmts = make([]refactoring.MoveStatement, 0, stmtsLen) + moveStmts = append(moveStmts, explicitMoveStmts...) + moveStmts = append(moveStmts, implicitMoveStmts...) + } moveResults := refactoring.ApplyMoves(moveStmts, prevRunState) return moveStmts, moveResults } diff --git a/internal/terraform/eval_count.go b/internal/terraform/eval_count.go index a7f3c25ab35d..c74a051b301f 100644 --- a/internal/terraform/eval_count.go +++ b/internal/terraform/eval_count.go @@ -2,10 +2,8 @@ package terraform import ( "fmt" - "log" "github.com/hashicorp/hcl/v2" - "github.com/hashicorp/terraform/internal/addrs" "github.com/hashicorp/terraform/internal/tfdiags" "github.com/zclconf/go-cty/cty" "github.com/zclconf/go-cty/cty/gocty" @@ -101,32 +99,3 @@ func evaluateCountExpressionValue(expr hcl.Expression, ctx EvalContext) (cty.Val return countVal, diags } - -// fixResourceCountSetTransition is a helper function to fix up the state when a -// resource transitions its "count" from being set to unset or vice-versa, -// treating a 0-key and a no-key instance as aliases for one another across -// the transition. -// -// The correct time to call this function is in the DynamicExpand method for -// a node representing a resource, just after evaluating the count with -// evaluateCountExpression, and before any other analysis of the -// state such as orphan detection. -// -// This function calls methods on the given EvalContext to update the current -// state in-place, if necessary. It is a no-op if there is no count transition -// taking place. -// -// Since the state is modified in-place, this function must take a writer lock -// on the state. The caller must therefore not also be holding a state lock, -// or this function will block forever awaiting the lock. 
-func fixResourceCountSetTransition(ctx EvalContext, addr addrs.ConfigResource, countEnabled bool) { - state := ctx.State() - if state.MaybeFixUpResourceInstanceAddressForCount(addr, countEnabled) { - log.Printf("[TRACE] renamed first %s instance in transient state due to count argument change", addr) - } - - refreshState := ctx.RefreshState() - if refreshState != nil && refreshState.MaybeFixUpResourceInstanceAddressForCount(addr, countEnabled) { - log.Printf("[TRACE] renamed first %s instance in transient state due to count argument change", addr) - } -} diff --git a/internal/terraform/graph_builder_apply.go b/internal/terraform/graph_builder_apply.go index 94f1e7699fb5..75f9d3d4ad25 100644 --- a/internal/terraform/graph_builder_apply.go +++ b/internal/terraform/graph_builder_apply.go @@ -152,11 +152,6 @@ func (b *ApplyGraphBuilder) Steps() []GraphTransformer { // Target &TargetsTransformer{Targets: b.Targets}, - // Add the node to fix the state count boundaries - &CountBoundaryTransformer{ - Config: b.Config, - }, - // Close opened plugin connections &CloseProviderTransformer{}, diff --git a/internal/terraform/graph_builder_apply_test.go b/internal/terraform/graph_builder_apply_test.go index e1aba8d2c136..88ebcfefd1f3 100644 --- a/internal/terraform/graph_builder_apply_test.go +++ b/internal/terraform/graph_builder_apply_test.go @@ -5,10 +5,12 @@ import ( "strings" "testing" + "github.com/google/go-cmp/cmp" + "github.com/zclconf/go-cty/cty" + "github.com/hashicorp/terraform/internal/addrs" "github.com/hashicorp/terraform/internal/plans" "github.com/hashicorp/terraform/internal/states" - "github.com/zclconf/go-cty/cty" ) func TestApplyGraphBuilder_impl(t *testing.T) { @@ -60,11 +62,10 @@ func TestApplyGraphBuilder(t *testing.T) { t.Fatalf("wrong path %q", g.Path.String()) } - actual := strings.TrimSpace(g.String()) - - expected := strings.TrimSpace(testApplyGraphBuilderStr) - if actual != expected { - t.Fatalf("wrong result\n\ngot:\n%s\n\nwant:\n%s", actual, expected) + got := strings.TrimSpace(g.String()) + want := strings.TrimSpace(testApplyGraphBuilderStr) + if diff := cmp.Diff(want, got); diff != "" { + t.Fatalf("wrong result\n%s", diff) } } @@ -352,10 +353,10 @@ func TestApplyGraphBuilder_destroyCount(t *testing.T) { t.Fatalf("wrong module path %q", g.Path) } - actual := strings.TrimSpace(g.String()) - expected := strings.TrimSpace(testApplyGraphBuilderDestroyCountStr) - if actual != expected { - t.Fatalf("wrong result\n\ngot:\n%s\n\nwant:\n%s", actual, expected) + got := strings.TrimSpace(g.String()) + want := strings.TrimSpace(testApplyGraphBuilderDestroyCountStr) + if diff := cmp.Diff(want, got); diff != "" { + t.Fatalf("wrong result\n%s", diff) } } @@ -699,9 +700,6 @@ func TestApplyGraphBuilder_orphanedWithProvider(t *testing.T) { } const testApplyGraphBuilderStr = ` -meta.count-boundary (EachMode fixup) - module.child (close) - test_object.other module.child (close) module.child.test_object.other module.child (expand) @@ -721,7 +719,7 @@ provider["registry.terraform.io/hashicorp/test"] (close) module.child.test_object.other test_object.other root - meta.count-boundary (EachMode fixup) + module.child (close) provider["registry.terraform.io/hashicorp/test"] (close) test_object.create test_object.create (expand) @@ -735,13 +733,10 @@ test_object.other (expand) ` const testApplyGraphBuilderDestroyCountStr = ` -meta.count-boundary (EachMode fixup) - test_object.B provider["registry.terraform.io/hashicorp/test"] provider["registry.terraform.io/hashicorp/test"] (close) test_object.B 
root - meta.count-boundary (EachMode fixup) provider["registry.terraform.io/hashicorp/test"] (close) test_object.A (expand) provider["registry.terraform.io/hashicorp/test"] diff --git a/internal/terraform/graph_builder_plan.go b/internal/terraform/graph_builder_plan.go index b267f9c428c7..709b917b6733 100644 --- a/internal/terraform/graph_builder_plan.go +++ b/internal/terraform/graph_builder_plan.go @@ -156,11 +156,6 @@ func (b *PlanGraphBuilder) Steps() []GraphTransformer { // node due to dependency edges, to avoid graph cycles during apply. &ForcedCBDTransformer{}, - // Add the node to fix the state count boundaries - &CountBoundaryTransformer{ - Config: b.Config, - }, - // Close opened plugin connections &CloseProviderTransformer{}, diff --git a/internal/terraform/graph_builder_plan_test.go b/internal/terraform/graph_builder_plan_test.go index 689f9faff3db..9ec16c6ed79f 100644 --- a/internal/terraform/graph_builder_plan_test.go +++ b/internal/terraform/graph_builder_plan_test.go @@ -4,10 +4,12 @@ import ( "strings" "testing" + "github.com/google/go-cmp/cmp" + "github.com/zclconf/go-cty/cty" + "github.com/hashicorp/terraform/internal/addrs" "github.com/hashicorp/terraform/internal/configs/configschema" "github.com/hashicorp/terraform/internal/providers" - "github.com/zclconf/go-cty/cty" ) func TestPlanGraphBuilder_impl(t *testing.T) { @@ -45,10 +47,10 @@ func TestPlanGraphBuilder(t *testing.T) { t.Fatalf("wrong module path %q", g.Path) } - actual := strings.TrimSpace(g.String()) - expected := strings.TrimSpace(testPlanGraphBuilderStr) - if actual != expected { - t.Fatalf("expected:\n%s\n\ngot:\n%s", expected, actual) + got := strings.TrimSpace(g.String()) + want := strings.TrimSpace(testPlanGraphBuilderStr) + if diff := cmp.Diff(want, got); diff != "" { + t.Fatalf("wrong result\n%s", diff) } } @@ -92,15 +94,12 @@ func TestPlanGraphBuilder_dynamicBlock(t *testing.T) { // is that at the end test_thing.c depends on both test_thing.a and // test_thing.b. Other details might shift over time as other logic in // the graph builders changes. - actual := strings.TrimSpace(g.String()) - expected := strings.TrimSpace(` -meta.count-boundary (EachMode fixup) - test_thing.c (expand) + got := strings.TrimSpace(g.String()) + want := strings.TrimSpace(` provider["registry.terraform.io/hashicorp/test"] provider["registry.terraform.io/hashicorp/test"] (close) test_thing.c (expand) root - meta.count-boundary (EachMode fixup) provider["registry.terraform.io/hashicorp/test"] (close) test_thing.a (expand) provider["registry.terraform.io/hashicorp/test"] @@ -110,8 +109,8 @@ test_thing.c (expand) test_thing.a (expand) test_thing.b (expand) `) - if actual != expected { - t.Fatalf("expected:\n%s\n\ngot:\n%s", expected, actual) + if diff := cmp.Diff(want, got); diff != "" { + t.Fatalf("wrong result\n%s", diff) } } @@ -150,23 +149,20 @@ func TestPlanGraphBuilder_attrAsBlocks(t *testing.T) { // list-of-objects attribute. This requires some special effort // inside lang.ReferencesInBlock to make sure it searches blocks of // type "nested" along with an attribute named "nested". 
- actual := strings.TrimSpace(g.String()) - expected := strings.TrimSpace(` -meta.count-boundary (EachMode fixup) - test_thing.b (expand) + got := strings.TrimSpace(g.String()) + want := strings.TrimSpace(` provider["registry.terraform.io/hashicorp/test"] provider["registry.terraform.io/hashicorp/test"] (close) test_thing.b (expand) root - meta.count-boundary (EachMode fixup) provider["registry.terraform.io/hashicorp/test"] (close) test_thing.a (expand) provider["registry.terraform.io/hashicorp/test"] test_thing.b (expand) test_thing.a (expand) `) - if actual != expected { - t.Fatalf("expected:\n%s\n\ngot:\n%s", expected, actual) + if diff := cmp.Diff(want, got); diff != "" { + t.Fatalf("wrong result\n%s", diff) } } @@ -211,12 +207,12 @@ func TestPlanGraphBuilder_forEach(t *testing.T) { t.Fatalf("wrong module path %q", g.Path) } - actual := strings.TrimSpace(g.String()) + got := strings.TrimSpace(g.String()) // We're especially looking for the edge here, where aws_instance.bat // has a dependency on aws_instance.boo - expected := strings.TrimSpace(testPlanGraphBuilderForEachStr) - if actual != expected { - t.Fatalf("expected:\n%s\n\ngot:\n%s", expected, actual) + want := strings.TrimSpace(testPlanGraphBuilderForEachStr) + if diff := cmp.Diff(want, got); diff != "" { + t.Fatalf("wrong result\n%s", diff) } } @@ -230,9 +226,6 @@ aws_security_group.firewall (expand) provider["registry.terraform.io/hashicorp/aws"] local.instance_id (expand) aws_instance.web (expand) -meta.count-boundary (EachMode fixup) - aws_load_balancer.weblb (expand) - output.instance_id openstack_floating_ip.random (expand) provider["registry.terraform.io/hashicorp/openstack"] output.instance_id @@ -245,7 +238,7 @@ provider["registry.terraform.io/hashicorp/openstack"] provider["registry.terraform.io/hashicorp/openstack"] (close) openstack_floating_ip.random (expand) root - meta.count-boundary (EachMode fixup) + output.instance_id provider["registry.terraform.io/hashicorp/aws"] (close) provider["registry.terraform.io/hashicorp/openstack"] (close) var.foo @@ -263,12 +256,6 @@ aws_instance.boo (expand) provider["registry.terraform.io/hashicorp/aws"] aws_instance.foo (expand) provider["registry.terraform.io/hashicorp/aws"] -meta.count-boundary (EachMode fixup) - aws_instance.bar (expand) - aws_instance.bar2 (expand) - aws_instance.bat (expand) - aws_instance.baz (expand) - aws_instance.foo (expand) provider["registry.terraform.io/hashicorp/aws"] provider["registry.terraform.io/hashicorp/aws"] (close) aws_instance.bar (expand) @@ -277,6 +264,5 @@ provider["registry.terraform.io/hashicorp/aws"] (close) aws_instance.baz (expand) aws_instance.foo (expand) root - meta.count-boundary (EachMode fixup) provider["registry.terraform.io/hashicorp/aws"] (close) ` diff --git a/internal/terraform/node_count_boundary.go b/internal/terraform/node_count_boundary.go deleted file mode 100644 index 26968b11937d..000000000000 --- a/internal/terraform/node_count_boundary.go +++ /dev/null @@ -1,80 +0,0 @@ -package terraform - -import ( - "fmt" - "log" - - "github.com/hashicorp/terraform/internal/addrs" - "github.com/hashicorp/terraform/internal/configs" - "github.com/hashicorp/terraform/internal/tfdiags" -) - -// NodeCountBoundary fixes up any transitions between "each modes" in objects -// saved in state, such as switching from NoEach to EachInt. 
-type NodeCountBoundary struct { - Config *configs.Config -} - -var _ GraphNodeExecutable = (*NodeCountBoundary)(nil) - -func (n *NodeCountBoundary) Name() string { - return "meta.count-boundary (EachMode fixup)" -} - -// GraphNodeExecutable -func (n *NodeCountBoundary) Execute(ctx EvalContext, op walkOperation) (diags tfdiags.Diagnostics) { - // We'll temporarily lock the state to grab the modules, then work on each - // one separately while taking a lock again for each separate resource. - // This means that if another caller concurrently adds a module here while - // we're working then we won't update it, but that's no worse than the - // concurrent writer blocking for our entire fixup process and _then_ - // adding a new module, and in practice the graph node associated with - // this eval depends on everything else in the graph anyway, so there - // should not be concurrent writers. - state := ctx.State().Lock() - moduleAddrs := make([]addrs.ModuleInstance, 0, len(state.Modules)) - for _, m := range state.Modules { - moduleAddrs = append(moduleAddrs, m.Addr) - } - ctx.State().Unlock() - - for _, addr := range moduleAddrs { - cfg := n.Config.DescendentForInstance(addr) - if cfg == nil { - log.Printf("[WARN] Not fixing up EachModes for %s because it has no config", addr) - continue - } - if err := n.fixModule(ctx, addr); err != nil { - diags = diags.Append(err) - return diags - } - } - return diags -} - -func (n *NodeCountBoundary) fixModule(ctx EvalContext, moduleAddr addrs.ModuleInstance) error { - ms := ctx.State().Module(moduleAddr) - cfg := n.Config.DescendentForInstance(moduleAddr) - if ms == nil { - // Theoretically possible for a concurrent writer to delete a module - // while we're running, but in practice the graph node that called us - // depends on everything else in the graph and so there can never - // be a concurrent writer. 
- return fmt.Errorf("[WARN] no state found for %s while trying to fix up EachModes", moduleAddr) - } - if cfg == nil { - return fmt.Errorf("[WARN] no config found for %s while trying to fix up EachModes", moduleAddr) - } - - for _, r := range ms.Resources { - rCfg := cfg.Module.ResourceByAddr(r.Addr.Resource) - if rCfg == nil { - log.Printf("[WARN] Not fixing up EachModes for %s because it has no config", r.Addr) - continue - } - hasCount := rCfg.Count != nil - fixResourceCountSetTransition(ctx, r.Addr.Config(), hasCount) - } - - return nil -} diff --git a/internal/terraform/node_count_boundary_test.go b/internal/terraform/node_count_boundary_test.go deleted file mode 100644 index 096a980ad773..000000000000 --- a/internal/terraform/node_count_boundary_test.go +++ /dev/null @@ -1,72 +0,0 @@ -package terraform - -import ( - "testing" - - "github.com/hashicorp/hcl/v2/hcltest" - "github.com/hashicorp/terraform/internal/addrs" - "github.com/hashicorp/terraform/internal/configs" - "github.com/hashicorp/terraform/internal/states" - "github.com/zclconf/go-cty/cty" -) - -func TestNodeCountBoundaryExecute(t *testing.T) { - - // Create a state with a single instance (addrs.NoKey) of test_instance.foo - state := states.NewState() - state.Module(addrs.RootModuleInstance).SetResourceInstanceCurrent( - addrs.Resource{ - Mode: addrs.ManagedResourceMode, - Type: "test_instance", - Name: "foo", - }.Instance(addrs.NoKey), - &states.ResourceInstanceObjectSrc{ - Status: states.ObjectReady, - AttrsJSON: []byte(`{"type":"string","value":"hello"}`), - }, - addrs.AbsProviderConfig{ - Provider: addrs.NewDefaultProvider("test"), - Module: addrs.RootModule, - }, - ) - - // Create a config that uses count to create 2 instances of test_instance.foo - rc := &configs.Resource{ - Mode: addrs.ManagedResourceMode, - Type: "test_instance", - Name: "foo", - Count: hcltest.MockExprLiteral(cty.NumberIntVal(2)), - Config: configs.SynthBody("", map[string]cty.Value{ - "test_string": cty.StringVal("hello"), - }), - } - config := &configs.Config{ - Module: &configs.Module{ - ManagedResources: map[string]*configs.Resource{ - "test_instance.foo": rc, - }, - }, - } - - ctx := &MockEvalContext{ - StateState: state.SyncWrapper(), - } - node := NodeCountBoundary{Config: config} - - diags := node.Execute(ctx, walkApply) - if diags.HasErrors() { - t.Fatalf("unexpected error: %s", diags.Err()) - } - if !state.HasResources() { - t.Fatal("resources missing from state") - } - - // verify that the resource changed from test_instance.foo to - // test_instance.foo.0 in the state - actual := state.String() - expected := "test_instance.foo.0:\n ID = \n provider = provider[\"registry.terraform.io/hashicorp/test\"]\n type = string\n value = hello" - - if actual != expected { - t.Fatalf("wrong result: %s", actual) - } -} diff --git a/internal/terraform/node_resource_plan.go b/internal/terraform/node_resource_plan.go index 64850c3b7a0f..c732575e5650 100644 --- a/internal/terraform/node_resource_plan.go +++ b/internal/terraform/node_resource_plan.go @@ -304,10 +304,6 @@ func (n *NodePlannableResource) ModifyCreateBeforeDestroy(v bool) error { func (n *NodePlannableResource) DynamicExpand(ctx EvalContext) (*Graph, error) { var diags tfdiags.Diagnostics - // We need to potentially rename an instance address in the state - // if we're transitioning whether "count" is set at all. 
- fixResourceCountSetTransition(ctx, n.Addr.Config(), n.Config.Count != nil) - // Our instance expander should already have been informed about the // expansion of this resource and of all of its containing modules, so // it can tell us which instance addresses we need to process. diff --git a/internal/terraform/transform_count_boundary.go b/internal/terraform/transform_count_boundary.go deleted file mode 100644 index 9f944240edb9..000000000000 --- a/internal/terraform/transform_count_boundary.go +++ /dev/null @@ -1,33 +0,0 @@ -package terraform - -import ( - "github.com/hashicorp/terraform/internal/configs" - "github.com/hashicorp/terraform/internal/dag" -) - -// CountBoundaryTransformer adds a node that depends on everything else -// so that it runs last in order to clean up the state for nodes that -// are on the "count boundary": "foo.0" when only one exists becomes "foo" -type CountBoundaryTransformer struct { - Config *configs.Config -} - -func (t *CountBoundaryTransformer) Transform(g *Graph) error { - node := &NodeCountBoundary{ - Config: t.Config, - } - g.Add(node) - - // Depends on everything - for _, v := range g.Vertices() { - // Don't connect to ourselves - if v == node { - continue - } - - // Connect! - g.Connect(dag.BasicEdge(node, v)) - } - - return nil -} From 78705f4f109d7c1e23e09658fd44c5b7ebccd85f Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Mon, 20 Sep 2021 11:29:24 -0700 Subject: [PATCH 081/644] Update CHANGELOG.md --- CHANGELOG.md | 1 + 1 file changed, 1 insertion(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index 809e5795bfc3..281c399ccaec 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -13,6 +13,7 @@ NEW FEATURES: ENHANCEMENTS: * config: Terraform now checks the syntax of and normalizes module source addresses (the `source` argument in `module` blocks) during configuration decoding rather than only at module installation time. This is largely just an internal refactoring, but a visible benefit of this change is that the `terraform init` messages about module downloading will now show the canonical module package address Terraform is downloading from, after interpreting the special shorthands for common cases like GitHub URLs. ([#28854](https://github.com/hashicorp/terraform/issues/28854)) +* cli: Terraform will now report explicitly in the UI if it automatically moves a resource instance to a new address as a result of adding or removing the `count` argument from an existing resource. For example, if you previously had `resource "aws_subnet" "example"` _without_ `count`, you might have `aws_subnet.example` already bound to a remote object in your state. If you add `count = 1` to that resource then Terraform would previously silently rebind the object to `aws_subnet.example[0]` as part of planning, whereas now Terraform will mention that it did so explicitly in the plan description. [GH-29605] BUG FIXES: From 78c4a8c4617d4fa615768f27a0bd50cc3f648cfe Mon Sep 17 00:00:00 2001 From: Alisdair McDiarmid Date: Fri, 17 Sep 2021 13:18:34 -0400 Subject: [PATCH 082/644] json-output: Previous address for resource changes Configuration-driven moves are represented in the plan file by setting the resource's `PrevRunAddr` to a different value than its `Addr`. For JSON plan output, we here add a new field to resource changes, `previous_address`, which is present and non-empty only if the resource is planned to be moved. Like the CLI UI, refresh-only plans will include move-only changes in the resource drift JSON output. 
In normal plan mode, these are elided to avoid redundancy with planned changes. --- internal/command/jsonplan/plan.go | 25 ++- internal/command/jsonplan/resource.go | 12 ++ .../testdata/show-json/moved-drift/main.tf | 22 +++ .../show-json/moved-drift/output.json | 177 ++++++++++++++++++ .../show-json/moved-drift/terraform.tfstate | 38 ++++ .../command/testdata/show-json/moved/main.tf | 12 ++ .../testdata/show-json/moved/output.json | 89 +++++++++ .../show-json/moved/terraform.tfstate | 23 +++ .../multi-resource-update/output.json | 27 +-- website/docs/internals/json-format.html.md | 8 +- 10 files changed, 403 insertions(+), 30 deletions(-) create mode 100644 internal/command/testdata/show-json/moved-drift/main.tf create mode 100644 internal/command/testdata/show-json/moved-drift/output.json create mode 100644 internal/command/testdata/show-json/moved-drift/terraform.tfstate create mode 100644 internal/command/testdata/show-json/moved/main.tf create mode 100644 internal/command/testdata/show-json/moved/output.json create mode 100644 internal/command/testdata/show-json/moved/terraform.tfstate diff --git a/internal/command/jsonplan/plan.go b/internal/command/jsonplan/plan.go index 1d7eff1ffb37..b2bf2cceb80b 100644 --- a/internal/command/jsonplan/plan.go +++ b/internal/command/jsonplan/plan.go @@ -130,9 +130,25 @@ func Marshal( } // output.ResourceDrift - output.ResourceDrift, err = output.marshalResourceChanges(p.DriftedResources, schemas) - if err != nil { - return nil, fmt.Errorf("error in marshaling resource drift: %s", err) + if len(p.DriftedResources) > 0 { + // In refresh-only mode, we render all resources marked as drifted, + // including those which have moved without other changes. In other plan + // modes, move-only changes will be included in the planned changes, so + // we skip them here. + var driftedResources []*plans.ResourceInstanceChangeSrc + if p.UIMode == plans.RefreshOnlyMode { + driftedResources = p.DriftedResources + } else { + for _, dr := range p.DriftedResources { + if dr.Action != plans.NoOp { + driftedResources = append(driftedResources, dr) + } + } + } + output.ResourceDrift, err = output.marshalResourceChanges(driftedResources, schemas) + if err != nil { + return nil, fmt.Errorf("error in marshaling resource drift: %s", err) + } } // output.ResourceChanges @@ -197,6 +213,9 @@ func (p *plan) marshalResourceChanges(resources []*plans.ResourceInstanceChangeS var r resourceChange addr := rc.Addr r.Address = addr.String() + if !addr.Equal(rc.PrevRunAddr) { + r.PreviousAddress = rc.PrevRunAddr.String() + } dataSource := addr.Resource.Resource.Mode == addrs.DataResourceMode // We create "delete" actions for data resources so we can clean up diff --git a/internal/command/jsonplan/resource.go b/internal/command/jsonplan/resource.go index ca1299c994a7..1e737a626654 100644 --- a/internal/command/jsonplan/resource.go +++ b/internal/command/jsonplan/resource.go @@ -48,6 +48,18 @@ type resourceChange struct { // Address is the absolute resource address Address string `json:"address,omitempty"` + // PreviousAddress is the absolute address that this resource instance had + // at the conclusion of a previous run. + // + // This will typically be omitted, but will be present if the previous + // resource instance was subject to a "moved" block that we handled in the + // process of creating this plan. 
+ // + // Note that this behavior diverges from the internal plan data structure, + // where the previous address is set equal to the current address in the + // common case, rather than being omitted. + PreviousAddress string `json:"previous_address,omitempty"` + // ModuleAddress is the module portion of the above address. Omitted if the // instance is in the root module. ModuleAddress string `json:"module_address,omitempty"` diff --git a/internal/command/testdata/show-json/moved-drift/main.tf b/internal/command/testdata/show-json/moved-drift/main.tf new file mode 100644 index 000000000000..afdf9fe668cd --- /dev/null +++ b/internal/command/testdata/show-json/moved-drift/main.tf @@ -0,0 +1,22 @@ +# In state with `ami = "foo"`, so this should be a regular update. The provider +# should not detect changes on refresh. +resource "test_instance" "no_refresh" { + ami = "bar" +} + +# In state with `ami = "refresh-me"`, but the provider will return +# `"refreshed"` after the refresh phase. The plan should show the drift +# (`"refresh-me"` to `"refreshed"`) and plan the update (`"refreshed"` to +# `"baz"`). +resource "test_instance" "should_refresh_with_move" { + ami = "baz" +} + +terraform { + experiments = [ config_driven_move ] +} + +moved { + from = test_instance.should_refresh + to = test_instance.should_refresh_with_move +} diff --git a/internal/command/testdata/show-json/moved-drift/output.json b/internal/command/testdata/show-json/moved-drift/output.json new file mode 100644 index 000000000000..0d151808fa87 --- /dev/null +++ b/internal/command/testdata/show-json/moved-drift/output.json @@ -0,0 +1,177 @@ +{ + "format_version": "0.2", + "planned_values": { + "root_module": { + "resources": [ + { + "address": "test_instance.no_refresh", + "mode": "managed", + "type": "test_instance", + "name": "no_refresh", + "provider_name": "registry.terraform.io/hashicorp/test", + "schema_version": 0, + "values": { + "ami": "bar", + "id": "placeholder" + }, + "sensitive_values": {} + }, + { + "address": "test_instance.should_refresh_with_move", + "mode": "managed", + "type": "test_instance", + "name": "should_refresh_with_move", + "provider_name": "registry.terraform.io/hashicorp/test", + "schema_version": 0, + "values": { + "ami": "baz", + "id": "placeholder" + }, + "sensitive_values": {} + } + ] + } + }, + "resource_drift": [ + { + "address": "test_instance.should_refresh_with_move", + "mode": "managed", + "type": "test_instance", + "previous_address": "test_instance.should_refresh", + "provider_name": "registry.terraform.io/hashicorp/test", + "name": "should_refresh_with_move", + "change": { + "actions": [ + "update" + ], + "before": { + "ami": "refresh-me", + "id": "placeholder" + }, + "after": { + "ami": "refreshed", + "id": "placeholder" + }, + "after_sensitive": {}, + "after_unknown": {}, + "before_sensitive": {} + } + } + ], + "resource_changes": [ + { + "address": "test_instance.no_refresh", + "mode": "managed", + "type": "test_instance", + "provider_name": "registry.terraform.io/hashicorp/test", + "name": "no_refresh", + "change": { + "actions": [ + "update" + ], + "before": { + "ami": "foo", + "id": "placeholder" + }, + "after": { + "ami": "bar", + "id": "placeholder" + }, + "after_unknown": {}, + "after_sensitive": {}, + "before_sensitive": {} + } + }, + { + "address": "test_instance.should_refresh_with_move", + "mode": "managed", + "type": "test_instance", + "previous_address": "test_instance.should_refresh", + "provider_name": "registry.terraform.io/hashicorp/test", + "name": 
"should_refresh_with_move", + "change": { + "actions": [ + "update" + ], + "before": { + "ami": "refreshed", + "id": "placeholder" + }, + "after": { + "ami": "baz", + "id": "placeholder" + }, + "after_unknown": {}, + "after_sensitive": {}, + "before_sensitive": {} + } + } + ], + "prior_state": { + "format_version": "0.2", + "values": { + "root_module": { + "resources": [ + { + "address": "test_instance.no_refresh", + "mode": "managed", + "type": "test_instance", + "name": "no_refresh", + "schema_version": 0, + "provider_name": "registry.terraform.io/hashicorp/test", + "values": { + "ami": "foo", + "id": "placeholder" + }, + "sensitive_values": {} + }, + { + "address": "test_instance.should_refresh_with_move", + "mode": "managed", + "type": "test_instance", + "name": "should_refresh_with_move", + "schema_version": 0, + "provider_name": "registry.terraform.io/hashicorp/test", + "values": { + "ami": "refreshed", + "id": "placeholder" + }, + "sensitive_values": {} + } + ] + } + } + }, + "configuration": { + "root_module": { + "resources": [ + { + "address": "test_instance.no_refresh", + "mode": "managed", + "type": "test_instance", + "name": "no_refresh", + "provider_config_key": "test", + "schema_version": 0, + "expressions": { + "ami": { + "constant_value": "bar" + } + } + }, + { + "address": "test_instance.should_refresh_with_move", + "mode": "managed", + "type": "test_instance", + "name": "should_refresh_with_move", + "provider_config_key": "test", + "schema_version": 0, + "expressions": { + "ami": { + "constant_value": "baz" + } + } + } + ] + } + } +} diff --git a/internal/command/testdata/show-json/moved-drift/terraform.tfstate b/internal/command/testdata/show-json/moved-drift/terraform.tfstate new file mode 100644 index 000000000000..02b8944d8994 --- /dev/null +++ b/internal/command/testdata/show-json/moved-drift/terraform.tfstate @@ -0,0 +1,38 @@ +{ + "version": 4, + "terraform_version": "0.12.0", + "serial": 7, + "lineage": "configuredUnchanged", + "resources": [ + { + "mode": "managed", + "type": "test_instance", + "name": "no_refresh", + "provider": "provider[\"registry.terraform.io/hashicorp/test\"]", + "instances": [ + { + "schema_version": 0, + "attributes": { + "ami": "foo", + "id": "placeholder" + } + } + ] + }, + { + "mode": "managed", + "type": "test_instance", + "name": "should_refresh", + "provider": "provider[\"registry.terraform.io/hashicorp/test\"]", + "instances": [ + { + "schema_version": 0, + "attributes": { + "ami": "refresh-me", + "id": "placeholder" + } + } + ] + } + ] +} diff --git a/internal/command/testdata/show-json/moved/main.tf b/internal/command/testdata/show-json/moved/main.tf new file mode 100644 index 000000000000..0be803cbc9d2 --- /dev/null +++ b/internal/command/testdata/show-json/moved/main.tf @@ -0,0 +1,12 @@ +resource "test_instance" "baz" { + ami = "baz" +} + +terraform { + experiments = [ config_driven_move ] +} + +moved { + from = test_instance.foo + to = test_instance.baz +} diff --git a/internal/command/testdata/show-json/moved/output.json b/internal/command/testdata/show-json/moved/output.json new file mode 100644 index 000000000000..3ce28198342e --- /dev/null +++ b/internal/command/testdata/show-json/moved/output.json @@ -0,0 +1,89 @@ +{ + "format_version": "0.2", + "planned_values": { + "root_module": { + "resources": [ + { + "address": "test_instance.baz", + "mode": "managed", + "type": "test_instance", + "name": "baz", + "provider_name": "registry.terraform.io/hashicorp/test", + "schema_version": 0, + "values": { + "ami": "baz", + "id": 
"placeholder" + }, + "sensitive_values": {} + } + ] + } + }, + "resource_changes": [ + { + "address": "test_instance.baz", + "mode": "managed", + "type": "test_instance", + "previous_address": "test_instance.foo", + "provider_name": "registry.terraform.io/hashicorp/test", + "name": "baz", + "change": { + "actions": [ + "update" + ], + "before": { + "ami": "foo", + "id": "placeholder" + }, + "after": { + "ami": "baz", + "id": "placeholder" + }, + "after_unknown": {}, + "after_sensitive": {}, + "before_sensitive": {} + } + } + ], + "prior_state": { + "format_version": "0.2", + "values": { + "root_module": { + "resources": [ + { + "address": "test_instance.baz", + "mode": "managed", + "type": "test_instance", + "name": "baz", + "schema_version": 0, + "provider_name": "registry.terraform.io/hashicorp/test", + "values": { + "ami": "foo", + "id": "placeholder" + }, + "sensitive_values": {} + } + ] + } + } + }, + "configuration": { + "root_module": { + "resources": [ + { + "address": "test_instance.baz", + "mode": "managed", + "type": "test_instance", + "name": "baz", + "provider_config_key": "test", + "schema_version": 0, + "expressions": { + "ami": { + "constant_value": "baz" + } + } + } + ] + } + } +} diff --git a/internal/command/testdata/show-json/moved/terraform.tfstate b/internal/command/testdata/show-json/moved/terraform.tfstate new file mode 100644 index 000000000000..b4e5718874d0 --- /dev/null +++ b/internal/command/testdata/show-json/moved/terraform.tfstate @@ -0,0 +1,23 @@ +{ + "version": 4, + "terraform_version": "0.12.0", + "serial": 7, + "lineage": "configuredUnchanged", + "resources": [ + { + "mode": "managed", + "type": "test_instance", + "name": "foo", + "provider": "provider[\"registry.terraform.io/hashicorp/test\"]", + "instances": [ + { + "schema_version": 0, + "attributes": { + "ami": "foo", + "id": "placeholder" + } + } + ] + } + ] +} diff --git a/internal/command/testdata/show-json/multi-resource-update/output.json b/internal/command/testdata/show-json/multi-resource-update/output.json index 247749261356..262b6194b9bf 100644 --- a/internal/command/testdata/show-json/multi-resource-update/output.json +++ b/internal/command/testdata/show-json/multi-resource-update/output.json @@ -45,32 +45,6 @@ ] } }, - "resource_drift": [ - { - "address": "test_instance.test[0]", - "mode": "managed", - "type": "test_instance", - "provider_name": "registry.terraform.io/hashicorp/test", - "name": "test", - "index": 0, - "change": { - "actions": [ - "no-op" - ], - "before": { - "ami": "bar", - "id": "placeholder" - }, - "after": { - "ami": "bar", - "id": "placeholder" - }, - "before_sensitive": {}, - "after_sensitive": {}, - "after_unknown": {} - } - } - ], "resource_changes": [ { "address": "test_instance.test[0]", @@ -78,6 +52,7 @@ "type": "test_instance", "name": "test", "index": 0, + "previous_address": "test_instance.test", "provider_name": "registry.terraform.io/hashicorp/test", "change": { "actions": [ diff --git a/website/docs/internals/json-format.html.md b/website/docs/internals/json-format.html.md index 9a3efeff5d46..ffe54c2cc8b6 100644 --- a/website/docs/internals/json-format.html.md +++ b/website/docs/internals/json-format.html.md @@ -98,9 +98,15 @@ For ease of consumption by callers, the plan representation includes a partial r { // "address" is the full absolute address of the resource instance this // change applies to, in the same format as addresses in a value - // representation + // representation. 
"address": "module.child.aws_instance.foo[0]", + // "previous_address" is the full absolute address of this resource + // instance as it was known after the previous Terraform run. + // Included only if the address has changed, e.g. by handling + // a "moved" block in the configuration. + "previous_address": "module.instances.aws_instance.foo[0]", + // "module_address", if set, is the module portion of the above address. // Omitted if the instance is in the root module. "module_address": "module.child", From b59b057591bfab5e1561a66a249df360756a6e60 Mon Sep 17 00:00:00 2001 From: Alisdair McDiarmid Date: Fri, 17 Sep 2021 14:09:20 -0400 Subject: [PATCH 083/644] json-output: Config-driven move support in JSON UI Add previous address information to the `planned_change` and `resource_drift` messages for the streaming JSON UI output of plan and apply operations. Here we also add a "move" action value to the `change` object of these messages, to represent a move-only operation. As part of this work we also simplify this code to use the plan's DriftedResources values instead of recomputing the drift from state. --- internal/command/views/json/change.go | 15 +- internal/command/views/json_view_test.go | 6 +- internal/command/views/operation.go | 103 +----- internal/command/views/operation_test.go | 293 +++++++++++++----- .../internals/machine-readable-ui.html.md | 3 +- 5 files changed, 247 insertions(+), 173 deletions(-) diff --git a/internal/command/views/json/change.go b/internal/command/views/json/change.go index bee20904e184..c18a2c15a456 100644 --- a/internal/command/views/json/change.go +++ b/internal/command/views/json/change.go @@ -12,14 +12,22 @@ func NewResourceInstanceChange(change *plans.ResourceInstanceChangeSrc) *Resourc Action: changeAction(change.Action), Reason: changeReason(change.ActionReason), } + if !change.Addr.Equal(change.PrevRunAddr) { + if c.Action == ActionNoOp { + c.Action = ActionMove + } + pr := newResourceAddr(change.PrevRunAddr) + c.PreviousResource = &pr + } return c } type ResourceInstanceChange struct { - Resource ResourceAddr `json:"resource"` - Action ChangeAction `json:"action"` - Reason ChangeReason `json:"reason,omitempty"` + Resource ResourceAddr `json:"resource"` + PreviousResource *ResourceAddr `json:"previous_resource,omitempty"` + Action ChangeAction `json:"action"` + Reason ChangeReason `json:"reason,omitempty"` } func (c *ResourceInstanceChange) String() string { @@ -30,6 +38,7 @@ type ChangeAction string const ( ActionNoOp ChangeAction = "noop" + ActionMove ChangeAction = "move" ActionCreate ChangeAction = "create" ActionRead ChangeAction = "read" ActionUpdate ChangeAction = "update" diff --git a/internal/command/views/json_view_test.go b/internal/command/views/json_view_test.go index c755cf0b7f34..6bb5c4913241 100644 --- a/internal/command/views/json_view_test.go +++ b/internal/command/views/json_view_test.go @@ -111,7 +111,8 @@ func TestJSONView_PlannedChange(t *testing.T) { } managed := addrs.Resource{Mode: addrs.ManagedResourceMode, Type: "test_instance", Name: "bar"} cs := &plans.ResourceInstanceChangeSrc{ - Addr: managed.Instance(addrs.StringKey("boop")).Absolute(foo), + Addr: managed.Instance(addrs.StringKey("boop")).Absolute(foo), + PrevRunAddr: managed.Instance(addrs.StringKey("boop")).Absolute(foo), ChangeSrc: plans.ChangeSrc{ Action: plans.Create, }, @@ -151,7 +152,8 @@ func TestJSONView_ResourceDrift(t *testing.T) { } managed := addrs.Resource{Mode: addrs.ManagedResourceMode, Type: "test_instance", Name: "bar"} cs := 
&plans.ResourceInstanceChangeSrc{ - Addr: managed.Instance(addrs.StringKey("boop")).Absolute(foo), + Addr: managed.Instance(addrs.StringKey("boop")).Absolute(foo), + PrevRunAddr: managed.Instance(addrs.StringKey("boop")).Absolute(foo), ChangeSrc: plans.ChangeSrc{ Action: plans.Update, }, diff --git a/internal/command/views/operation.go b/internal/command/views/operation.go index b38e93b6f556..01daedc39b3f 100644 --- a/internal/command/views/operation.go +++ b/internal/command/views/operation.go @@ -3,7 +3,6 @@ package views import ( "bytes" "fmt" - "sort" "strings" "github.com/hashicorp/terraform/internal/addrs" @@ -11,11 +10,9 @@ import ( "github.com/hashicorp/terraform/internal/command/format" "github.com/hashicorp/terraform/internal/command/views/json" "github.com/hashicorp/terraform/internal/plans" - "github.com/hashicorp/terraform/internal/states" "github.com/hashicorp/terraform/internal/states/statefile" "github.com/hashicorp/terraform/internal/terraform" "github.com/hashicorp/terraform/internal/tfdiags" - "github.com/zclconf/go-cty/cty" ) type Operation interface { @@ -163,10 +160,14 @@ func (v *OperationJSON) EmergencyDumpState(stateFile *statefile.File) error { // Log a change summary and a series of "planned" messages for the changes in // the plan. func (v *OperationJSON) Plan(plan *plans.Plan, schemas *terraform.Schemas) { - if err := v.resourceDrift(plan.PrevRunState, plan.PriorState, schemas); err != nil { - var diags tfdiags.Diagnostics - diags = diags.Append(err) - v.Diagnostics(diags) + for _, dr := range plan.DriftedResources { + // In refresh-only mode, we output all resources marked as drifted, + // including those which have moved without other changes. In other plan + // modes, move-only changes will be included in the planned changes, so + // we skip them here. + if dr.Action != plans.NoOp || plan.UIMode == plans.RefreshOnlyMode { + v.view.ResourceDrift(json.NewResourceInstanceChange(dr)) + } } cs := &json.ChangeSummary{ @@ -189,7 +190,7 @@ func (v *OperationJSON) Plan(plan *plans.Plan, schemas *terraform.Schemas) { cs.Remove++ } - if change.Action != plans.NoOp { + if change.Action != plans.NoOp || !change.Addr.Equal(change.PrevRunAddr) { v.view.PlannedChange(json.NewResourceInstanceChange(change)) } } @@ -208,92 +209,6 @@ func (v *OperationJSON) Plan(plan *plans.Plan, schemas *terraform.Schemas) { } } -func (v *OperationJSON) resourceDrift(oldState, newState *states.State, schemas *terraform.Schemas) error { - if newState.ManagedResourcesEqual(oldState) { - // Nothing to do, because we only detect and report drift for managed - // resource instances. 
- return nil - } - var changes []*json.ResourceInstanceChange - for _, ms := range oldState.Modules { - for _, rs := range ms.Resources { - if rs.Addr.Resource.Mode != addrs.ManagedResourceMode { - // Drift reporting is only for managed resources - continue - } - - provider := rs.ProviderConfig.Provider - for key, oldIS := range rs.Instances { - if oldIS.Current == nil { - // Not interested in instances that only have deposed objects - continue - } - addr := rs.Addr.Instance(key) - newIS := newState.ResourceInstance(addr) - - schema, _ := schemas.ResourceTypeConfig( - provider, - addr.Resource.Resource.Mode, - addr.Resource.Resource.Type, - ) - if schema == nil { - return fmt.Errorf("no schema found for %s (in provider %s)", addr, provider) - } - ty := schema.ImpliedType() - - oldObj, err := oldIS.Current.Decode(ty) - if err != nil { - return fmt.Errorf("failed to decode previous run data for %s: %s", addr, err) - } - - var newObj *states.ResourceInstanceObject - if newIS != nil && newIS.Current != nil { - newObj, err = newIS.Current.Decode(ty) - if err != nil { - return fmt.Errorf("failed to decode refreshed data for %s: %s", addr, err) - } - } - - var oldVal, newVal cty.Value - oldVal = oldObj.Value - if newObj != nil { - newVal = newObj.Value - } else { - newVal = cty.NullVal(ty) - } - - if oldVal.RawEquals(newVal) { - // No drift if the two values are semantically equivalent - continue - } - - // We can only detect updates and deletes as drift. - action := plans.Update - if newVal.IsNull() { - action = plans.Delete - } - - change := &plans.ResourceInstanceChangeSrc{ - Addr: addr, - ChangeSrc: plans.ChangeSrc{ - Action: action, - }, - } - changes = append(changes, json.NewResourceInstanceChange(change)) - } - } - } - - // Sort the change structs lexically by address to give stable output - sort.Slice(changes, func(i, j int) bool { return changes[i].Resource.Addr < changes[j].Resource.Addr }) - - for _, change := range changes { - v.view.ResourceDrift(change) - } - - return nil -} - func (v *OperationJSON) PlannedChange(change *plans.ResourceInstanceChangeSrc) { if change.Action == plans.Delete && change.Addr.Resource.Resource.Mode == addrs.DataResourceMode { // Avoid rendering data sources on deletion diff --git a/internal/command/views/operation_test.go b/internal/command/views/operation_test.go index 56ced35779a2..aa86fe1445ac 100644 --- a/internal/command/views/operation_test.go +++ b/internal/command/views/operation_test.go @@ -479,29 +479,35 @@ func TestOperationJSON_plan(t *testing.T) { Changes: &plans.Changes{ Resources: []*plans.ResourceInstanceChangeSrc{ { - Addr: boop.Instance(addrs.IntKey(0)).Absolute(root), - ChangeSrc: plans.ChangeSrc{Action: plans.CreateThenDelete}, + Addr: boop.Instance(addrs.IntKey(0)).Absolute(root), + PrevRunAddr: boop.Instance(addrs.IntKey(0)).Absolute(root), + ChangeSrc: plans.ChangeSrc{Action: plans.CreateThenDelete}, }, { - Addr: boop.Instance(addrs.IntKey(1)).Absolute(root), - ChangeSrc: plans.ChangeSrc{Action: plans.Create}, + Addr: boop.Instance(addrs.IntKey(1)).Absolute(root), + PrevRunAddr: boop.Instance(addrs.IntKey(1)).Absolute(root), + ChangeSrc: plans.ChangeSrc{Action: plans.Create}, }, { - Addr: boop.Instance(addrs.IntKey(0)).Absolute(vpc), - ChangeSrc: plans.ChangeSrc{Action: plans.Delete}, + Addr: boop.Instance(addrs.IntKey(0)).Absolute(vpc), + PrevRunAddr: boop.Instance(addrs.IntKey(0)).Absolute(vpc), + ChangeSrc: plans.ChangeSrc{Action: plans.Delete}, }, { - Addr: beep.Instance(addrs.NoKey).Absolute(root), - ChangeSrc: 
plans.ChangeSrc{Action: plans.DeleteThenCreate}, + Addr: beep.Instance(addrs.NoKey).Absolute(root), + PrevRunAddr: beep.Instance(addrs.NoKey).Absolute(root), + ChangeSrc: plans.ChangeSrc{Action: plans.DeleteThenCreate}, }, { - Addr: beep.Instance(addrs.NoKey).Absolute(vpc), - ChangeSrc: plans.ChangeSrc{Action: plans.Update}, + Addr: beep.Instance(addrs.NoKey).Absolute(vpc), + PrevRunAddr: beep.Instance(addrs.NoKey).Absolute(vpc), + ChangeSrc: plans.ChangeSrc{Action: plans.Update}, }, // Data source deletion should not show up in the logs { - Addr: derp.Instance(addrs.NoKey).Absolute(root), - ChangeSrc: plans.ChangeSrc{Action: plans.Delete}, + Addr: derp.Instance(addrs.NoKey).Absolute(root), + PrevRunAddr: derp.Instance(addrs.NoKey).Absolute(root), + ChangeSrc: plans.ChangeSrc{Action: plans.Delete}, }, }, }, @@ -623,74 +629,175 @@ func TestOperationJSON_plan(t *testing.T) { testJSONViewOutputEquals(t, done(t).Stdout(), want) } -func TestOperationJSON_planDrift(t *testing.T) { +func TestOperationJSON_planDriftWithMove(t *testing.T) { streams, done := terminal.StreamsForTesting(t) v := &OperationJSON{view: NewJSONView(NewView(streams))} root := addrs.RootModuleInstance boop := addrs.Resource{Mode: addrs.ManagedResourceMode, Type: "test_resource", Name: "boop"} beep := addrs.Resource{Mode: addrs.ManagedResourceMode, Type: "test_resource", Name: "beep"} - derp := addrs.Resource{Mode: addrs.ManagedResourceMode, Type: "test_resource", Name: "derp"} + blep := addrs.Resource{Mode: addrs.ManagedResourceMode, Type: "test_resource", Name: "blep"} + honk := addrs.Resource{Mode: addrs.ManagedResourceMode, Type: "test_resource", Name: "honk"} plan := &plans.Plan{ + UIMode: plans.NormalMode, + Changes: &plans.Changes{ + Resources: []*plans.ResourceInstanceChangeSrc{ + { + Addr: honk.Instance(addrs.StringKey("bonk")).Absolute(root), + PrevRunAddr: honk.Instance(addrs.IntKey(0)).Absolute(root), + ChangeSrc: plans.ChangeSrc{Action: plans.NoOp}, + }, + }, + }, + DriftedResources: []*plans.ResourceInstanceChangeSrc{ + { + Addr: beep.Instance(addrs.NoKey).Absolute(root), + PrevRunAddr: beep.Instance(addrs.NoKey).Absolute(root), + ChangeSrc: plans.ChangeSrc{Action: plans.Delete}, + }, + { + Addr: boop.Instance(addrs.NoKey).Absolute(root), + PrevRunAddr: blep.Instance(addrs.NoKey).Absolute(root), + ChangeSrc: plans.ChangeSrc{Action: plans.Update}, + }, + // Move-only resource drift should not be present in normal mode plans + { + Addr: honk.Instance(addrs.StringKey("bonk")).Absolute(root), + PrevRunAddr: honk.Instance(addrs.IntKey(0)).Absolute(root), + ChangeSrc: plans.ChangeSrc{Action: plans.NoOp}, + }, + }, + } + v.Plan(plan, testSchemas()) + + want := []map[string]interface{}{ + // Drift detected: delete + { + "@level": "info", + "@message": "test_resource.beep: Drift detected (delete)", + "@module": "terraform.ui", + "type": "resource_drift", + "change": map[string]interface{}{ + "action": "delete", + "resource": map[string]interface{}{ + "addr": "test_resource.beep", + "implied_provider": "test", + "module": "", + "resource": "test_resource.beep", + "resource_key": nil, + "resource_name": "beep", + "resource_type": "test_resource", + }, + }, + }, + // Drift detected: update with move + { + "@level": "info", + "@message": "test_resource.boop: Drift detected (update)", + "@module": "terraform.ui", + "type": "resource_drift", + "change": map[string]interface{}{ + "action": "update", + "resource": map[string]interface{}{ + "addr": "test_resource.boop", + "implied_provider": "test", + "module": "", + "resource": 
"test_resource.boop", + "resource_key": nil, + "resource_name": "boop", + "resource_type": "test_resource", + }, + "previous_resource": map[string]interface{}{ + "addr": "test_resource.blep", + "implied_provider": "test", + "module": "", + "resource": "test_resource.blep", + "resource_key": nil, + "resource_name": "blep", + "resource_type": "test_resource", + }, + }, + }, + // Move-only change + { + "@level": "info", + "@message": `test_resource.honk["bonk"]: Plan to move`, + "@module": "terraform.ui", + "type": "planned_change", + "change": map[string]interface{}{ + "action": "move", + "resource": map[string]interface{}{ + "addr": `test_resource.honk["bonk"]`, + "implied_provider": "test", + "module": "", + "resource": `test_resource.honk["bonk"]`, + "resource_key": "bonk", + "resource_name": "honk", + "resource_type": "test_resource", + }, + "previous_resource": map[string]interface{}{ + "addr": `test_resource.honk[0]`, + "implied_provider": "test", + "module": "", + "resource": `test_resource.honk[0]`, + "resource_key": float64(0), + "resource_name": "honk", + "resource_type": "test_resource", + }, + }, + }, + // No changes + { + "@level": "info", + "@message": "Plan: 0 to add, 0 to change, 0 to destroy.", + "@module": "terraform.ui", + "type": "change_summary", + "changes": map[string]interface{}{ + "operation": "plan", + "add": float64(0), + "change": float64(0), + "remove": float64(0), + }, + }, + } + + testJSONViewOutputEquals(t, done(t).Stdout(), want) +} + +func TestOperationJSON_planDriftWithMoveRefreshOnly(t *testing.T) { + streams, done := terminal.StreamsForTesting(t) + v := &OperationJSON{view: NewJSONView(NewView(streams))} + + root := addrs.RootModuleInstance + boop := addrs.Resource{Mode: addrs.ManagedResourceMode, Type: "test_resource", Name: "boop"} + beep := addrs.Resource{Mode: addrs.ManagedResourceMode, Type: "test_resource", Name: "beep"} + blep := addrs.Resource{Mode: addrs.ManagedResourceMode, Type: "test_resource", Name: "blep"} + honk := addrs.Resource{Mode: addrs.ManagedResourceMode, Type: "test_resource", Name: "honk"} + + plan := &plans.Plan{ + UIMode: plans.RefreshOnlyMode, Changes: &plans.Changes{ Resources: []*plans.ResourceInstanceChangeSrc{}, }, - PrevRunState: states.BuildState(func(state *states.SyncState) { - // Update - state.SetResourceInstanceCurrent( - boop.Instance(addrs.NoKey).Absolute(root), - &states.ResourceInstanceObjectSrc{ - Status: states.ObjectReady, - AttrsJSON: []byte(`{"foo":"bar"}`), - }, - root.ProviderConfigDefault(addrs.NewDefaultProvider("test")), - ) - // Delete - state.SetResourceInstanceCurrent( - beep.Instance(addrs.NoKey).Absolute(root), - &states.ResourceInstanceObjectSrc{ - Status: states.ObjectReady, - AttrsJSON: []byte(`{"foo":"boop"}`), - }, - root.ProviderConfigDefault(addrs.NewDefaultProvider("test")), - ) - // No-op - state.SetResourceInstanceCurrent( - derp.Instance(addrs.NoKey).Absolute(root), - &states.ResourceInstanceObjectSrc{ - Status: states.ObjectReady, - AttrsJSON: []byte(`{"foo":"boop"}`), - }, - root.ProviderConfigDefault(addrs.NewDefaultProvider("test")), - ) - }), - PriorState: states.BuildState(func(state *states.SyncState) { - // Update - state.SetResourceInstanceCurrent( - boop.Instance(addrs.NoKey).Absolute(root), - &states.ResourceInstanceObjectSrc{ - Status: states.ObjectReady, - AttrsJSON: []byte(`{"foo":"baz"}`), - }, - root.ProviderConfigDefault(addrs.NewDefaultProvider("test")), - ) - // Delete - state.SetResourceInstanceCurrent( - beep.Instance(addrs.NoKey).Absolute(root), - nil, - 
root.ProviderConfigDefault(addrs.NewDefaultProvider("test")), - ) - // No-op - state.SetResourceInstanceCurrent( - derp.Instance(addrs.NoKey).Absolute(root), - &states.ResourceInstanceObjectSrc{ - Status: states.ObjectReady, - AttrsJSON: []byte(`{"foo":"boop"}`), - }, - root.ProviderConfigDefault(addrs.NewDefaultProvider("test")), - ) - }), + DriftedResources: []*plans.ResourceInstanceChangeSrc{ + { + Addr: beep.Instance(addrs.NoKey).Absolute(root), + PrevRunAddr: beep.Instance(addrs.NoKey).Absolute(root), + ChangeSrc: plans.ChangeSrc{Action: plans.Delete}, + }, + { + Addr: boop.Instance(addrs.NoKey).Absolute(root), + PrevRunAddr: blep.Instance(addrs.NoKey).Absolute(root), + ChangeSrc: plans.ChangeSrc{Action: plans.Update}, + }, + // Move-only resource drift should be present in refresh-only plans + { + Addr: honk.Instance(addrs.StringKey("bonk")).Absolute(root), + PrevRunAddr: honk.Instance(addrs.IntKey(0)).Absolute(root), + ChangeSrc: plans.ChangeSrc{Action: plans.NoOp}, + }, + }, } v.Plan(plan, testSchemas()) @@ -731,6 +838,43 @@ func TestOperationJSON_planDrift(t *testing.T) { "resource_name": "boop", "resource_type": "test_resource", }, + "previous_resource": map[string]interface{}{ + "addr": "test_resource.blep", + "implied_provider": "test", + "module": "", + "resource": "test_resource.blep", + "resource_key": nil, + "resource_name": "blep", + "resource_type": "test_resource", + }, + }, + }, + // Drift detected: Move-only change + { + "@level": "info", + "@message": `test_resource.honk["bonk"]: Drift detected (move)`, + "@module": "terraform.ui", + "type": "resource_drift", + "change": map[string]interface{}{ + "action": "move", + "resource": map[string]interface{}{ + "addr": `test_resource.honk["bonk"]`, + "implied_provider": "test", + "module": "", + "resource": `test_resource.honk["bonk"]`, + "resource_key": "bonk", + "resource_name": "honk", + "resource_type": "test_resource", + }, + "previous_resource": map[string]interface{}{ + "addr": `test_resource.honk[0]`, + "implied_provider": "test", + "module": "", + "resource": `test_resource.honk[0]`, + "resource_key": float64(0), + "resource_name": "honk", + "resource_type": "test_resource", + }, }, }, // No changes @@ -846,20 +990,23 @@ func TestOperationJSON_plannedChange(t *testing.T) { // Replace requested by user v.PlannedChange(&plans.ResourceInstanceChangeSrc{ Addr: boop.Instance(addrs.IntKey(0)).Absolute(root), + PrevRunAddr: boop.Instance(addrs.IntKey(0)).Absolute(root), ChangeSrc: plans.ChangeSrc{Action: plans.DeleteThenCreate}, ActionReason: plans.ResourceInstanceReplaceByRequest, }) // Simple create v.PlannedChange(&plans.ResourceInstanceChangeSrc{ - Addr: boop.Instance(addrs.IntKey(1)).Absolute(root), - ChangeSrc: plans.ChangeSrc{Action: plans.Create}, + Addr: boop.Instance(addrs.IntKey(1)).Absolute(root), + PrevRunAddr: boop.Instance(addrs.IntKey(1)).Absolute(root), + ChangeSrc: plans.ChangeSrc{Action: plans.Create}, }) // Data source deletion v.PlannedChange(&plans.ResourceInstanceChangeSrc{ - Addr: derp.Instance(addrs.NoKey).Absolute(root), - ChangeSrc: plans.ChangeSrc{Action: plans.Delete}, + Addr: derp.Instance(addrs.NoKey).Absolute(root), + PrevRunAddr: derp.Instance(addrs.NoKey).Absolute(root), + ChangeSrc: plans.ChangeSrc{Action: plans.Delete}, }) // Expect only two messages, as the data source deletion should be a no-op diff --git a/website/docs/internals/machine-readable-ui.html.md b/website/docs/internals/machine-readable-ui.html.md index 250481eb7306..53b3e47e0946 100644 --- 
a/website/docs/internals/machine-readable-ui.html.md +++ b/website/docs/internals/machine-readable-ui.html.md @@ -124,7 +124,8 @@ This message does not include details about the exact changes which caused the c At the end of a plan or before an apply, Terraform will emit a `planned_change` message for each resource which has changes to apply. This message has an embedded `change` object with the following keys: - `resource`: object describing the address of the resource to be changed; see [resource object](#resource-object) below for details -- `action`: the action planned to be taken for the resource. Values: `noop`, `create`, `read`, `update`, `replace`, `delete`. +- `previous_resource`: object describing the previous address of the resource, if this change includes a configuration-driven move +- `action`: the action planned to be taken for the resource. Values: `noop`, `create`, `read`, `update`, `replace`, `delete`, `move`. - `reason`: an optional reason for the change, currently only used when the action is `replace`. Values: - `tainted`: resource was marked as tainted - `requested`: user requested that the resource be replaced, for example via the `-replace` plan flag From 4fe75bead33dc03346bc8eeb20c878f0936c536c Mon Sep 17 00:00:00 2001 From: Paddy Date: Tue, 21 Sep 2021 05:45:04 -0700 Subject: [PATCH 084/644] Remove panic debugging information (#29512) It was out of date. Fixes #1113. --- website/docs/internals/debugging.html.md | 60 ------------------------ 1 file changed, 60 deletions(-) diff --git a/website/docs/internals/debugging.html.md b/website/docs/internals/debugging.html.md index 5262c62b1aa8..885065ee4e77 100644 --- a/website/docs/internals/debugging.html.md +++ b/website/docs/internals/debugging.html.md @@ -26,63 +26,3 @@ the same level arguments as `TF_LOG`, but only activate a subset of the logs. To persist logged output you can set `TF_LOG_PATH` in order to force the log to always be appended to a specific file when logging is enabled. Note that even when `TF_LOG_PATH` is set, `TF_LOG` must be set in order for any logging to be enabled. If you find a bug with Terraform, please include the detailed log by using a service such as gist. - -## Interpreting a Crash Log - -If Terraform ever crashes (a "panic" in the Go runtime), it saves a log file -with the debug logs from the session as well as the panic message and backtrace -to `crash.log`. Generally speaking, this log file is meant to be passed along -to the developers via a GitHub Issue. As a user, you're not required to dig -into this file. - -However, if you are interested in figuring out what might have gone wrong -before filing an issue, here are the basic details of how to read a crash -log. - -The most interesting part of a crash log is the panic message itself and the -backtrace immediately following. So the first thing to do is to search the file -for `panic: `, which should jump you right to this message. It will look -something like this: - -```text -panic: runtime error: invalid memory address or nil pointer dereference - -goroutine 123 [running]: -panic(0xabc100, 0xd93000a0a0) - /opt/go/src/runtime/panic.go:464 +0x3e6 -github.com/hashicorp/terraform/builtin/providers/aws.resourceAwsSomeResourceCreate(...) - /opt/gopath/src/github.com/hashicorp/terraform/builtin/providers/aws/resource_aws_some_resource.go:123 +0x123 -github.com/hashicorp/terraform/helper/schema.(*Resource).Refresh(...) 
- /opt/gopath/src/github.com/hashicorp/terraform/helper/schema/resource.go:209 +0x123 -github.com/hashicorp/terraform/helper/schema.(*Provider).Refresh(...) - /opt/gopath/src/github.com/hashicorp/terraform/helper/schema/provider.go:187 +0x123 -github.com/hashicorp/terraform/rpc.(*ResourceProviderServer).Refresh(...) - /opt/gopath/src/github.com/hashicorp/terraform/rpc/resource_provider.go:345 +0x6a -reflect.Value.call(...) - /opt/go/src/reflect/value.go:435 +0x120d -reflect.Value.Call(...) - /opt/go/src/reflect/value.go:303 +0xb1 -net/rpc.(*service).call(...) - /opt/go/src/net/rpc/server.go:383 +0x1c2 -created by net/rpc.(*Server).ServeCodec - /opt/go/src/net/rpc/server.go:477 +0x49d -``` - -The key part of this message is the first two lines that involve `hashicorp/terraform`. In this example: - -```text -github.com/hashicorp/terraform/builtin/providers/aws.resourceAwsSomeResourceCreate(...) - /opt/gopath/src/github.com/hashicorp/terraform/builtin/providers/aws/resource_aws_some_resource.go:123 +0x123 -``` - -The first line tells us that the method that failed is -`resourceAwsSomeResourceCreate`, which we can deduce that involves the creation -of a (fictional) `aws_some_resource`. - -The second line points to the exact line of code that caused the panic, -which--combined with the panic message itself--is normally enough for a -developer to quickly figure out the cause of the issue. - -As a user, this information can help work around the problem in a pinch, since -it should hopefully point to the area of the code base in which the crash is -happening. From 8684a85e26bbaa9b17e3ed7c043c92443554fa6b Mon Sep 17 00:00:00 2001 From: Chris Arcand Date: Tue, 21 Sep 2021 22:00:32 -0500 Subject: [PATCH 085/644] command: Ensure all answers were used in command.testInputResponseMap Remove answers from testInputResponse as they are given, and raise an error during cleanup if any answers remain unused. This enables tests to ensure that the expected mock answers are actually used in a test; previously, an entire branch of code including an input sequence could be omitted and the test(s) would not fail. 
The only test that had unused answers in this map is one leftover from legacy state migrations, a prompt that was removed in 7c93b2e5e637bdee37c5e505d13121d9bfee223d --- internal/command/command_test.go | 4 ++++ internal/command/init_test.go | 5 ----- internal/command/ui_input.go | 1 + 3 files changed, 5 insertions(+), 5 deletions(-) diff --git a/internal/command/command_test.go b/internal/command/command_test.go index 70da240e86ff..d1a43cf659ca 100644 --- a/internal/command/command_test.go +++ b/internal/command/command_test.go @@ -716,6 +716,10 @@ func testInputMap(t *testing.T, answers map[string]string) func() { // Return the cleanup return func() { + if len(testInputResponseMap) > 0 { + t.Fatalf("expected no unused answers provided to command.testInputMap, got: %v", testInputResponseMap) + } + test = true testInputResponseMap = nil } diff --git a/internal/command/init_test.go b/internal/command/init_test.go index a0a5e0b5123b..b75da1f4f899 100644 --- a/internal/command/init_test.go +++ b/internal/command/init_test.go @@ -540,11 +540,6 @@ func TestInit_backendConfigFileChange(t *testing.T) { defer os.RemoveAll(td) defer testChdir(t, td)() - // Ask input - defer testInputMap(t, map[string]string{ - "backend-migrate-to-new": "no", - })() - ui := new(cli.MockUi) view, _ := testView(t) c := &InitCommand{ diff --git a/internal/command/ui_input.go b/internal/command/ui_input.go index 930bbd84202a..071982dec283 100644 --- a/internal/command/ui_input.go +++ b/internal/command/ui_input.go @@ -90,6 +90,7 @@ func (i *UIInput) Input(ctx context.Context, opts *terraform.InputOpts) (string, return "", fmt.Errorf("unexpected input request in test: %s", opts.Id) } + delete(testInputResponseMap, opts.Id) return v, nil } From d054102d38c54be67edccdd8f191e1beecd68252 Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Tue, 21 Sep 2021 11:00:52 -0700 Subject: [PATCH 086/644] addrs: AbsResource.UniqueKey distinct from AbsResourceInstance.UniqueKey The whole point of UniqueKey is to deal with the fact that we have some distinct address types which have an identical string representation, but unfortunately that fact caused us to not notice that we'd incorrectly made AbsResource.UniqueKey return a no-key instance UniqueKey instead of its own distinct unique key type. 
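For illustration only, and not part of the commit itself: a minimal standalone sketch of the pattern this fix relies on, where wrapping the same string representation in distinct key types keeps two different address kinds from comparing equal when used as map keys. All names below are invented for the example rather than taken from the Terraform codebase.

```go
package main

import "fmt"

// Two invented key types standing in for the distinct UniqueKey
// implementations; both wrap the same underlying string.
type resourceKey string         // key for a whole resource, e.g. "test.foo"
type resourceInstanceKey string // key for its no-key instance, also "test.foo"

func main() {
	seen := map[interface{}]string{}

	// Identical string representations...
	seen[resourceKey("test.foo")] = "resource"
	seen[resourceInstanceKey("test.foo")] = "no-key instance"

	// ...but the distinct dynamic types prevent the collision that the
	// commit message describes.
	fmt.Println(len(seen)) // prints 2
}
```

The actual change below takes the same approach, adding a dedicated `absResourceKey` type so that `AbsResource.UniqueKey` no longer reuses the instance key type.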
--- internal/addrs/resource.go | 6 ++- internal/addrs/resource_test.go | 71 +++++++++++++++++++++++++++++++++ 2 files changed, 76 insertions(+), 1 deletion(-) diff --git a/internal/addrs/resource.go b/internal/addrs/resource.go index 2c69d2f70517..0ebf89d999ec 100644 --- a/internal/addrs/resource.go +++ b/internal/addrs/resource.go @@ -194,8 +194,12 @@ func (r AbsResource) absMoveableSigil() { // AbsResource is moveable } +type absResourceKey string + +func (r absResourceKey) uniqueKeySigil() {} + func (r AbsResource) UniqueKey() UniqueKey { - return absResourceInstanceKey(r.String()) + return absResourceKey(r.String()) } // AbsResourceInstance is an absolute address for a resource instance under a diff --git a/internal/addrs/resource_test.go b/internal/addrs/resource_test.go index fbaa981f4de1..d68d2b5d4a27 100644 --- a/internal/addrs/resource_test.go +++ b/internal/addrs/resource_test.go @@ -215,6 +215,77 @@ func TestAbsResourceInstanceEqual_false(t *testing.T) { } } +func TestAbsResourceUniqueKey(t *testing.T) { + resourceAddr1 := Resource{ + Mode: ManagedResourceMode, + Type: "a", + Name: "b1", + }.Absolute(RootModuleInstance) + resourceAddr2 := Resource{ + Mode: ManagedResourceMode, + Type: "a", + Name: "b2", + }.Absolute(RootModuleInstance) + resourceAddr3 := Resource{ + Mode: ManagedResourceMode, + Type: "a", + Name: "in_module", + }.Absolute(RootModuleInstance.Child("boop", NoKey)) + + tests := []struct { + Reciever AbsResource + Other UniqueKeyer + WantEqual bool + }{ + { + resourceAddr1, + resourceAddr1, + true, + }, + { + resourceAddr1, + resourceAddr2, + false, + }, + { + resourceAddr1, + resourceAddr3, + false, + }, + { + resourceAddr3, + resourceAddr3, + true, + }, + { + resourceAddr1, + resourceAddr1.Instance(NoKey), + false, // no-key instance key is distinct from its resource even though they have the same String result + }, + { + resourceAddr1, + resourceAddr1.Instance(IntKey(1)), + false, + }, + } + + for _, test := range tests { + t.Run(fmt.Sprintf("%s matches %T %s?", test.Reciever, test.Other, test.Other), func(t *testing.T) { + rKey := test.Reciever.UniqueKey() + oKey := test.Other.UniqueKey() + + gotEqual := rKey == oKey + if gotEqual != test.WantEqual { + t.Errorf( + "wrong result\nreceiver: %s\nother: %s (%T)\ngot: %t\nwant: %t", + test.Reciever, test.Other, test.Other, + gotEqual, test.WantEqual, + ) + } + }) + } +} + func TestConfigResourceEqual_true(t *testing.T) { resources := []ConfigResource{ { From 83f03766739394cafd5983ebc0d5fb46b2a20133 Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Tue, 21 Sep 2021 12:57:12 -0700 Subject: [PATCH 087/644] refactoring: ApplyMoves new return type When we originally stubbed ApplyMoves we didn't know yet how exactly we'd be using the result, so we made it a double-indexed map allowing looking up moves in both directions. However, in practice we only actually need to look up old addresses by new addresses, and so this commit first removes the double indexing so that each move is only represented by one element in the map. We also need to describe situations where a move was blocked, because in a future commit we'll generate some warnings in those cases. Therefore ApplyMoves now returns a MoveResults object which contains both a map of changes and a map of blocks. The map of blocks isn't used yet as of this commit, but we'll use it in a later commit to produce warnings within the "terraform" package. 
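As a rough usage sketch only, simplified with plain strings standing in for the real `addrs` types and with invented helper names: the single lookup direction described above means results are keyed solely by the new address, and a chain of moves collapses to one entry whose `From` is the original address.

```go
package main

import "fmt"

// Simplified stand-ins for MoveSuccess / MoveResults.
type moveSuccess struct{ From, To string }

type moveResults struct {
	Changes map[string]moveSuccess // keyed by the final (new) address only
	Blocked map[string]string      // actual address -> address it wanted to occupy
}

// oldAddr mirrors the one lookup callers need: given an address in the
// current plan, where was that object at the end of the previous run?
func (r moveResults) oldAddr(newAddr string) string {
	if c, ok := r.Changes[newAddr]; ok {
		return c.From
	}
	return newAddr // not affected by any move
}

func main() {
	// A chain foo.from -> foo.mid -> foo.to is recorded as one collapsed entry.
	r := moveResults{
		Changes: map[string]moveSuccess{
			"foo.to": {From: "foo.from", To: "foo.to"},
		},
		Blocked: map[string]string{},
	}
	fmt.Println(r.oldAddr("foo.to"))  // foo.from
	fmt.Println(r.oldAddr("bar.baz")) // bar.baz
}
```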
--- internal/refactoring/move_execute.go | 143 ++++++--- internal/refactoring/move_execute_test.go | 288 ++++++++++++------ internal/refactoring/move_validate_test.go | 23 -- internal/terraform/context_plan.go | 12 +- internal/terraform/context_walk.go | 3 +- internal/terraform/eval_context.go | 2 +- internal/terraform/eval_context_builtin.go | 4 +- internal/terraform/eval_context_mock.go | 4 +- internal/terraform/graph_walk_context.go | 12 +- .../node_resource_abstract_instance.go | 10 +- 10 files changed, 321 insertions(+), 180 deletions(-) diff --git a/internal/refactoring/move_execute.go b/internal/refactoring/move_execute.go index 66c0d6c0a0ea..322569803a23 100644 --- a/internal/refactoring/move_execute.go +++ b/internal/refactoring/move_execute.go @@ -10,10 +10,6 @@ import ( "github.com/hashicorp/terraform/internal/states" ) -type MoveResult struct { - From, To addrs.AbsResourceInstance -} - // ApplyMoves modifies in-place the given state object so that any existing // objects that are matched by a "from" argument of one of the move statements // will be moved to instead appear at the "to" argument of that statement. @@ -29,8 +25,11 @@ type MoveResult struct { // // ApplyMoves expects exclusive access to the given state while it's running. // Don't read or write any part of the state structure until ApplyMoves returns. -func ApplyMoves(stmts []MoveStatement, state *states.State) map[addrs.UniqueKey]MoveResult { - results := make(map[addrs.UniqueKey]MoveResult) +func ApplyMoves(stmts []MoveStatement, state *states.State) MoveResults { + ret := MoveResults{ + Changes: make(map[addrs.UniqueKey]MoveSuccess), + Blocked: make(map[addrs.UniqueKey]MoveBlocked), + } // The methodology here is to construct a small graph of all of the move // statements where the edges represent where a particular statement @@ -44,7 +43,7 @@ func ApplyMoves(stmts []MoveStatement, state *states.State) map[addrs.UniqueKey] // at all. The separate validation step should detect this and return // an error. if len(g.Cycles()) != 0 { - return results + return ret } // The starting nodes are the ones that don't depend on any other nodes. @@ -57,11 +56,33 @@ func ApplyMoves(stmts []MoveStatement, state *states.State) map[addrs.UniqueKey] if startNodes.Len() == 0 { log.Println("[TRACE] refactoring.ApplyMoves: No 'moved' statements to consider in this configuration") - return results + return ret } log.Printf("[TRACE] refactoring.ApplyMoves: Processing 'moved' statements in the configuration\n%s", logging.Indent(g.String())) + recordOldAddr := func(oldAddr, newAddr addrs.AbsResourceInstance) { + oldAddrKey := oldAddr.UniqueKey() + newAddrKey := newAddr.UniqueKey() + if prevMove, exists := ret.Changes[oldAddrKey]; exists { + // If the old address was _already_ the result of a move then + // we'll replace that entry so that our results summarize a chain + // of moves into a single entry. + delete(ret.Changes, oldAddrKey) + oldAddr = prevMove.From + } + ret.Changes[newAddrKey] = MoveSuccess{ + From: oldAddr, + To: newAddr, + } + } + recordBlockage := func(newAddr, wantedAddr addrs.AbsMoveable) { + ret.Blocked[newAddr.UniqueKey()] = MoveBlocked{ + Wanted: wantedAddr, + Actual: newAddr, + } + } + g.ReverseDepthFirstWalk(startNodes, func(v dag.Vertex, depth int) error { stmt := v.(*MoveStatement) @@ -83,11 +104,9 @@ func ApplyMoves(stmts []MoveStatement, state *states.State) map[addrs.UniqueKey] // If we already have a module at the new address then // we'll skip this move and let the existing object take // priority. 
- // TODO: This should probably generate a user-visible - // warning, but we'd need to rethink the signature of this - // function to achieve that. if ms := state.Module(newAddr); ms != nil { log.Printf("[WARN] Skipped moving %s to %s, because there's already another module instance at the destination", modAddr, newAddr) + recordBlockage(modAddr, newAddr) continue } @@ -98,12 +117,7 @@ func ApplyMoves(stmts []MoveStatement, state *states.State) map[addrs.UniqueKey] for key := range rs.Instances { oldInst := relAddr.Instance(key).Absolute(modAddr) newInst := relAddr.Instance(key).Absolute(newAddr) - result := MoveResult{ - From: oldInst, - To: newInst, - } - results[oldInst.UniqueKey()] = result - results[newInst.UniqueKey()] = result + recordOldAddr(oldInst, newInst) } } @@ -121,23 +135,16 @@ func ApplyMoves(stmts []MoveStatement, state *states.State) map[addrs.UniqueKey] // If we already have a resource at the new address then // we'll skip this move and let the existing object take // priority. - // TODO: This should probably generate a user-visible - // warning, but we'd need to rethink the signature of this - // function to achieve that. if rs := state.Resource(newAddr); rs != nil { log.Printf("[WARN] Skipped moving %s to %s, because there's already another resource at the destination", rAddr, newAddr) + recordBlockage(rAddr, newAddr) continue } for key := range rs.Instances { oldInst := rAddr.Instance(key) newInst := newAddr.Instance(key) - result := MoveResult{ - From: oldInst, - To: newInst, - } - results[oldInst.UniqueKey()] = result - results[newInst.UniqueKey()] = result + recordOldAddr(oldInst, newInst) } state.MoveAbsResource(rAddr, newAddr) continue @@ -150,17 +157,13 @@ func ApplyMoves(stmts []MoveStatement, state *states.State) map[addrs.UniqueKey] // If we already have a resource instance at the new // address then we'll skip this move and let the existing // object take priority. - // TODO: This should probably generate a user-visible - // warning, but we'd need to rethink the signature of this - // function to achieve that. if is := state.ResourceInstance(newAddr); is != nil { log.Printf("[WARN] Skipped moving %s to %s, because there's already another resource instance at the destination", iAddr, newAddr) + recordBlockage(iAddr, newAddr) continue } - result := MoveResult{From: iAddr, To: newAddr} - results[iAddr.UniqueKey()] = result - results[newAddr.UniqueKey()] = result + recordOldAddr(iAddr, newAddr) state.MoveAbsResourceInstance(iAddr, newAddr) continue @@ -175,17 +178,7 @@ func ApplyMoves(stmts []MoveStatement, state *states.State) map[addrs.UniqueKey] return nil }) - // FIXME: In the case of either chained or nested moves, "results" will - // be left in a pretty interesting shape where the "old" address will - // refer to a result that describes only the first step, while the "new" - // address will refer to a result that describes only the last step. - // To make that actually useful we'll need a different strategy where - // the result describes the _effective_ source and destination, skipping - // over any intermediate steps we took to get there, so that ultimately - // we'll have enough information to annotate items in the plan with the - // addresses the originally moved from. - - return results + return ret } // buildMoveStatementGraph constructs a dependency graph of the given move @@ -218,3 +211,67 @@ func buildMoveStatementGraph(stmts []MoveStatement) *dag.AcyclicGraph { return g } + +// MoveResults describes the outcome of an ApplyMoves call. 
+type MoveResults struct { + // Changes is a map from the unique keys of the final new resource + // instance addresses to an object describing what changed. + // + // This includes one entry for each resource instance address that was + // the destination of a move statement. It doesn't include resource + // instances that were not affected by moves at all, but it does include + // resource instance addresses that were "blocked" (also recorded in + // BlockedAddrs) if and only if they were able to move at least + // partially along a chain before being blocked. + // + // In the return value from ApplyMoves, all of the keys are guaranteed to + // be unique keys derived from addrs.AbsResourceInstance values. + Changes map[addrs.UniqueKey]MoveSuccess + + // Blocked is a map from the unique keys of the final new + // resource instances addresses to information about where they "wanted" + // to move, but were blocked by a pre-existing object at the same address. + // + // "Blocking" can arise in unusual situations where multiple points along + // a move chain were already bound to objects, and thus only one of them + // can actually adopt the final position in the chain. It can also + // occur in other similar situations, such as if a configuration contains + // a move of an entire module and a move of an individual resource into + // that module, such that the individual resource would collide with a + // resource in the whole module that was moved. + // + // In the return value from ApplyMoves, all of the keys are guaranteed to + // be unique keys derived from values of addrs.AbsMoveable types. + Blocked map[addrs.UniqueKey]MoveBlocked +} + +type MoveSuccess struct { + From addrs.AbsResourceInstance + To addrs.AbsResourceInstance +} + +type MoveBlocked struct { + Wanted addrs.AbsMoveable + Actual addrs.AbsMoveable +} + +// AddrMoved returns true if and only if the given resource instance moved to +// a new address in the ApplyMoves call that the receiver is describing. +// +// If AddrMoved returns true, you can pass the same address to method OldAddr +// to find its original address prior to moving. +func (rs MoveResults) AddrMoved(newAddr addrs.AbsResourceInstance) bool { + _, ok := rs.Changes[newAddr.UniqueKey()] + return ok +} + +// OldAddr returns the old address of the given resource instance address, or +// just returns back the same address if the given instance wasn't affected by +// any move statements. 
+func (rs MoveResults) OldAddr(newAddr addrs.AbsResourceInstance) addrs.AbsResourceInstance { + change, ok := rs.Changes[newAddr.UniqueKey()] + if !ok { + return newAddr + } + return change.From +} diff --git a/internal/refactoring/move_execute_test.go b/internal/refactoring/move_execute_test.go index f3ab1ec1e31f..e913df9f4723 100644 --- a/internal/refactoring/move_execute_test.go +++ b/internal/refactoring/move_execute_test.go @@ -115,13 +115,16 @@ func TestApplyMoves(t *testing.T) { }.Instance(addrs.IntKey(0)).Absolute(moduleBarKey), } - emptyResults := map[addrs.UniqueKey]MoveResult{} + emptyResults := MoveResults{ + Changes: map[addrs.UniqueKey]MoveSuccess{}, + Blocked: map[addrs.UniqueKey]MoveBlocked{}, + } tests := map[string]struct { Stmts []MoveStatement State *states.State - WantResults map[addrs.UniqueKey]MoveResult + WantResults MoveResults WantInstanceAddrs []string }{ "no moves and empty state": { @@ -161,15 +164,14 @@ func TestApplyMoves(t *testing.T) { providerAddr, ) }), - map[addrs.UniqueKey]MoveResult{ - instAddrs["foo.from"].UniqueKey(): { - From: instAddrs["foo.from"], - To: instAddrs["foo.to"], - }, - instAddrs["foo.to"].UniqueKey(): { - From: instAddrs["foo.from"], - To: instAddrs["foo.to"], + MoveResults{ + Changes: map[addrs.UniqueKey]MoveSuccess{ + instAddrs["foo.to"].UniqueKey(): { + From: instAddrs["foo.from"], + To: instAddrs["foo.to"], + }, }, + Blocked: map[addrs.UniqueKey]MoveBlocked{}, }, []string{ `foo.to`, @@ -189,15 +191,14 @@ func TestApplyMoves(t *testing.T) { providerAddr, ) }), - map[addrs.UniqueKey]MoveResult{ - instAddrs["foo.from[0]"].UniqueKey(): { - From: instAddrs["foo.from[0]"], - To: instAddrs["foo.to[0]"], - }, - instAddrs["foo.to[0]"].UniqueKey(): { - From: instAddrs["foo.from[0]"], - To: instAddrs["foo.to[0]"], + MoveResults{ + Changes: map[addrs.UniqueKey]MoveSuccess{ + instAddrs["foo.to[0]"].UniqueKey(): { + From: instAddrs["foo.from[0]"], + To: instAddrs["foo.to[0]"], + }, }, + Blocked: map[addrs.UniqueKey]MoveBlocked{}, }, []string{ `foo.to[0]`, @@ -218,19 +219,14 @@ func TestApplyMoves(t *testing.T) { providerAddr, ) }), - map[addrs.UniqueKey]MoveResult{ - instAddrs["foo.from"].UniqueKey(): { - From: instAddrs["foo.from"], - To: instAddrs["foo.mid"], - }, - instAddrs["foo.mid"].UniqueKey(): { - From: instAddrs["foo.mid"], - To: instAddrs["foo.to"], - }, - instAddrs["foo.to"].UniqueKey(): { - From: instAddrs["foo.mid"], - To: instAddrs["foo.to"], + MoveResults{ + Changes: map[addrs.UniqueKey]MoveSuccess{ + instAddrs["foo.to"].UniqueKey(): { + From: instAddrs["foo.from"], + To: instAddrs["foo.to"], + }, }, + Blocked: map[addrs.UniqueKey]MoveBlocked{}, }, []string{ `foo.to`, @@ -251,15 +247,14 @@ func TestApplyMoves(t *testing.T) { providerAddr, ) }), - map[addrs.UniqueKey]MoveResult{ - instAddrs["foo.from[0]"].UniqueKey(): { - From: instAddrs["foo.from[0]"], - To: instAddrs["module.boo.foo.to[0]"], - }, - instAddrs["module.boo.foo.to[0]"].UniqueKey(): { - From: instAddrs["foo.from[0]"], - To: instAddrs["module.boo.foo.to[0]"], + MoveResults{ + Changes: map[addrs.UniqueKey]MoveSuccess{ + instAddrs["module.boo.foo.to[0]"].UniqueKey(): { + From: instAddrs["foo.from[0]"], + To: instAddrs["module.boo.foo.to[0]"], + }, }, + Blocked: map[addrs.UniqueKey]MoveBlocked{}, }, []string{ `module.boo.foo.to[0]`, @@ -280,15 +275,14 @@ func TestApplyMoves(t *testing.T) { providerAddr, ) }), - map[addrs.UniqueKey]MoveResult{ - instAddrs["module.boo.foo.from[0]"].UniqueKey(): { - From: instAddrs["module.boo.foo.from[0]"], - To: 
instAddrs["module.bar[0].foo.to[0]"], - }, - instAddrs["module.bar[0].foo.to[0]"].UniqueKey(): { - From: instAddrs["module.boo.foo.from[0]"], - To: instAddrs["module.bar[0].foo.to[0]"], + MoveResults{ + Changes: map[addrs.UniqueKey]MoveSuccess{ + instAddrs["module.bar[0].foo.to[0]"].UniqueKey(): { + From: instAddrs["module.boo.foo.from[0]"], + To: instAddrs["module.bar[0].foo.to[0]"], + }, }, + Blocked: map[addrs.UniqueKey]MoveBlocked{}, }, []string{ `module.bar[0].foo.to[0]`, @@ -309,15 +303,14 @@ func TestApplyMoves(t *testing.T) { providerAddr, ) }), - map[addrs.UniqueKey]MoveResult{ - instAddrs["module.boo.foo.from[0]"].UniqueKey(): { - From: instAddrs["module.boo.foo.from[0]"], - To: instAddrs["module.bar[0].foo.from[0]"], - }, - instAddrs["module.bar[0].foo.from[0]"].UniqueKey(): { - From: instAddrs["module.boo.foo.from[0]"], - To: instAddrs["module.bar[0].foo.from[0]"], + MoveResults{ + Changes: map[addrs.UniqueKey]MoveSuccess{ + instAddrs["module.bar[0].foo.from[0]"].UniqueKey(): { + From: instAddrs["module.boo.foo.from[0]"], + To: instAddrs["module.bar[0].foo.from[0]"], + }, }, + Blocked: map[addrs.UniqueKey]MoveBlocked{}, }, []string{ `module.bar[0].foo.from[0]`, @@ -339,19 +332,14 @@ func TestApplyMoves(t *testing.T) { providerAddr, ) }), - map[addrs.UniqueKey]MoveResult{ - instAddrs["module.boo.foo.from[0]"].UniqueKey(): { - From: instAddrs["module.boo.foo.from[0]"], - To: instAddrs["module.bar[0].foo.from[0]"], - }, - instAddrs["module.bar[0].foo.from[0]"].UniqueKey(): { - From: instAddrs["module.bar[0].foo.from[0]"], - To: instAddrs["module.bar[0].foo.to[0]"], - }, - instAddrs["module.bar[0].foo.to[0]"].UniqueKey(): { - From: instAddrs["module.bar[0].foo.from[0]"], - To: instAddrs["module.bar[0].foo.to[0]"], + MoveResults{ + Changes: map[addrs.UniqueKey]MoveSuccess{ + instAddrs["module.bar[0].foo.to[0]"].UniqueKey(): { + From: instAddrs["module.boo.foo.from[0]"], + To: instAddrs["module.bar[0].foo.to[0]"], + }, }, + Blocked: map[addrs.UniqueKey]MoveBlocked{}, }, []string{ `module.bar[0].foo.to[0]`, @@ -373,19 +361,14 @@ func TestApplyMoves(t *testing.T) { providerAddr, ) }), - map[addrs.UniqueKey]MoveResult{ - instAddrs["module.boo.foo.from[0]"].UniqueKey(): { - From: instAddrs["module.boo.foo.from[0]"], - To: instAddrs["module.bar[0].foo.from[0]"], - }, - instAddrs["module.bar[0].foo.from[0]"].UniqueKey(): { - From: instAddrs["module.bar[0].foo.from[0]"], - To: instAddrs["module.bar[0].foo.to[0]"], - }, - instAddrs["module.bar[0].foo.to[0]"].UniqueKey(): { - From: instAddrs["module.bar[0].foo.from[0]"], - To: instAddrs["module.bar[0].foo.to[0]"], + MoveResults{ + Changes: map[addrs.UniqueKey]MoveSuccess{ + instAddrs["module.bar[0].foo.to[0]"].UniqueKey(): { + From: instAddrs["module.boo.foo.from[0]"], + To: instAddrs["module.bar[0].foo.to[0]"], + }, }, + Blocked: map[addrs.UniqueKey]MoveBlocked{}, }, []string{ `module.bar[0].foo.to[0]`, @@ -414,9 +397,16 @@ func TestApplyMoves(t *testing.T) { providerAddr, ) }), - map[addrs.UniqueKey]MoveResult{ + MoveResults{ // Nothing moved, because the module.b address is already // occupied by another module. 
+ Changes: map[addrs.UniqueKey]MoveSuccess{}, + Blocked: map[addrs.UniqueKey]MoveBlocked{ + instAddrs["module.bar[0].foo.from"].Module.UniqueKey(): { + Wanted: instAddrs["module.boo.foo.to[0]"].Module, + Actual: instAddrs["module.bar[0].foo.from"].Module, + }, + }, }, []string{ `module.bar[0].foo.from`, @@ -446,9 +436,16 @@ func TestApplyMoves(t *testing.T) { providerAddr, ) }), - map[addrs.UniqueKey]MoveResult{ - // Nothing moved, because the module.b address is already - // occupied by another module. + MoveResults{ + // Nothing moved, because the from.to address is already + // occupied by another resource. + Changes: map[addrs.UniqueKey]MoveSuccess{}, + Blocked: map[addrs.UniqueKey]MoveBlocked{ + instAddrs["foo.from"].ContainingResource().UniqueKey(): { + Wanted: instAddrs["foo.to"].ContainingResource(), + Actual: instAddrs["foo.from"].ContainingResource(), + }, + }, }, []string{ `foo.from`, @@ -478,22 +475,141 @@ func TestApplyMoves(t *testing.T) { providerAddr, ) }), - map[addrs.UniqueKey]MoveResult{ - // Nothing moved, because the module.b address is already - // occupied by another module. + MoveResults{ + // Nothing moved, because the from.to[0] address is already + // occupied by another resource instance. + Changes: map[addrs.UniqueKey]MoveSuccess{}, + Blocked: map[addrs.UniqueKey]MoveBlocked{ + instAddrs["foo.from"].UniqueKey(): { + Wanted: instAddrs["foo.to[0]"], + Actual: instAddrs["foo.from"], + }, + }, }, []string{ `foo.from`, `foo.to[0]`, }, }, + + // FIXME: This test seems to flap between the result the test case + // currently records and the "more intuitive" results included inline, + // which suggests we have a missing edge in our move dependency graph. + // (The MoveResults commented out below predates some changes to that + // struct, so will need updating once we uncomment this test.) + /* + "move module and then move resource into it": { + []MoveStatement{ + testMoveStatement(t, "", "module.bar[0]", "module.boo"), + testMoveStatement(t, "", "foo.from", "module.boo.foo.from"), + }, + states.BuildState(func(s *states.SyncState) { + s.SetResourceInstanceCurrent( + instAddrs["module.bar[0].foo.to"], + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{}`), + }, + providerAddr, + ) + s.SetResourceInstanceCurrent( + instAddrs["foo.from"], + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{}`), + }, + providerAddr, + ) + }), + MoveResults{ + // FIXME: This result is counter-intuitive, because ApplyMoves + // handled the resource move first and then the module move + // collided with it. It would be arguably more intuitive to + // complete the module move first and then let the "smaller" + // resource move merge into it. + // (The arguably-more-intuitive results are commented out + // in the maps below.) 
+ + Changes: map[addrs.UniqueKey]MoveSuccess{ + //instAddrs["module.boo.foo.to"].UniqueKey(): instAddrs["module.bar[0].foo.to"], + //instAddrs["module.boo.foo.from"].UniqueKey(): instAddrs["foo.from"], + instAddrs["module.boo.foo.from"].UniqueKey(): instAddrs["foo.from"], + }, + Blocked: map[addrs.UniqueKey]MoveBlocked{ + // intuitive result: nothing blocked + instAddrs["module.bar[0].foo.to"].Module.UniqueKey(): instAddrs["module.boo.foo.from"].Module, + }, + }, + []string{ + //`foo.from`, + //`module.boo.foo.from`, + `module.bar[0].foo.to`, + `module.boo.foo.from`, + }, + }, + */ + + // FIXME: This test seems to flap between the result the test case + // currently records and the "more intuitive" results included inline, + // which suggests we have a missing edge in our move dependency graph. + // (The MoveResults commented out below predates some changes to that + // struct, so will need updating once we uncomment this test.) + /* + "module move collides with resource move": { + []MoveStatement{ + testMoveStatement(t, "", "module.bar[0]", "module.boo"), + testMoveStatement(t, "", "foo.from", "module.boo.foo.from"), + }, + states.BuildState(func(s *states.SyncState) { + s.SetResourceInstanceCurrent( + instAddrs["module.bar[0].foo.from"], + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{}`), + }, + providerAddr, + ) + s.SetResourceInstanceCurrent( + instAddrs["foo.from"], + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{}`), + }, + providerAddr, + ) + }), + MoveResults{ + // FIXME: This result is counter-intuitive, because ApplyMoves + // handled the resource move first and then it was the + // module move that collided, whereas it would arguably have + // been better to let the module take priority and have only + // the one resource move be ignored due to the collision. + // (The arguably-more-intuitive results are commented out + // in the maps below.) + Changes: map[addrs.UniqueKey]MoveSuccess{ + //instAddrs["module.boo.foo.from"].UniqueKey(): instAddrs["module.bar[0].foo.from"], + instAddrs["module.boo.foo.from"].UniqueKey(): instAddrs["foo.from"], + }, + Blocked: map[addrs.UniqueKey]MoveBlocked{ + //instAddrs["foo.from"].UniqueKey(): instAddrs["module.bar[0].foo.from"], + instAddrs["module.bar[0].foo.from"].Module.UniqueKey(): instAddrs["module.boo.foo.from"].Module, + }, + }, + []string{ + //`foo.from`, + //`module.boo.foo.from`, + `module.bar[0].foo.from`, + `module.boo.foo.from`, + }, + }, + */ } for name, test := range tests { t.Run(name, func(t *testing.T) { var stmtsBuf strings.Builder for _, stmt := range test.Stmts { - fmt.Fprintf(&stmtsBuf, "- from: %s\n to: %s\n", stmt.From, stmt.To) + fmt.Fprintf(&stmtsBuf, "• from: %s\n to: %s\n", stmt.From, stmt.To) } t.Logf("move statements:\n%s", stmtsBuf.String()) diff --git a/internal/refactoring/move_validate_test.go b/internal/refactoring/move_validate_test.go index 7ec8ab2f9e28..49f8acd86bff 100644 --- a/internal/refactoring/move_validate_test.go +++ b/internal/refactoring/move_validate_test.go @@ -325,29 +325,6 @@ Each resource can have moved from only one source resource.`, Each resource can have moved from only one source resource.`, }, - /* - // FIXME: This rule requires a deeper analysis to understand that - // module.single already contains a test.single and thus moving - // it to module.foo implicitly also moves module.single.test.single - // module.foo.test.single. 
- "two different moves to nested test.single by different paths": { - Statements: []MoveStatement{ - makeTestMoveStmt(t, - ``, - `test.beep`, - `module.foo.test.single`, - ), - makeTestMoveStmt(t, - ``, - `module.single`, - `module.foo`, - ), - }, - WantError: `Ambiguous move statements: A statement at test:1,1 declared that test.beep moved to module.foo.test.single, but this statement instead declares that module.single.test.single moved there. - - Each resource can have moved from only one source resource.`, - }, - */ "move from resource in another module package": { Statements: []MoveStatement{ makeTestMoveStmt(t, diff --git a/internal/terraform/context_plan.go b/internal/terraform/context_plan.go index f70a8f67b746..a48676ffa2bf 100644 --- a/internal/terraform/context_plan.go +++ b/internal/terraform/context_plan.go @@ -296,7 +296,7 @@ func (c *Context) destroyPlan(config *configs.Config, prevRunState *states.State return destroyPlan, diags } -func (c *Context) prePlanFindAndApplyMoves(config *configs.Config, prevRunState *states.State, targets []addrs.Targetable) ([]refactoring.MoveStatement, map[addrs.UniqueKey]refactoring.MoveResult) { +func (c *Context) prePlanFindAndApplyMoves(config *configs.Config, prevRunState *states.State, targets []addrs.Targetable) ([]refactoring.MoveStatement, refactoring.MoveResults) { explicitMoveStmts := refactoring.FindMoveStatements(config) implicitMoveStmts := refactoring.ImpliedMoveStatements(config, prevRunState, explicitMoveStmts) var moveStmts []refactoring.MoveStatement @@ -309,7 +309,7 @@ func (c *Context) prePlanFindAndApplyMoves(config *configs.Config, prevRunState return moveStmts, moveResults } -func (c *Context) prePlanVerifyTargetedMoves(moveResults map[addrs.UniqueKey]refactoring.MoveResult, targets []addrs.Targetable) tfdiags.Diagnostics { +func (c *Context) prePlanVerifyTargetedMoves(moveResults refactoring.MoveResults, targets []addrs.Targetable) tfdiags.Diagnostics { if len(targets) < 1 { return nil // the following only matters when targeting } @@ -317,7 +317,7 @@ func (c *Context) prePlanVerifyTargetedMoves(moveResults map[addrs.UniqueKey]ref var diags tfdiags.Diagnostics var excluded []addrs.AbsResourceInstance - for _, result := range moveResults { + for _, result := range moveResults.Changes { fromMatchesTarget := false toMatchesTarget := false for _, targetAddr := range targets { @@ -475,10 +475,10 @@ func (c *Context) planGraph(config *configs.Config, prevRunState *states.State, } } -func (c *Context) driftedResources(config *configs.Config, oldState, newState *states.State, moves map[addrs.UniqueKey]refactoring.MoveResult) ([]*plans.ResourceInstanceChangeSrc, tfdiags.Diagnostics) { +func (c *Context) driftedResources(config *configs.Config, oldState, newState *states.State, moves refactoring.MoveResults) ([]*plans.ResourceInstanceChangeSrc, tfdiags.Diagnostics) { var diags tfdiags.Diagnostics - if newState.ManagedResourcesEqual(oldState) && len(moves) == 0 { + if newState.ManagedResourcesEqual(oldState) && len(moves.Changes) == 0 { // Nothing to do, because we only detect and report drift for managed // resource instances. 
return nil, diags @@ -510,7 +510,7 @@ func (c *Context) driftedResources(config *configs.Config, oldState, newState *s // Previous run address defaults to the current address, but // can differ if the resource moved before refreshing prevRunAddr := addr - if move, ok := moves[addr.UniqueKey()]; ok { + if move, ok := moves.Changes[addr.UniqueKey()]; ok { prevRunAddr = move.From } diff --git a/internal/terraform/context_walk.go b/internal/terraform/context_walk.go index e041f80b2e43..166341513cce 100644 --- a/internal/terraform/context_walk.go +++ b/internal/terraform/context_walk.go @@ -3,7 +3,6 @@ package terraform import ( "log" - "github.com/hashicorp/terraform/internal/addrs" "github.com/hashicorp/terraform/internal/configs" "github.com/hashicorp/terraform/internal/instances" "github.com/hashicorp/terraform/internal/plans" @@ -25,7 +24,7 @@ type graphWalkOpts struct { Config *configs.Config RootVariableValues InputValues - MoveResults map[addrs.UniqueKey]refactoring.MoveResult + MoveResults refactoring.MoveResults } func (c *Context) walk(graph *Graph, operation walkOperation, opts *graphWalkOpts) (*ContextGraphWalker, tfdiags.Diagnostics) { diff --git a/internal/terraform/eval_context.go b/internal/terraform/eval_context.go index 61b4f2448f88..4b5a3a5c2cac 100644 --- a/internal/terraform/eval_context.go +++ b/internal/terraform/eval_context.go @@ -174,7 +174,7 @@ type EvalContext interface { // This data structure is created prior to the graph walk and read-only // thereafter, so callers must not modify the returned map or any other // objects accessible through it. - MoveResults() map[addrs.UniqueKey]refactoring.MoveResult + MoveResults() refactoring.MoveResults // WithPath returns a copy of the context with the internal path set to the // path argument. 
diff --git a/internal/terraform/eval_context_builtin.go b/internal/terraform/eval_context_builtin.go index ea83e82799b3..ecbac446e7cc 100644 --- a/internal/terraform/eval_context_builtin.go +++ b/internal/terraform/eval_context_builtin.go @@ -69,7 +69,7 @@ type BuiltinEvalContext struct { RefreshStateValue *states.SyncState PrevRunStateValue *states.SyncState InstanceExpanderValue *instances.Expander - MoveResultsValue map[addrs.UniqueKey]refactoring.MoveResult + MoveResultsValue refactoring.MoveResults } // BuiltinEvalContext implements EvalContext @@ -368,6 +368,6 @@ func (ctx *BuiltinEvalContext) InstanceExpander() *instances.Expander { return ctx.InstanceExpanderValue } -func (ctx *BuiltinEvalContext) MoveResults() map[addrs.UniqueKey]refactoring.MoveResult { +func (ctx *BuiltinEvalContext) MoveResults() refactoring.MoveResults { return ctx.MoveResultsValue } diff --git a/internal/terraform/eval_context_mock.go b/internal/terraform/eval_context_mock.go index 52a06c3ee719..edcdaac6b62c 100644 --- a/internal/terraform/eval_context_mock.go +++ b/internal/terraform/eval_context_mock.go @@ -132,7 +132,7 @@ type MockEvalContext struct { PrevRunStateState *states.SyncState MoveResultsCalled bool - MoveResultsResults map[addrs.UniqueKey]refactoring.MoveResult + MoveResultsResults refactoring.MoveResults InstanceExpanderCalled bool InstanceExpanderExpander *instances.Expander @@ -353,7 +353,7 @@ func (c *MockEvalContext) PrevRunState() *states.SyncState { return c.PrevRunStateState } -func (c *MockEvalContext) MoveResults() map[addrs.UniqueKey]refactoring.MoveResult { +func (c *MockEvalContext) MoveResults() refactoring.MoveResults { c.MoveResultsCalled = true return c.MoveResultsResults } diff --git a/internal/terraform/graph_walk_context.go b/internal/terraform/graph_walk_context.go index 39a97032c555..9e9e2a88fd49 100644 --- a/internal/terraform/graph_walk_context.go +++ b/internal/terraform/graph_walk_context.go @@ -25,12 +25,12 @@ type ContextGraphWalker struct { // Configurable values Context *Context - State *states.SyncState // Used for safe concurrent access to state - RefreshState *states.SyncState // Used for safe concurrent access to state - PrevRunState *states.SyncState // Used for safe concurrent access to state - Changes *plans.ChangesSync // Used for safe concurrent writes to changes - InstanceExpander *instances.Expander // Tracks our gradual expansion of module and resource instances - MoveResults map[addrs.UniqueKey]refactoring.MoveResult // Read-only record of earlier processing of move statements + State *states.SyncState // Used for safe concurrent access to state + RefreshState *states.SyncState // Used for safe concurrent access to state + PrevRunState *states.SyncState // Used for safe concurrent access to state + Changes *plans.ChangesSync // Used for safe concurrent writes to changes + InstanceExpander *instances.Expander // Tracks our gradual expansion of module and resource instances + MoveResults refactoring.MoveResults // Read-only record of earlier processing of move statements Operation walkOperation StopContext context.Context RootVariableValues InputValues diff --git a/internal/terraform/node_resource_abstract_instance.go b/internal/terraform/node_resource_abstract_instance.go index e73939a8e32d..3cab2b181a99 100644 --- a/internal/terraform/node_resource_abstract_instance.go +++ b/internal/terraform/node_resource_abstract_instance.go @@ -2282,13 +2282,5 @@ func (n *NodeAbstractResourceInstance) prevRunAddr(ctx EvalContext) addrs.AbsRes func 
resourceInstancePrevRunAddr(ctx EvalContext, currentAddr addrs.AbsResourceInstance) addrs.AbsResourceInstance { table := ctx.MoveResults() - - result, ok := table[currentAddr.UniqueKey()] - if !ok { - // If there's no entry in the table then we'll assume it didn't move - // at all, and so its previous address is the same as the current one. - return currentAddr - } - - return result.From + return table.OldAddr(currentAddr) } From 6b4e73af48f1cb78e51a634b077f73548db39ddc Mon Sep 17 00:00:00 2001 From: James Bardin Date: Wed, 22 Sep 2021 12:09:26 -0400 Subject: [PATCH 088/644] skip the blocktoattr fixup with nested types If structural types are being used, we can be assured that the legacy SDK SchemaConfigModeAttr is not being used, and the fixup is not needed. This prevents inadvertent mapping of blocks to structural attributes, and allows us to skip the fixup overhead when possible. --- internal/lang/blocktoattr/fixup.go | 27 +++++++++++++++++++++++ internal/lang/blocktoattr/fixup_test.go | 29 +++++++++++++++++++++++++ 2 files changed, 56 insertions(+) diff --git a/internal/lang/blocktoattr/fixup.go b/internal/lang/blocktoattr/fixup.go index 1864e3e5016c..90eb260d7289 100644 --- a/internal/lang/blocktoattr/fixup.go +++ b/internal/lang/blocktoattr/fixup.go @@ -12,6 +12,10 @@ import ( // type in the schema to be written with HCL block syntax as multiple nested // blocks with the attribute name as the block type. // +// The fixup is only applied in the absence of structural attribute types. The +// presence of these types indicate the use of a provider which does not +// support mapping blocks to attributes. +// // This partially restores some of the block/attribute confusion from HCL 1 // so that existing patterns that depended on that confusion can continue to // be used in the short term while we settle on a longer-term strategy. @@ -28,6 +32,10 @@ func FixUpBlockAttrs(body hcl.Body, schema *configschema.Block) hcl.Body { schema = &configschema.Block{} } + if skipFixup(schema) { + return body + } + return &fixupBody{ original: body, schema: schema, @@ -35,6 +43,25 @@ func FixUpBlockAttrs(body hcl.Body, schema *configschema.Block) hcl.Body { } } +// skipFixup detects any use of Attribute.NestedType. Because the fixup was +// only supported for the legacy SDK, there is no situation where structural +// attributes are used where the fixup is expected. 
+func skipFixup(schema *configschema.Block) bool { + for _, attr := range schema.Attributes { + if attr.NestedType != nil { + return true + } + } + + for _, block := range schema.BlockTypes { + if skipFixup(&block.Block) { + return true + } + } + + return false +} + type fixupBody struct { original hcl.Body schema *configschema.Block diff --git a/internal/lang/blocktoattr/fixup_test.go b/internal/lang/blocktoattr/fixup_test.go index 8c7640521b13..92799394fa95 100644 --- a/internal/lang/blocktoattr/fixup_test.go +++ b/internal/lang/blocktoattr/fixup_test.go @@ -400,6 +400,35 @@ container { }), wantErrs: true, }, + "no fixup allowed": { + src: ` + container { + foo = "one" + } + `, + schema: &configschema.Block{ + Attributes: map[string]*configschema.Attribute{ + "container": { + NestedType: &configschema.Object{ + Nesting: configschema.NestingList, + Attributes: map[string]*configschema.Attribute{ + "foo": { + Type: cty.String, + }, + }, + }, + }, + }, + }, + want: cty.ObjectVal(map[string]cty.Value{ + "container": cty.NullVal(cty.List( + cty.Object(map[string]cty.Type{ + "foo": cty.String, + }), + )), + }), + wantErrs: true, + }, } ctx := &hcl.EvalContext{ From 3f1c15c792ff18496774ecf951818068ab77c3ba Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Wed, 22 Sep 2021 10:31:31 -0700 Subject: [PATCH 089/644] Upgrade to Go 1.17.1 --- .go-version | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.go-version b/.go-version index 092afa15df4d..511a76e6faf8 100644 --- a/.go-version +++ b/.go-version @@ -1 +1 @@ -1.17.0 +1.17.1 From d14bbbb6f2a86118015b3de46246e14cdb4977b9 Mon Sep 17 00:00:00 2001 From: hc-github-team-tf-core Date: Wed, 22 Sep 2021 17:49:31 +0000 Subject: [PATCH 090/644] Release v1.1.0-alpha20210922 --- CHANGELOG.md | 2 +- version/version.go | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 281c399ccaec..474e343fd2a0 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -13,7 +13,7 @@ NEW FEATURES: ENHANCEMENTS: * config: Terraform now checks the syntax of and normalizes module source addresses (the `source` argument in `module` blocks) during configuration decoding rather than only at module installation time. This is largely just an internal refactoring, but a visible benefit of this change is that the `terraform init` messages about module downloading will now show the canonical module package address Terraform is downloading from, after interpreting the special shorthands for common cases like GitHub URLs. ([#28854](https://github.com/hashicorp/terraform/issues/28854)) -* cli: Terraform will now report explicitly in the UI if it automatically moves a resource instance to a new address as a result of adding or removing the `count` argument from an existing resource. For example, if you previously had `resource "aws_subnet" "example"` _without_ `count`, you might have `aws_subnet.example` already bound to a remote object in your state. If you add `count = 1` to that resource then Terraform would previously silently rebind the object to `aws_subnet.example[0]` as part of planning, whereas now Terraform will mention that it did so explicitly in the plan description. [GH-29605] +* cli: Terraform will now report explicitly in the UI if it automatically moves a resource instance to a new address as a result of adding or removing the `count` argument from an existing resource. 
For example, if you previously had `resource "aws_subnet" "example"` _without_ `count`, you might have `aws_subnet.example` already bound to a remote object in your state. If you add `count = 1` to that resource then Terraform would previously silently rebind the object to `aws_subnet.example[0]` as part of planning, whereas now Terraform will mention that it did so explicitly in the plan description. ([#29605](https://github.com/hashicorp/terraform/issues/29605)) BUG FIXES: diff --git a/version/version.go b/version/version.go index 86f22153dde3..5faa69bee97e 100644 --- a/version/version.go +++ b/version/version.go @@ -16,7 +16,7 @@ var Version = "1.1.0" // A pre-release marker for the version. If this is "" (empty string) // then it means that it is a final release. Otherwise, this is a pre-release // such as "dev" (in development), "beta", "rc1", etc. -var Prerelease = "dev" +var Prerelease = "alpha20210922" // SemVer is an instance of version.Version. This has the secondary // benefit of verifying during tests and init time that our version is a From cd1d30ea95a0f16818c5d7915fd1983994a26b0d Mon Sep 17 00:00:00 2001 From: hc-github-team-tf-core Date: Wed, 22 Sep 2021 18:12:02 +0000 Subject: [PATCH 091/644] Cleanup after v1.1.0-alpha20210922 release --- version/version.go | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/version/version.go b/version/version.go index 5faa69bee97e..86f22153dde3 100644 --- a/version/version.go +++ b/version/version.go @@ -16,7 +16,7 @@ var Version = "1.1.0" // A pre-release marker for the version. If this is "" (empty string) // then it means that it is a final release. Otherwise, this is a pre-release // such as "dev" (in development), "beta", "rc1", etc. -var Prerelease = "alpha20210922" +var Prerelease = "dev" // SemVer is an instance of version.Version. This has the secondary // benefit of verifying during tests and init time that our version is a From 8706a18c4b69850cee1736f3af26869317313f3b Mon Sep 17 00:00:00 2001 From: James Bardin Date: Wed, 22 Sep 2021 15:59:25 -0400 Subject: [PATCH 092/644] refine the skipFixup heuristic We can also rule out some attribute types as indicating something other than the legacy SDK. - Tuple types were not generated at all. - There were no single objects types, the convention was to use a block list or set of length 1. - Maps of objects were not possible to generate, since named blocks were not implemented. - Nested collections were not supported, but when they were generated they would have primitive types. 
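To make the cases above concrete, here is an editorially added, illustrative sketch of `cty` types for each rule, assuming the `github.com/zclconf/go-cty/cty` package that `configschema` attribute types use; per the rules, only the list-of-object (or set-of-object) form could plausibly have come from the legacy SDK's SchemaConfigModeAttr.

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	obj := cty.Object(map[string]cty.Type{"foo": cty.String})

	// Could be produced by the legacy SDK via SchemaConfigModeAttr:
	fromLegacy := cty.List(obj)

	// Could not have come from the legacy SDK, per the rules above:
	tupleType := cty.Tuple([]cty.Type{cty.String, cty.Number}) // tuples were never generated
	singleObj := obj                                           // single object types were not used
	mapOfObj := cty.Map(obj)                                   // maps of objects were not possible
	nested := cty.List(cty.Set(obj))                           // nested collections of non-primitive elements

	fmt.Println(fromLegacy.FriendlyName())
	fmt.Println(tupleType.FriendlyName(), singleObj.FriendlyName())
	fmt.Println(mapOfObj.FriendlyName(), nested.FriendlyName())
}
```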
--- internal/lang/blocktoattr/fixup.go | 34 ++++++++++++++++++++++--- internal/lang/blocktoattr/fixup_test.go | 34 ++++++++++++++++++++++++- 2 files changed, 64 insertions(+), 4 deletions(-) diff --git a/internal/lang/blocktoattr/fixup.go b/internal/lang/blocktoattr/fixup.go index 90eb260d7289..5d05a86f2f5f 100644 --- a/internal/lang/blocktoattr/fixup.go +++ b/internal/lang/blocktoattr/fixup.go @@ -1,6 +1,8 @@ package blocktoattr import ( + "log" + "github.com/hashicorp/hcl/v2" "github.com/hashicorp/hcl/v2/hcldec" "github.com/hashicorp/terraform/internal/configs/configschema" @@ -33,6 +35,10 @@ func FixUpBlockAttrs(body hcl.Body, schema *configschema.Block) hcl.Body { } if skipFixup(schema) { + // we don't have any context for the resource name or type, but + // hopefully this could help locate the evaluation in the logs if there + // were a problem + log.Println("[DEBUG] skipping FixUpBlockAttrs") return body } @@ -43,14 +49,36 @@ func FixUpBlockAttrs(body hcl.Body, schema *configschema.Block) hcl.Body { } } -// skipFixup detects any use of Attribute.NestedType. Because the fixup was -// only supported for the legacy SDK, there is no situation where structural -// attributes are used where the fixup is expected. +// skipFixup detects any use of Attribute.NestedType, or Types which could not +// be generate by the legacy SDK when taking SchemaConfigModeAttr into account. func skipFixup(schema *configschema.Block) bool { for _, attr := range schema.Attributes { if attr.NestedType != nil { return true } + ty := attr.Type + + // Lists and sets of objects could be generated by + // SchemaConfigModeAttr, but some other combinations can be ruled out. + + // Tuples and objects could not be generated at all. + if ty.IsTupleType() || ty.IsObjectType() { + return true + } + + // A map of objects was not possible. 
+ if ty.IsMapType() && ty.ElementType().IsObjectType() { + return true + } + + // Nested collections were not really supported, but could be generated + // with string types (though we conservatively limit this to primitive types) + if ty.IsCollectionType() { + ety := ty.ElementType() + if ety.IsCollectionType() && !ety.ElementType().IsPrimitiveType() { + return true + } + } } for _, block := range schema.BlockTypes { diff --git a/internal/lang/blocktoattr/fixup_test.go b/internal/lang/blocktoattr/fixup_test.go index 92799394fa95..36ab48041c9a 100644 --- a/internal/lang/blocktoattr/fixup_test.go +++ b/internal/lang/blocktoattr/fixup_test.go @@ -400,7 +400,7 @@ container { }), wantErrs: true, }, - "no fixup allowed": { + "no fixup allowed with NestedType": { src: ` container { foo = "one" @@ -429,6 +429,38 @@ container { }), wantErrs: true, }, + "no fixup allowed new types": { + src: ` + container { + foo = "one" + } + `, + schema: &configschema.Block{ + Attributes: map[string]*configschema.Attribute{ + // This could be a ConfigModeAttr fixup + "container": { + Type: cty.List(cty.Object(map[string]cty.Type{ + "foo": cty.String, + })), + }, + // But the presence of this type means it must have been + // declared by a new SDK + "new_type": { + Type: cty.Object(map[string]cty.Type{ + "boo": cty.String, + }), + }, + }, + }, + want: cty.ObjectVal(map[string]cty.Value{ + "container": cty.NullVal(cty.List( + cty.Object(map[string]cty.Type{ + "foo": cty.String, + }), + )), + }), + wantErrs: true, + }, } ctx := &hcl.EvalContext{ From 60bc7aa05dbd99dd9e1809a4bec7d1f392f5730d Mon Sep 17 00:00:00 2001 From: Chris Arcand Date: Tue, 21 Sep 2021 23:29:02 -0500 Subject: [PATCH 093/644] command: Auto-select single workspace if necessary When initializing a backend, if the currently selected workspace does not exist, the user is prompted to select from the list of workspaces the backend provides. Instead, we should automatically select the only workspace available _if_ that's all that's there. Although with being a nice bit of polish, this enables future improvments with Terraform Cloud in potentially removing the implicit depenency on always using the 'default' workspace when the current configuration is mapped to a single TFC workspace. 
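For illustration only, and not part of the patch: a condensed sketch of the selection rule described above, using an invented helper name. The real change below works through the backend's workspace list, `m.UIInput`, and `m.SetWorkspace` rather than a pure function.

```go
package main

import "fmt"

// chooseWorkspace condenses the behavior described above: keep the current
// workspace if the backend still has it, auto-select when exactly one
// workspace exists, and otherwise fall back to prompting the user.
func chooseWorkspace(current string, available []string) (selected string, mustPrompt bool) {
	for _, w := range available {
		if w == current {
			return current, false // already valid, nothing to do
		}
	}
	if len(available) == 1 {
		return available[0], false // the new auto-select behavior
	}
	return "", true // more than one candidate: ask the user to pick
}

func main() {
	fmt.Println(chooseWorkspace("default", []string{"foo"}))        // foo false
	fmt.Println(chooseWorkspace("default", []string{"foo", "bar"})) // "" true
}
```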
--- internal/command/meta_backend.go | 14 +++- internal/command/meta_backend_test.go | 79 ++++++++++++++++++- .../.terraform/environment | 1 + .../.terraform/terraform.tfstate | 23 ++++++ .../main.tf | 7 ++ .../terraform.tfstate | 13 +++ .../terraform.tfstate.d/foo/terraform.tfstate | 13 +++ .../.terraform/environment | 1 + .../.terraform/terraform.tfstate | 23 ++++++ .../main.tf | 7 ++ 10 files changed, 177 insertions(+), 4 deletions(-) create mode 100644 internal/command/testdata/init-backend-selected-workspace-doesnt-exist-multi/.terraform/environment create mode 100644 internal/command/testdata/init-backend-selected-workspace-doesnt-exist-multi/.terraform/terraform.tfstate create mode 100644 internal/command/testdata/init-backend-selected-workspace-doesnt-exist-multi/main.tf create mode 100644 internal/command/testdata/init-backend-selected-workspace-doesnt-exist-multi/terraform.tfstate create mode 100644 internal/command/testdata/init-backend-selected-workspace-doesnt-exist-multi/terraform.tfstate.d/foo/terraform.tfstate create mode 100644 internal/command/testdata/init-backend-selected-workspace-doesnt-exist-single/.terraform/environment create mode 100644 internal/command/testdata/init-backend-selected-workspace-doesnt-exist-single/.terraform/terraform.tfstate create mode 100644 internal/command/testdata/init-backend-selected-workspace-doesnt-exist-single/main.tf diff --git a/internal/command/meta_backend.go b/internal/command/meta_backend.go index 82ecd89c7850..f467579ba973 100644 --- a/internal/command/meta_backend.go +++ b/internal/command/meta_backend.go @@ -196,13 +196,19 @@ func (m *Meta) selectWorkspace(b backend.Backend) error { var list strings.Builder for i, w := range workspaces { if w == workspace { + log.Printf("[TRACE] Meta.selectWorkspace: the currently selected workspace is present in the configured backend (%s)", workspace) return nil } fmt.Fprintf(&list, "%d. %s\n", i+1, w) } - // If the selected workspace doesn't exist, ask the user to select - // a workspace from the list of existing workspaces. + // If the backend only has a single workspace, select that as the current workspace + if len(workspaces) == 1 { + log.Printf("[TRACE] Meta.selectWorkspace: automatically selecting the single workspace provided by the backend (%s)", workspaces[0]) + return m.SetWorkspace(workspaces[0]) + } + + // Otherwise, ask the user to select a workspace from the list of existing workspaces. 
v, err := m.UIInput().Input(context.Background(), &terraform.InputOpts{ Id: "select-workspace", Query: fmt.Sprintf( @@ -220,7 +226,9 @@ func (m *Meta) selectWorkspace(b backend.Backend) error { return fmt.Errorf("Failed to select workspace: input not a valid number") } - return m.SetWorkspace(workspaces[idx-1]) + workspace = workspaces[idx-1] + log.Printf("[TRACE] Meta.selectWorkspace: setting the current workspace according to user selection (%s)", workspace) + return m.SetWorkspace(workspace) } // BackendForPlan is similar to Backend, but uses backend settings that were diff --git a/internal/command/meta_backend_test.go b/internal/command/meta_backend_test.go index 82dbb0355a4b..be8fec9708e9 100644 --- a/internal/command/meta_backend_test.go +++ b/internal/command/meta_backend_test.go @@ -789,6 +789,84 @@ func TestMetaBackend_reconfigureChange(t *testing.T) { } } +// Initializing a backend which supports workspaces and does *not* have +// the currently selected workspace should prompt the user with a list of +// workspaces to choose from to select a valid one, if more than one workspace +// is available. +func TestMetaBackend_initSelectedWorkspaceDoesNotExist(t *testing.T) { + // Create a temporary working directory that is empty + td := tempDir(t) + testCopyDir(t, testFixturePath("init-backend-selected-workspace-doesnt-exist-multi"), td) + defer os.RemoveAll(td) + defer testChdir(t, td)() + + // Setup the meta + m := testMetaBackend(t, nil) + + defer testInputMap(t, map[string]string{ + "select-workspace": "2", + })() + + // Get the backend + _, diags := m.Backend(&BackendOpts{Init: true}) + if diags.HasErrors() { + t.Fatal(diags.Err()) + } + + expected := "foo" + actual, err := m.Workspace() + if err != nil { + t.Fatal(err) + } + + if actual != expected { + t.Fatalf("expected selected workspace to be %q, but was %q", expected, actual) + } +} + +// Initializing a backend which supports workspaces and does *not* have the +// currently selected workspace - and which only has a single workspace - should +// automatically select that single workspace.
+func TestMetaBackend_initSelectedWorkspaceDoesNotExistAutoSelect(t *testing.T) { + // Create a temporary working directory that is empty + td := tempDir(t) + testCopyDir(t, testFixturePath("init-backend-selected-workspace-doesnt-exist-single"), td) + defer os.RemoveAll(td) + defer testChdir(t, td)() + + // Setup the meta + m := testMetaBackend(t, nil) + + // this should not ask for input + m.input = false + + // Assert test precondition: The current selected workspace is "bar" + previousName, err := m.Workspace() + if err != nil { + t.Fatal(err) + } + + if previousName != "bar" { + t.Fatalf("expected test fixture to start with 'bar' as the current selected workspace") + } + + // Get the backend + _, diags := m.Backend(&BackendOpts{Init: true}) + if diags.HasErrors() { + t.Fatal(diags.Err()) + } + + expected := "default" + actual, err := m.Workspace() + if err != nil { + t.Fatal(err) + } + + if actual != expected { + t.Fatalf("expected selected workspace to be %q, but was %q", expected, actual) + } +} + // Changing a configured backend, copying state func TestMetaBackend_configuredChangeCopy(t *testing.T) { // Create a temporary working directory that is empty @@ -1267,7 +1345,6 @@ func TestMetaBackend_configuredChangeCopy_multiToNoDefaultWithoutDefault(t *test // Ask input defer testInputMap(t, map[string]string{ "backend-migrate-multistate-to-multistate": "yes", - "select-workspace": "1", })() // Setup the meta diff --git a/internal/command/testdata/init-backend-selected-workspace-doesnt-exist-multi/.terraform/environment b/internal/command/testdata/init-backend-selected-workspace-doesnt-exist-multi/.terraform/environment new file mode 100644 index 000000000000..5716ca5987cb --- /dev/null +++ b/internal/command/testdata/init-backend-selected-workspace-doesnt-exist-multi/.terraform/environment @@ -0,0 +1 @@ +bar diff --git a/internal/command/testdata/init-backend-selected-workspace-doesnt-exist-multi/.terraform/terraform.tfstate b/internal/command/testdata/init-backend-selected-workspace-doesnt-exist-multi/.terraform/terraform.tfstate new file mode 100644 index 000000000000..19a90cc6b895 --- /dev/null +++ b/internal/command/testdata/init-backend-selected-workspace-doesnt-exist-multi/.terraform/terraform.tfstate @@ -0,0 +1,23 @@ +{ + "version": 3, + "serial": 2, + "lineage": "2f3864a6-1d3e-1999-0f84-36cdb61179d3", + "backend": { + "type": "local", + "config": { + "path": null, + "workspace_dir": null + }, + "hash": 666019178 + }, + "modules": [ + { + "path": [ + "root" + ], + "outputs": {}, + "resources": {}, + "depends_on": [] + } + ] +} diff --git a/internal/command/testdata/init-backend-selected-workspace-doesnt-exist-multi/main.tf b/internal/command/testdata/init-backend-selected-workspace-doesnt-exist-multi/main.tf new file mode 100644 index 000000000000..da6f209e14ea --- /dev/null +++ b/internal/command/testdata/init-backend-selected-workspace-doesnt-exist-multi/main.tf @@ -0,0 +1,7 @@ +terraform { + backend "local" {} +} + +output "foo" { + value = "bar" +} diff --git a/internal/command/testdata/init-backend-selected-workspace-doesnt-exist-multi/terraform.tfstate b/internal/command/testdata/init-backend-selected-workspace-doesnt-exist-multi/terraform.tfstate new file mode 100644 index 000000000000..47de0a47e7d8 --- /dev/null +++ b/internal/command/testdata/init-backend-selected-workspace-doesnt-exist-multi/terraform.tfstate @@ -0,0 +1,13 @@ +{ + "version": 4, + "terraform_version": "1.1.0", + "serial": 1, + "lineage": "cc4bb587-aa35-87ad-b3b7-7abdb574f2a1", + "outputs": { + "foo": { 
+ "value": "bar", + "type": "string" + } + }, + "resources": [] +} diff --git a/internal/command/testdata/init-backend-selected-workspace-doesnt-exist-multi/terraform.tfstate.d/foo/terraform.tfstate b/internal/command/testdata/init-backend-selected-workspace-doesnt-exist-multi/terraform.tfstate.d/foo/terraform.tfstate new file mode 100644 index 000000000000..70021d04ad47 --- /dev/null +++ b/internal/command/testdata/init-backend-selected-workspace-doesnt-exist-multi/terraform.tfstate.d/foo/terraform.tfstate @@ -0,0 +1,13 @@ +{ + "version": 4, + "terraform_version": "1.1.0", + "serial": 1, + "lineage": "8ad3c77d-51aa-d90a-4f12-176f538b6e8b", + "outputs": { + "foo": { + "value": "bar", + "type": "string" + } + }, + "resources": [] +} diff --git a/internal/command/testdata/init-backend-selected-workspace-doesnt-exist-single/.terraform/environment b/internal/command/testdata/init-backend-selected-workspace-doesnt-exist-single/.terraform/environment new file mode 100644 index 000000000000..5716ca5987cb --- /dev/null +++ b/internal/command/testdata/init-backend-selected-workspace-doesnt-exist-single/.terraform/environment @@ -0,0 +1 @@ +bar diff --git a/internal/command/testdata/init-backend-selected-workspace-doesnt-exist-single/.terraform/terraform.tfstate b/internal/command/testdata/init-backend-selected-workspace-doesnt-exist-single/.terraform/terraform.tfstate new file mode 100644 index 000000000000..19a90cc6b895 --- /dev/null +++ b/internal/command/testdata/init-backend-selected-workspace-doesnt-exist-single/.terraform/terraform.tfstate @@ -0,0 +1,23 @@ +{ + "version": 3, + "serial": 2, + "lineage": "2f3864a6-1d3e-1999-0f84-36cdb61179d3", + "backend": { + "type": "local", + "config": { + "path": null, + "workspace_dir": null + }, + "hash": 666019178 + }, + "modules": [ + { + "path": [ + "root" + ], + "outputs": {}, + "resources": {}, + "depends_on": [] + } + ] +} diff --git a/internal/command/testdata/init-backend-selected-workspace-doesnt-exist-single/main.tf b/internal/command/testdata/init-backend-selected-workspace-doesnt-exist-single/main.tf new file mode 100644 index 000000000000..da6f209e14ea --- /dev/null +++ b/internal/command/testdata/init-backend-selected-workspace-doesnt-exist-single/main.tf @@ -0,0 +1,7 @@ +terraform { + backend "local" {} +} + +output "foo" { + value = "bar" +} From 171cdbbf939e12153d93a1c06806a7ad507ced3c Mon Sep 17 00:00:00 2001 From: Chris Arcand Date: Wed, 22 Sep 2021 15:55:56 -0500 Subject: [PATCH 094/644] command: Clean up testInputResponseMap before failing on unused answers If you don't, the unused answers will persist in the package-level var and bleed in to other tests. --- internal/command/command_test.go | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/internal/command/command_test.go b/internal/command/command_test.go index d1a43cf659ca..929bfb43975b 100644 --- a/internal/command/command_test.go +++ b/internal/command/command_test.go @@ -716,12 +716,15 @@ func testInputMap(t *testing.T, answers map[string]string) func() { // Return the cleanup return func() { - if len(testInputResponseMap) > 0 { - t.Fatalf("expected no unused answers provided to command.testInputMap, got: %v", testInputResponseMap) - } + var unusedAnswers = testInputResponseMap + // First, clean up! 
test = true testInputResponseMap = nil + + if len(unusedAnswers) > 0 { + t.Fatalf("expected no unused answers provided to command.testInputMap, got: %v", unusedAnswers) + } } } From ceb580ec40ba9cfa4f3e19ba83301dc3062c0ee8 Mon Sep 17 00:00:00 2001 From: Alisdair McDiarmid Date: Thu, 23 Sep 2021 16:38:08 -0400 Subject: [PATCH 095/644] core: Fix refresh-only interaction with orphans When planning in refresh-only mode, we must not remove orphaned resources due to changed count or for_each values from the planned state. This was previously happening because we failed to pass through the plan's skip-plan-changes flag to the instance orphan node. --- internal/terraform/context_plan2_test.go | 142 +++++++++++++++++++++++ internal/terraform/node_resource_plan.go | 3 + 2 files changed, 145 insertions(+) diff --git a/internal/terraform/context_plan2_test.go b/internal/terraform/context_plan2_test.go index 1167f234c00d..5b388148e59d 100644 --- a/internal/terraform/context_plan2_test.go +++ b/internal/terraform/context_plan2_test.go @@ -1335,6 +1335,148 @@ func TestContext2Plan_refreshOnlyMode_deposed(t *testing.T) { } } +func TestContext2Plan_refreshOnlyMode_orphan(t *testing.T) { + addr := mustAbsResourceAddr("test_object.a") + + // The configuration, the prior state, and the refresh result intentionally + // have different values for "test_string" so we can observe that the + // refresh took effect but the configuration change wasn't considered. + m := testModuleInline(t, map[string]string{ + "main.tf": ` + resource "test_object" "a" { + arg = "after" + count = 1 + } + + output "out" { + value = test_object.a.*.arg + } + `, + }) + state := states.BuildState(func(s *states.SyncState) { + s.SetResourceInstanceCurrent(addr.Instance(addrs.IntKey(0)), &states.ResourceInstanceObjectSrc{ + AttrsJSON: []byte(`{"arg":"before"}`), + Status: states.ObjectReady, + }, mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`)) + s.SetResourceInstanceCurrent(addr.Instance(addrs.IntKey(1)), &states.ResourceInstanceObjectSrc{ + AttrsJSON: []byte(`{"arg":"before"}`), + Status: states.ObjectReady, + }, mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`)) + }) + + p := simpleMockProvider() + p.GetProviderSchemaResponse = &providers.GetProviderSchemaResponse{ + Provider: providers.Schema{Block: simpleTestSchema()}, + ResourceTypes: map[string]providers.Schema{ + "test_object": { + Block: &configschema.Block{ + Attributes: map[string]*configschema.Attribute{ + "arg": {Type: cty.String, Optional: true}, + }, + }, + }, + }, + } + p.ReadResourceFn = func(req providers.ReadResourceRequest) providers.ReadResourceResponse { + newVal, err := cty.Transform(req.PriorState, func(path cty.Path, v cty.Value) (cty.Value, error) { + if len(path) == 1 && path[0] == (cty.GetAttrStep{Name: "arg"}) { + return cty.StringVal("current"), nil + } + return v, nil + }) + if err != nil { + // shouldn't get here + t.Fatalf("ReadResourceFn transform failed") + return providers.ReadResourceResponse{} + } + return providers.ReadResourceResponse{ + NewState: newVal, + } + } + p.UpgradeResourceStateFn = func(req providers.UpgradeResourceStateRequest) (resp providers.UpgradeResourceStateResponse) { + // We should've been given the prior state JSON as our input to upgrade. 
+ if !bytes.Contains(req.RawStateJSON, []byte("before")) { + t.Fatalf("UpgradeResourceState request doesn't contain the 'before' object\n%s", req.RawStateJSON) + } + + // We'll put something different in "arg" as part of upgrading, just + // so that we can verify below that PrevRunState contains the upgraded + // (but NOT refreshed) version of the object. + resp.UpgradedState = cty.ObjectVal(map[string]cty.Value{ + "arg": cty.StringVal("upgraded"), + }) + return resp + } + + ctx := testContext2(t, &ContextOpts{ + Providers: map[addrs.Provider]providers.Factory{ + addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), + }, + }) + + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.RefreshOnlyMode, + }) + if diags.HasErrors() { + t.Fatalf("unexpected errors\n%s", diags.Err().Error()) + } + + if !p.UpgradeResourceStateCalled { + t.Errorf("Provider's UpgradeResourceState wasn't called; should've been") + } + if !p.ReadResourceCalled { + t.Errorf("Provider's ReadResource wasn't called; should've been") + } + + if got, want := len(plan.Changes.Resources), 0; got != want { + t.Errorf("plan contains resource changes; want none\n%s", spew.Sdump(plan.Changes.Resources)) + } + + if rState := plan.PriorState.Resource(addr); rState == nil { + t.Errorf("%s has no prior state at all after plan", addr) + } else { + for i := 0; i < 2; i++ { + instKey := addrs.IntKey(i) + if obj := rState.Instance(instKey).Current; obj == nil { + t.Errorf("%s%s has no object after plan", addr, instKey) + } else if got, want := obj.AttrsJSON, `"current"`; !bytes.Contains(got, []byte(want)) { + // Should've saved the result of refreshing + t.Errorf("%s%s has wrong prior state after plan\ngot:\n%s\n\nwant substring: %s", addr, instKey, got, want) + } + } + } + if rState := plan.PrevRunState.Resource(addr); rState == nil { + t.Errorf("%s has no prior state at all after plan", addr) + } else { + for i := 0; i < 2; i++ { + instKey := addrs.IntKey(i) + if obj := rState.Instance(instKey).Current; obj == nil { + t.Errorf("%s%s has no object after plan", addr, instKey) + } else if got, want := obj.AttrsJSON, `"upgraded"`; !bytes.Contains(got, []byte(want)) { + // Should've saved the result of upgrading + t.Errorf("%s%s has wrong prior state after plan\ngot:\n%s\n\nwant substring: %s", addr, instKey, got, want) + } + } + } + + // The output value should also have updated. If not, it's likely that we + // skipped updating the working state to match the refreshed state when we + // were evaluating the resource. 
+ if outChangeSrc := plan.Changes.OutputValue(addrs.RootModuleInstance.OutputValue("out")); outChangeSrc == nil { + t.Errorf("no change planned for output value 'out'") + } else { + outChange, err := outChangeSrc.Decode() + if err != nil { + t.Fatalf("failed to decode output value 'out': %s", err) + } + got := outChange.After + want := cty.TupleVal([]cty.Value{cty.StringVal("current"), cty.StringVal("current")}) + if !want.RawEquals(got) { + t.Errorf("wrong value for output value 'out'\ngot: %#v\nwant: %#v", got, want) + } + } +} + func TestContext2Plan_invalidSensitiveModuleOutput(t *testing.T) { m := testModuleInline(t, map[string]string{ "child/main.tf": ` diff --git a/internal/terraform/node_resource_plan.go b/internal/terraform/node_resource_plan.go index c732575e5650..a9f3ca0e5dfd 100644 --- a/internal/terraform/node_resource_plan.go +++ b/internal/terraform/node_resource_plan.go @@ -132,6 +132,8 @@ func (n *nodeExpandPlannableResource) DynamicExpand(ctx EvalContext) (*Graph, er return &NodePlannableResourceInstanceOrphan{ NodeAbstractResourceInstance: a, + skipRefresh: n.skipRefresh, + skipPlanChanges: n.skipPlanChanges, } } @@ -351,6 +353,7 @@ func (n *NodePlannableResource) DynamicExpand(ctx EvalContext) (*Graph, error) { return &NodePlannableResourceInstanceOrphan{ NodeAbstractResourceInstance: a, skipRefresh: n.skipRefresh, + skipPlanChanges: n.skipPlanChanges, } } From 9c078c27cf39062302ad20680777014aae643c12 Mon Sep 17 00:00:00 2001 From: James Bardin Date: Tue, 14 Sep 2021 09:13:13 -0400 Subject: [PATCH 096/644] temp path clean for some backend tests --- internal/backend/local/backend_apply_test.go | 18 +++------ internal/backend/local/backend_local_test.go | 9 ++--- internal/backend/local/backend_plan_test.go | 40 +++++++------------ .../backend/local/backend_refresh_test.go | 18 +++------ internal/backend/local/testing.go | 16 +++----- 5 files changed, 35 insertions(+), 66 deletions(-) diff --git a/internal/backend/local/backend_apply_test.go b/internal/backend/local/backend_apply_test.go index 07d5f8a6e2ca..4ffc0fa0a871 100644 --- a/internal/backend/local/backend_apply_test.go +++ b/internal/backend/local/backend_apply_test.go @@ -27,8 +27,7 @@ import ( ) func TestLocal_applyBasic(t *testing.T) { - b, cleanup := TestLocal(t) - defer cleanup() + b := TestLocal(t) p := TestLocalProvider(t, b, "test", applyFixtureSchema()) p.ApplyResourceChangeResponse = &providers.ApplyResourceChangeResponse{NewState: cty.ObjectVal(map[string]cty.Value{ @@ -73,8 +72,7 @@ test_instance.foo: } func TestLocal_applyEmptyDir(t *testing.T) { - b, cleanup := TestLocal(t) - defer cleanup() + b := TestLocal(t) p := TestLocalProvider(t, b, "test", &terraform.ProviderSchema{}) p.ApplyResourceChangeResponse = &providers.ApplyResourceChangeResponse{NewState: cty.ObjectVal(map[string]cty.Value{"id": cty.StringVal("yes")})} @@ -108,8 +106,7 @@ func TestLocal_applyEmptyDir(t *testing.T) { } func TestLocal_applyEmptyDirDestroy(t *testing.T) { - b, cleanup := TestLocal(t) - defer cleanup() + b := TestLocal(t) p := TestLocalProvider(t, b, "test", &terraform.ProviderSchema{}) p.ApplyResourceChangeResponse = &providers.ApplyResourceChangeResponse{} @@ -139,8 +136,7 @@ func TestLocal_applyEmptyDirDestroy(t *testing.T) { } func TestLocal_applyError(t *testing.T) { - b, cleanup := TestLocal(t) - defer cleanup() + b := TestLocal(t) schema := &terraform.ProviderSchema{ ResourceTypes: map[string]*configschema.Block{ @@ -208,8 +204,7 @@ test_instance.foo: } func TestLocal_applyBackendFail(t *testing.T) { - b, cleanup 
:= TestLocal(t) - defer cleanup() + b := TestLocal(t) p := TestLocalProvider(t, b, "test", applyFixtureSchema()) @@ -272,8 +267,7 @@ test_instance.foo: (tainted) } func TestLocal_applyRefreshFalse(t *testing.T) { - b, cleanup := TestLocal(t) - defer cleanup() + b := TestLocal(t) p := TestLocalProvider(t, b, "test", planFixtureSchema()) testStateFile(t, b.StatePath, testPlanState()) diff --git a/internal/backend/local/backend_local_test.go b/internal/backend/local/backend_local_test.go index fae2b1ae0953..13070c2d3c8d 100644 --- a/internal/backend/local/backend_local_test.go +++ b/internal/backend/local/backend_local_test.go @@ -25,8 +25,7 @@ import ( func TestLocalRun(t *testing.T) { configDir := "./testdata/empty" - b, cleanup := TestLocal(t) - defer cleanup() + b := TestLocal(t) _, configLoader, configCleanup := initwd.MustLoadConfigForTests(t, configDir) defer configCleanup() @@ -53,8 +52,7 @@ func TestLocalRun(t *testing.T) { func TestLocalRun_error(t *testing.T) { configDir := "./testdata/invalid" - b, cleanup := TestLocal(t) - defer cleanup() + b := TestLocal(t) // This backend will return an error when asked to RefreshState, which // should then cause LocalRun to return with the state unlocked. @@ -85,8 +83,7 @@ func TestLocalRun_error(t *testing.T) { func TestLocalRun_stalePlan(t *testing.T) { configDir := "./testdata/apply" - b, cleanup := TestLocal(t) - defer cleanup() + b := TestLocal(t) _, configLoader, configCleanup := initwd.MustLoadConfigForTests(t, configDir) defer configCleanup() diff --git a/internal/backend/local/backend_plan_test.go b/internal/backend/local/backend_plan_test.go index 6bae23555f3a..2a9f3f8287b0 100644 --- a/internal/backend/local/backend_plan_test.go +++ b/internal/backend/local/backend_plan_test.go @@ -23,8 +23,7 @@ import ( ) func TestLocal_planBasic(t *testing.T) { - b, cleanup := TestLocal(t) - defer cleanup() + b := TestLocal(t) p := TestLocalProvider(t, b, "test", planFixtureSchema()) op, configCleanup, done := testOperationPlan(t, "./testdata/plan") @@ -53,8 +52,7 @@ func TestLocal_planBasic(t *testing.T) { } func TestLocal_planInAutomation(t *testing.T) { - b, cleanup := TestLocal(t) - defer cleanup() + b := TestLocal(t) TestLocalProvider(t, b, "test", planFixtureSchema()) const msg = `You didn't use the -out option` @@ -85,8 +83,7 @@ func TestLocal_planInAutomation(t *testing.T) { } func TestLocal_planNoConfig(t *testing.T) { - b, cleanup := TestLocal(t) - defer cleanup() + b := TestLocal(t) TestLocalProvider(t, b, "test", &terraform.ProviderSchema{}) op, configCleanup, done := testOperationPlan(t, "./testdata/empty") @@ -116,8 +113,7 @@ func TestLocal_planNoConfig(t *testing.T) { // This test validates the state lacking behavior when the inner call to // Context() fails func TestLocal_plan_context_error(t *testing.T) { - b, cleanup := TestLocal(t) - defer cleanup() + b := TestLocal(t) // This is an intentionally-invalid value to make terraform.NewContext fail // when b.Operation calls it. @@ -157,8 +153,7 @@ func TestLocal_plan_context_error(t *testing.T) { } func TestLocal_planOutputsChanged(t *testing.T) { - b, cleanup := TestLocal(t) - defer cleanup() + b := TestLocal(t) testStateFile(t, b.StatePath, states.BuildState(func(ss *states.SyncState) { ss.SetOutputValue(addrs.AbsOutputValue{ Module: addrs.RootModuleInstance, @@ -239,8 +234,7 @@ state, without changing any real infrastructure. 
// Module outputs should not cause the plan to be rendered func TestLocal_planModuleOutputsChanged(t *testing.T) { - b, cleanup := TestLocal(t) - defer cleanup() + b := TestLocal(t) testStateFile(t, b.StatePath, states.BuildState(func(ss *states.SyncState) { ss.SetOutputValue(addrs.AbsOutputValue{ Module: addrs.RootModuleInstance.Child("mod", addrs.NoKey), @@ -286,8 +280,7 @@ No changes. Your infrastructure matches the configuration. } func TestLocal_planTainted(t *testing.T) { - b, cleanup := TestLocal(t) - defer cleanup() + b := TestLocal(t) p := TestLocalProvider(t, b, "test", planFixtureSchema()) testStateFile(t, b.StatePath, testPlanState_tainted()) outDir := t.TempDir() @@ -343,8 +336,7 @@ Plan: 1 to add, 0 to change, 1 to destroy.` } func TestLocal_planDeposedOnly(t *testing.T) { - b, cleanup := TestLocal(t) - defer cleanup() + b := TestLocal(t) p := TestLocalProvider(t, b, "test", planFixtureSchema()) testStateFile(t, b.StatePath, states.BuildState(func(ss *states.SyncState) { ss.SetResourceInstanceDeposed( @@ -457,8 +449,8 @@ Plan: 1 to add, 0 to change, 1 to destroy.` } func TestLocal_planTainted_createBeforeDestroy(t *testing.T) { - b, cleanup := TestLocal(t) - defer cleanup() + b := TestLocal(t) + p := TestLocalProvider(t, b, "test", planFixtureSchema()) testStateFile(t, b.StatePath, testPlanState_tainted()) outDir := t.TempDir() @@ -514,8 +506,7 @@ Plan: 1 to add, 0 to change, 1 to destroy.` } func TestLocal_planRefreshFalse(t *testing.T) { - b, cleanup := TestLocal(t) - defer cleanup() + b := TestLocal(t) p := TestLocalProvider(t, b, "test", planFixtureSchema()) testStateFile(t, b.StatePath, testPlanState()) @@ -546,8 +537,7 @@ func TestLocal_planRefreshFalse(t *testing.T) { } func TestLocal_planDestroy(t *testing.T) { - b, cleanup := TestLocal(t) - defer cleanup() + b := TestLocal(t) TestLocalProvider(t, b, "test", planFixtureSchema()) testStateFile(t, b.StatePath, testPlanState()) @@ -599,8 +589,7 @@ func TestLocal_planDestroy(t *testing.T) { } func TestLocal_planDestroy_withDataSources(t *testing.T) { - b, cleanup := TestLocal(t) - defer cleanup() + b := TestLocal(t) TestLocalProvider(t, b, "test", planFixtureSchema()) testStateFile(t, b.StatePath, testPlanState_withDataSource()) @@ -675,8 +664,7 @@ func getAddrs(resources []*plans.ResourceInstanceChangeSrc) []string { } func TestLocal_planOutPathNoChange(t *testing.T) { - b, cleanup := TestLocal(t) - defer cleanup() + b := TestLocal(t) TestLocalProvider(t, b, "test", planFixtureSchema()) testStateFile(t, b.StatePath, testPlanState()) diff --git a/internal/backend/local/backend_refresh_test.go b/internal/backend/local/backend_refresh_test.go index 7fafee1b40b8..0e502267cc64 100644 --- a/internal/backend/local/backend_refresh_test.go +++ b/internal/backend/local/backend_refresh_test.go @@ -22,8 +22,7 @@ import ( ) func TestLocal_refresh(t *testing.T) { - b, cleanup := TestLocal(t) - defer cleanup() + b := TestLocal(t) p := TestLocalProvider(t, b, "test", refreshFixtureSchema()) testStateFile(t, b.StatePath, testRefreshState()) @@ -58,8 +57,7 @@ test_instance.foo: } func TestLocal_refreshInput(t *testing.T) { - b, cleanup := TestLocal(t) - defer cleanup() + b := TestLocal(t) schema := &terraform.ProviderSchema{ Provider: &configschema.Block{ @@ -121,8 +119,7 @@ test_instance.foo: } func TestLocal_refreshValidate(t *testing.T) { - b, cleanup := TestLocal(t) - defer cleanup() + b := TestLocal(t) p := TestLocalProvider(t, b, "test", refreshFixtureSchema()) testStateFile(t, b.StatePath, testRefreshState()) p.ReadResourceFn = nil 
@@ -151,8 +148,7 @@ test_instance.foo: } func TestLocal_refreshValidateProviderConfigured(t *testing.T) { - b, cleanup := TestLocal(t) - defer cleanup() + b := TestLocal(t) schema := &terraform.ProviderSchema{ Provider: &configschema.Block{ @@ -204,8 +200,7 @@ test_instance.foo: // This test validates the state lacking behavior when the inner call to // Context() fails func TestLocal_refresh_context_error(t *testing.T) { - b, cleanup := TestLocal(t) - defer cleanup() + b := TestLocal(t) testStateFile(t, b.StatePath, testRefreshState()) op, configCleanup, done := testOperationRefresh(t, "./testdata/apply") defer configCleanup() @@ -225,8 +220,7 @@ func TestLocal_refresh_context_error(t *testing.T) { } func TestLocal_refreshEmptyState(t *testing.T) { - b, cleanup := TestLocal(t) - defer cleanup() + b := TestLocal(t) p := TestLocalProvider(t, b, "test", refreshFixtureSchema()) testStateFile(t, b.StatePath, states.NewState()) diff --git a/internal/backend/local/testing.go b/internal/backend/local/testing.go index bfff7f003584..d4fe51d9736f 100644 --- a/internal/backend/local/testing.go +++ b/internal/backend/local/testing.go @@ -1,7 +1,6 @@ package local import ( - "os" "path/filepath" "testing" @@ -21,9 +20,12 @@ import ( // // No operations will be called on the returned value, so you can still set // public fields without any locks. -func TestLocal(t *testing.T) (*Local, func()) { +func TestLocal(t *testing.T) *Local { t.Helper() - tempDir := t.TempDir() + tempDir, err := filepath.EvalSymlinks(t.TempDir()) + if err != nil { + t.Fatal(err) + } local := New() local.StatePath = filepath.Join(tempDir, "state.tfstate") @@ -32,13 +34,7 @@ func TestLocal(t *testing.T) (*Local, func()) { local.StateWorkspaceDir = filepath.Join(tempDir, "state.tfstate.d") local.ContextOpts = &terraform.ContextOpts{} - cleanup := func() { - if err := os.RemoveAll(tempDir); err != nil { - t.Fatal("error cleanup up test:", err) - } - } - - return local, cleanup + return local } // TestLocalProvider modifies the ContextOpts of the *Local parameter to From 7b99861b1c0acf65ab994e06be13eb08ac6f353f Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Wed, 22 Sep 2021 16:57:01 -0700 Subject: [PATCH 097/644] refactoring: Don't implicitly move for resources with for_each Our previous rule for implicitly moving from IntKey(0) to NoKey would apply that move even when the current resource configuration uses for_each, because we were only considering whether "count" were set. Previously this was relatively harmless because the resource instance in question would end up planned for deletion anyway: neither an IntKey nor a NoKey are valid keys for for_each. Now that we're going to be announcing these moves explicitly in the UI, it would be confusing to see Terraform report that IntKey moved to NoKey in a situation where the config changed from count to for_each, so to address that we'll only generate the implied statement if neither repetition argument is set. 
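To make the tightened rule concrete, the decision reduces to something like the following standalone sketch. It is only an illustration under simplified assumptions: impliedZeroKeyMove is a hypothetical helper, and the real logic in impliedMoveStatements works on addresses, configuration objects, and prior state rather than plain booleans.

package main

import "fmt"

// impliedZeroKeyMove is a heavily simplified stand-in for the decision
// described above: whether an implied move between the no-key address and
// index zero should be generated for a resource, given its current
// repetition arguments and which instance keys exist in the previous run
// state.
func impliedZeroKeyMove(hasCount, hasForEach, hasNoKeyInstance, hasZeroKeyInstance bool) (from, to string, implied bool) {
	switch {
	case hasCount && hasNoKeyInstance:
		// "count" was added: move the no-key instance to index zero.
		return "no key", "[0]", true
	case !hasCount && !hasForEach && hasZeroKeyInstance:
		// No repetition argument at all: move index zero to the no-key address.
		return "[0]", "no key", true
	default:
		// In particular, switching from count to for_each no longer implies a
		// move, because neither a no-key nor a zero-key instance is valid for
		// for_each anyway.
		return "", "", false
	}
}

func main() {
	// count replaced by for_each: no implied move under the new rule.
	fmt.Println(impliedZeroKeyMove(false, true, false, true))
}

The important case is the last one: when count has been replaced by for_each, no implied move is generated, so the old instance is simply planned for deletion while the new for_each instances are created.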
--- internal/refactoring/move_statement.go | 2 +- internal/refactoring/move_statement_test.go | 14 ++++++++++++-- .../move-statement-implied.tf | 8 ++++++++ 3 files changed, 21 insertions(+), 3 deletions(-) diff --git a/internal/refactoring/move_statement.go b/internal/refactoring/move_statement.go index baaf9d519b85..a363602c3992 100644 --- a/internal/refactoring/move_statement.go +++ b/internal/refactoring/move_statement.go @@ -124,7 +124,7 @@ func impliedMoveStatements(cfg *configs.Config, prevRunState *states.State, expl fromKey = addrs.NoKey toKey = addrs.IntKey(0) } - default: + case rCfg.Count == nil && rCfg.ForEach == nil: // no repetition at all if riState := rState.Instances[addrs.IntKey(0)]; riState != nil { fromKey = addrs.IntKey(0) toKey = addrs.NoKey diff --git a/internal/refactoring/move_statement_test.go b/internal/refactoring/move_statement_test.go index 93164f94cc5e..9724b1fe1c27 100644 --- a/internal/refactoring/move_statement_test.go +++ b/internal/refactoring/move_statement_test.go @@ -58,6 +58,16 @@ func TestImpliedMoveStatements(t *testing.T) { instObjState(), providerAddr, ) + s.SetResourceInstanceCurrent( + resourceAddr("now_for_each_formerly_count").Instance(addrs.IntKey(0)), + instObjState(), + providerAddr, + ) + s.SetResourceInstanceCurrent( + resourceAddr("now_for_each_formerly_no_count").Instance(addrs.NoKey), + instObjState(), + providerAddr, + ) // This "ambiguous" resource is representing a rare but possible // situation where we end up having a mixture of different index @@ -113,8 +123,8 @@ func TestImpliedMoveStatements(t *testing.T) { Implied: true, DeclRange: tfdiags.SourceRange{ Filename: "testdata/move-statement-implied/move-statement-implied.tf", - Start: tfdiags.SourcePos{Line: 42, Column: 1, Byte: 709}, - End: tfdiags.SourcePos{Line: 42, Column: 27, Byte: 735}, + Start: tfdiags.SourcePos{Line: 50, Column: 1, Byte: 858}, + End: tfdiags.SourcePos{Line: 50, Column: 27, Byte: 884}, }, }, } diff --git a/internal/refactoring/testdata/move-statement-implied/move-statement-implied.tf b/internal/refactoring/testdata/move-statement-implied/move-statement-implied.tf index 1de3238bdb34..142ffe702ae3 100644 --- a/internal/refactoring/testdata/move-statement-implied/move-statement-implied.tf +++ b/internal/refactoring/testdata/move-statement-implied/move-statement-implied.tf @@ -39,6 +39,14 @@ moved { to = foo.now_count_explicit[1] } +resource "foo" "now_for_each_formerly_count" { + for_each = { a = 1 } +} + +resource "foo" "now_for_each_formerly_no_count" { + for_each = { a = 1 } +} + resource "foo" "ambiguous" { # this one doesn't have count in the config, but the test should # set it up to have both no-key and zero-key instances in the From a1a713cf281d9d07ef2dad96d625703051e3e1c6 Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Wed, 22 Sep 2021 17:58:41 -0700 Subject: [PATCH 098/644] core: Report ActionReasons when we plan to delete "orphans" There are a few different reasons why a resource instance tracked in the prior state might be considered an "orphan", but previously we reported them all identically in the planned changes. In order to help users understand the reason for a surprising planned delete, we'll now try to specify an additional reason for the planned deletion, covering all of the main reasons why that could happen. This commit only introduces the new detail to the plans.Changes result, though it also incidentally exposes it as part of the JSON plan result in order to keep that working without returning errors in these new cases. 
We'll expose this information in the human-oriented UI output in a subsequent commit. --- internal/command/jsonplan/plan.go | 10 ++ .../show-json/basic-delete/output.json | 1 + internal/plans/changes.go | 34 ++++ .../plans/internal/planproto/planfile.pb.go | 54 +++++-- .../plans/internal/planproto/planfile.proto | 5 + internal/plans/planfile/tfplan.go | 20 +++ ...sourceinstancechangeactionreason_string.go | 33 +++- internal/terraform/context_apply_test.go | 147 ++++++++++++++++++ internal/terraform/context_plan_test.go | 37 ++++- .../terraform/node_resource_plan_orphan.go | 112 +++++++++++++ website/docs/internals/json-format.html.md | 14 ++ 11 files changed, 445 insertions(+), 22 deletions(-) diff --git a/internal/command/jsonplan/plan.go b/internal/command/jsonplan/plan.go index b2bf2cceb80b..c0b726422871 100644 --- a/internal/command/jsonplan/plan.go +++ b/internal/command/jsonplan/plan.go @@ -342,6 +342,16 @@ func (p *plan) marshalResourceChanges(resources []*plans.ResourceInstanceChangeS r.ActionReason = "replace_because_tainted" case plans.ResourceInstanceReplaceByRequest: r.ActionReason = "replace_by_request" + case plans.ResourceInstanceDeleteBecauseNoResourceConfig: + r.ActionReason = "delete_because_no_resource_config" + case plans.ResourceInstanceDeleteBecauseWrongRepetition: + r.ActionReason = "delete_because_wrong_repetition" + case plans.ResourceInstanceDeleteBecauseCountIndex: + r.ActionReason = "delete_because_count_index" + case plans.ResourceInstanceDeleteBecauseEachKey: + r.ActionReason = "delete_because_each_key" + case plans.ResourceInstanceDeleteBecauseNoModule: + r.ActionReason = "delete_because_no_module" default: return nil, fmt.Errorf("resource %s has an unsupported action reason %s", r.Address, rc.ActionReason) } diff --git a/internal/command/testdata/show-json/basic-delete/output.json b/internal/command/testdata/show-json/basic-delete/output.json index 9ebea2058f78..ae6e67f760b3 100644 --- a/internal/command/testdata/show-json/basic-delete/output.json +++ b/internal/command/testdata/show-json/basic-delete/output.json @@ -60,6 +60,7 @@ "type": "test_instance", "provider_name": "registry.terraform.io/hashicorp/test", "name": "test-delete", + "action_reason": "delete_because_no_resource_config", "change": { "actions": [ "delete" diff --git a/internal/plans/changes.go b/internal/plans/changes.go index c9aa38fd3863..510eb2c7cfee 100644 --- a/internal/plans/changes.go +++ b/internal/plans/changes.go @@ -349,6 +349,40 @@ const ( // the ResourceInstanceChange object to give information about specifically // which arguments changed in a non-updatable way. ResourceInstanceReplaceBecauseCannotUpdate ResourceInstanceChangeActionReason = 'F' + + // ResourceInstanceDeleteBecauseNoResourceConfig indicates that the + // resource instance is planned to be deleted because there's no + // corresponding resource configuration block in the configuration. + ResourceInstanceDeleteBecauseNoResourceConfig ResourceInstanceChangeActionReason = 'N' + + // ResourceInstanceDeleteBecauseWrongRepetition indicates that the + // resource instance is planned to be deleted because the instance key + // type isn't consistent with the repetition mode selected in the + // resource configuration. + ResourceInstanceDeleteBecauseWrongRepetition ResourceInstanceChangeActionReason = 'W' + + // ResourceInstanceDeleteBecauseCountIndex indicates that the resource + // instance is planned to be deleted because its integer instance key + // is out of range for the current configured resource "count" value. 
+ ResourceInstanceDeleteBecauseCountIndex ResourceInstanceChangeActionReason = 'C' + + // ResourceInstanceDeleteBecauseEachKey indicates that the resource + // instance is planned to be deleted because its string instance key + // isn't one of the keys included in the current configured resource + // "for_each" value. + ResourceInstanceDeleteBecauseEachKey ResourceInstanceChangeActionReason = 'E' + + // ResourceInstanceDeleteBecauseNoModule indicates that the resource + // instance is planned to be deleted because it belongs to a module + // instance that's no longer declared in the configuration. + // + // This is less specific than the reasons we return for the various ways + // a resource instance itself can be no longer declared, including both + // the total removal of a module block and changes to its count/for_each + // arguments. This difference in detail is out of pragmatism, because + // potentially multiple nested modules could all contribute conflicting + // specific reasons for a particular instance to no longer be declared. + ResourceInstanceDeleteBecauseNoModule ResourceInstanceChangeActionReason = 'M' ) // OutputChange describes a change to an output value. diff --git a/internal/plans/internal/planproto/planfile.pb.go b/internal/plans/internal/planproto/planfile.pb.go index a612c03a3887..beb50852ab15 100644 --- a/internal/plans/internal/planproto/planfile.pb.go +++ b/internal/plans/internal/planproto/planfile.pb.go @@ -145,10 +145,15 @@ func (Action) EnumDescriptor() ([]byte, []int) { type ResourceInstanceActionReason int32 const ( - ResourceInstanceActionReason_NONE ResourceInstanceActionReason = 0 - ResourceInstanceActionReason_REPLACE_BECAUSE_TAINTED ResourceInstanceActionReason = 1 - ResourceInstanceActionReason_REPLACE_BY_REQUEST ResourceInstanceActionReason = 2 - ResourceInstanceActionReason_REPLACE_BECAUSE_CANNOT_UPDATE ResourceInstanceActionReason = 3 + ResourceInstanceActionReason_NONE ResourceInstanceActionReason = 0 + ResourceInstanceActionReason_REPLACE_BECAUSE_TAINTED ResourceInstanceActionReason = 1 + ResourceInstanceActionReason_REPLACE_BY_REQUEST ResourceInstanceActionReason = 2 + ResourceInstanceActionReason_REPLACE_BECAUSE_CANNOT_UPDATE ResourceInstanceActionReason = 3 + ResourceInstanceActionReason_DELETE_BECAUSE_NO_RESOURCE_CONFIG ResourceInstanceActionReason = 4 + ResourceInstanceActionReason_DELETE_BECAUSE_WRONG_REPETITION ResourceInstanceActionReason = 5 + ResourceInstanceActionReason_DELETE_BECAUSE_COUNT_INDEX ResourceInstanceActionReason = 6 + ResourceInstanceActionReason_DELETE_BECAUSE_EACH_KEY ResourceInstanceActionReason = 7 + ResourceInstanceActionReason_DELETE_BECAUSE_NO_MODULE ResourceInstanceActionReason = 8 ) // Enum value maps for ResourceInstanceActionReason. 
@@ -158,12 +163,22 @@ var ( 1: "REPLACE_BECAUSE_TAINTED", 2: "REPLACE_BY_REQUEST", 3: "REPLACE_BECAUSE_CANNOT_UPDATE", + 4: "DELETE_BECAUSE_NO_RESOURCE_CONFIG", + 5: "DELETE_BECAUSE_WRONG_REPETITION", + 6: "DELETE_BECAUSE_COUNT_INDEX", + 7: "DELETE_BECAUSE_EACH_KEY", + 8: "DELETE_BECAUSE_NO_MODULE", } ResourceInstanceActionReason_value = map[string]int32{ - "NONE": 0, - "REPLACE_BECAUSE_TAINTED": 1, - "REPLACE_BY_REQUEST": 2, - "REPLACE_BECAUSE_CANNOT_UPDATE": 3, + "NONE": 0, + "REPLACE_BECAUSE_TAINTED": 1, + "REPLACE_BY_REQUEST": 2, + "REPLACE_BECAUSE_CANNOT_UPDATE": 3, + "DELETE_BECAUSE_NO_RESOURCE_CONFIG": 4, + "DELETE_BECAUSE_WRONG_REPETITION": 5, + "DELETE_BECAUSE_COUNT_INDEX": 6, + "DELETE_BECAUSE_EACH_KEY": 7, + "DELETE_BECAUSE_NO_MODULE": 8, } ) @@ -1085,7 +1100,7 @@ var file_planfile_proto_rawDesc = []byte{ 0x12, 0x16, 0x0a, 0x12, 0x44, 0x45, 0x4c, 0x45, 0x54, 0x45, 0x5f, 0x54, 0x48, 0x45, 0x4e, 0x5f, 0x43, 0x52, 0x45, 0x41, 0x54, 0x45, 0x10, 0x06, 0x12, 0x16, 0x0a, 0x12, 0x43, 0x52, 0x45, 0x41, 0x54, 0x45, 0x5f, 0x54, 0x48, 0x45, 0x4e, 0x5f, 0x44, 0x45, 0x4c, 0x45, 0x54, 0x45, 0x10, 0x07, - 0x2a, 0x80, 0x01, 0x0a, 0x1c, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x49, 0x6e, 0x73, + 0x2a, 0xa7, 0x02, 0x0a, 0x1c, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x49, 0x6e, 0x73, 0x74, 0x61, 0x6e, 0x63, 0x65, 0x41, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x65, 0x61, 0x73, 0x6f, 0x6e, 0x12, 0x08, 0x0a, 0x04, 0x4e, 0x4f, 0x4e, 0x45, 0x10, 0x00, 0x12, 0x1b, 0x0a, 0x17, 0x52, 0x45, 0x50, 0x4c, 0x41, 0x43, 0x45, 0x5f, 0x42, 0x45, 0x43, 0x41, 0x55, 0x53, 0x45, 0x5f, 0x54, @@ -1093,11 +1108,22 @@ var file_planfile_proto_rawDesc = []byte{ 0x41, 0x43, 0x45, 0x5f, 0x42, 0x59, 0x5f, 0x52, 0x45, 0x51, 0x55, 0x45, 0x53, 0x54, 0x10, 0x02, 0x12, 0x21, 0x0a, 0x1d, 0x52, 0x45, 0x50, 0x4c, 0x41, 0x43, 0x45, 0x5f, 0x42, 0x45, 0x43, 0x41, 0x55, 0x53, 0x45, 0x5f, 0x43, 0x41, 0x4e, 0x4e, 0x4f, 0x54, 0x5f, 0x55, 0x50, 0x44, 0x41, 0x54, - 0x45, 0x10, 0x03, 0x42, 0x42, 0x5a, 0x40, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, - 0x6d, 0x2f, 0x68, 0x61, 0x73, 0x68, 0x69, 0x63, 0x6f, 0x72, 0x70, 0x2f, 0x74, 0x65, 0x72, 0x72, - 0x61, 0x66, 0x6f, 0x72, 0x6d, 0x2f, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x2f, 0x70, - 0x6c, 0x61, 0x6e, 0x73, 0x2f, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x2f, 0x70, 0x6c, - 0x61, 0x6e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33, + 0x45, 0x10, 0x03, 0x12, 0x25, 0x0a, 0x21, 0x44, 0x45, 0x4c, 0x45, 0x54, 0x45, 0x5f, 0x42, 0x45, + 0x43, 0x41, 0x55, 0x53, 0x45, 0x5f, 0x4e, 0x4f, 0x5f, 0x52, 0x45, 0x53, 0x4f, 0x55, 0x52, 0x43, + 0x45, 0x5f, 0x43, 0x4f, 0x4e, 0x46, 0x49, 0x47, 0x10, 0x04, 0x12, 0x23, 0x0a, 0x1f, 0x44, 0x45, + 0x4c, 0x45, 0x54, 0x45, 0x5f, 0x42, 0x45, 0x43, 0x41, 0x55, 0x53, 0x45, 0x5f, 0x57, 0x52, 0x4f, + 0x4e, 0x47, 0x5f, 0x52, 0x45, 0x50, 0x45, 0x54, 0x49, 0x54, 0x49, 0x4f, 0x4e, 0x10, 0x05, 0x12, + 0x1e, 0x0a, 0x1a, 0x44, 0x45, 0x4c, 0x45, 0x54, 0x45, 0x5f, 0x42, 0x45, 0x43, 0x41, 0x55, 0x53, + 0x45, 0x5f, 0x43, 0x4f, 0x55, 0x4e, 0x54, 0x5f, 0x49, 0x4e, 0x44, 0x45, 0x58, 0x10, 0x06, 0x12, + 0x1b, 0x0a, 0x17, 0x44, 0x45, 0x4c, 0x45, 0x54, 0x45, 0x5f, 0x42, 0x45, 0x43, 0x41, 0x55, 0x53, + 0x45, 0x5f, 0x45, 0x41, 0x43, 0x48, 0x5f, 0x4b, 0x45, 0x59, 0x10, 0x07, 0x12, 0x1c, 0x0a, 0x18, + 0x44, 0x45, 0x4c, 0x45, 0x54, 0x45, 0x5f, 0x42, 0x45, 0x43, 0x41, 0x55, 0x53, 0x45, 0x5f, 0x4e, + 0x4f, 0x5f, 0x4d, 0x4f, 0x44, 0x55, 0x4c, 0x45, 0x10, 0x08, 0x42, 0x42, 0x5a, 0x40, 0x67, 0x69, + 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 
0x6f, 0x6d, 0x2f, 0x68, 0x61, 0x73, 0x68, 0x69, 0x63, 0x6f, + 0x72, 0x70, 0x2f, 0x74, 0x65, 0x72, 0x72, 0x61, 0x66, 0x6f, 0x72, 0x6d, 0x2f, 0x69, 0x6e, 0x74, + 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x2f, 0x70, 0x6c, 0x61, 0x6e, 0x73, 0x2f, 0x69, 0x6e, 0x74, 0x65, + 0x72, 0x6e, 0x61, 0x6c, 0x2f, 0x70, 0x6c, 0x61, 0x6e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x06, + 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33, } var ( diff --git a/internal/plans/internal/planproto/planfile.proto b/internal/plans/internal/planproto/planfile.proto index d1427bfbea47..1bbe7425e502 100644 --- a/internal/plans/internal/planproto/planfile.proto +++ b/internal/plans/internal/planproto/planfile.proto @@ -131,6 +131,11 @@ enum ResourceInstanceActionReason { REPLACE_BECAUSE_TAINTED = 1; REPLACE_BY_REQUEST = 2; REPLACE_BECAUSE_CANNOT_UPDATE = 3; + DELETE_BECAUSE_NO_RESOURCE_CONFIG = 4; + DELETE_BECAUSE_WRONG_REPETITION = 5; + DELETE_BECAUSE_COUNT_INDEX = 6; + DELETE_BECAUSE_EACH_KEY = 7; + DELETE_BECAUSE_NO_MODULE = 8; } message ResourceInstanceChange { diff --git a/internal/plans/planfile/tfplan.go b/internal/plans/planfile/tfplan.go index 8cfd3694fb4f..87d21822efc8 100644 --- a/internal/plans/planfile/tfplan.go +++ b/internal/plans/planfile/tfplan.go @@ -228,6 +228,16 @@ func resourceChangeFromTfplan(rawChange *planproto.ResourceInstanceChange) (*pla ret.ActionReason = plans.ResourceInstanceReplaceBecauseTainted case planproto.ResourceInstanceActionReason_REPLACE_BY_REQUEST: ret.ActionReason = plans.ResourceInstanceReplaceByRequest + case planproto.ResourceInstanceActionReason_DELETE_BECAUSE_NO_RESOURCE_CONFIG: + ret.ActionReason = plans.ResourceInstanceDeleteBecauseNoResourceConfig + case planproto.ResourceInstanceActionReason_DELETE_BECAUSE_WRONG_REPETITION: + ret.ActionReason = plans.ResourceInstanceDeleteBecauseWrongRepetition + case planproto.ResourceInstanceActionReason_DELETE_BECAUSE_COUNT_INDEX: + ret.ActionReason = plans.ResourceInstanceDeleteBecauseCountIndex + case planproto.ResourceInstanceActionReason_DELETE_BECAUSE_EACH_KEY: + ret.ActionReason = plans.ResourceInstanceDeleteBecauseEachKey + case planproto.ResourceInstanceActionReason_DELETE_BECAUSE_NO_MODULE: + ret.ActionReason = plans.ResourceInstanceDeleteBecauseNoModule default: return nil, fmt.Errorf("resource has invalid action reason %s", rawChange.ActionReason) } @@ -499,6 +509,16 @@ func resourceChangeToTfplan(change *plans.ResourceInstanceChangeSrc) (*planproto ret.ActionReason = planproto.ResourceInstanceActionReason_REPLACE_BECAUSE_TAINTED case plans.ResourceInstanceReplaceByRequest: ret.ActionReason = planproto.ResourceInstanceActionReason_REPLACE_BY_REQUEST + case plans.ResourceInstanceDeleteBecauseNoResourceConfig: + ret.ActionReason = planproto.ResourceInstanceActionReason_DELETE_BECAUSE_NO_RESOURCE_CONFIG + case plans.ResourceInstanceDeleteBecauseWrongRepetition: + ret.ActionReason = planproto.ResourceInstanceActionReason_DELETE_BECAUSE_WRONG_REPETITION + case plans.ResourceInstanceDeleteBecauseCountIndex: + ret.ActionReason = planproto.ResourceInstanceActionReason_DELETE_BECAUSE_COUNT_INDEX + case plans.ResourceInstanceDeleteBecauseEachKey: + ret.ActionReason = planproto.ResourceInstanceActionReason_DELETE_BECAUSE_EACH_KEY + case plans.ResourceInstanceDeleteBecauseNoModule: + ret.ActionReason = planproto.ResourceInstanceActionReason_DELETE_BECAUSE_NO_MODULE default: return nil, fmt.Errorf("resource %s has unsupported action reason %s", change.Addr, change.ActionReason) } diff --git a/internal/plans/resourceinstancechangeactionreason_string.go 
b/internal/plans/resourceinstancechangeactionreason_string.go index 0731f67591a9..135e6d2c638d 100644 --- a/internal/plans/resourceinstancechangeactionreason_string.go +++ b/internal/plans/resourceinstancechangeactionreason_string.go @@ -12,25 +12,46 @@ func _() { _ = x[ResourceInstanceReplaceBecauseTainted-84] _ = x[ResourceInstanceReplaceByRequest-82] _ = x[ResourceInstanceReplaceBecauseCannotUpdate-70] + _ = x[ResourceInstanceDeleteBecauseNoResourceConfig-78] + _ = x[ResourceInstanceDeleteBecauseWrongRepetition-87] + _ = x[ResourceInstanceDeleteBecauseCountIndex-67] + _ = x[ResourceInstanceDeleteBecauseEachKey-69] + _ = x[ResourceInstanceDeleteBecauseNoModule-77] } const ( _ResourceInstanceChangeActionReason_name_0 = "ResourceInstanceChangeNoReason" - _ResourceInstanceChangeActionReason_name_1 = "ResourceInstanceReplaceBecauseCannotUpdate" - _ResourceInstanceChangeActionReason_name_2 = "ResourceInstanceReplaceByRequest" - _ResourceInstanceChangeActionReason_name_3 = "ResourceInstanceReplaceBecauseTainted" + _ResourceInstanceChangeActionReason_name_1 = "ResourceInstanceDeleteBecauseCountIndex" + _ResourceInstanceChangeActionReason_name_2 = "ResourceInstanceDeleteBecauseEachKeyResourceInstanceReplaceBecauseCannotUpdate" + _ResourceInstanceChangeActionReason_name_3 = "ResourceInstanceDeleteBecauseNoModuleResourceInstanceDeleteBecauseNoResourceConfig" + _ResourceInstanceChangeActionReason_name_4 = "ResourceInstanceReplaceByRequest" + _ResourceInstanceChangeActionReason_name_5 = "ResourceInstanceReplaceBecauseTainted" + _ResourceInstanceChangeActionReason_name_6 = "ResourceInstanceDeleteBecauseWrongRepetition" +) + +var ( + _ResourceInstanceChangeActionReason_index_2 = [...]uint8{0, 36, 78} + _ResourceInstanceChangeActionReason_index_3 = [...]uint8{0, 37, 82} ) func (i ResourceInstanceChangeActionReason) String() string { switch { case i == 0: return _ResourceInstanceChangeActionReason_name_0 - case i == 70: + case i == 67: return _ResourceInstanceChangeActionReason_name_1 + case 69 <= i && i <= 70: + i -= 69 + return _ResourceInstanceChangeActionReason_name_2[_ResourceInstanceChangeActionReason_index_2[i]:_ResourceInstanceChangeActionReason_index_2[i+1]] + case 77 <= i && i <= 78: + i -= 77 + return _ResourceInstanceChangeActionReason_name_3[_ResourceInstanceChangeActionReason_index_3[i]:_ResourceInstanceChangeActionReason_index_3[i+1]] case i == 82: - return _ResourceInstanceChangeActionReason_name_2 + return _ResourceInstanceChangeActionReason_name_4 case i == 84: - return _ResourceInstanceChangeActionReason_name_3 + return _ResourceInstanceChangeActionReason_name_5 + case i == 87: + return _ResourceInstanceChangeActionReason_name_6 default: return "ResourceInstanceChangeActionReason(" + strconv.FormatInt(int64(i), 10) + ")" } diff --git a/internal/terraform/context_apply_test.go b/internal/terraform/context_apply_test.go index babf067a5e69..a525ab52d19c 100644 --- a/internal/terraform/context_apply_test.go +++ b/internal/terraform/context_apply_test.go @@ -2127,6 +2127,30 @@ func TestContext2Apply_countDecreaseToOneCorrupted(t *testing.T) { t.Fatalf("wrong plan result\ngot:\n%s\nwant:\n%s", got, want) } } + { + change := plan.Changes.ResourceInstance(mustResourceInstanceAddr("aws_instance.foo[0]")) + if change == nil { + t.Fatalf("no planned change for instance zero") + } + if got, want := change.Action, plans.Delete; got != want { + t.Errorf("wrong action for instance zero %s; want %s", got, want) + } + if got, want := change.ActionReason, 
plans.ResourceInstanceDeleteBecauseWrongRepetition; got != want { + t.Errorf("wrong action reason for instance zero %s; want %s", got, want) + } + } + { + change := plan.Changes.ResourceInstance(mustResourceInstanceAddr("aws_instance.foo")) + if change == nil { + t.Fatalf("no planned change for no-key instance") + } + if got, want := change.Action, plans.NoOp; got != want { + t.Errorf("wrong action for no-key instance %s; want %s", got, want) + } + if got, want := change.ActionReason, plans.ResourceInstanceChangeNoReason; got != want { + t.Errorf("wrong action reason for no-key instance %s; want %s", got, want) + } + } s, diags := ctx.Apply(plan, m) if diags.HasErrors() { @@ -2562,6 +2586,20 @@ func TestContext2Apply_orphanResource(t *testing.T) { }) plan, diags = ctx.Plan(m, state, DefaultPlanOpts) assertNoErrors(t, diags) + { + addr := mustResourceInstanceAddr("test_thing.one[0]") + change := plan.Changes.ResourceInstance(addr) + if change == nil { + t.Fatalf("no planned change for %s", addr) + } + if got, want := change.Action, plans.Delete; got != want { + t.Errorf("wrong action for %s %s; want %s", addr, got, want) + } + if got, want := change.ActionReason, plans.ResourceInstanceDeleteBecauseNoResourceConfig; got != want { + t.Errorf("wrong action for %s %s; want %s", addr, got, want) + } + } + state, diags = ctx.Apply(plan, m) assertNoErrors(t, diags) @@ -2613,6 +2651,22 @@ func TestContext2Apply_moduleOrphanInheritAlias(t *testing.T) { plan, diags := ctx.Plan(m, state, DefaultPlanOpts) assertNoErrors(t, diags) + { + addr := mustResourceInstanceAddr("module.child.aws_instance.bar") + change := plan.Changes.ResourceInstance(addr) + if change == nil { + t.Fatalf("no planned change for %s", addr) + } + if got, want := change.Action, plans.Delete; got != want { + t.Errorf("wrong action for %s %s; want %s", addr, got, want) + } + // This should ideally be ResourceInstanceDeleteBecauseNoModule, but + // the codepath deciding this doesn't currently have enough information + // to differentiate, and so this is a compromise. + if got, want := change.ActionReason, plans.ResourceInstanceDeleteBecauseNoResourceConfig; got != want { + t.Errorf("wrong action for %s %s; want %s", addr, got, want) + } + } state, diags = ctx.Apply(plan, m) if diags.HasErrors() { @@ -8898,6 +8952,43 @@ func TestContext2Apply_scaleInMultivarRef(t *testing.T) { }, }) assertNoErrors(t, diags) + { + addr := mustResourceInstanceAddr("aws_instance.one[0]") + change := plan.Changes.ResourceInstance(addr) + if change == nil { + t.Fatalf("no planned change for %s", addr) + } + // This test was originally written with Terraform v0.11 and earlier + // in mind, so it declares a no-key instance of aws_instance.one, + // but its configuration sets count (to zero) and so we end up first + // moving the no-key instance to the zero key and then planning to + // destroy the zero key. 
+ if got, want := change.PrevRunAddr, mustResourceInstanceAddr("aws_instance.one"); !want.Equal(got) { + t.Errorf("wrong previous run address for %s %s; want %s", addr, got, want) + } + if got, want := change.Action, plans.Delete; got != want { + t.Errorf("wrong action for %s %s; want %s", addr, got, want) + } + if got, want := change.ActionReason, plans.ResourceInstanceDeleteBecauseCountIndex; got != want { + t.Errorf("wrong action reason for %s %s; want %s", addr, got, want) + } + } + { + addr := mustResourceInstanceAddr("aws_instance.two") + change := plan.Changes.ResourceInstance(addr) + if change == nil { + t.Fatalf("no planned change for %s", addr) + } + if got, want := change.PrevRunAddr, mustResourceInstanceAddr("aws_instance.two"); !want.Equal(got) { + t.Errorf("wrong previous run address for %s %s; want %s", addr, got, want) + } + if got, want := change.Action, plans.Update; got != want { + t.Errorf("wrong action for %s %s; want %s", addr, got, want) + } + if got, want := change.ActionReason, plans.ResourceInstanceChangeNoReason; got != want { + t.Errorf("wrong action reason for %s %s; want %s", addr, got, want) + } + } // Applying the plan should now succeed _, diags = ctx.Apply(plan, m) @@ -10960,6 +11051,38 @@ locals { if diags.HasErrors() { t.Fatal(diags.ErrWithWarnings()) } + { + addr := mustResourceInstanceAddr("test_instance.a[0]") + change := plan.Changes.ResourceInstance(addr) + if change == nil { + t.Fatalf("no planned change for %s", addr) + } + if got, want := change.PrevRunAddr, mustResourceInstanceAddr("test_instance.a[0]"); !want.Equal(got) { + t.Errorf("wrong previous run address for %s %s; want %s", addr, got, want) + } + if got, want := change.Action, plans.NoOp; got != want { + t.Errorf("wrong action for %s %s; want %s", addr, got, want) + } + if got, want := change.ActionReason, plans.ResourceInstanceChangeNoReason; got != want { + t.Errorf("wrong action reason for %s %s; want %s", addr, got, want) + } + } + { + addr := mustResourceInstanceAddr("test_instance.a[1]") + change := plan.Changes.ResourceInstance(addr) + if change == nil { + t.Fatalf("no planned change for %s", addr) + } + if got, want := change.PrevRunAddr, mustResourceInstanceAddr("test_instance.a[1]"); !want.Equal(got) { + t.Errorf("wrong previous run address for %s %s; want %s", addr, got, want) + } + if got, want := change.Action, plans.Delete; got != want { + t.Errorf("wrong action for %s %s; want %s", addr, got, want) + } + if got, want := change.ActionReason, plans.ResourceInstanceDeleteBecauseCountIndex; got != want { + t.Errorf("wrong action reason for %s %s; want %s", addr, got, want) + } + } state, diags = ctx.Apply(plan, m) if diags.HasErrors() { @@ -10991,6 +11114,30 @@ locals { if diags.HasErrors() { t.Fatal(diags.ErrWithWarnings()) } + { + addr := mustResourceInstanceAddr("test_instance.a[0]") + change := plan.Changes.ResourceInstance(addr) + if change == nil { + t.Fatalf("no planned change for %s", addr) + } + if got, want := change.PrevRunAddr, mustResourceInstanceAddr("test_instance.a[0]"); !want.Equal(got) { + t.Errorf("wrong previous run address for %s %s; want %s", addr, got, want) + } + if got, want := change.Action, plans.Delete; got != want { + t.Errorf("wrong action for %s %s; want %s", addr, got, want) + } + if got, want := change.ActionReason, plans.ResourceInstanceDeleteBecauseCountIndex; got != want { + t.Errorf("wrong action reason for %s %s; want %s", addr, got, want) + } + } + { + addr := mustResourceInstanceAddr("test_instance.a[1]") + change := 
plan.Changes.ResourceInstance(addr) + if change != nil { + // It was already removed in the previous plan/apply + t.Errorf("unexpected planned change for %s", addr) + } + } state, diags = ctx.Apply(plan, m) if diags.HasErrors() { diff --git a/internal/terraform/context_plan_test.go b/internal/terraform/context_plan_test.go index 475bcb661f13..9cb4c8925573 100644 --- a/internal/terraform/context_plan_test.go +++ b/internal/terraform/context_plan_test.go @@ -3559,7 +3559,7 @@ func TestContext2Plan_orphan(t *testing.T) { if res.Action != plans.Delete { t.Fatalf("resource %s should be removed", i) } - if got, want := ric.ActionReason, plans.ResourceInstanceChangeNoReason; got != want { + if got, want := ric.ActionReason, plans.ResourceInstanceDeleteBecauseNoResourceConfig; got != want { t.Errorf("wrong action reason\ngot: %s\nwant: %s", got, want) } case "aws_instance.foo": @@ -6138,8 +6138,41 @@ resource "test_instance" "b" { }, }) - _, diags := ctx.Plan(m, state, DefaultPlanOpts) + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) assertNoErrors(t, diags) + + t.Run("test_instance.a[0]", func(t *testing.T) { + instAddr := mustResourceInstanceAddr("test_instance.a[0]") + change := plan.Changes.ResourceInstance(instAddr) + if change == nil { + t.Fatalf("no planned change for %s", instAddr) + } + if got, want := change.PrevRunAddr, instAddr; !want.Equal(got) { + t.Errorf("wrong previous run address for %s %s; want %s", instAddr, got, want) + } + if got, want := change.Action, plans.Delete; got != want { + t.Errorf("wrong action for %s %s; want %s", instAddr, got, want) + } + if got, want := change.ActionReason, plans.ResourceInstanceDeleteBecauseWrongRepetition; got != want { + t.Errorf("wrong action reason for %s %s; want %s", instAddr, got, want) + } + }) + t.Run("test_instance.b", func(t *testing.T) { + instAddr := mustResourceInstanceAddr("test_instance.b") + change := plan.Changes.ResourceInstance(instAddr) + if change == nil { + t.Fatalf("no planned change for %s", instAddr) + } + if got, want := change.PrevRunAddr, instAddr; !want.Equal(got) { + t.Errorf("wrong previous run address for %s %s; want %s", instAddr, got, want) + } + if got, want := change.Action, plans.Delete; got != want { + t.Errorf("wrong action for %s %s; want %s", instAddr, got, want) + } + if got, want := change.ActionReason, plans.ResourceInstanceDeleteBecauseWrongRepetition; got != want { + t.Errorf("wrong action reason for %s %s; want %s", instAddr, got, want) + } + }) } func TestContext2Plan_targetedModuleInstance(t *testing.T) { diff --git a/internal/terraform/node_resource_plan_orphan.go b/internal/terraform/node_resource_plan_orphan.go index 2c28d88a33dd..7f9a683f1ad9 100644 --- a/internal/terraform/node_resource_plan_orphan.go +++ b/internal/terraform/node_resource_plan_orphan.go @@ -134,6 +134,11 @@ func (n *NodePlannableResourceInstanceOrphan) managedResourceExecute(ctx EvalCon return diags } + // We might be able to offer an approximate reason for why we are + // planning to delete this object. (This is best-effort; we might + // sometimes not have a reason.) 
+ change.ActionReason = n.deleteActionReason(ctx) + diags = diags.Append(n.writeChange(ctx, change, "")) if diags.HasErrors() { return diags @@ -148,3 +153,110 @@ func (n *NodePlannableResourceInstanceOrphan) managedResourceExecute(ctx EvalCon return diags } + +func (n *NodePlannableResourceInstanceOrphan) deleteActionReason(ctx EvalContext) plans.ResourceInstanceChangeActionReason { + cfg := n.Config + if cfg == nil { + // NOTE: We'd ideally detect if the containing module is what's missing + // and then use ResourceInstanceDeleteBecauseNoModule for that case, + // but we don't currently have access to the full configuration here, + // so we need to be less specific. + return plans.ResourceInstanceDeleteBecauseNoResourceConfig + } + + switch n.Addr.Resource.Key.(type) { + case nil: // no instance key at all + if cfg.Count != nil || cfg.ForEach != nil { + return plans.ResourceInstanceDeleteBecauseWrongRepetition + } + case addrs.IntKey: + if cfg.Count == nil { + // This resource isn't using "count" at all, then + return plans.ResourceInstanceDeleteBecauseWrongRepetition + } + + expander := ctx.InstanceExpander() + if expander == nil { + break // only for tests that produce an incomplete MockEvalContext + } + insts := expander.ExpandResource(n.Addr.ContainingResource()) + + declared := false + for _, inst := range insts { + if n.Addr.Equal(inst) { + declared = true + } + } + if !declared { + // This instance key is outside of the configured range + return plans.ResourceInstanceDeleteBecauseCountIndex + } + case addrs.StringKey: + if cfg.ForEach == nil { + // This resource isn't using "for_each" at all, then + return plans.ResourceInstanceDeleteBecauseWrongRepetition + } + + expander := ctx.InstanceExpander() + if expander == nil { + break // only for tests that produce an incomplete MockEvalContext + } + insts := expander.ExpandResource(n.Addr.ContainingResource()) + + declared := false + for _, inst := range insts { + if n.Addr.Equal(inst) { + declared = true + } + } + if !declared { + // This instance key is outside of the configured range + return plans.ResourceInstanceDeleteBecauseEachKey + } + } + + // If we get here then the instance key type matches the configured + // repetition mode, and so we need to consider whether the key itself + // is within the range of the repetition construct. + if expander := ctx.InstanceExpander(); expander != nil { // (sometimes nil in MockEvalContext in tests) + // First we'll check whether our containing module instance still + // exists, so we can talk about that differently in the reason. + declared := false + for _, inst := range expander.ExpandModule(n.Addr.Module.Module()) { + if n.Addr.Module.Equal(inst) { + declared = true + break + } + } + if !declared { + return plans.ResourceInstanceDeleteBecauseNoModule + } + + // Now we've proven that we're in a still-existing module instance, + // we'll see if our instance key matches something actually declared. + declared = false + for _, inst := range expander.ExpandResource(n.Addr.ContainingResource()) { + if n.Addr.Equal(inst) { + declared = true + break + } + } + if !declared { + // Because we already checked that the key _type_ was correct + // above, we can assume that any mismatch here is a range error, + // and thus we just need to decide which of the two range + // errors we're going to return. 
+ switch n.Addr.Resource.Key.(type) { + case addrs.IntKey: + return plans.ResourceInstanceDeleteBecauseCountIndex + case addrs.StringKey: + return plans.ResourceInstanceDeleteBecauseEachKey + } + } + } + + // If we didn't find any specific reason to report, we'll report "no reason" + // as a fallback, which means the UI should just state it'll be deleted + // without any explicit reasoning. + return plans.ResourceInstanceChangeNoReason +} diff --git a/website/docs/internals/json-format.html.md b/website/docs/internals/json-format.html.md index ffe54c2cc8b6..aa4df209af18 100644 --- a/website/docs/internals/json-format.html.md +++ b/website/docs/internals/json-format.html.md @@ -149,6 +149,20 @@ For ease of consumption by callers, the plan representation includes a partial r // - "replace_by_request": the user explicitly called for this object // to be replaced as an option when creating the plan, which therefore // overrode what would have been a "no-op" or "update" action otherwise. + // - "delete_because_no_resource_config": Terraform found no resource + // configuration corresponding to this instance. + // - "delete_because_no_module": The resource instance belongs to a + // module instance that's no longer declared, perhaps due to changing + // the "count" or "for_each" argument on one of the containing modules. + // - "delete_because_wrong_repetition": The instance key portion of the + // resource address isn't of a suitable type for the corresponding + // resource's configured repetition mode (count, for_each, or neither). + // - "delete_because_count_index": The corresponding resource uses count, + // but the instance key is out of range for the currently-configured + // count value. + // - "delete_because_each_key": The corresponding resource uses for_each, + // but the instance key doesn't match any of the keys in the + // currently-configured for_each value. // // If there is no special reason to note, Terraform will omit this // property altogether. From 04f9e7148cd5cc8fa74ccd8af0c8dd2472d3a7f2 Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Wed, 22 Sep 2021 18:23:35 -0700 Subject: [PATCH 099/644] command/format: Include deletion reasons in plan report The core runtime is now able to specify a reason for some situations when Terraform plans to delete a resource instance. This commit makes that information visible in the human-oriented UI. A previous commit already made the underlying data informing these new hints visible as part of the machine-oriented (JSON) plan output. This also removes the bold formatting from the existing "has moved to" hints, because subjectively it seemed like the result was emphasizing too many parts of the output and thus somewhat defeating the benefit of the emphasis in trying to create additional visual hierarchy for sighted users running Terraform in a terminal. Now only the first line containing the main action statement will be in bold, and all of the parenthesized follow-up notes will be unformatted. 
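For illustration, the new hint renders as an extra comment line directly beneath
the main action statement. One of the expected outputs added to the format tests
in this change, for an instance whose index is out of range for count, reads:

    # test_instance.example[1] will be destroyed
    # (because index [1] is out of range for count)
    - resource "test_instance" "example" {}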
--- internal/command/format/diff.go | 27 +++- internal/command/format/diff_test.go | 229 ++++++++++++++++++++++++++- 2 files changed, 253 insertions(+), 3 deletions(-) diff --git a/internal/command/format/diff.go b/internal/command/format/diff.go index c70918af4ac2..ec777b5852fa 100644 --- a/internal/command/format/diff.go +++ b/internal/command/format/diff.go @@ -98,6 +98,31 @@ func ResourceChange( default: buf.WriteString(fmt.Sprintf(color.Color("[bold] # %s[reset] delete (unknown reason %s)"), dispAddr, language)) } + // We can sometimes give some additional detail about why we're + // proposing to delete. We show this as additional notes, rather than + // as additional wording in the main action statement, in an attempt + // to make the "will be destroyed" message prominent and consistent + // in all cases, for easier scanning of this often-risky action. + switch change.ActionReason { + case plans.ResourceInstanceDeleteBecauseNoResourceConfig: + buf.WriteString(fmt.Sprintf("\n # (because %s is not in configuration)", addr.Resource.Resource)) + case plans.ResourceInstanceDeleteBecauseNoModule: + buf.WriteString(fmt.Sprintf("\n # (because %s is not in configuration)", addr.Module)) + case plans.ResourceInstanceDeleteBecauseWrongRepetition: + // We have some different variations of this one + switch addr.Resource.Key.(type) { + case nil: + buf.WriteString("\n # (because resource uses count or for_each)") + case addrs.IntKey: + buf.WriteString("\n # (because resource does not use count)") + case addrs.StringKey: + buf.WriteString("\n # (because resource does not use for_each)") + } + case plans.ResourceInstanceDeleteBecauseCountIndex: + buf.WriteString(fmt.Sprintf("\n # (because index %s is out of range for count)", addr.Resource.Key)) + case plans.ResourceInstanceDeleteBecauseEachKey: + buf.WriteString(fmt.Sprintf("\n # (because key %s is not in for_each map)", addr.Resource.Key)) + } if change.DeposedKey != states.NotDeposed { // Some extra context about this unusual situation. 
buf.WriteString(color.Color("\n # (left over from a partially-failed replacement of this instance)")) @@ -115,7 +140,7 @@ func ResourceChange( buf.WriteString(color.Color("[reset]\n")) if change.Moved() && change.Action != plans.NoOp { - buf.WriteString(fmt.Sprintf(color.Color("[bold] # [reset]([bold]%s[reset] has moved to [bold]%s[reset])\n"), change.PrevRunAddr.String(), dispAddr)) + buf.WriteString(fmt.Sprintf(color.Color(" # [reset](moved from %s)\n"), change.PrevRunAddr.String())) } if change.Moved() && change.Action == plans.NoOp { diff --git a/internal/command/format/diff_test.go b/internal/command/format/diff_test.go index b59a3af56412..6a720210ed7b 100644 --- a/internal/command/format/diff_test.go +++ b/internal/command/format/diff_test.go @@ -3504,6 +3504,229 @@ func TestResourceChange_nestedMap(t *testing.T) { runTestCases(t, testCases) } +func TestResourceChange_actionReason(t *testing.T) { + emptySchema := &configschema.Block{} + nullVal := cty.NullVal(cty.EmptyObject) + emptyVal := cty.EmptyObjectVal + + testCases := map[string]testCase{ + "delete for no particular reason": { + Action: plans.Delete, + ActionReason: plans.ResourceInstanceChangeNoReason, + Mode: addrs.ManagedResourceMode, + Before: emptyVal, + After: nullVal, + Schema: emptySchema, + RequiredReplace: cty.NewPathSet(), + ExpectedOutput: ` # test_instance.example will be destroyed + - resource "test_instance" "example" {} +`, + }, + "delete because of wrong repetition mode (NoKey)": { + Action: plans.Delete, + ActionReason: plans.ResourceInstanceDeleteBecauseWrongRepetition, + Mode: addrs.ManagedResourceMode, + InstanceKey: addrs.NoKey, + Before: emptyVal, + After: nullVal, + Schema: emptySchema, + RequiredReplace: cty.NewPathSet(), + ExpectedOutput: ` # test_instance.example will be destroyed + # (because resource uses count or for_each) + - resource "test_instance" "example" {} +`, + }, + "delete because of wrong repetition mode (IntKey)": { + Action: plans.Delete, + ActionReason: plans.ResourceInstanceDeleteBecauseWrongRepetition, + Mode: addrs.ManagedResourceMode, + InstanceKey: addrs.IntKey(1), + Before: emptyVal, + After: nullVal, + Schema: emptySchema, + RequiredReplace: cty.NewPathSet(), + ExpectedOutput: ` # test_instance.example[1] will be destroyed + # (because resource does not use count) + - resource "test_instance" "example" {} +`, + }, + "delete because of wrong repetition mode (StringKey)": { + Action: plans.Delete, + ActionReason: plans.ResourceInstanceDeleteBecauseWrongRepetition, + Mode: addrs.ManagedResourceMode, + InstanceKey: addrs.StringKey("a"), + Before: emptyVal, + After: nullVal, + Schema: emptySchema, + RequiredReplace: cty.NewPathSet(), + ExpectedOutput: ` # test_instance.example["a"] will be destroyed + # (because resource does not use for_each) + - resource "test_instance" "example" {} +`, + }, + "delete because no resource configuration": { + Action: plans.Delete, + ActionReason: plans.ResourceInstanceDeleteBecauseNoResourceConfig, + ModuleInst: addrs.RootModuleInstance.Child("foo", addrs.NoKey), + Mode: addrs.ManagedResourceMode, + Before: emptyVal, + After: nullVal, + Schema: emptySchema, + RequiredReplace: cty.NewPathSet(), + ExpectedOutput: ` # module.foo.test_instance.example will be destroyed + # (because test_instance.example is not in configuration) + - resource "test_instance" "example" {} +`, + }, + "delete because no module": { + Action: plans.Delete, + ActionReason: plans.ResourceInstanceDeleteBecauseNoModule, + ModuleInst: addrs.RootModuleInstance.Child("foo", 
addrs.IntKey(1)), + Mode: addrs.ManagedResourceMode, + Before: emptyVal, + After: nullVal, + Schema: emptySchema, + RequiredReplace: cty.NewPathSet(), + ExpectedOutput: ` # module.foo[1].test_instance.example will be destroyed + # (because module.foo[1] is not in configuration) + - resource "test_instance" "example" {} +`, + }, + "delete because out of range for count": { + Action: plans.Delete, + ActionReason: plans.ResourceInstanceDeleteBecauseCountIndex, + Mode: addrs.ManagedResourceMode, + InstanceKey: addrs.IntKey(1), + Before: emptyVal, + After: nullVal, + Schema: emptySchema, + RequiredReplace: cty.NewPathSet(), + ExpectedOutput: ` # test_instance.example[1] will be destroyed + # (because index [1] is out of range for count) + - resource "test_instance" "example" {} +`, + }, + "delete because out of range for for_each": { + Action: plans.Delete, + ActionReason: plans.ResourceInstanceDeleteBecauseEachKey, + Mode: addrs.ManagedResourceMode, + InstanceKey: addrs.StringKey("boop"), + Before: emptyVal, + After: nullVal, + Schema: emptySchema, + RequiredReplace: cty.NewPathSet(), + ExpectedOutput: ` # test_instance.example["boop"] will be destroyed + # (because key ["boop"] is not in for_each map) + - resource "test_instance" "example" {} +`, + }, + "replace for no particular reason (delete first)": { + Action: plans.DeleteThenCreate, + ActionReason: plans.ResourceInstanceChangeNoReason, + Mode: addrs.ManagedResourceMode, + Before: emptyVal, + After: nullVal, + Schema: emptySchema, + RequiredReplace: cty.NewPathSet(), + ExpectedOutput: ` # test_instance.example must be replaced +-/+ resource "test_instance" "example" {} +`, + }, + "replace for no particular reason (create first)": { + Action: plans.CreateThenDelete, + ActionReason: plans.ResourceInstanceChangeNoReason, + Mode: addrs.ManagedResourceMode, + Before: emptyVal, + After: nullVal, + Schema: emptySchema, + RequiredReplace: cty.NewPathSet(), + ExpectedOutput: ` # test_instance.example must be replaced ++/- resource "test_instance" "example" {} +`, + }, + "replace by request (delete first)": { + Action: plans.DeleteThenCreate, + ActionReason: plans.ResourceInstanceReplaceByRequest, + Mode: addrs.ManagedResourceMode, + Before: emptyVal, + After: nullVal, + Schema: emptySchema, + RequiredReplace: cty.NewPathSet(), + ExpectedOutput: ` # test_instance.example will be replaced, as requested +-/+ resource "test_instance" "example" {} +`, + }, + "replace by request (create first)": { + Action: plans.CreateThenDelete, + ActionReason: plans.ResourceInstanceReplaceByRequest, + Mode: addrs.ManagedResourceMode, + Before: emptyVal, + After: nullVal, + Schema: emptySchema, + RequiredReplace: cty.NewPathSet(), + ExpectedOutput: ` # test_instance.example will be replaced, as requested ++/- resource "test_instance" "example" {} +`, + }, + "replace because tainted (delete first)": { + Action: plans.DeleteThenCreate, + ActionReason: plans.ResourceInstanceReplaceBecauseTainted, + Mode: addrs.ManagedResourceMode, + Before: emptyVal, + After: nullVal, + Schema: emptySchema, + RequiredReplace: cty.NewPathSet(), + ExpectedOutput: ` # test_instance.example is tainted, so must be replaced +-/+ resource "test_instance" "example" {} +`, + }, + "replace because tainted (create first)": { + Action: plans.CreateThenDelete, + ActionReason: plans.ResourceInstanceReplaceBecauseTainted, + Mode: addrs.ManagedResourceMode, + Before: emptyVal, + After: nullVal, + Schema: emptySchema, + RequiredReplace: cty.NewPathSet(), + ExpectedOutput: ` # test_instance.example is 
tainted, so must be replaced ++/- resource "test_instance" "example" {} +`, + }, + "replace because cannot update (delete first)": { + Action: plans.DeleteThenCreate, + ActionReason: plans.ResourceInstanceReplaceBecauseCannotUpdate, + Mode: addrs.ManagedResourceMode, + Before: emptyVal, + After: nullVal, + Schema: emptySchema, + RequiredReplace: cty.NewPathSet(), + // This one has no special message, because the fuller explanation + // typically appears inline as a "# forces replacement" comment. + // (not shown here) + ExpectedOutput: ` # test_instance.example must be replaced +-/+ resource "test_instance" "example" {} +`, + }, + "replace because cannot update (create first)": { + Action: plans.CreateThenDelete, + ActionReason: plans.ResourceInstanceReplaceBecauseCannotUpdate, + Mode: addrs.ManagedResourceMode, + Before: emptyVal, + After: nullVal, + Schema: emptySchema, + RequiredReplace: cty.NewPathSet(), + // This one has no special message, because the fuller explanation + // typically appears inline as a "# forces replacement" comment. + // (not shown here) + ExpectedOutput: ` # test_instance.example must be replaced ++/- resource "test_instance" "example" {} +`, + }, + } + + runTestCases(t, testCases) +} + func TestResourceChange_sensitiveVariable(t *testing.T) { testCases := map[string]testCase{ "creation": { @@ -4479,7 +4702,7 @@ func TestResourceChange_moved(t *testing.T) { }, RequiredReplace: cty.NewPathSet(), ExpectedOutput: ` # test_instance.example will be updated in-place - # (test_instance.previous has moved to test_instance.example) + # (moved from test_instance.previous) ~ resource "test_instance" "example" { ~ bar = "baz" -> "boop" id = "12345" @@ -4524,7 +4747,9 @@ func TestResourceChange_moved(t *testing.T) { type testCase struct { Action plans.Action ActionReason plans.ResourceInstanceChangeActionReason + ModuleInst addrs.ModuleInstance Mode addrs.ResourceMode + InstanceKey addrs.InstanceKey DeposedKey states.DeposedKey Before cty.Value BeforeValMarks []cty.PathValueMarks @@ -4571,7 +4796,7 @@ func runTestCases(t *testing.T, testCases map[string]testCase) { Mode: tc.Mode, Type: "test_instance", Name: "example", - }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance) + }.Instance(tc.InstanceKey).Absolute(tc.ModuleInst) prevRunAddr := tc.PrevRunAddr // If no previous run address is given, reuse the current address From 1bff623fd9578df46a64ac984d64195d1c30554c Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Thu, 23 Sep 2021 11:18:29 -0700 Subject: [PATCH 100/644] core: Report a warning if any moves get blocked In most cases Terraform will be able to automatically fully resolve all of the pending move statements before creating a plan, but there are some edge cases where we can end up wanting to move one object to a location where another object is already declared. One relatively-obvious example is if someone uses "terraform state mv" in order to create a set of resource instance bindings that could never have arising in normal Terraform use. A less obvious example arises from the interactions between moves at different levels of granularity. If we are both moving a module to a new address and moving a resource into an instance of the new module at the same time, the old module might well have already had a resource of the same name and so the resource move will be unresolvable. 
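As a sketch of the first of those cases (mirroring the collision test added in
this change): if the prior state records both test_object.a and test_object.a[0],
perhaps as a result of manual state surgery, while the configuration declares the
resource without count, then test_object.a[0] wants to implicitly move to
test_object.a and finds that address already occupied:

    resource "test_object" "a" {
      # No "count" set, so test_object.a[0] will want
      # to implicitly move to test_object.a, but will get
      # blocked by the existing object at that address.
    }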
In these situations Terraform will move the objects as far as possible, but because it's never valid for a move "from" address to still be declared in the configuration Terraform will inevitably always plan to destroy the objects that didn't find a final home. To give some additional explanation for that result, here we'll add a warning which describes what happened. This is not a particularly actionable warning because we don't really have enough information to guess what the user intended, but we do at least prompt that they might be able to use the "terraform state" family of subcommands to repair the ambiguous situation before planning, if they want a different result than what Terraform proposed. --- internal/terraform/context_plan.go | 30 ++++++++ internal/terraform/context_plan2_test.go | 96 ++++++++++++++++++++++++ 2 files changed, 126 insertions(+) diff --git a/internal/terraform/context_plan.go b/internal/terraform/context_plan.go index a48676ffa2bf..084a8ab8a782 100644 --- a/internal/terraform/context_plan.go +++ b/internal/terraform/context_plan.go @@ -1,6 +1,7 @@ package terraform import ( + "bytes" "fmt" "log" "sort" @@ -417,6 +418,14 @@ func (c *Context) planWalk(config *configs.Config, prevRunState *states.State, r diags = diags.Append(walker.NonFatalDiagnostics) diags = diags.Append(walkDiags) diags = diags.Append(c.postPlanValidateMoves(config, moveStmts, walker.InstanceExpander.AllInstances())) + if len(moveResults.Blocked) > 0 && !diags.HasErrors() { + // If we had blocked moves and we're not going to be returning errors + // then we'll report the blockers as a warning. We do this only in the + // absense of errors because invalid move statements might well be + // the root cause of the blockers, and so better to give an actionable + // error message than a less-actionable warning. + diags = diags.Append(blockedMovesWarningDiag(moveResults)) + } prevRunState = walker.PrevRunState.Close() priorState := walker.RefreshState.Close() @@ -633,3 +642,24 @@ func (c *Context) PlanGraphForUI(config *configs.Config, prevRunState *states.St diags = diags.Append(moreDiags) return graph, diags } + +func blockedMovesWarningDiag(results refactoring.MoveResults) tfdiags.Diagnostic { + if len(results.Blocked) < 1 { + // Caller should check first + panic("request to render blocked moves warning without any blocked moves") + } + + var itemsBuf bytes.Buffer + for _, blocked := range results.Blocked { + fmt.Fprintf(&itemsBuf, "\n - %s could not move to %s", blocked.Actual, blocked.Wanted) + } + + return tfdiags.Sourceless( + tfdiags.Warning, + "Unresolved resource instance address changes", + fmt.Sprintf( + "Terraform tried to adjust resource instance addresses in the prior state based on change information recorded in the configuration, but some adjustments did not succeed due to existing objects already at the intended addresses:%s\n\nTerraform has planned to destroy these objects. 
If Terraform's proposed changes aren't appropriate, you must first resolve the conflicts using the \"terraform state\" subcommands and then create a new plan.", + itemsBuf.String(), + ), + ) +} diff --git a/internal/terraform/context_plan2_test.go b/internal/terraform/context_plan2_test.go index 1167f234c00d..962142416590 100644 --- a/internal/terraform/context_plan2_test.go +++ b/internal/terraform/context_plan2_test.go @@ -791,6 +791,102 @@ func TestContext2Plan_movedResourceBasic(t *testing.T) { }) } +func TestContext2Plan_movedResourceCollision(t *testing.T) { + addrNoKey := mustResourceInstanceAddr("test_object.a") + addrZeroKey := mustResourceInstanceAddr("test_object.a[0]") + m := testModuleInline(t, map[string]string{ + "main.tf": ` + resource "test_object" "a" { + # No "count" set, so test_object.a[0] will want + # to implicitly move to test_object.a, but will get + # blocked by the existing object at that address. + } + `, + }) + + state := states.BuildState(func(s *states.SyncState) { + s.SetResourceInstanceCurrent(addrNoKey, &states.ResourceInstanceObjectSrc{ + AttrsJSON: []byte(`{}`), + Status: states.ObjectReady, + }, mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`)) + s.SetResourceInstanceCurrent(addrZeroKey, &states.ResourceInstanceObjectSrc{ + AttrsJSON: []byte(`{}`), + Status: states.ObjectReady, + }, mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`)) + }) + + p := simpleMockProvider() + ctx := testContext2(t, &ContextOpts{ + Providers: map[addrs.Provider]providers.Factory{ + addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), + }, + }) + + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.NormalMode, + }) + if diags.HasErrors() { + t.Fatalf("unexpected errors\n%s", diags.Err().Error()) + } + + // We should have a warning, though! We'll lightly abuse the "for RPC" + // feature of diagnostics to get some more-readily-comparable diagnostic + // values. + gotDiags := diags.ForRPC() + wantDiags := tfdiags.Diagnostics{ + tfdiags.Sourceless( + tfdiags.Warning, + "Unresolved resource instance address changes", + `Terraform tried to adjust resource instance addresses in the prior state based on change information recorded in the configuration, but some adjustments did not succeed due to existing objects already at the intended addresses: + - test_object.a[0] could not move to test_object.a + +Terraform has planned to destroy these objects. 
If Terraform's proposed changes aren't appropriate, you must first resolve the conflicts using the "terraform state" subcommands and then create a new plan.`, + ), + }.ForRPC() + if diff := cmp.Diff(wantDiags, gotDiags); diff != "" { + t.Errorf("wrong diagnostics\n%s", diff) + } + + t.Run(addrNoKey.String(), func(t *testing.T) { + instPlan := plan.Changes.ResourceInstance(addrNoKey) + if instPlan == nil { + t.Fatalf("no plan for %s at all", addrNoKey) + } + + if got, want := instPlan.Addr, addrNoKey; !got.Equal(want) { + t.Errorf("wrong current address\ngot: %s\nwant: %s", got, want) + } + if got, want := instPlan.PrevRunAddr, addrNoKey; !got.Equal(want) { + t.Errorf("wrong previous run address\ngot: %s\nwant: %s", got, want) + } + if got, want := instPlan.Action, plans.NoOp; got != want { + t.Errorf("wrong planned action\ngot: %s\nwant: %s", got, want) + } + if got, want := instPlan.ActionReason, plans.ResourceInstanceChangeNoReason; got != want { + t.Errorf("wrong action reason\ngot: %s\nwant: %s", got, want) + } + }) + t.Run(addrZeroKey.String(), func(t *testing.T) { + instPlan := plan.Changes.ResourceInstance(addrZeroKey) + if instPlan == nil { + t.Fatalf("no plan for %s at all", addrZeroKey) + } + + if got, want := instPlan.Addr, addrZeroKey; !got.Equal(want) { + t.Errorf("wrong current address\ngot: %s\nwant: %s", got, want) + } + if got, want := instPlan.PrevRunAddr, addrZeroKey; !got.Equal(want) { + t.Errorf("wrong previous run address\ngot: %s\nwant: %s", got, want) + } + if got, want := instPlan.Action, plans.Delete; got != want { + t.Errorf("wrong planned action\ngot: %s\nwant: %s", got, want) + } + if got, want := instPlan.ActionReason, plans.ResourceInstanceDeleteBecauseWrongRepetition; got != want { + t.Errorf("wrong action reason\ngot: %s\nwant: %s", got, want) + } + }) +} + func TestContext2Plan_movedResourceUntargeted(t *testing.T) { addrA := mustResourceInstanceAddr("test_object.a") addrB := mustResourceInstanceAddr("test_object.b") From d97ef10bb8cc01ab6975590ea6a410a6f586a64e Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Thu, 23 Sep 2021 11:27:44 -0700 Subject: [PATCH 101/644] core: Don't return other errors if move statements are invalid Because our validation rules depend on some dynamic results produced by actually running the plan, we deal with moves in a "backwards" order where we try to apply them first -- ignoring anything strange we might find -- and then validate the original statements only after planning. An unfortunate consequence of that approach is that when the move statements are invalid it's likely that move execution will not fully complete, and so the generated plan is likely to be incorrect and might well include errors resulting from the unresolved moves. To mitigate that, here we let any move validation errors supersede all other diagnostics that the plan phase might've generated, in the hope that it'll help the user focus on fixing the incorrect move statements without creating confusing by reporting errors that only appeared as a quick of how Terraform worked around the invalid move statements earlier. 
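Reduced to its essentials, the resulting ordering in the plan walk looks like
this (a sketch only; walker.InstanceExpander.AllInstances() is abbreviated to
allInstances here, and the literal change is in the diff below):

    // moves have already been applied, best-effort, before the plan walk
    moveValidateDiags := c.postPlanValidateMoves(config, moveStmts, allInstances)
    if moveValidateDiags.HasErrors() {
        // invalid move statements are likely the root cause of any other
        // errors, so return only these and suppress the rest
        return nil, moveValidateDiags
    }
    diags = diags.Append(moveValidateDiags) // might just contain warnings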
--- internal/terraform/context_plan.go | 13 ++++++++++++- 1 file changed, 12 insertions(+), 1 deletion(-) diff --git a/internal/terraform/context_plan.go b/internal/terraform/context_plan.go index 084a8ab8a782..001a2507ad78 100644 --- a/internal/terraform/context_plan.go +++ b/internal/terraform/context_plan.go @@ -417,7 +417,18 @@ func (c *Context) planWalk(config *configs.Config, prevRunState *states.State, r }) diags = diags.Append(walker.NonFatalDiagnostics) diags = diags.Append(walkDiags) - diags = diags.Append(c.postPlanValidateMoves(config, moveStmts, walker.InstanceExpander.AllInstances())) + moveValidateDiags := c.postPlanValidateMoves(config, moveStmts, walker.InstanceExpander.AllInstances()) + if moveValidateDiags.HasErrors() { + // If any of the move statements are invalid then those errors take + // precedence over any other errors because an incomplete move graph + // is quite likely to be the _cause_ of various errors. This oddity + // comes from the fact that we need to apply the moves before we + // actually validate them, because validation depends on the result + // of first trying to plan. + return nil, moveValidateDiags + } + diags = diags.Append(moveValidateDiags) // might just contain warnings + if len(moveResults.Blocked) > 0 && !diags.HasErrors() { // If we had blocked moves and we're not going to be returning errors // then we'll report the blockers as a warning. We do this only in the From 0f76e3a4e1eb1078ec19e17ea5834ea296365f93 Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Thu, 23 Sep 2021 14:47:01 -0700 Subject: [PATCH 102/644] Update CHANGELOG.md --- CHANGELOG.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index 474e343fd2a0..60a318aba4e0 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -7,6 +7,8 @@ UPGRADE NOTES: NEW FEATURES: +* cli: When Terraform plans to destroy a resource instance due to it no longer being declared in the configuration, the proposed plan output will now include a note hinting at what situation prompted that proposal, so you can more easily see what configuration change might avoid the object being destroyed. ([#29637](https://github.com/hashicorp/terraform/pull/29637)) +* cli: When Terraform automatically moves a singleton resource instance to index zero or vice-versa in response to adding or removing `count`, it'll report explicitly that it did so as part of the plan output. ([#29605](https://github.com/hashicorp/terraform/pull/29605)) * cli: The (currently-experimental) `terraform add` generates a starting point for a particular resource configuration. ([#28874](https://github.com/hashicorp/terraform/issues/28874)) * config: a new `type()` function, only available in `terraform console` ([#28501](https://github.com/hashicorp/terraform/issues/28501)) From 5d620303eb1b6bb6562a634991f5a59bc79902ea Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Thu, 23 Sep 2021 14:47:40 -0700 Subject: [PATCH 103/644] Update CHANGELOG.md --- CHANGELOG.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 60a318aba4e0..f5b11e243e4a 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -7,10 +7,10 @@ UPGRADE NOTES: NEW FEATURES: -* cli: When Terraform plans to destroy a resource instance due to it no longer being declared in the configuration, the proposed plan output will now include a note hinting at what situation prompted that proposal, so you can more easily see what configuration change might avoid the object being destroyed. 
([#29637](https://github.com/hashicorp/terraform/pull/29637)) -* cli: When Terraform automatically moves a singleton resource instance to index zero or vice-versa in response to adding or removing `count`, it'll report explicitly that it did so as part of the plan output. ([#29605](https://github.com/hashicorp/terraform/pull/29605)) -* cli: The (currently-experimental) `terraform add` generates a starting point for a particular resource configuration. ([#28874](https://github.com/hashicorp/terraform/issues/28874)) -* config: a new `type()` function, only available in `terraform console` ([#28501](https://github.com/hashicorp/terraform/issues/28501)) +* `terraform plan` and `terraform apply`: When Terraform plans to destroy a resource instance due to it no longer being declared in the configuration, the proposed plan output will now include a note hinting at what situation prompted that proposal, so you can more easily see what configuration change might avoid the object being destroyed. ([#29637](https://github.com/hashicorp/terraform/pull/29637)) +* `terraform plan` and `terraform apply`: When Terraform automatically moves a singleton resource instance to index zero or vice-versa in response to adding or removing `count`, it'll report explicitly that it did so as part of the plan output. ([#29605](https://github.com/hashicorp/terraform/pull/29605)) +* `terraform add`: The (currently-experimental) `terraform add` generates a starting point for a particular resource configuration. ([#28874](https://github.com/hashicorp/terraform/issues/28874)) +* config: a new `type()` function, available only in `terraform console`. ([#28501](https://github.com/hashicorp/terraform/issues/28501)) ENHANCEMENTS: From 57318ef56117139b12de8fffcf36f527eb7b66ac Mon Sep 17 00:00:00 2001 From: Alisdair McDiarmid Date: Fri, 24 Sep 2021 09:26:09 -0400 Subject: [PATCH 104/644] backend/remote: Support interop from 0.14 to 1.1 The previous conservative guarantee that we would not make backwards incompatible changes to the state file format until at least Terraform 1.1 can now be extended. Terraform 0.14 through 1.1 will be able to interoperably use state files, so we can update the remote backend version compatibility check accordingly. 
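Concretely, in the version-pair table of the updated backend test below, the
boundary therefore moves from 1.1.0 to 1.2.0 (reading the last field as whether
the compatibility check is expected to fail):

    {"0.14.0", "1.1.0", true, false},
    {"0.14.0", "1.2.0", true, true},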
--- internal/backend/remote/backend.go | 6 +++--- internal/backend/remote/backend_test.go | 3 ++- 2 files changed, 5 insertions(+), 4 deletions(-) diff --git a/internal/backend/remote/backend.go b/internal/backend/remote/backend.go index b4aa115a798b..5e09e7207d3f 100644 --- a/internal/backend/remote/backend.go +++ b/internal/backend/remote/backend.go @@ -917,9 +917,9 @@ func (b *Remote) VerifyWorkspaceTerraformVersion(workspaceName string) tfdiags.D // are aware of are: // // - 0.14.0 is guaranteed to be compatible with versions up to but not - // including 1.1.0 - v110 := version.Must(version.NewSemver("1.1.0")) - if tfversion.SemVer.LessThan(v110) && remoteVersion.LessThan(v110) { + // including 1.2.0 + v120 := version.Must(version.NewSemver("1.2.0")) + if tfversion.SemVer.LessThan(v120) && remoteVersion.LessThan(v120) { return diags } // - Any new Terraform state version will require at least minor patch diff --git a/internal/backend/remote/backend_test.go b/internal/backend/remote/backend_test.go index 3f0755a0e5ac..d00b61ae263d 100644 --- a/internal/backend/remote/backend_test.go +++ b/internal/backend/remote/backend_test.go @@ -566,7 +566,8 @@ func TestRemote_VerifyWorkspaceTerraformVersion(t *testing.T) { {"0.14.0", "0.13.5", false, false}, {"0.14.0", "0.14.1", true, false}, {"0.14.0", "1.0.99", true, false}, - {"0.14.0", "1.1.0", true, true}, + {"0.14.0", "1.1.0", true, false}, + {"0.14.0", "1.2.0", true, true}, {"1.2.0", "1.2.99", true, false}, {"1.2.0", "1.3.0", true, true}, {"0.15.0", "latest", true, false}, From 24a2bd630121cc3fd4dea49d79a63e70eb2de47b Mon Sep 17 00:00:00 2001 From: James Bardin Date: Tue, 14 Sep 2021 09:09:31 -0400 Subject: [PATCH 105/644] tfproto version 6.1 Minor version increase to deprecate min_items and max_items in nested types. Nested types have MinItems and MaxItems fields that were inherited from the block implementation, but were never validated by Terraform, and are not supported by the HCL decoder validations. Mark these fields as deprecated, indicating that the SDK should handle the required validation. --- docs/plugin-protocol/tfplugin6.1.proto | 321 +++++++++++++ internal/tfplugin6/tfplugin6.pb.go | 622 +++++++++++++------------ internal/tfplugin6/tfplugin6.proto | 2 +- 3 files changed, 635 insertions(+), 310 deletions(-) create mode 100644 docs/plugin-protocol/tfplugin6.1.proto diff --git a/docs/plugin-protocol/tfplugin6.1.proto b/docs/plugin-protocol/tfplugin6.1.proto new file mode 100644 index 000000000000..e8912cd0b799 --- /dev/null +++ b/docs/plugin-protocol/tfplugin6.1.proto @@ -0,0 +1,321 @@ +// Terraform Plugin RPC protocol version 6.1 +// +// This file defines version 6.1 of the RPC protocol. To implement a plugin +// against this protocol, copy this definition into your own codebase and +// use protoc to generate stubs for your target language. +// +// This file will not be updated. Any minor versions of protocol 6 to follow +// should copy this file and modify the copy while maintaing backwards +// compatibility. Breaking changes, if any are required, will come +// in a subsequent major version with its own separate proto definition. +// +// Note that only the proto files included in a release tag of Terraform are +// official protocol releases. Proto files taken from other commits may include +// incomplete changes or features that did not make it into a final release. 
+// In all reasonable cases, plugin developers should take the proto file from +// the tag of the most recent release of Terraform, and not from the main +// branch or any other development branch. +// +syntax = "proto3"; +option go_package = "github.com/hashicorp/terraform/internal/tfplugin6"; + +package tfplugin6; + +// DynamicValue is an opaque encoding of terraform data, with the field name +// indicating the encoding scheme used. +message DynamicValue { + bytes msgpack = 1; + bytes json = 2; +} + +message Diagnostic { + enum Severity { + INVALID = 0; + ERROR = 1; + WARNING = 2; + } + Severity severity = 1; + string summary = 2; + string detail = 3; + AttributePath attribute = 4; +} + +message AttributePath { + message Step { + oneof selector { + // Set "attribute_name" to represent looking up an attribute + // in the current object value. + string attribute_name = 1; + // Set "element_key_*" to represent looking up an element in + // an indexable collection type. + string element_key_string = 2; + int64 element_key_int = 3; + } + } + repeated Step steps = 1; +} + +message StopProvider { + message Request { + } + message Response { + string Error = 1; + } +} + +// RawState holds the stored state for a resource to be upgraded by the +// provider. It can be in one of two formats, the current json encoded format +// in bytes, or the legacy flatmap format as a map of strings. +message RawState { + bytes json = 1; + map flatmap = 2; +} + +enum StringKind { + PLAIN = 0; + MARKDOWN = 1; +} + +// Schema is the configuration schema for a Resource or Provider. +message Schema { + message Block { + int64 version = 1; + repeated Attribute attributes = 2; + repeated NestedBlock block_types = 3; + string description = 4; + StringKind description_kind = 5; + bool deprecated = 6; + } + + message Attribute { + string name = 1; + bytes type = 2; + Object nested_type = 10; + string description = 3; + bool required = 4; + bool optional = 5; + bool computed = 6; + bool sensitive = 7; + StringKind description_kind = 8; + bool deprecated = 9; + } + + message NestedBlock { + enum NestingMode { + INVALID = 0; + SINGLE = 1; + LIST = 2; + SET = 3; + MAP = 4; + GROUP = 5; + } + + string type_name = 1; + Block block = 2; + NestingMode nesting = 3; + int64 min_items = 4; + int64 max_items = 5; + } + + message Object { + enum NestingMode { + INVALID = 0; + SINGLE = 1; + LIST = 2; + SET = 3; + MAP = 4; + } + + repeated Attribute attributes = 1; + NestingMode nesting = 3; + int64 min_items = 4 [deprecated = true]; + int64 max_items = 5 [deprecated = true]; + } + + // The version of the schema. + // Schemas are versioned, so that providers can upgrade a saved resource + // state when the schema is changed. + int64 version = 1; + + // Block is the top level configuration block for this schema. 
+ Block block = 2; +} + +service Provider { + //////// Information about what a provider supports/expects + rpc GetProviderSchema(GetProviderSchema.Request) returns (GetProviderSchema.Response); + rpc ValidateProviderConfig(ValidateProviderConfig.Request) returns (ValidateProviderConfig.Response); + rpc ValidateResourceConfig(ValidateResourceConfig.Request) returns (ValidateResourceConfig.Response); + rpc ValidateDataResourceConfig(ValidateDataResourceConfig.Request) returns (ValidateDataResourceConfig.Response); + rpc UpgradeResourceState(UpgradeResourceState.Request) returns (UpgradeResourceState.Response); + + //////// One-time initialization, called before other functions below + rpc ConfigureProvider(ConfigureProvider.Request) returns (ConfigureProvider.Response); + + //////// Managed Resource Lifecycle + rpc ReadResource(ReadResource.Request) returns (ReadResource.Response); + rpc PlanResourceChange(PlanResourceChange.Request) returns (PlanResourceChange.Response); + rpc ApplyResourceChange(ApplyResourceChange.Request) returns (ApplyResourceChange.Response); + rpc ImportResourceState(ImportResourceState.Request) returns (ImportResourceState.Response); + + rpc ReadDataSource(ReadDataSource.Request) returns (ReadDataSource.Response); + + //////// Graceful Shutdown + rpc StopProvider(StopProvider.Request) returns (StopProvider.Response); +} + +message GetProviderSchema { + message Request { + } + message Response { + Schema provider = 1; + map resource_schemas = 2; + map data_source_schemas = 3; + repeated Diagnostic diagnostics = 4; + Schema provider_meta = 5; + } +} + +message ValidateProviderConfig { + message Request { + DynamicValue config = 1; + } + message Response { + repeated Diagnostic diagnostics = 2; + } +} + +message UpgradeResourceState { + message Request { + string type_name = 1; + + // version is the schema_version number recorded in the state file + int64 version = 2; + + // raw_state is the raw states as stored for the resource. Core does + // not have access to the schema of prior_version, so it's the + // provider's responsibility to interpret this value using the + // appropriate older schema. The raw_state will be the json encoded + // state, or a legacy flat-mapped format. + RawState raw_state = 3; + } + message Response { + // new_state is a msgpack-encoded data structure that, when interpreted with + // the _current_ schema for this resource type, is functionally equivalent to + // that which was given in prior_state_raw. + DynamicValue upgraded_state = 1; + + // diagnostics describes any errors encountered during migration that could not + // be safely resolved, and warnings about any possibly-risky assumptions made + // in the upgrade process. 
+ repeated Diagnostic diagnostics = 2; + } +} + +message ValidateResourceConfig { + message Request { + string type_name = 1; + DynamicValue config = 2; + } + message Response { + repeated Diagnostic diagnostics = 1; + } +} + +message ValidateDataResourceConfig { + message Request { + string type_name = 1; + DynamicValue config = 2; + } + message Response { + repeated Diagnostic diagnostics = 1; + } +} + +message ConfigureProvider { + message Request { + string terraform_version = 1; + DynamicValue config = 2; + } + message Response { + repeated Diagnostic diagnostics = 1; + } +} + +message ReadResource { + message Request { + string type_name = 1; + DynamicValue current_state = 2; + bytes private = 3; + DynamicValue provider_meta = 4; + } + message Response { + DynamicValue new_state = 1; + repeated Diagnostic diagnostics = 2; + bytes private = 3; + } +} + +message PlanResourceChange { + message Request { + string type_name = 1; + DynamicValue prior_state = 2; + DynamicValue proposed_new_state = 3; + DynamicValue config = 4; + bytes prior_private = 5; + DynamicValue provider_meta = 6; + } + + message Response { + DynamicValue planned_state = 1; + repeated AttributePath requires_replace = 2; + bytes planned_private = 3; + repeated Diagnostic diagnostics = 4; + } +} + +message ApplyResourceChange { + message Request { + string type_name = 1; + DynamicValue prior_state = 2; + DynamicValue planned_state = 3; + DynamicValue config = 4; + bytes planned_private = 5; + DynamicValue provider_meta = 6; + } + message Response { + DynamicValue new_state = 1; + bytes private = 2; + repeated Diagnostic diagnostics = 3; + } +} + +message ImportResourceState { + message Request { + string type_name = 1; + string id = 2; + } + + message ImportedResource { + string type_name = 1; + DynamicValue state = 2; + bytes private = 3; + } + + message Response { + repeated ImportedResource imported_resources = 1; + repeated Diagnostic diagnostics = 2; + } +} + +message ReadDataSource { + message Request { + string type_name = 1; + DynamicValue config = 2; + DynamicValue provider_meta = 3; + } + message Response { + DynamicValue state = 1; + repeated Diagnostic diagnostics = 2; + } +} diff --git a/internal/tfplugin6/tfplugin6.pb.go b/internal/tfplugin6/tfplugin6.pb.go index afd77a0eedbb..d3152b355b58 100644 --- a/internal/tfplugin6/tfplugin6.pb.go +++ b/internal/tfplugin6/tfplugin6.pb.go @@ -1,6 +1,6 @@ -// Terraform Plugin RPC protocol version 6.0 +// Terraform Plugin RPC protocol version 6.1 // -// This file defines version 6.0 of the RPC protocol. To implement a plugin +// This file defines version 6.1 of the RPC protocol. To implement a plugin // against this protocol, copy this definition into your own codebase and // use protoc to generate stubs for your target language. // @@ -1480,8 +1480,10 @@ type Schema_Object struct { Attributes []*Schema_Attribute `protobuf:"bytes,1,rep,name=attributes,proto3" json:"attributes,omitempty"` Nesting Schema_Object_NestingMode `protobuf:"varint,3,opt,name=nesting,proto3,enum=tfplugin6.Schema_Object_NestingMode" json:"nesting,omitempty"` - MinItems int64 `protobuf:"varint,4,opt,name=min_items,json=minItems,proto3" json:"min_items,omitempty"` - MaxItems int64 `protobuf:"varint,5,opt,name=max_items,json=maxItems,proto3" json:"max_items,omitempty"` + // Deprecated: Do not use. + MinItems int64 `protobuf:"varint,4,opt,name=min_items,json=minItems,proto3" json:"min_items,omitempty"` + // Deprecated: Do not use. 
+ MaxItems int64 `protobuf:"varint,5,opt,name=max_items,json=maxItems,proto3" json:"max_items,omitempty"` } func (x *Schema_Object) Reset() { @@ -1530,6 +1532,7 @@ func (x *Schema_Object) GetNesting() Schema_Object_NestingMode { return Schema_Object_INVALID } +// Deprecated: Do not use. func (x *Schema_Object) GetMinItems() int64 { if x != nil { return x.MinItems @@ -1537,6 +1540,7 @@ func (x *Schema_Object) GetMinItems() int64 { return 0 } +// Deprecated: Do not use. func (x *Schema_Object) GetMaxItems() int64 { if x != nil { return x.MaxItems @@ -2974,7 +2978,7 @@ var file_tfplugin6_proto_rawDesc = []byte{ 0x74, 0x6d, 0x61, 0x70, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a, 0x03, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12, 0x14, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, - 0x65, 0x3a, 0x02, 0x38, 0x01, 0x22, 0x8d, 0x0a, 0x0a, 0x06, 0x53, 0x63, 0x68, 0x65, 0x6d, 0x61, + 0x65, 0x3a, 0x02, 0x38, 0x01, 0x22, 0x95, 0x0a, 0x0a, 0x06, 0x53, 0x63, 0x68, 0x65, 0x6d, 0x61, 0x12, 0x18, 0x0a, 0x07, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x18, 0x01, 0x20, 0x01, 0x28, 0x03, 0x52, 0x07, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x12, 0x2d, 0x0a, 0x05, 0x62, 0x6c, 0x6f, 0x63, 0x6b, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x17, 0x2e, 0x74, 0x66, 0x70, 0x6c, @@ -3039,7 +3043,7 @@ var file_tfplugin6_proto_rawDesc = []byte{ 0x53, 0x49, 0x4e, 0x47, 0x4c, 0x45, 0x10, 0x01, 0x12, 0x08, 0x0a, 0x04, 0x4c, 0x49, 0x53, 0x54, 0x10, 0x02, 0x12, 0x07, 0x0a, 0x03, 0x53, 0x45, 0x54, 0x10, 0x03, 0x12, 0x07, 0x0a, 0x03, 0x4d, 0x41, 0x50, 0x10, 0x04, 0x12, 0x09, 0x0a, 0x05, 0x47, 0x52, 0x4f, 0x55, 0x50, 0x10, 0x05, 0x1a, - 0x83, 0x02, 0x0a, 0x06, 0x4f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x12, 0x3b, 0x0a, 0x0a, 0x61, 0x74, + 0x8b, 0x02, 0x0a, 0x06, 0x4f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x12, 0x3b, 0x0a, 0x0a, 0x61, 0x74, 0x74, 0x72, 0x69, 0x62, 0x75, 0x74, 0x65, 0x73, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x1b, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x53, 0x63, 0x68, 0x65, 0x6d, 0x61, 0x2e, 0x41, 0x74, 0x74, 0x72, 0x69, 0x62, 0x75, 0x74, 0x65, 0x52, 0x0a, 0x61, 0x74, 0x74, @@ -3047,328 +3051,328 @@ var file_tfplugin6_proto_rawDesc = []byte{ 0x6e, 0x67, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x24, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x53, 0x63, 0x68, 0x65, 0x6d, 0x61, 0x2e, 0x4f, 0x62, 0x6a, 0x65, 0x63, 0x74, 0x2e, 0x4e, 0x65, 0x73, 0x74, 0x69, 0x6e, 0x67, 0x4d, 0x6f, 0x64, 0x65, 0x52, 0x07, - 0x6e, 0x65, 0x73, 0x74, 0x69, 0x6e, 0x67, 0x12, 0x1b, 0x0a, 0x09, 0x6d, 0x69, 0x6e, 0x5f, 0x69, - 0x74, 0x65, 0x6d, 0x73, 0x18, 0x04, 0x20, 0x01, 0x28, 0x03, 0x52, 0x08, 0x6d, 0x69, 0x6e, 0x49, - 0x74, 0x65, 0x6d, 0x73, 0x12, 0x1b, 0x0a, 0x09, 0x6d, 0x61, 0x78, 0x5f, 0x69, 0x74, 0x65, 0x6d, - 0x73, 0x18, 0x05, 0x20, 0x01, 0x28, 0x03, 0x52, 0x08, 0x6d, 0x61, 0x78, 0x49, 0x74, 0x65, 0x6d, - 0x73, 0x22, 0x42, 0x0a, 0x0b, 0x4e, 0x65, 0x73, 0x74, 0x69, 0x6e, 0x67, 0x4d, 0x6f, 0x64, 0x65, - 0x12, 0x0b, 0x0a, 0x07, 0x49, 0x4e, 0x56, 0x41, 0x4c, 0x49, 0x44, 0x10, 0x00, 0x12, 0x0a, 0x0a, - 0x06, 0x53, 0x49, 0x4e, 0x47, 0x4c, 0x45, 0x10, 0x01, 0x12, 0x08, 0x0a, 0x04, 0x4c, 0x49, 0x53, - 0x54, 0x10, 0x02, 0x12, 0x07, 0x0a, 0x03, 0x53, 0x45, 0x54, 0x10, 0x03, 0x12, 0x07, 0x0a, 0x03, - 0x4d, 0x41, 0x50, 0x10, 0x04, 0x22, 0xd0, 0x04, 0x0a, 0x11, 0x47, 0x65, 0x74, 0x50, 0x72, 0x6f, - 0x76, 0x69, 0x64, 0x65, 0x72, 0x53, 0x63, 0x68, 0x65, 0x6d, 0x61, 0x1a, 0x09, 0x0a, 
0x07, 0x52, - 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0xaf, 0x04, 0x0a, 0x08, 0x52, 0x65, 0x73, 0x70, 0x6f, - 0x6e, 0x73, 0x65, 0x12, 0x2d, 0x0a, 0x08, 0x70, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x18, - 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x11, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, - 0x36, 0x2e, 0x53, 0x63, 0x68, 0x65, 0x6d, 0x61, 0x52, 0x08, 0x70, 0x72, 0x6f, 0x76, 0x69, 0x64, - 0x65, 0x72, 0x12, 0x65, 0x0a, 0x10, 0x72, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x5f, 0x73, - 0x63, 0x68, 0x65, 0x6d, 0x61, 0x73, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x3a, 0x2e, 0x74, - 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x47, 0x65, 0x74, 0x50, 0x72, 0x6f, 0x76, - 0x69, 0x64, 0x65, 0x72, 0x53, 0x63, 0x68, 0x65, 0x6d, 0x61, 0x2e, 0x52, 0x65, 0x73, 0x70, 0x6f, - 0x6e, 0x73, 0x65, 0x2e, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x53, 0x63, 0x68, 0x65, - 0x6d, 0x61, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x52, 0x0f, 0x72, 0x65, 0x73, 0x6f, 0x75, 0x72, - 0x63, 0x65, 0x53, 0x63, 0x68, 0x65, 0x6d, 0x61, 0x73, 0x12, 0x6c, 0x0a, 0x13, 0x64, 0x61, 0x74, - 0x61, 0x5f, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x5f, 0x73, 0x63, 0x68, 0x65, 0x6d, 0x61, 0x73, - 0x18, 0x03, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x3c, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, - 0x6e, 0x36, 0x2e, 0x47, 0x65, 0x74, 0x50, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x53, 0x63, - 0x68, 0x65, 0x6d, 0x61, 0x2e, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x2e, 0x44, 0x61, - 0x74, 0x61, 0x53, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x53, 0x63, 0x68, 0x65, 0x6d, 0x61, 0x73, 0x45, - 0x6e, 0x74, 0x72, 0x79, 0x52, 0x11, 0x64, 0x61, 0x74, 0x61, 0x53, 0x6f, 0x75, 0x72, 0x63, 0x65, - 0x53, 0x63, 0x68, 0x65, 0x6d, 0x61, 0x73, 0x12, 0x37, 0x0a, 0x0b, 0x64, 0x69, 0x61, 0x67, 0x6e, - 0x6f, 0x73, 0x74, 0x69, 0x63, 0x73, 0x18, 0x04, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x15, 0x2e, 0x74, - 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, - 0x74, 0x69, 0x63, 0x52, 0x0b, 0x64, 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, 0x73, - 0x12, 0x36, 0x0a, 0x0d, 0x70, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x5f, 0x6d, 0x65, 0x74, - 0x61, 0x18, 0x05, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x11, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, - 0x69, 0x6e, 0x36, 0x2e, 0x53, 0x63, 0x68, 0x65, 0x6d, 0x61, 0x52, 0x0c, 0x70, 0x72, 0x6f, 0x76, - 0x69, 0x64, 0x65, 0x72, 0x4d, 0x65, 0x74, 0x61, 0x1a, 0x55, 0x0a, 0x14, 0x52, 0x65, 0x73, 0x6f, + 0x6e, 0x65, 0x73, 0x74, 0x69, 0x6e, 0x67, 0x12, 0x1f, 0x0a, 0x09, 0x6d, 0x69, 0x6e, 0x5f, 0x69, + 0x74, 0x65, 0x6d, 0x73, 0x18, 0x04, 0x20, 0x01, 0x28, 0x03, 0x42, 0x02, 0x18, 0x01, 0x52, 0x08, + 0x6d, 0x69, 0x6e, 0x49, 0x74, 0x65, 0x6d, 0x73, 0x12, 0x1f, 0x0a, 0x09, 0x6d, 0x61, 0x78, 0x5f, + 0x69, 0x74, 0x65, 0x6d, 0x73, 0x18, 0x05, 0x20, 0x01, 0x28, 0x03, 0x42, 0x02, 0x18, 0x01, 0x52, + 0x08, 0x6d, 0x61, 0x78, 0x49, 0x74, 0x65, 0x6d, 0x73, 0x22, 0x42, 0x0a, 0x0b, 0x4e, 0x65, 0x73, + 0x74, 0x69, 0x6e, 0x67, 0x4d, 0x6f, 0x64, 0x65, 0x12, 0x0b, 0x0a, 0x07, 0x49, 0x4e, 0x56, 0x41, + 0x4c, 0x49, 0x44, 0x10, 0x00, 0x12, 0x0a, 0x0a, 0x06, 0x53, 0x49, 0x4e, 0x47, 0x4c, 0x45, 0x10, + 0x01, 0x12, 0x08, 0x0a, 0x04, 0x4c, 0x49, 0x53, 0x54, 0x10, 0x02, 0x12, 0x07, 0x0a, 0x03, 0x53, + 0x45, 0x54, 0x10, 0x03, 0x12, 0x07, 0x0a, 0x03, 0x4d, 0x41, 0x50, 0x10, 0x04, 0x22, 0xd0, 0x04, + 0x0a, 0x11, 0x47, 0x65, 0x74, 0x50, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x53, 0x63, 0x68, + 0x65, 0x6d, 0x61, 0x1a, 0x09, 0x0a, 0x07, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0xaf, + 0x04, 0x0a, 
0x08, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x2d, 0x0a, 0x08, 0x70, + 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x11, 0x2e, + 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x53, 0x63, 0x68, 0x65, 0x6d, 0x61, + 0x52, 0x08, 0x70, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x12, 0x65, 0x0a, 0x10, 0x72, 0x65, + 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x5f, 0x73, 0x63, 0x68, 0x65, 0x6d, 0x61, 0x73, 0x18, 0x02, + 0x20, 0x03, 0x28, 0x0b, 0x32, 0x3a, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, + 0x2e, 0x47, 0x65, 0x74, 0x50, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x53, 0x63, 0x68, 0x65, + 0x6d, 0x61, 0x2e, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x2e, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x53, 0x63, 0x68, 0x65, 0x6d, 0x61, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, - 0x12, 0x10, 0x0a, 0x03, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, - 0x65, 0x79, 0x12, 0x27, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, - 0x0b, 0x32, 0x11, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x53, 0x63, - 0x68, 0x65, 0x6d, 0x61, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38, 0x01, 0x1a, - 0x57, 0x0a, 0x16, 0x44, 0x61, 0x74, 0x61, 0x53, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x53, 0x63, 0x68, - 0x65, 0x6d, 0x61, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a, 0x03, 0x6b, 0x65, 0x79, - 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12, 0x27, 0x0a, 0x05, 0x76, - 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x11, 0x2e, 0x74, 0x66, 0x70, - 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x53, 0x63, 0x68, 0x65, 0x6d, 0x61, 0x52, 0x05, 0x76, - 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38, 0x01, 0x22, 0x99, 0x01, 0x0a, 0x16, 0x56, 0x61, 0x6c, - 0x69, 0x64, 0x61, 0x74, 0x65, 0x50, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x43, 0x6f, 0x6e, - 0x66, 0x69, 0x67, 0x1a, 0x3a, 0x0a, 0x07, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x2f, - 0x0a, 0x06, 0x63, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x17, - 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x79, 0x6e, 0x61, 0x6d, - 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x52, 0x06, 0x63, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x1a, - 0x43, 0x0a, 0x08, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x37, 0x0a, 0x0b, 0x64, - 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, 0x73, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0b, - 0x32, 0x15, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x69, 0x61, - 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, 0x52, 0x0b, 0x64, 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, - 0x74, 0x69, 0x63, 0x73, 0x22, 0x90, 0x02, 0x0a, 0x14, 0x55, 0x70, 0x67, 0x72, 0x61, 0x64, 0x65, - 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x53, 0x74, 0x61, 0x74, 0x65, 0x1a, 0x72, 0x0a, - 0x07, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x1b, 0x0a, 0x09, 0x74, 0x79, 0x70, 0x65, - 0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x74, 0x79, 0x70, - 0x65, 0x4e, 0x61, 0x6d, 0x65, 0x12, 0x18, 0x0a, 0x07, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, - 0x18, 0x02, 0x20, 0x01, 0x28, 0x03, 0x52, 0x07, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x12, - 0x30, 0x0a, 0x09, 0x72, 0x61, 0x77, 0x5f, 0x73, 0x74, 0x61, 0x74, 0x65, 0x18, 0x03, 0x20, 0x01, - 0x28, 0x0b, 0x32, 0x13, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x52, - 0x61, 0x77, 0x53, 0x74, 0x61, 0x74, 
0x65, 0x52, 0x08, 0x72, 0x61, 0x77, 0x53, 0x74, 0x61, 0x74, - 0x65, 0x1a, 0x83, 0x01, 0x0a, 0x08, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x3e, - 0x0a, 0x0e, 0x75, 0x70, 0x67, 0x72, 0x61, 0x64, 0x65, 0x64, 0x5f, 0x73, 0x74, 0x61, 0x74, 0x65, + 0x52, 0x0f, 0x72, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x53, 0x63, 0x68, 0x65, 0x6d, 0x61, + 0x73, 0x12, 0x6c, 0x0a, 0x13, 0x64, 0x61, 0x74, 0x61, 0x5f, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, + 0x5f, 0x73, 0x63, 0x68, 0x65, 0x6d, 0x61, 0x73, 0x18, 0x03, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x3c, + 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x47, 0x65, 0x74, 0x50, 0x72, + 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x53, 0x63, 0x68, 0x65, 0x6d, 0x61, 0x2e, 0x52, 0x65, 0x73, + 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x2e, 0x44, 0x61, 0x74, 0x61, 0x53, 0x6f, 0x75, 0x72, 0x63, 0x65, + 0x53, 0x63, 0x68, 0x65, 0x6d, 0x61, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x52, 0x11, 0x64, 0x61, + 0x74, 0x61, 0x53, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x53, 0x63, 0x68, 0x65, 0x6d, 0x61, 0x73, 0x12, + 0x37, 0x0a, 0x0b, 0x64, 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, 0x73, 0x18, 0x04, + 0x20, 0x03, 0x28, 0x0b, 0x32, 0x15, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, + 0x2e, 0x44, 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, 0x52, 0x0b, 0x64, 0x69, 0x61, + 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, 0x73, 0x12, 0x36, 0x0a, 0x0d, 0x70, 0x72, 0x6f, 0x76, + 0x69, 0x64, 0x65, 0x72, 0x5f, 0x6d, 0x65, 0x74, 0x61, 0x18, 0x05, 0x20, 0x01, 0x28, 0x0b, 0x32, + 0x11, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x53, 0x63, 0x68, 0x65, + 0x6d, 0x61, 0x52, 0x0c, 0x70, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x4d, 0x65, 0x74, 0x61, + 0x1a, 0x55, 0x0a, 0x14, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x53, 0x63, 0x68, 0x65, + 0x6d, 0x61, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a, 0x03, 0x6b, 0x65, 0x79, 0x18, + 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12, 0x27, 0x0a, 0x05, 0x76, 0x61, + 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x11, 0x2e, 0x74, 0x66, 0x70, 0x6c, + 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x53, 0x63, 0x68, 0x65, 0x6d, 0x61, 0x52, 0x05, 0x76, 0x61, + 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38, 0x01, 0x1a, 0x57, 0x0a, 0x16, 0x44, 0x61, 0x74, 0x61, 0x53, + 0x6f, 0x75, 0x72, 0x63, 0x65, 0x53, 0x63, 0x68, 0x65, 0x6d, 0x61, 0x73, 0x45, 0x6e, 0x74, 0x72, + 0x79, 0x12, 0x10, 0x0a, 0x03, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, + 0x6b, 0x65, 0x79, 0x12, 0x27, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, + 0x28, 0x0b, 0x32, 0x11, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x53, + 0x63, 0x68, 0x65, 0x6d, 0x61, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38, 0x01, + 0x22, 0x99, 0x01, 0x0a, 0x16, 0x56, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x65, 0x50, 0x72, 0x6f, + 0x76, 0x69, 0x64, 0x65, 0x72, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x1a, 0x3a, 0x0a, 0x07, 0x52, + 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x2f, 0x0a, 0x06, 0x63, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x17, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x52, - 0x0d, 0x75, 0x70, 0x67, 0x72, 0x61, 0x64, 0x65, 0x64, 0x53, 0x74, 0x61, 0x74, 0x65, 0x12, 0x37, - 0x0a, 0x0b, 0x64, 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, 0x73, 0x18, 0x02, 0x20, - 0x03, 0x28, 0x0b, 0x32, 0x15, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 
0x67, 0x69, 0x6e, 0x36, 0x2e, - 0x44, 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, 0x52, 0x0b, 0x64, 0x69, 0x61, 0x67, - 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, 0x73, 0x22, 0xb6, 0x01, 0x0a, 0x16, 0x56, 0x61, 0x6c, 0x69, - 0x64, 0x61, 0x74, 0x65, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x43, 0x6f, 0x6e, 0x66, - 0x69, 0x67, 0x1a, 0x57, 0x0a, 0x07, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x1b, 0x0a, - 0x09, 0x74, 0x79, 0x70, 0x65, 0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, - 0x52, 0x08, 0x74, 0x79, 0x70, 0x65, 0x4e, 0x61, 0x6d, 0x65, 0x12, 0x2f, 0x0a, 0x06, 0x63, 0x6f, - 0x6e, 0x66, 0x69, 0x67, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x17, 0x2e, 0x74, 0x66, 0x70, - 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, - 0x6c, 0x75, 0x65, 0x52, 0x06, 0x63, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x1a, 0x43, 0x0a, 0x08, 0x52, - 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x37, 0x0a, 0x0b, 0x64, 0x69, 0x61, 0x67, 0x6e, - 0x6f, 0x73, 0x74, 0x69, 0x63, 0x73, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x15, 0x2e, 0x74, - 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, - 0x74, 0x69, 0x63, 0x52, 0x0b, 0x64, 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, 0x73, - 0x22, 0xba, 0x01, 0x0a, 0x1a, 0x56, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x65, 0x44, 0x61, 0x74, - 0x61, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x1a, - 0x57, 0x0a, 0x07, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x1b, 0x0a, 0x09, 0x74, 0x79, - 0x70, 0x65, 0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x74, - 0x79, 0x70, 0x65, 0x4e, 0x61, 0x6d, 0x65, 0x12, 0x2f, 0x0a, 0x06, 0x63, 0x6f, 0x6e, 0x66, 0x69, - 0x67, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x17, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, - 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, 0x65, - 0x52, 0x06, 0x63, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x1a, 0x43, 0x0a, 0x08, 0x52, 0x65, 0x73, 0x70, - 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x37, 0x0a, 0x0b, 0x64, 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, - 0x69, 0x63, 0x73, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x15, 0x2e, 0x74, 0x66, 0x70, 0x6c, - 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, - 0x52, 0x0b, 0x64, 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, 0x73, 0x22, 0xc1, 0x01, - 0x0a, 0x11, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x75, 0x72, 0x65, 0x50, 0x72, 0x6f, 0x76, 0x69, - 0x64, 0x65, 0x72, 0x1a, 0x67, 0x0a, 0x07, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x2b, - 0x0a, 0x11, 0x74, 0x65, 0x72, 0x72, 0x61, 0x66, 0x6f, 0x72, 0x6d, 0x5f, 0x76, 0x65, 0x72, 0x73, - 0x69, 0x6f, 0x6e, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x10, 0x74, 0x65, 0x72, 0x72, 0x61, - 0x66, 0x6f, 0x72, 0x6d, 0x56, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x12, 0x2f, 0x0a, 0x06, 0x63, - 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x17, 0x2e, 0x74, 0x66, - 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, - 0x61, 0x6c, 0x75, 0x65, 0x52, 0x06, 0x63, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x1a, 0x43, 0x0a, 0x08, - 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x37, 0x0a, 0x0b, 0x64, 0x69, 0x61, 0x67, - 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, 0x73, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x15, 0x2e, - 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x69, 0x61, 0x67, 0x6e, 
0x6f, - 0x73, 0x74, 0x69, 0x63, 0x52, 0x0b, 0x64, 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, - 0x73, 0x22, 0xe3, 0x02, 0x0a, 0x0c, 0x52, 0x65, 0x61, 0x64, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, - 0x63, 0x65, 0x1a, 0xbc, 0x01, 0x0a, 0x07, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x1b, - 0x0a, 0x09, 0x74, 0x79, 0x70, 0x65, 0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, - 0x09, 0x52, 0x08, 0x74, 0x79, 0x70, 0x65, 0x4e, 0x61, 0x6d, 0x65, 0x12, 0x3c, 0x0a, 0x0d, 0x63, - 0x75, 0x72, 0x72, 0x65, 0x6e, 0x74, 0x5f, 0x73, 0x74, 0x61, 0x74, 0x65, 0x18, 0x02, 0x20, 0x01, + 0x06, 0x63, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x1a, 0x43, 0x0a, 0x08, 0x52, 0x65, 0x73, 0x70, 0x6f, + 0x6e, 0x73, 0x65, 0x12, 0x37, 0x0a, 0x0b, 0x64, 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, + 0x63, 0x73, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x15, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, + 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, 0x52, + 0x0b, 0x64, 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, 0x73, 0x22, 0x90, 0x02, 0x0a, + 0x14, 0x55, 0x70, 0x67, 0x72, 0x61, 0x64, 0x65, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, + 0x53, 0x74, 0x61, 0x74, 0x65, 0x1a, 0x72, 0x0a, 0x07, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, + 0x12, 0x1b, 0x0a, 0x09, 0x74, 0x79, 0x70, 0x65, 0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, + 0x01, 0x28, 0x09, 0x52, 0x08, 0x74, 0x79, 0x70, 0x65, 0x4e, 0x61, 0x6d, 0x65, 0x12, 0x18, 0x0a, + 0x07, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x18, 0x02, 0x20, 0x01, 0x28, 0x03, 0x52, 0x07, + 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x12, 0x30, 0x0a, 0x09, 0x72, 0x61, 0x77, 0x5f, 0x73, + 0x74, 0x61, 0x74, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x13, 0x2e, 0x74, 0x66, 0x70, + 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x52, 0x61, 0x77, 0x53, 0x74, 0x61, 0x74, 0x65, 0x52, + 0x08, 0x72, 0x61, 0x77, 0x53, 0x74, 0x61, 0x74, 0x65, 0x1a, 0x83, 0x01, 0x0a, 0x08, 0x52, 0x65, + 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x3e, 0x0a, 0x0e, 0x75, 0x70, 0x67, 0x72, 0x61, 0x64, + 0x65, 0x64, 0x5f, 0x73, 0x74, 0x61, 0x74, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x17, + 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x79, 0x6e, 0x61, 0x6d, + 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x52, 0x0d, 0x75, 0x70, 0x67, 0x72, 0x61, 0x64, 0x65, + 0x64, 0x53, 0x74, 0x61, 0x74, 0x65, 0x12, 0x37, 0x0a, 0x0b, 0x64, 0x69, 0x61, 0x67, 0x6e, 0x6f, + 0x73, 0x74, 0x69, 0x63, 0x73, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x15, 0x2e, 0x74, 0x66, + 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, + 0x69, 0x63, 0x52, 0x0b, 0x64, 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, 0x73, 0x22, + 0xb6, 0x01, 0x0a, 0x16, 0x56, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x65, 0x52, 0x65, 0x73, 0x6f, + 0x75, 0x72, 0x63, 0x65, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x1a, 0x57, 0x0a, 0x07, 0x52, 0x65, + 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x1b, 0x0a, 0x09, 0x74, 0x79, 0x70, 0x65, 0x5f, 0x6e, 0x61, + 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x74, 0x79, 0x70, 0x65, 0x4e, 0x61, + 0x6d, 0x65, 0x12, 0x2f, 0x0a, 0x06, 0x63, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x17, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, - 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x52, 0x0c, 0x63, 0x75, 0x72, - 0x72, 0x65, 0x6e, 0x74, 0x53, 0x74, 0x61, 0x74, 0x65, 0x12, 0x18, 0x0a, 0x07, 0x70, 0x72, 0x69, - 0x76, 0x61, 0x74, 
0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x07, 0x70, 0x72, 0x69, 0x76, - 0x61, 0x74, 0x65, 0x12, 0x3c, 0x0a, 0x0d, 0x70, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x5f, - 0x6d, 0x65, 0x74, 0x61, 0x18, 0x04, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x17, 0x2e, 0x74, 0x66, 0x70, + 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x52, 0x06, 0x63, 0x6f, 0x6e, + 0x66, 0x69, 0x67, 0x1a, 0x43, 0x0a, 0x08, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, + 0x37, 0x0a, 0x0b, 0x64, 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, 0x73, 0x18, 0x01, + 0x20, 0x03, 0x28, 0x0b, 0x32, 0x15, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, + 0x2e, 0x44, 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, 0x52, 0x0b, 0x64, 0x69, 0x61, + 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, 0x73, 0x22, 0xba, 0x01, 0x0a, 0x1a, 0x56, 0x61, 0x6c, + 0x69, 0x64, 0x61, 0x74, 0x65, 0x44, 0x61, 0x74, 0x61, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, + 0x65, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x1a, 0x57, 0x0a, 0x07, 0x52, 0x65, 0x71, 0x75, 0x65, + 0x73, 0x74, 0x12, 0x1b, 0x0a, 0x09, 0x74, 0x79, 0x70, 0x65, 0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x18, + 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x74, 0x79, 0x70, 0x65, 0x4e, 0x61, 0x6d, 0x65, 0x12, + 0x2f, 0x0a, 0x06, 0x63, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, + 0x17, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x79, 0x6e, 0x61, + 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x52, 0x06, 0x63, 0x6f, 0x6e, 0x66, 0x69, 0x67, + 0x1a, 0x43, 0x0a, 0x08, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x37, 0x0a, 0x0b, + 0x64, 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, 0x73, 0x18, 0x01, 0x20, 0x03, 0x28, + 0x0b, 0x32, 0x15, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x69, + 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, 0x52, 0x0b, 0x64, 0x69, 0x61, 0x67, 0x6e, 0x6f, + 0x73, 0x74, 0x69, 0x63, 0x73, 0x22, 0xc1, 0x01, 0x0a, 0x11, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, + 0x75, 0x72, 0x65, 0x50, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x1a, 0x67, 0x0a, 0x07, 0x52, + 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x2b, 0x0a, 0x11, 0x74, 0x65, 0x72, 0x72, 0x61, 0x66, + 0x6f, 0x72, 0x6d, 0x5f, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x18, 0x01, 0x20, 0x01, 0x28, + 0x09, 0x52, 0x10, 0x74, 0x65, 0x72, 0x72, 0x61, 0x66, 0x6f, 0x72, 0x6d, 0x56, 0x65, 0x72, 0x73, + 0x69, 0x6f, 0x6e, 0x12, 0x2f, 0x0a, 0x06, 0x63, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x18, 0x02, 0x20, + 0x01, 0x28, 0x0b, 0x32, 0x17, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, + 0x44, 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x52, 0x06, 0x63, 0x6f, + 0x6e, 0x66, 0x69, 0x67, 0x1a, 0x43, 0x0a, 0x08, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, + 0x12, 0x37, 0x0a, 0x0b, 0x64, 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, 0x73, 0x18, + 0x01, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x15, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, + 0x36, 0x2e, 0x44, 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, 0x52, 0x0b, 0x64, 0x69, + 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, 0x73, 0x22, 0xe3, 0x02, 0x0a, 0x0c, 0x52, 0x65, + 0x61, 0x64, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x1a, 0xbc, 0x01, 0x0a, 0x07, 0x52, + 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x1b, 0x0a, 0x09, 0x74, 0x79, 0x70, 0x65, 0x5f, 0x6e, + 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x74, 0x79, 0x70, 0x65, 0x4e, + 0x61, 0x6d, 0x65, 0x12, 0x3c, 0x0a, 0x0d, 
0x63, 0x75, 0x72, 0x72, 0x65, 0x6e, 0x74, 0x5f, 0x73, + 0x74, 0x61, 0x74, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x17, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, - 0x6c, 0x75, 0x65, 0x52, 0x0c, 0x70, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x4d, 0x65, 0x74, - 0x61, 0x1a, 0x93, 0x01, 0x0a, 0x08, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x34, - 0x0a, 0x09, 0x6e, 0x65, 0x77, 0x5f, 0x73, 0x74, 0x61, 0x74, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, - 0x0b, 0x32, 0x17, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x79, - 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x52, 0x08, 0x6e, 0x65, 0x77, 0x53, - 0x74, 0x61, 0x74, 0x65, 0x12, 0x37, 0x0a, 0x0b, 0x64, 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, - 0x69, 0x63, 0x73, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x15, 0x2e, 0x74, 0x66, 0x70, 0x6c, - 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, - 0x52, 0x0b, 0x64, 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, 0x73, 0x12, 0x18, 0x0a, - 0x07, 0x70, 0x72, 0x69, 0x76, 0x61, 0x74, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x07, - 0x70, 0x72, 0x69, 0x76, 0x61, 0x74, 0x65, 0x22, 0xc4, 0x04, 0x0a, 0x12, 0x50, 0x6c, 0x61, 0x6e, - 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x43, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x1a, 0xbb, + 0x6c, 0x75, 0x65, 0x52, 0x0c, 0x63, 0x75, 0x72, 0x72, 0x65, 0x6e, 0x74, 0x53, 0x74, 0x61, 0x74, + 0x65, 0x12, 0x18, 0x0a, 0x07, 0x70, 0x72, 0x69, 0x76, 0x61, 0x74, 0x65, 0x18, 0x03, 0x20, 0x01, + 0x28, 0x0c, 0x52, 0x07, 0x70, 0x72, 0x69, 0x76, 0x61, 0x74, 0x65, 0x12, 0x3c, 0x0a, 0x0d, 0x70, + 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x5f, 0x6d, 0x65, 0x74, 0x61, 0x18, 0x04, 0x20, 0x01, + 0x28, 0x0b, 0x32, 0x17, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, + 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x52, 0x0c, 0x70, 0x72, 0x6f, + 0x76, 0x69, 0x64, 0x65, 0x72, 0x4d, 0x65, 0x74, 0x61, 0x1a, 0x93, 0x01, 0x0a, 0x08, 0x52, 0x65, + 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x34, 0x0a, 0x09, 0x6e, 0x65, 0x77, 0x5f, 0x73, 0x74, + 0x61, 0x74, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x17, 0x2e, 0x74, 0x66, 0x70, 0x6c, + 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, + 0x75, 0x65, 0x52, 0x08, 0x6e, 0x65, 0x77, 0x53, 0x74, 0x61, 0x74, 0x65, 0x12, 0x37, 0x0a, 0x0b, + 0x64, 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, 0x73, 0x18, 0x02, 0x20, 0x03, 0x28, + 0x0b, 0x32, 0x15, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x69, + 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, 0x52, 0x0b, 0x64, 0x69, 0x61, 0x67, 0x6e, 0x6f, + 0x73, 0x74, 0x69, 0x63, 0x73, 0x12, 0x18, 0x0a, 0x07, 0x70, 0x72, 0x69, 0x76, 0x61, 0x74, 0x65, + 0x18, 0x03, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x07, 0x70, 0x72, 0x69, 0x76, 0x61, 0x74, 0x65, 0x22, + 0xc4, 0x04, 0x0a, 0x12, 0x50, 0x6c, 0x61, 0x6e, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, + 0x43, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x1a, 0xbb, 0x02, 0x0a, 0x07, 0x52, 0x65, 0x71, 0x75, 0x65, + 0x73, 0x74, 0x12, 0x1b, 0x0a, 0x09, 0x74, 0x79, 0x70, 0x65, 0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x18, + 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x74, 0x79, 0x70, 0x65, 0x4e, 0x61, 0x6d, 0x65, 0x12, + 0x38, 0x0a, 0x0b, 0x70, 0x72, 0x69, 0x6f, 0x72, 0x5f, 0x73, 0x74, 0x61, 0x74, 0x65, 0x18, 0x02, + 0x20, 0x01, 0x28, 0x0b, 0x32, 0x17, 0x2e, 0x74, 0x66, 0x70, 0x6c, 
0x75, 0x67, 0x69, 0x6e, 0x36, + 0x2e, 0x44, 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x52, 0x0a, 0x70, + 0x72, 0x69, 0x6f, 0x72, 0x53, 0x74, 0x61, 0x74, 0x65, 0x12, 0x45, 0x0a, 0x12, 0x70, 0x72, 0x6f, + 0x70, 0x6f, 0x73, 0x65, 0x64, 0x5f, 0x6e, 0x65, 0x77, 0x5f, 0x73, 0x74, 0x61, 0x74, 0x65, 0x18, + 0x03, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x17, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, + 0x36, 0x2e, 0x44, 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x52, 0x10, + 0x70, 0x72, 0x6f, 0x70, 0x6f, 0x73, 0x65, 0x64, 0x4e, 0x65, 0x77, 0x53, 0x74, 0x61, 0x74, 0x65, + 0x12, 0x2f, 0x0a, 0x06, 0x63, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x18, 0x04, 0x20, 0x01, 0x28, 0x0b, + 0x32, 0x17, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x79, 0x6e, + 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x52, 0x06, 0x63, 0x6f, 0x6e, 0x66, 0x69, + 0x67, 0x12, 0x23, 0x0a, 0x0d, 0x70, 0x72, 0x69, 0x6f, 0x72, 0x5f, 0x70, 0x72, 0x69, 0x76, 0x61, + 0x74, 0x65, 0x18, 0x05, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x0c, 0x70, 0x72, 0x69, 0x6f, 0x72, 0x50, + 0x72, 0x69, 0x76, 0x61, 0x74, 0x65, 0x12, 0x3c, 0x0a, 0x0d, 0x70, 0x72, 0x6f, 0x76, 0x69, 0x64, + 0x65, 0x72, 0x5f, 0x6d, 0x65, 0x74, 0x61, 0x18, 0x06, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x17, 0x2e, + 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x79, 0x6e, 0x61, 0x6d, 0x69, + 0x63, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x52, 0x0c, 0x70, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, + 0x4d, 0x65, 0x74, 0x61, 0x1a, 0xef, 0x01, 0x0a, 0x08, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, + 0x65, 0x12, 0x3c, 0x0a, 0x0d, 0x70, 0x6c, 0x61, 0x6e, 0x6e, 0x65, 0x64, 0x5f, 0x73, 0x74, 0x61, + 0x74, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x17, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, + 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, + 0x65, 0x52, 0x0c, 0x70, 0x6c, 0x61, 0x6e, 0x6e, 0x65, 0x64, 0x53, 0x74, 0x61, 0x74, 0x65, 0x12, + 0x43, 0x0a, 0x10, 0x72, 0x65, 0x71, 0x75, 0x69, 0x72, 0x65, 0x73, 0x5f, 0x72, 0x65, 0x70, 0x6c, + 0x61, 0x63, 0x65, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x18, 0x2e, 0x74, 0x66, 0x70, 0x6c, + 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x41, 0x74, 0x74, 0x72, 0x69, 0x62, 0x75, 0x74, 0x65, 0x50, + 0x61, 0x74, 0x68, 0x52, 0x0f, 0x72, 0x65, 0x71, 0x75, 0x69, 0x72, 0x65, 0x73, 0x52, 0x65, 0x70, + 0x6c, 0x61, 0x63, 0x65, 0x12, 0x27, 0x0a, 0x0f, 0x70, 0x6c, 0x61, 0x6e, 0x6e, 0x65, 0x64, 0x5f, + 0x70, 0x72, 0x69, 0x76, 0x61, 0x74, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x0e, 0x70, + 0x6c, 0x61, 0x6e, 0x6e, 0x65, 0x64, 0x50, 0x72, 0x69, 0x76, 0x61, 0x74, 0x65, 0x12, 0x37, 0x0a, + 0x0b, 0x64, 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, 0x73, 0x18, 0x04, 0x20, 0x03, + 0x28, 0x0b, 0x32, 0x15, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, + 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, 0x52, 0x0b, 0x64, 0x69, 0x61, 0x67, 0x6e, + 0x6f, 0x73, 0x74, 0x69, 0x63, 0x73, 0x22, 0xe4, 0x03, 0x0a, 0x13, 0x41, 0x70, 0x70, 0x6c, 0x79, + 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x43, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x1a, 0xb6, 0x02, 0x0a, 0x07, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x1b, 0x0a, 0x09, 0x74, 0x79, 0x70, 0x65, 0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x74, 0x79, 0x70, 0x65, 0x4e, 0x61, 0x6d, 0x65, 0x12, 0x38, 0x0a, 0x0b, 0x70, 0x72, 0x69, 0x6f, 0x72, 0x5f, 0x73, 0x74, 0x61, 0x74, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x17, 0x2e, 0x74, 
0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x52, 0x0a, 0x70, 0x72, 0x69, 0x6f, 0x72, 0x53, 0x74, 0x61, 0x74, - 0x65, 0x12, 0x45, 0x0a, 0x12, 0x70, 0x72, 0x6f, 0x70, 0x6f, 0x73, 0x65, 0x64, 0x5f, 0x6e, 0x65, - 0x77, 0x5f, 0x73, 0x74, 0x61, 0x74, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x17, 0x2e, - 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x79, 0x6e, 0x61, 0x6d, 0x69, - 0x63, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x52, 0x10, 0x70, 0x72, 0x6f, 0x70, 0x6f, 0x73, 0x65, 0x64, - 0x4e, 0x65, 0x77, 0x53, 0x74, 0x61, 0x74, 0x65, 0x12, 0x2f, 0x0a, 0x06, 0x63, 0x6f, 0x6e, 0x66, - 0x69, 0x67, 0x18, 0x04, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x17, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, + 0x65, 0x12, 0x3c, 0x0a, 0x0d, 0x70, 0x6c, 0x61, 0x6e, 0x6e, 0x65, 0x64, 0x5f, 0x73, 0x74, 0x61, + 0x74, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x17, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, - 0x65, 0x52, 0x06, 0x63, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x12, 0x23, 0x0a, 0x0d, 0x70, 0x72, 0x69, - 0x6f, 0x72, 0x5f, 0x70, 0x72, 0x69, 0x76, 0x61, 0x74, 0x65, 0x18, 0x05, 0x20, 0x01, 0x28, 0x0c, - 0x52, 0x0c, 0x70, 0x72, 0x69, 0x6f, 0x72, 0x50, 0x72, 0x69, 0x76, 0x61, 0x74, 0x65, 0x12, 0x3c, - 0x0a, 0x0d, 0x70, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x5f, 0x6d, 0x65, 0x74, 0x61, 0x18, - 0x06, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x17, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, - 0x36, 0x2e, 0x44, 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x52, 0x0c, - 0x70, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x4d, 0x65, 0x74, 0x61, 0x1a, 0xef, 0x01, 0x0a, - 0x08, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x3c, 0x0a, 0x0d, 0x70, 0x6c, 0x61, - 0x6e, 0x6e, 0x65, 0x64, 0x5f, 0x73, 0x74, 0x61, 0x74, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, - 0x32, 0x17, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x79, 0x6e, - 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x52, 0x0c, 0x70, 0x6c, 0x61, 0x6e, 0x6e, - 0x65, 0x64, 0x53, 0x74, 0x61, 0x74, 0x65, 0x12, 0x43, 0x0a, 0x10, 0x72, 0x65, 0x71, 0x75, 0x69, - 0x72, 0x65, 0x73, 0x5f, 0x72, 0x65, 0x70, 0x6c, 0x61, 0x63, 0x65, 0x18, 0x02, 0x20, 0x03, 0x28, - 0x0b, 0x32, 0x18, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x41, 0x74, - 0x74, 0x72, 0x69, 0x62, 0x75, 0x74, 0x65, 0x50, 0x61, 0x74, 0x68, 0x52, 0x0f, 0x72, 0x65, 0x71, - 0x75, 0x69, 0x72, 0x65, 0x73, 0x52, 0x65, 0x70, 0x6c, 0x61, 0x63, 0x65, 0x12, 0x27, 0x0a, 0x0f, - 0x70, 0x6c, 0x61, 0x6e, 0x6e, 0x65, 0x64, 0x5f, 0x70, 0x72, 0x69, 0x76, 0x61, 0x74, 0x65, 0x18, - 0x03, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x0e, 0x70, 0x6c, 0x61, 0x6e, 0x6e, 0x65, 0x64, 0x50, 0x72, - 0x69, 0x76, 0x61, 0x74, 0x65, 0x12, 0x37, 0x0a, 0x0b, 0x64, 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, - 0x74, 0x69, 0x63, 0x73, 0x18, 0x04, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x15, 0x2e, 0x74, 0x66, 0x70, - 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, - 0x63, 0x52, 0x0b, 0x64, 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, 0x73, 0x22, 0xe4, - 0x03, 0x0a, 0x13, 0x41, 0x70, 0x70, 0x6c, 0x79, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, - 0x43, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x1a, 0xb6, 0x02, 0x0a, 0x07, 0x52, 0x65, 0x71, 0x75, 0x65, - 0x73, 0x74, 0x12, 0x1b, 0x0a, 0x09, 0x74, 0x79, 0x70, 0x65, 0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x18, - 0x01, 0x20, 0x01, 0x28, 0x09, 
0x52, 0x08, 0x74, 0x79, 0x70, 0x65, 0x4e, 0x61, 0x6d, 0x65, 0x12, - 0x38, 0x0a, 0x0b, 0x70, 0x72, 0x69, 0x6f, 0x72, 0x5f, 0x73, 0x74, 0x61, 0x74, 0x65, 0x18, 0x02, - 0x20, 0x01, 0x28, 0x0b, 0x32, 0x17, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, - 0x2e, 0x44, 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x52, 0x0a, 0x70, - 0x72, 0x69, 0x6f, 0x72, 0x53, 0x74, 0x61, 0x74, 0x65, 0x12, 0x3c, 0x0a, 0x0d, 0x70, 0x6c, 0x61, - 0x6e, 0x6e, 0x65, 0x64, 0x5f, 0x73, 0x74, 0x61, 0x74, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0b, + 0x65, 0x52, 0x0c, 0x70, 0x6c, 0x61, 0x6e, 0x6e, 0x65, 0x64, 0x53, 0x74, 0x61, 0x74, 0x65, 0x12, + 0x2f, 0x0a, 0x06, 0x63, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x18, 0x04, 0x20, 0x01, 0x28, 0x0b, 0x32, + 0x17, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x79, 0x6e, 0x61, + 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x52, 0x06, 0x63, 0x6f, 0x6e, 0x66, 0x69, 0x67, + 0x12, 0x27, 0x0a, 0x0f, 0x70, 0x6c, 0x61, 0x6e, 0x6e, 0x65, 0x64, 0x5f, 0x70, 0x72, 0x69, 0x76, + 0x61, 0x74, 0x65, 0x18, 0x05, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x0e, 0x70, 0x6c, 0x61, 0x6e, 0x6e, + 0x65, 0x64, 0x50, 0x72, 0x69, 0x76, 0x61, 0x74, 0x65, 0x12, 0x3c, 0x0a, 0x0d, 0x70, 0x72, 0x6f, + 0x76, 0x69, 0x64, 0x65, 0x72, 0x5f, 0x6d, 0x65, 0x74, 0x61, 0x18, 0x06, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x17, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x79, 0x6e, - 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x52, 0x0c, 0x70, 0x6c, 0x61, 0x6e, 0x6e, - 0x65, 0x64, 0x53, 0x74, 0x61, 0x74, 0x65, 0x12, 0x2f, 0x0a, 0x06, 0x63, 0x6f, 0x6e, 0x66, 0x69, - 0x67, 0x18, 0x04, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x17, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, + 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x52, 0x0c, 0x70, 0x72, 0x6f, 0x76, 0x69, + 0x64, 0x65, 0x72, 0x4d, 0x65, 0x74, 0x61, 0x1a, 0x93, 0x01, 0x0a, 0x08, 0x52, 0x65, 0x73, 0x70, + 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x34, 0x0a, 0x09, 0x6e, 0x65, 0x77, 0x5f, 0x73, 0x74, 0x61, 0x74, + 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x17, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, 0x65, - 0x52, 0x06, 0x63, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x12, 0x27, 0x0a, 0x0f, 0x70, 0x6c, 0x61, 0x6e, - 0x6e, 0x65, 0x64, 0x5f, 0x70, 0x72, 0x69, 0x76, 0x61, 0x74, 0x65, 0x18, 0x05, 0x20, 0x01, 0x28, - 0x0c, 0x52, 0x0e, 0x70, 0x6c, 0x61, 0x6e, 0x6e, 0x65, 0x64, 0x50, 0x72, 0x69, 0x76, 0x61, 0x74, - 0x65, 0x12, 0x3c, 0x0a, 0x0d, 0x70, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x5f, 0x6d, 0x65, - 0x74, 0x61, 0x18, 0x06, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x17, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, - 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, - 0x65, 0x52, 0x0c, 0x70, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x4d, 0x65, 0x74, 0x61, 0x1a, - 0x93, 0x01, 0x0a, 0x08, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x34, 0x0a, 0x09, - 0x6e, 0x65, 0x77, 0x5f, 0x73, 0x74, 0x61, 0x74, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, - 0x17, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x79, 0x6e, 0x61, - 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x52, 0x08, 0x6e, 0x65, 0x77, 0x53, 0x74, 0x61, - 0x74, 0x65, 0x12, 0x18, 0x0a, 0x07, 0x70, 0x72, 0x69, 0x76, 0x61, 0x74, 0x65, 0x18, 0x02, 0x20, - 0x01, 0x28, 0x0c, 0x52, 0x07, 0x70, 0x72, 0x69, 0x76, 0x61, 0x74, 0x65, 0x12, 0x37, 0x0a, 0x0b, - 0x64, 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, 
0x73, 0x18, 0x03, 0x20, 0x03, 0x28, - 0x0b, 0x32, 0x15, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x69, - 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, 0x52, 0x0b, 0x64, 0x69, 0x61, 0x67, 0x6e, 0x6f, - 0x73, 0x74, 0x69, 0x63, 0x73, 0x22, 0xed, 0x02, 0x0a, 0x13, 0x49, 0x6d, 0x70, 0x6f, 0x72, 0x74, - 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x53, 0x74, 0x61, 0x74, 0x65, 0x1a, 0x36, 0x0a, - 0x07, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x1b, 0x0a, 0x09, 0x74, 0x79, 0x70, 0x65, - 0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x74, 0x79, 0x70, - 0x65, 0x4e, 0x61, 0x6d, 0x65, 0x12, 0x0e, 0x0a, 0x02, 0x69, 0x64, 0x18, 0x02, 0x20, 0x01, 0x28, - 0x09, 0x52, 0x02, 0x69, 0x64, 0x1a, 0x78, 0x0a, 0x10, 0x49, 0x6d, 0x70, 0x6f, 0x72, 0x74, 0x65, - 0x64, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x12, 0x1b, 0x0a, 0x09, 0x74, 0x79, 0x70, - 0x65, 0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x74, 0x79, - 0x70, 0x65, 0x4e, 0x61, 0x6d, 0x65, 0x12, 0x2d, 0x0a, 0x05, 0x73, 0x74, 0x61, 0x74, 0x65, 0x18, - 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x17, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, - 0x36, 0x2e, 0x44, 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x52, 0x05, - 0x73, 0x74, 0x61, 0x74, 0x65, 0x12, 0x18, 0x0a, 0x07, 0x70, 0x72, 0x69, 0x76, 0x61, 0x74, 0x65, - 0x18, 0x03, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x07, 0x70, 0x72, 0x69, 0x76, 0x61, 0x74, 0x65, 0x1a, - 0xa3, 0x01, 0x0a, 0x08, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x5e, 0x0a, 0x12, - 0x69, 0x6d, 0x70, 0x6f, 0x72, 0x74, 0x65, 0x64, 0x5f, 0x72, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, - 0x65, 0x73, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x2f, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, - 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x49, 0x6d, 0x70, 0x6f, 0x72, 0x74, 0x52, 0x65, 0x73, 0x6f, 0x75, - 0x72, 0x63, 0x65, 0x53, 0x74, 0x61, 0x74, 0x65, 0x2e, 0x49, 0x6d, 0x70, 0x6f, 0x72, 0x74, 0x65, - 0x64, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x52, 0x11, 0x69, 0x6d, 0x70, 0x6f, 0x72, - 0x74, 0x65, 0x64, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x73, 0x12, 0x37, 0x0a, 0x0b, - 0x64, 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, 0x73, 0x18, 0x02, 0x20, 0x03, 0x28, - 0x0b, 0x32, 0x15, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x69, - 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, 0x52, 0x0b, 0x64, 0x69, 0x61, 0x67, 0x6e, 0x6f, - 0x73, 0x74, 0x69, 0x63, 0x73, 0x22, 0x9c, 0x02, 0x0a, 0x0e, 0x52, 0x65, 0x61, 0x64, 0x44, 0x61, - 0x74, 0x61, 0x53, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x1a, 0x95, 0x01, 0x0a, 0x07, 0x52, 0x65, 0x71, - 0x75, 0x65, 0x73, 0x74, 0x12, 0x1b, 0x0a, 0x09, 0x74, 0x79, 0x70, 0x65, 0x5f, 0x6e, 0x61, 0x6d, - 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x74, 0x79, 0x70, 0x65, 0x4e, 0x61, 0x6d, - 0x65, 0x12, 0x2f, 0x0a, 0x06, 0x63, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x18, 0x02, 0x20, 0x01, 0x28, - 0x0b, 0x32, 0x17, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x79, - 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x52, 0x06, 0x63, 0x6f, 0x6e, 0x66, - 0x69, 0x67, 0x12, 0x3c, 0x0a, 0x0d, 0x70, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x5f, 0x6d, - 0x65, 0x74, 0x61, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x17, 0x2e, 0x74, 0x66, 0x70, 0x6c, + 0x52, 0x08, 0x6e, 0x65, 0x77, 0x53, 0x74, 0x61, 0x74, 0x65, 0x12, 0x18, 0x0a, 0x07, 0x70, 0x72, + 0x69, 0x76, 0x61, 0x74, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x07, 0x70, 
0x72, 0x69, + 0x76, 0x61, 0x74, 0x65, 0x12, 0x37, 0x0a, 0x0b, 0x64, 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, + 0x69, 0x63, 0x73, 0x18, 0x03, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x15, 0x2e, 0x74, 0x66, 0x70, 0x6c, + 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, + 0x52, 0x0b, 0x64, 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, 0x73, 0x22, 0xed, 0x02, + 0x0a, 0x13, 0x49, 0x6d, 0x70, 0x6f, 0x72, 0x74, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, + 0x53, 0x74, 0x61, 0x74, 0x65, 0x1a, 0x36, 0x0a, 0x07, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, + 0x12, 0x1b, 0x0a, 0x09, 0x74, 0x79, 0x70, 0x65, 0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, + 0x01, 0x28, 0x09, 0x52, 0x08, 0x74, 0x79, 0x70, 0x65, 0x4e, 0x61, 0x6d, 0x65, 0x12, 0x0e, 0x0a, + 0x02, 0x69, 0x64, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x02, 0x69, 0x64, 0x1a, 0x78, 0x0a, + 0x10, 0x49, 0x6d, 0x70, 0x6f, 0x72, 0x74, 0x65, 0x64, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, + 0x65, 0x12, 0x1b, 0x0a, 0x09, 0x74, 0x79, 0x70, 0x65, 0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, + 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x74, 0x79, 0x70, 0x65, 0x4e, 0x61, 0x6d, 0x65, 0x12, 0x2d, + 0x0a, 0x05, 0x73, 0x74, 0x61, 0x74, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x17, 0x2e, + 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x79, 0x6e, 0x61, 0x6d, 0x69, + 0x63, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x52, 0x05, 0x73, 0x74, 0x61, 0x74, 0x65, 0x12, 0x18, 0x0a, + 0x07, 0x70, 0x72, 0x69, 0x76, 0x61, 0x74, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x07, + 0x70, 0x72, 0x69, 0x76, 0x61, 0x74, 0x65, 0x1a, 0xa3, 0x01, 0x0a, 0x08, 0x52, 0x65, 0x73, 0x70, + 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x5e, 0x0a, 0x12, 0x69, 0x6d, 0x70, 0x6f, 0x72, 0x74, 0x65, 0x64, + 0x5f, 0x72, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x73, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b, + 0x32, 0x2f, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x49, 0x6d, 0x70, + 0x6f, 0x72, 0x74, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x53, 0x74, 0x61, 0x74, 0x65, + 0x2e, 0x49, 0x6d, 0x70, 0x6f, 0x72, 0x74, 0x65, 0x64, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, + 0x65, 0x52, 0x11, 0x69, 0x6d, 0x70, 0x6f, 0x72, 0x74, 0x65, 0x64, 0x52, 0x65, 0x73, 0x6f, 0x75, + 0x72, 0x63, 0x65, 0x73, 0x12, 0x37, 0x0a, 0x0b, 0x64, 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, + 0x69, 0x63, 0x73, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x15, 0x2e, 0x74, 0x66, 0x70, 0x6c, + 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, + 0x52, 0x0b, 0x64, 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, 0x73, 0x22, 0x9c, 0x02, + 0x0a, 0x0e, 0x52, 0x65, 0x61, 0x64, 0x44, 0x61, 0x74, 0x61, 0x53, 0x6f, 0x75, 0x72, 0x63, 0x65, + 0x1a, 0x95, 0x01, 0x0a, 0x07, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x1b, 0x0a, 0x09, + 0x74, 0x79, 0x70, 0x65, 0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, + 0x08, 0x74, 0x79, 0x70, 0x65, 0x4e, 0x61, 0x6d, 0x65, 0x12, 0x2f, 0x0a, 0x06, 0x63, 0x6f, 0x6e, + 0x66, 0x69, 0x67, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x17, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, - 0x75, 0x65, 0x52, 0x0c, 0x70, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x4d, 0x65, 0x74, 0x61, - 0x1a, 0x72, 0x0a, 0x08, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x2d, 0x0a, 0x05, - 0x73, 0x74, 0x61, 0x74, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x17, 0x2e, 0x74, 0x66, - 0x70, 0x6c, 
0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, - 0x61, 0x6c, 0x75, 0x65, 0x52, 0x05, 0x73, 0x74, 0x61, 0x74, 0x65, 0x12, 0x37, 0x0a, 0x0b, 0x64, - 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, 0x73, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0b, - 0x32, 0x15, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x69, 0x61, - 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, 0x52, 0x0b, 0x64, 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, - 0x74, 0x69, 0x63, 0x73, 0x2a, 0x25, 0x0a, 0x0a, 0x53, 0x74, 0x72, 0x69, 0x6e, 0x67, 0x4b, 0x69, - 0x6e, 0x64, 0x12, 0x09, 0x0a, 0x05, 0x50, 0x4c, 0x41, 0x49, 0x4e, 0x10, 0x00, 0x12, 0x0c, 0x0a, - 0x08, 0x4d, 0x41, 0x52, 0x4b, 0x44, 0x4f, 0x57, 0x4e, 0x10, 0x01, 0x32, 0xcc, 0x09, 0x0a, 0x08, - 0x50, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x12, 0x60, 0x0a, 0x11, 0x47, 0x65, 0x74, 0x50, - 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x53, 0x63, 0x68, 0x65, 0x6d, 0x61, 0x12, 0x24, 0x2e, - 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x47, 0x65, 0x74, 0x50, 0x72, 0x6f, - 0x76, 0x69, 0x64, 0x65, 0x72, 0x53, 0x63, 0x68, 0x65, 0x6d, 0x61, 0x2e, 0x52, 0x65, 0x71, 0x75, - 0x65, 0x73, 0x74, 0x1a, 0x25, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, - 0x47, 0x65, 0x74, 0x50, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x53, 0x63, 0x68, 0x65, 0x6d, - 0x61, 0x2e, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x6f, 0x0a, 0x16, 0x56, 0x61, - 0x6c, 0x69, 0x64, 0x61, 0x74, 0x65, 0x50, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x43, 0x6f, - 0x6e, 0x66, 0x69, 0x67, 0x12, 0x29, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, - 0x2e, 0x56, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x65, 0x50, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, - 0x72, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x2e, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, - 0x2a, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x56, 0x61, 0x6c, 0x69, - 0x64, 0x61, 0x74, 0x65, 0x50, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x43, 0x6f, 0x6e, 0x66, - 0x69, 0x67, 0x2e, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x6f, 0x0a, 0x16, 0x56, - 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x65, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x43, - 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x12, 0x29, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, - 0x36, 0x2e, 0x56, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x65, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, - 0x63, 0x65, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x2e, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, - 0x1a, 0x2a, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x56, 0x61, 0x6c, - 0x69, 0x64, 0x61, 0x74, 0x65, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x43, 0x6f, 0x6e, - 0x66, 0x69, 0x67, 0x2e, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x7b, 0x0a, 0x1a, - 0x56, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x65, 0x44, 0x61, 0x74, 0x61, 0x52, 0x65, 0x73, 0x6f, - 0x75, 0x72, 0x63, 0x65, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x12, 0x2d, 0x2e, 0x74, 0x66, 0x70, - 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x56, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x65, 0x44, - 0x61, 0x74, 0x61, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x43, 0x6f, 0x6e, 0x66, 0x69, - 0x67, 0x2e, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x2e, 0x2e, 0x74, 0x66, 0x70, 0x6c, - 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x56, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x65, 0x44, 0x61, - 0x74, 0x61, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, - 0x2e, 0x52, 0x65, 0x73, 0x70, 0x6f, 
0x6e, 0x73, 0x65, 0x12, 0x69, 0x0a, 0x14, 0x55, 0x70, 0x67, - 0x72, 0x61, 0x64, 0x65, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x53, 0x74, 0x61, 0x74, - 0x65, 0x12, 0x27, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x55, 0x70, - 0x67, 0x72, 0x61, 0x64, 0x65, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x53, 0x74, 0x61, - 0x74, 0x65, 0x2e, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x28, 0x2e, 0x74, 0x66, 0x70, - 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x55, 0x70, 0x67, 0x72, 0x61, 0x64, 0x65, 0x52, 0x65, - 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x53, 0x74, 0x61, 0x74, 0x65, 0x2e, 0x52, 0x65, 0x73, 0x70, - 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x60, 0x0a, 0x11, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x75, 0x72, - 0x65, 0x50, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x12, 0x24, 0x2e, 0x74, 0x66, 0x70, 0x6c, - 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x75, 0x72, 0x65, 0x50, - 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x2e, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, - 0x25, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x43, 0x6f, 0x6e, 0x66, - 0x69, 0x67, 0x75, 0x72, 0x65, 0x50, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x2e, 0x52, 0x65, - 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x51, 0x0a, 0x0c, 0x52, 0x65, 0x61, 0x64, 0x52, 0x65, - 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x12, 0x1f, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, - 0x6e, 0x36, 0x2e, 0x52, 0x65, 0x61, 0x64, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x2e, - 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x20, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, - 0x69, 0x6e, 0x36, 0x2e, 0x52, 0x65, 0x61, 0x64, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, - 0x2e, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x63, 0x0a, 0x12, 0x50, 0x6c, 0x61, - 0x6e, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x43, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x12, - 0x25, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x50, 0x6c, 0x61, 0x6e, - 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x43, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x2e, 0x52, - 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x26, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, - 0x6e, 0x36, 0x2e, 0x50, 0x6c, 0x61, 0x6e, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x43, - 0x68, 0x61, 0x6e, 0x67, 0x65, 0x2e, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x66, - 0x0a, 0x13, 0x41, 0x70, 0x70, 0x6c, 0x79, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x43, - 0x68, 0x61, 0x6e, 0x67, 0x65, 0x12, 0x26, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, - 0x36, 0x2e, 0x41, 0x70, 0x70, 0x6c, 0x79, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x43, - 0x68, 0x61, 0x6e, 0x67, 0x65, 0x2e, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x27, 0x2e, + 0x75, 0x65, 0x52, 0x06, 0x63, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x12, 0x3c, 0x0a, 0x0d, 0x70, 0x72, + 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x5f, 0x6d, 0x65, 0x74, 0x61, 0x18, 0x03, 0x20, 0x01, 0x28, + 0x0b, 0x32, 0x17, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x79, + 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x52, 0x0c, 0x70, 0x72, 0x6f, 0x76, + 0x69, 0x64, 0x65, 0x72, 0x4d, 0x65, 0x74, 0x61, 0x1a, 0x72, 0x0a, 0x08, 0x52, 0x65, 0x73, 0x70, + 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x2d, 0x0a, 0x05, 0x73, 0x74, 0x61, 0x74, 0x65, 0x18, 0x01, 0x20, + 0x01, 0x28, 0x0b, 0x32, 0x17, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, + 0x44, 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 
0x75, 0x65, 0x52, 0x05, 0x73, 0x74, + 0x61, 0x74, 0x65, 0x12, 0x37, 0x0a, 0x0b, 0x64, 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, + 0x63, 0x73, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x15, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, + 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x44, 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, 0x52, + 0x0b, 0x64, 0x69, 0x61, 0x67, 0x6e, 0x6f, 0x73, 0x74, 0x69, 0x63, 0x73, 0x2a, 0x25, 0x0a, 0x0a, + 0x53, 0x74, 0x72, 0x69, 0x6e, 0x67, 0x4b, 0x69, 0x6e, 0x64, 0x12, 0x09, 0x0a, 0x05, 0x50, 0x4c, + 0x41, 0x49, 0x4e, 0x10, 0x00, 0x12, 0x0c, 0x0a, 0x08, 0x4d, 0x41, 0x52, 0x4b, 0x44, 0x4f, 0x57, + 0x4e, 0x10, 0x01, 0x32, 0xcc, 0x09, 0x0a, 0x08, 0x50, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, + 0x12, 0x60, 0x0a, 0x11, 0x47, 0x65, 0x74, 0x50, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x53, + 0x63, 0x68, 0x65, 0x6d, 0x61, 0x12, 0x24, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, + 0x36, 0x2e, 0x47, 0x65, 0x74, 0x50, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x53, 0x63, 0x68, + 0x65, 0x6d, 0x61, 0x2e, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x25, 0x2e, 0x74, 0x66, + 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x47, 0x65, 0x74, 0x50, 0x72, 0x6f, 0x76, 0x69, + 0x64, 0x65, 0x72, 0x53, 0x63, 0x68, 0x65, 0x6d, 0x61, 0x2e, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, + 0x73, 0x65, 0x12, 0x6f, 0x0a, 0x16, 0x56, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x65, 0x50, 0x72, + 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x12, 0x29, 0x2e, 0x74, + 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x56, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74, + 0x65, 0x50, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x2e, + 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x2a, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, + 0x69, 0x6e, 0x36, 0x2e, 0x56, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x65, 0x50, 0x72, 0x6f, 0x76, + 0x69, 0x64, 0x65, 0x72, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x2e, 0x52, 0x65, 0x73, 0x70, 0x6f, + 0x6e, 0x73, 0x65, 0x12, 0x6f, 0x0a, 0x16, 0x56, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x65, 0x52, + 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x12, 0x29, 0x2e, + 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x56, 0x61, 0x6c, 0x69, 0x64, 0x61, + 0x74, 0x65, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, + 0x2e, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x2a, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, + 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x56, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x65, 0x52, 0x65, 0x73, + 0x6f, 0x75, 0x72, 0x63, 0x65, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x2e, 0x52, 0x65, 0x73, 0x70, + 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x7b, 0x0a, 0x1a, 0x56, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x65, + 0x44, 0x61, 0x74, 0x61, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x43, 0x6f, 0x6e, 0x66, + 0x69, 0x67, 0x12, 0x2d, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x56, + 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x65, 0x44, 0x61, 0x74, 0x61, 0x52, 0x65, 0x73, 0x6f, 0x75, + 0x72, 0x63, 0x65, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x2e, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, + 0x74, 0x1a, 0x2e, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x56, 0x61, + 0x6c, 0x69, 0x64, 0x61, 0x74, 0x65, 0x44, 0x61, 0x74, 0x61, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, + 0x63, 0x65, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x2e, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, + 0x65, 0x12, 0x69, 0x0a, 0x14, 0x55, 0x70, 0x67, 0x72, 0x61, 0x64, 0x65, 0x52, 0x65, 
0x73, 0x6f, + 0x75, 0x72, 0x63, 0x65, 0x53, 0x74, 0x61, 0x74, 0x65, 0x12, 0x27, 0x2e, 0x74, 0x66, 0x70, 0x6c, + 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x55, 0x70, 0x67, 0x72, 0x61, 0x64, 0x65, 0x52, 0x65, 0x73, + 0x6f, 0x75, 0x72, 0x63, 0x65, 0x53, 0x74, 0x61, 0x74, 0x65, 0x2e, 0x52, 0x65, 0x71, 0x75, 0x65, + 0x73, 0x74, 0x1a, 0x28, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x55, + 0x70, 0x67, 0x72, 0x61, 0x64, 0x65, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x53, 0x74, + 0x61, 0x74, 0x65, 0x2e, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x60, 0x0a, 0x11, + 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x75, 0x72, 0x65, 0x50, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, + 0x72, 0x12, 0x24, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x43, 0x6f, + 0x6e, 0x66, 0x69, 0x67, 0x75, 0x72, 0x65, 0x50, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x2e, + 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x25, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, + 0x69, 0x6e, 0x36, 0x2e, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x75, 0x72, 0x65, 0x50, 0x72, 0x6f, + 0x76, 0x69, 0x64, 0x65, 0x72, 0x2e, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x51, + 0x0a, 0x0c, 0x52, 0x65, 0x61, 0x64, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x12, 0x1f, + 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x52, 0x65, 0x61, 0x64, 0x52, + 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x2e, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, + 0x20, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x52, 0x65, 0x61, 0x64, + 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x2e, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, + 0x65, 0x12, 0x63, 0x0a, 0x12, 0x50, 0x6c, 0x61, 0x6e, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, + 0x65, 0x43, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x12, 0x25, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, + 0x69, 0x6e, 0x36, 0x2e, 0x50, 0x6c, 0x61, 0x6e, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, + 0x43, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x2e, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x26, + 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x50, 0x6c, 0x61, 0x6e, 0x52, + 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x43, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x2e, 0x52, 0x65, + 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x66, 0x0a, 0x13, 0x41, 0x70, 0x70, 0x6c, 0x79, 0x52, + 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x43, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x12, 0x26, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x41, 0x70, 0x70, 0x6c, 0x79, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x43, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x2e, 0x52, 0x65, - 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x66, 0x0a, 0x13, 0x49, 0x6d, 0x70, 0x6f, 0x72, 0x74, - 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x53, 0x74, 0x61, 0x74, 0x65, 0x12, 0x26, 0x2e, - 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x49, 0x6d, 0x70, 0x6f, 0x72, 0x74, - 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x53, 0x74, 0x61, 0x74, 0x65, 0x2e, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x27, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, + 0x36, 0x2e, 0x41, 0x70, 0x70, 0x6c, 0x79, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x43, + 0x68, 0x61, 0x6e, 0x67, 0x65, 0x2e, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x66, + 0x0a, 0x13, 0x49, 0x6d, 0x70, 0x6f, 0x72, 0x74, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, + 0x53, 0x74, 0x61, 0x74, 0x65, 0x12, 0x26, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x49, 
0x6d, 0x70, 0x6f, 0x72, 0x74, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, - 0x53, 0x74, 0x61, 0x74, 0x65, 0x2e, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x57, - 0x0a, 0x0e, 0x52, 0x65, 0x61, 0x64, 0x44, 0x61, 0x74, 0x61, 0x53, 0x6f, 0x75, 0x72, 0x63, 0x65, - 0x12, 0x21, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x52, 0x65, 0x61, - 0x64, 0x44, 0x61, 0x74, 0x61, 0x53, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x2e, 0x52, 0x65, 0x71, 0x75, - 0x65, 0x73, 0x74, 0x1a, 0x22, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, - 0x52, 0x65, 0x61, 0x64, 0x44, 0x61, 0x74, 0x61, 0x53, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x2e, 0x52, - 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x51, 0x0a, 0x0c, 0x53, 0x74, 0x6f, 0x70, 0x50, - 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x12, 0x1f, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, - 0x69, 0x6e, 0x36, 0x2e, 0x53, 0x74, 0x6f, 0x70, 0x50, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, - 0x2e, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x20, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, - 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x53, 0x74, 0x6f, 0x70, 0x50, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, - 0x72, 0x2e, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x42, 0x33, 0x5a, 0x31, 0x67, 0x69, - 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x68, 0x61, 0x73, 0x68, 0x69, 0x63, 0x6f, - 0x72, 0x70, 0x2f, 0x74, 0x65, 0x72, 0x72, 0x61, 0x66, 0x6f, 0x72, 0x6d, 0x2f, 0x69, 0x6e, 0x74, - 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x2f, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x62, - 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33, + 0x53, 0x74, 0x61, 0x74, 0x65, 0x2e, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x27, 0x2e, + 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x49, 0x6d, 0x70, 0x6f, 0x72, 0x74, + 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x53, 0x74, 0x61, 0x74, 0x65, 0x2e, 0x52, 0x65, + 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x57, 0x0a, 0x0e, 0x52, 0x65, 0x61, 0x64, 0x44, 0x61, + 0x74, 0x61, 0x53, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x12, 0x21, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, + 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x52, 0x65, 0x61, 0x64, 0x44, 0x61, 0x74, 0x61, 0x53, 0x6f, 0x75, + 0x72, 0x63, 0x65, 0x2e, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x22, 0x2e, 0x74, 0x66, + 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x52, 0x65, 0x61, 0x64, 0x44, 0x61, 0x74, 0x61, + 0x53, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x2e, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, + 0x51, 0x0a, 0x0c, 0x53, 0x74, 0x6f, 0x70, 0x50, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x12, + 0x1f, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x53, 0x74, 0x6f, 0x70, + 0x50, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x2e, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, + 0x1a, 0x20, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x2e, 0x53, 0x74, 0x6f, + 0x70, 0x50, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x2e, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, + 0x73, 0x65, 0x42, 0x33, 0x5a, 0x31, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, + 0x2f, 0x68, 0x61, 0x73, 0x68, 0x69, 0x63, 0x6f, 0x72, 0x70, 0x2f, 0x74, 0x65, 0x72, 0x72, 0x61, + 0x66, 0x6f, 0x72, 0x6d, 0x2f, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x2f, 0x74, 0x66, + 0x70, 0x6c, 0x75, 0x67, 0x69, 0x6e, 0x36, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33, } var ( diff --git a/internal/tfplugin6/tfplugin6.proto b/internal/tfplugin6/tfplugin6.proto index ae47060038b5..70bb64b942f0 120000 --- a/internal/tfplugin6/tfplugin6.proto +++ 
b/internal/tfplugin6/tfplugin6.proto @@ -1 +1 @@ -../../docs/plugin-protocol/tfplugin6.0.proto \ No newline at end of file +../../docs/plugin-protocol/tfplugin6.1.proto \ No newline at end of file From 9847eaa9cfb0be406845d04477e160f45d6cf5f5 Mon Sep 17 00:00:00 2001 From: James Bardin Date: Fri, 24 Sep 2021 12:26:00 -0400 Subject: [PATCH 106/644] remove usage of MinItems/MaxItems MinItems and MaxItems are not used on nested types in the protocol, so remove their usage in Terraform to prevent future confusion. --- internal/command/jsonprovider/attribute.go | 4 ---- internal/configs/configschema/decoder_spec.go | 6 +----- .../configs/configschema/internal_validate.go | 16 ++-------------- .../configschema/internal_validate_test.go | 15 --------------- internal/configs/configschema/schema.go | 7 ------- internal/plugin6/convert/schema.go | 4 ---- internal/plugin6/convert/schema_test.go | 2 -- 7 files changed, 3 insertions(+), 51 deletions(-) diff --git a/internal/command/jsonprovider/attribute.go b/internal/command/jsonprovider/attribute.go index 803e4e5f0a92..9425cd9e5893 100644 --- a/internal/command/jsonprovider/attribute.go +++ b/internal/command/jsonprovider/attribute.go @@ -22,8 +22,6 @@ type attribute struct { type nestedType struct { Attributes map[string]*attribute `json:"attributes,omitempty"` NestingMode string `json:"nesting_mode,omitempty"` - MinItems uint64 `json:"min_items,omitempty"` - MaxItems uint64 `json:"max_items,omitempty"` } func marshalStringKind(sk configschema.StringKind) string { @@ -55,8 +53,6 @@ func marshalAttribute(attr *configschema.Attribute) *attribute { if attr.NestedType != nil { nestedTy := nestedType{ - MinItems: uint64(attr.NestedType.MinItems), - MaxItems: uint64(attr.NestedType.MaxItems), NestingMode: nestingModeString(attr.NestedType.Nesting), } attrs := make(map[string]*attribute, len(attr.NestedType.Attributes)) diff --git a/internal/configs/configschema/decoder_spec.go b/internal/configs/configschema/decoder_spec.go index d127ccd6f4ae..d2d6616dd8e9 100644 --- a/internal/configs/configschema/decoder_spec.go +++ b/internal/configs/configschema/decoder_spec.go @@ -187,17 +187,13 @@ func (a *Attribute) decoderSpec(name string) hcldec.Spec { } if a.NestedType != nil { - // FIXME: a panic() is a bad UX. InternalValidate() can check Attribute - // schemas as well so a fix might be to call it when we get the schema - // from the provider in Context(). Since this could be a breaking - // change, we'd need to communicate well before adding that call. if a.Type != cty.NilType { panic("Invalid attribute schema: NestedType and Type cannot both be set. 
This is a bug in the provider.") } ty := a.NestedType.specType() ret.Type = ty - ret.Required = a.Required || a.NestedType.MinItems > 0 + ret.Required = a.Required return ret } diff --git a/internal/configs/configschema/internal_validate.go b/internal/configs/configschema/internal_validate.go index 2afa724e150a..8876672d793b 100644 --- a/internal/configs/configschema/internal_validate.go +++ b/internal/configs/configschema/internal_validate.go @@ -131,17 +131,9 @@ func (a *Attribute) internalValidate(name, prefix string) error { if a.NestedType != nil { switch a.NestedType.Nesting { - case NestingSingle: - switch { - case a.NestedType.MinItems != a.NestedType.MaxItems: - err = multierror.Append(err, fmt.Errorf("%s%s: MinItems and MaxItems must match in NestingSingle mode", prefix, name)) - case a.NestedType.MinItems < 0 || a.NestedType.MinItems > 1: - err = multierror.Append(err, fmt.Errorf("%s%s: MinItems and MaxItems must be set to either 0 or 1 in NestingSingle mode", prefix, name)) - } + case NestingSingle, NestingMap: + // no validations to perform case NestingList, NestingSet: - if a.NestedType.MinItems > a.NestedType.MaxItems && a.NestedType.MaxItems != 0 { - err = multierror.Append(err, fmt.Errorf("%s%s: MinItems must be less than or equal to MaxItems in %s mode", prefix, name, a.NestedType.Nesting)) - } if a.NestedType.Nesting == NestingSet { ety := a.NestedType.ImpliedType() if ety.HasDynamicTypes() { @@ -151,10 +143,6 @@ func (a *Attribute) internalValidate(name, prefix string) error { err = multierror.Append(err, fmt.Errorf("%s%s: NestingSet blocks may not contain attributes of cty.DynamicPseudoType", prefix, name)) } } - case NestingMap: - if a.NestedType.MinItems != 0 || a.NestedType.MaxItems != 0 { - err = multierror.Append(err, fmt.Errorf("%s%s: MinItems and MaxItems must both be 0 in NestingMap mode", prefix, name)) - } default: err = multierror.Append(err, fmt.Errorf("%s%s: invalid nesting mode %s", prefix, name, a.NestedType.Nesting)) } diff --git a/internal/configs/configschema/internal_validate_test.go b/internal/configs/configschema/internal_validate_test.go index dc70a5fa8003..3be461d4481d 100644 --- a/internal/configs/configschema/internal_validate_test.go +++ b/internal/configs/configschema/internal_validate_test.go @@ -143,21 +143,6 @@ func TestBlockInternalValidate(t *testing.T) { []string{"fooBar: name may contain only lowercase letters, digits and underscores"}, }, */ - "attribute with invalid NestedType nesting": { - &Block{ - Attributes: map[string]*Attribute{ - "foo": { - NestedType: &Object{ - Nesting: NestingSingle, - MinItems: 10, - MaxItems: 10, - }, - Optional: true, - }, - }, - }, - []string{"foo: MinItems and MaxItems must be set to either 0 or 1 in NestingSingle mode"}, - }, "attribute with invalid NestedType attribute": { &Block{ Attributes: map[string]*Attribute{ diff --git a/internal/configs/configschema/schema.go b/internal/configs/configschema/schema.go index 581bead8b98e..9ecc71e54d5c 100644 --- a/internal/configs/configschema/schema.go +++ b/internal/configs/configschema/schema.go @@ -87,13 +87,6 @@ type Object struct { // many instances of the Object are allowed, how many labels it expects, and // how the resulting data will be converted into a data structure. Nesting NestingMode - - // MinItems and MaxItems set, for the NestingList and NestingSet nesting - // modes, lower and upper limits on the number of child blocks allowed - // of the given type. If both are left at zero, no limit is applied. 
- // These fields are ignored for other nesting modes and must both be left - // at zero. - MinItems, MaxItems int } // NestedBlock represents the embedding of one block within another. diff --git a/internal/plugin6/convert/schema.go b/internal/plugin6/convert/schema.go index fa405acf11c2..0bdf4e28402f 100644 --- a/internal/plugin6/convert/schema.go +++ b/internal/plugin6/convert/schema.go @@ -199,8 +199,6 @@ func protoObjectToConfigSchema(b *proto.Schema_Object) *configschema.Object { object := &configschema.Object{ Attributes: make(map[string]*configschema.Attribute), Nesting: nesting, - MinItems: int(b.MinItems), - MaxItems: int(b.MaxItems), } for _, a := range b.Attributes { @@ -295,7 +293,5 @@ func configschemaObjectToProto(b *configschema.Object) *proto.Schema_Object { return &proto.Schema_Object{ Attributes: attributes, Nesting: nesting, - MinItems: int64(b.MinItems), - MaxItems: int64(b.MaxItems), } } diff --git a/internal/plugin6/convert/schema_test.go b/internal/plugin6/convert/schema_test.go index a94e812f1150..9befe4c5afb0 100644 --- a/internal/plugin6/convert/schema_test.go +++ b/internal/plugin6/convert/schema_test.go @@ -126,7 +126,6 @@ func TestConvertSchemaBlocks(t *testing.T) { Computed: true, }, }, - MinItems: 3, }, Required: true, }, @@ -246,7 +245,6 @@ func TestConvertSchemaBlocks(t *testing.T) { Computed: true, }, }, - MinItems: 3, }, Required: true, }, From b699391d04faf00641d0c30d24bd55dd679d946c Mon Sep 17 00:00:00 2001 From: Alisdair McDiarmid Date: Fri, 24 Sep 2021 11:18:39 -0400 Subject: [PATCH 107/644] json-output: Add change reasons to explain deletes The extra feedback information for why resource instance deletion is planned is now included in the streaming JSON UI output. We also add an explicit case for no-op actions to switch statements in this package to ensure exhaustiveness, for future linting. 
--- internal/command/views/json/change.go | 16 ++++++++++++++++ internal/command/views/json/hook.go | 12 ++++++++++++ .../docs/internals/machine-readable-ui.html.md | 7 ++++++- 3 files changed, 34 insertions(+), 1 deletion(-) diff --git a/internal/command/views/json/change.go b/internal/command/views/json/change.go index c18a2c15a456..4a8aa4e5bd14 100644 --- a/internal/command/views/json/change.go +++ b/internal/command/views/json/change.go @@ -73,6 +73,12 @@ const ( ReasonRequested ChangeReason = "requested" ReasonCannotUpdate ChangeReason = "cannot_update" ReasonUnknown ChangeReason = "unknown" + + ReasonDeleteBecauseNoResourceConfig ChangeReason = "delete_because_no_resource_config" + ReasonDeleteBecauseWrongRepetition ChangeReason = "delete_because_wrong_repetition" + ReasonDeleteBecauseCountIndex ChangeReason = "delete_because_count_index" + ReasonDeleteBecauseEachKey ChangeReason = "delete_because_each_key" + ReasonDeleteBecauseNoModule ChangeReason = "delete_because_no_module" ) func changeReason(reason plans.ResourceInstanceChangeActionReason) ChangeReason { @@ -85,6 +91,16 @@ func changeReason(reason plans.ResourceInstanceChangeActionReason) ChangeReason return ReasonRequested case plans.ResourceInstanceReplaceBecauseCannotUpdate: return ReasonCannotUpdate + case plans.ResourceInstanceDeleteBecauseNoResourceConfig: + return ReasonDeleteBecauseNoResourceConfig + case plans.ResourceInstanceDeleteBecauseWrongRepetition: + return ReasonDeleteBecauseWrongRepetition + case plans.ResourceInstanceDeleteBecauseCountIndex: + return ReasonDeleteBecauseCountIndex + case plans.ResourceInstanceDeleteBecauseEachKey: + return ReasonDeleteBecauseEachKey + case plans.ResourceInstanceDeleteBecauseNoModule: + return ReasonDeleteBecauseNoModule default: // This should never happen, but there's no good way to guarantee // exhaustive handling of the enum, so a generic fall back is better diff --git a/internal/command/views/json/hook.go b/internal/command/views/json/hook.go index 1736e10a5476..142a4d1fd199 100644 --- a/internal/command/views/json/hook.go +++ b/internal/command/views/json/hook.go @@ -314,6 +314,10 @@ func startActionVerb(action plans.Action) string { // This is not currently possible to reach, as we receive separate // passes for create and delete return "Replacing" + case plans.NoOp: + // This should never be possible: a no-op planned change should not + // be applied. We'll fall back to "Applying". + fallthrough default: return "Applying" } @@ -336,6 +340,10 @@ func progressActionVerb(action plans.Action) string { // This is not currently possible to reach, as we receive separate // passes for create and delete return "replacing" + case plans.NoOp: + // This should never be possible: a no-op planned change should not + // be applied. We'll fall back to "applying". + fallthrough default: return "applying" } @@ -358,6 +366,10 @@ func actionNoun(action plans.Action) string { // This is not currently possible to reach, as we receive separate // passes for create and delete return "Replacement" + case plans.NoOp: + // This should never be possible: a no-op planned change should not + // be applied. We'll fall back to "Apply". 
+ fallthrough default: return "Apply" } diff --git a/website/docs/internals/machine-readable-ui.html.md b/website/docs/internals/machine-readable-ui.html.md index 53b3e47e0946..94427cc6d38e 100644 --- a/website/docs/internals/machine-readable-ui.html.md +++ b/website/docs/internals/machine-readable-ui.html.md @@ -126,10 +126,15 @@ At the end of a plan or before an apply, Terraform will emit a `planned_change` - `resource`: object describing the address of the resource to be changed; see [resource object](#resource-object) below for details - `previous_resource`: object describing the previous address of the resource, if this change includes a configuration-driven move - `action`: the action planned to be taken for the resource. Values: `noop`, `create`, `read`, `update`, `replace`, `delete`, `move`. -- `reason`: an optional reason for the change, currently only used when the action is `replace`. Values: +- `reason`: an optional reason for the change, currently only used when the action is `replace` or `delete`. Values: - `tainted`: resource was marked as tainted - `requested`: user requested that the resource be replaced, for example via the `-replace` plan flag - `cannot_update`: changes to configuration force the resource to be deleted and created rather than updated + - `delete_because_no_resource_config`: no matching resource in configuration + - `delete_because_wrong_repetition`: resource instance key has no corresponding `count` or `for_each` in configuration + - `delete_because_count_index`: resource instance key is outside the range of the `count` argument + - `delete_because_each_key`: resource instance key is not included in the `for_each` argument + - `delete_because_no_module`: enclosing module instance is not in configuration This message does not include details about the exact changes which caused the change to be planned. That information is available in [the JSON plan output](./json-format.html). From e09bad76ff31dfde1f964e3197e10d13bf8a45da Mon Sep 17 00:00:00 2001 From: Alisdair McDiarmid Date: Fri, 24 Sep 2021 15:02:30 -0400 Subject: [PATCH 108/644] build: Add exhaustive switch statement lint For now, only check the JSON views package, since this was the instance that most recently tripped us up. There are a few dozen failures elsewhere in Terraform which would need to be addressed before expanding this to other packages. --- .circleci/config.yml | 3 ++- Makefile | 3 +++ go.mod | 5 +++-- go.sum | 12 ++++++++++-- scripts/exhaustive.sh | 7 +++++++ tools/tools.go | 1 + 6 files changed, 26 insertions(+), 5 deletions(-) create mode 100755 scripts/exhaustive.sh diff --git a/.circleci/config.yml b/.circleci/config.yml index 0113ae17c3f4..b9132f634ac8 100644 --- a/.circleci/config.yml +++ b/.circleci/config.yml @@ -27,7 +27,8 @@ jobs: - checkout - run: go mod verify - run: go install honnef.co/go/tools/cmd/staticcheck - - run: make fmtcheck generate staticcheck + - run: go install github.com/nishanths/exhaustive/... + - run: make fmtcheck generate staticcheck exhaustive - run: name: verify no code was generated command: | diff --git a/Makefile b/Makefile index 16e2abaa5b2c..aa29d369669e 100644 --- a/Makefile +++ b/Makefile @@ -23,6 +23,9 @@ fmtcheck: staticcheck: @sh -c "'$(CURDIR)/scripts/staticcheck.sh'" +exhaustive: + @sh -c "'$(CURDIR)/scripts/exhaustive.sh'" + website: ifeq (,$(wildcard $(GOPATH)/src/$(WEBSITE_REPO))) echo "$(WEBSITE_REPO) not found in your GOPATH (necessary for layouts and assets), get-ting..." 
diff --git a/go.mod b/go.mod index 6cbca3b170df..bc3fae7b74e8 100644 --- a/go.mod +++ b/go.mod @@ -125,6 +125,7 @@ require ( github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect github.com/modern-go/reflect2 v1.0.1 // indirect github.com/mozillazg/go-httpheader v0.3.0 // indirect + github.com/nishanths/exhaustive v0.2.3 github.com/nu7hatch/gouuid v0.0.0-20131221200532-179d4d0c4d8d // indirect github.com/oklog/run v1.0.0 // indirect github.com/packer-community/winrmcp v0.0.0-20180921211025-c76d91c1e7db @@ -158,11 +159,11 @@ require ( golang.org/x/mod v0.4.2 golang.org/x/net v0.0.0-20210614182718-04defd469f4e golang.org/x/oauth2 v0.0.0-20210313182246-cd4f82c27b84 - golang.org/x/sys v0.0.0-20210423082822-04245dca01da + golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c golang.org/x/term v0.0.0-20201210144234-2321bbc49cbf golang.org/x/text v0.3.6 golang.org/x/time v0.0.0-20191024005414-555d28b269f0 // indirect - golang.org/x/tools v0.1.0 + golang.org/x/tools v0.1.4 golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 // indirect google.golang.org/api v0.44.0-impersonate-preview google.golang.org/appengine v1.6.7 // indirect diff --git a/go.sum b/go.sum index b89a0b92cdb7..b5c72603f3a3 100644 --- a/go.sum +++ b/go.sum @@ -536,6 +536,8 @@ github.com/mozillazg/go-httpheader v0.3.0/go.mod h1:PuT8h0pw6efvp8ZeUec1Rs7dwjK0 github.com/munnerz/goautoneg v0.0.0-20120707110453-a547fc61f48d/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ= github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U= github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f/go.mod h1:ZdcZmHo+o7JKHSa8/e818NopupXU1YMK5fe1lsApnBw= +github.com/nishanths/exhaustive v0.2.3 h1:+ANTMqRNrqwInnP9aszg/0jDo+zbXa4x66U19Bx/oTk= +github.com/nishanths/exhaustive v0.2.3/go.mod h1:bhIX678Nx8inLM9PbpvK1yv6oGtoP8BfaIeMzgBNKvc= github.com/nu7hatch/gouuid v0.0.0-20131221200532-179d4d0c4d8d h1:VhgPp6v9qf9Agr/56bj7Y/xa04UccTW04VP0Qed4vnQ= github.com/nu7hatch/gouuid v0.0.0-20131221200532-179d4d0c4d8d/go.mod h1:YUTz3bUH2ZwIWBy3CJBeOBEugqcmXREj14T+iG/4k4U= github.com/oklog/run v1.0.0 h1:Ru7dDtJNOyC66gQ5dQmaCa0qIsAUFY3sFpK1Xk8igrw= @@ -643,6 +645,7 @@ github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9de github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= +github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k= github.com/zclconf/go-cty v1.0.0/go.mod h1:xnAOWiHeOqg2nWS62VtQ7pbOu17FtxJNW8RLEih+O3s= github.com/zclconf/go-cty v1.1.0/go.mod h1:xnAOWiHeOqg2nWS62VtQ7pbOu17FtxJNW8RLEih+O3s= github.com/zclconf/go-cty v1.2.0/go.mod h1:hOPWgoHbaTUnI5k4D2ld+GRpFJSCe6bCM7m1q/N4PQ8= @@ -770,6 +773,7 @@ golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwY golang.org/x/net v0.0.0-20201209123823-ac852fbbde11/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= golang.org/x/net v0.0.0-20210119194325-5f4716e94777/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= +golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM= golang.org/x/net v0.0.0-20210614182718-04defd469f4e h1:XpT3nA5TvE525Ne3hInMh6+GETgn27Zfm9dxsThnX2Q= 
golang.org/x/net v0.0.0-20210614182718-04defd469f4e/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= @@ -850,8 +854,11 @@ golang.org/x/sys v0.0.0-20210119212857-b64e53b001e4/go.mod h1:h1NjWce9XRLGQEsW7w golang.org/x/sys v0.0.0-20210220050731-9a76102bfb43/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210305230114-8fe3ee5dd75b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210320140829-1e4c9ba3b0c4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= -golang.org/x/sys v0.0.0-20210423082822-04245dca01da h1:b3NXsE2LusjYGGjL5bxEVZZORm/YEFFrWFjR8eFrw/c= +golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c h1:F1jZWGFhYfh0Ci55sIpILtKKK8p3i2/krTr0H1rg74I= +golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= golang.org/x/term v0.0.0-20201210144234-2321bbc49cbf h1:MZ2shdL+ZM/XzY3ZGOnh4Nlpnxz5GSOhOmtHo3iPU6M= @@ -923,8 +930,9 @@ golang.org/x/tools v0.0.0-20201110124207-079ba7bd75cd/go.mod h1:emZCQorbCU4vsT4f golang.org/x/tools v0.0.0-20201201161351-ac6f37ff4c2a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= golang.org/x/tools v0.0.0-20201208233053-a543418bbed2/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= golang.org/x/tools v0.0.0-20210105154028-b0ab187a4818/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= -golang.org/x/tools v0.1.0 h1:po9/4sTYwZU9lPhi1tOrb4hCv3qrhiQ77LZfGa2OjwY= golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0= +golang.org/x/tools v0.1.4 h1:cVngSRcfgyZCzys3KYOpCFa+4dqX/Oub9tAq00ttGVs= +golang.org/x/tools v0.1.4/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= diff --git a/scripts/exhaustive.sh b/scripts/exhaustive.sh new file mode 100755 index 000000000000..73dbbc56ce23 --- /dev/null +++ b/scripts/exhaustive.sh @@ -0,0 +1,7 @@ +#!/usr/bin/env bash + +echo "==> Checking for switch statement exhaustiveness..." + +# For now we're only checking a handful of packages, rather than defaulting to +# everything with a skip list. 
+exhaustive ./internal/command/views/json diff --git a/tools/tools.go b/tools/tools.go index 712cd6717c56..f575e0189f40 100644 --- a/tools/tools.go +++ b/tools/tools.go @@ -7,6 +7,7 @@ import ( _ "github.com/golang/mock/mockgen" _ "github.com/golang/protobuf/protoc-gen-go" _ "github.com/mitchellh/gox" + _ "github.com/nishanths/exhaustive" _ "golang.org/x/tools/cmd/cover" _ "golang.org/x/tools/cmd/stringer" _ "google.golang.org/grpc/cmd/protoc-gen-go-grpc" From cac1f5c2641bb21d3e14a570dc10ff01801cf459 Mon Sep 17 00:00:00 2001 From: James Bardin Date: Fri, 24 Sep 2021 18:08:33 -0400 Subject: [PATCH 109/644] refactoring: exhaustive NestedWithin checks When checking dependencies between statements, we need to check all combinations of To and From addresses. --- internal/refactoring/move_execute.go | 39 +++- internal/refactoring/move_execute_test.go | 206 +++++++++++----------- 2 files changed, 140 insertions(+), 105 deletions(-) diff --git a/internal/refactoring/move_execute.go b/internal/refactoring/move_execute.go index 322569803a23..7f5fbae23b36 100644 --- a/internal/refactoring/move_execute.go +++ b/internal/refactoring/move_execute.go @@ -200,10 +200,13 @@ func buildMoveStatementGraph(stmts []MoveStatement) *dag.AcyclicGraph { for dependerI := range stmts { depender := &stmts[dependerI] for dependeeI := range stmts { + if dependerI == dependeeI { + // skip comparing the statement to itself + continue + } dependee := &stmts[dependeeI] - dependeeTo := dependee.To - dependerFrom := depender.From - if dependerFrom.CanChainFrom(dependeeTo) || dependerFrom.NestedWithin(dependeeTo) { + + if statementDependsOn(depender, dependee) { g.Connect(dag.BasicEdge(depender, dependee)) } } @@ -212,6 +215,36 @@ func buildMoveStatementGraph(stmts []MoveStatement) *dag.AcyclicGraph { return g } +// statementDependsOn returns true if statement a depends on statement b; +// i.e. statement b must be executed before statement a. +func statementDependsOn(a, b *MoveStatement) bool { + // chain-able moves are simple, as on the destination of one move could be + // equal to the source of another. + if a.From.CanChainFrom(b.To) { + return true + } + + // Statement nesting in more complex, as we have 8 possible combinations to + // assess. Here we list all combinations, along with the statement which + // must be executed first when one address is nested within another. + // A.From IsNestedWithin B.From => A + // A.From IsNestedWithin B.To => B + // A.To IsNestedWithin B.From => A + // A.To IsNestedWithin B.To => B + // B.From IsNestedWithin A.From => B + // B.From IsNestedWithin A.To => A + // B.To IsNestedWithin A.From => B + // B.To IsNestedWithin A.To => A + // + // Since we are only interested in checking if A depends on B, we only need + // to check the 4 possibilities above which result in B being executed + // first. + return a.From.NestedWithin(b.To) || + a.To.NestedWithin(b.To) || + b.From.NestedWithin(a.From) || + b.To.NestedWithin(a.From) +} + // MoveResults describes the outcome of an ApplyMoves call. 
type MoveResults struct { // Changes is a map from the unique keys of the final new resource diff --git a/internal/refactoring/move_execute_test.go b/internal/refactoring/move_execute_test.go index e913df9f4723..ab71f5b0fe36 100644 --- a/internal/refactoring/move_execute_test.go +++ b/internal/refactoring/move_execute_test.go @@ -491,118 +491,120 @@ func TestApplyMoves(t *testing.T) { `foo.to[0]`, }, }, - - // FIXME: This test seems to flap between the result the test case - // currently records and the "more intuitive" results included inline, - // which suggests we have a missing edge in our move dependency graph. - // (The MoveResults commented out below predates some changes to that - // struct, so will need updating once we uncomment this test.) - /* - "move module and then move resource into it": { - []MoveStatement{ - testMoveStatement(t, "", "module.bar[0]", "module.boo"), - testMoveStatement(t, "", "foo.from", "module.boo.foo.from"), + "move resource and containing module": { + []MoveStatement{ + testMoveStatement(t, "", "module.boo", "module.bar[0]"), + testMoveStatement(t, "boo", "foo.from", "foo.to"), + }, + states.BuildState(func(s *states.SyncState) { + s.SetResourceInstanceCurrent( + instAddrs["module.boo.foo.from"], + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{}`), + }, + providerAddr, + ) + }), + MoveResults{ + Changes: map[addrs.UniqueKey]MoveSuccess{ + instAddrs["module.bar[0].foo.to"].UniqueKey(): { + From: instAddrs["module.boo.foo.from"], + To: instAddrs["module.bar[0].foo.to"], + }, }, - states.BuildState(func(s *states.SyncState) { - s.SetResourceInstanceCurrent( - instAddrs["module.bar[0].foo.to"], - &states.ResourceInstanceObjectSrc{ - Status: states.ObjectReady, - AttrsJSON: []byte(`{}`), - }, - providerAddr, - ) - s.SetResourceInstanceCurrent( + Blocked: map[addrs.UniqueKey]MoveBlocked{}, + }, + []string{ + `module.bar[0].foo.to`, + }, + }, + + "move module and then move resource into it": { + []MoveStatement{ + testMoveStatement(t, "", "module.bar[0]", "module.boo"), + testMoveStatement(t, "", "foo.from", "module.boo.foo.from"), + }, + states.BuildState(func(s *states.SyncState) { + s.SetResourceInstanceCurrent( + instAddrs["module.bar[0].foo.to"], + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{}`), + }, + providerAddr, + ) + s.SetResourceInstanceCurrent( + instAddrs["foo.from"], + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{}`), + }, + providerAddr, + ) + }), + MoveResults{ + Changes: map[addrs.UniqueKey]MoveSuccess{ + instAddrs["module.boo.foo.from"].UniqueKey(): { instAddrs["foo.from"], - &states.ResourceInstanceObjectSrc{ - Status: states.ObjectReady, - AttrsJSON: []byte(`{}`), - }, - providerAddr, - ) - }), - MoveResults{ - // FIXME: This result is counter-intuitive, because ApplyMoves - // handled the resource move first and then the module move - // collided with it. It would be arguably more intuitive to - // complete the module move first and then let the "smaller" - // resource move merge into it. - // (The arguably-more-intuitive results are commented out - // in the maps below.) 
- - Changes: map[addrs.UniqueKey]MoveSuccess{ - //instAddrs["module.boo.foo.to"].UniqueKey(): instAddrs["module.bar[0].foo.to"], - //instAddrs["module.boo.foo.from"].UniqueKey(): instAddrs["foo.from"], - instAddrs["module.boo.foo.from"].UniqueKey(): instAddrs["foo.from"], - }, - Blocked: map[addrs.UniqueKey]MoveBlocked{ - // intuitive result: nothing blocked - instAddrs["module.bar[0].foo.to"].Module.UniqueKey(): instAddrs["module.boo.foo.from"].Module, + instAddrs["module.boo.foo.from"], + }, + instAddrs["module.boo.foo.to"].UniqueKey(): { + instAddrs["module.bar[0].foo.to"], + instAddrs["module.boo.foo.to"], }, }, - []string{ - //`foo.from`, - //`module.boo.foo.from`, - `module.bar[0].foo.to`, - `module.boo.foo.from`, - }, + Blocked: map[addrs.UniqueKey]MoveBlocked{}, }, - */ - - // FIXME: This test seems to flap between the result the test case - // currently records and the "more intuitive" results included inline, - // which suggests we have a missing edge in our move dependency graph. - // (The MoveResults commented out below predates some changes to that - // struct, so will need updating once we uncomment this test.) - /* - "module move collides with resource move": { - []MoveStatement{ - testMoveStatement(t, "", "module.bar[0]", "module.boo"), - testMoveStatement(t, "", "foo.from", "module.boo.foo.from"), - }, - states.BuildState(func(s *states.SyncState) { - s.SetResourceInstanceCurrent( + []string{ + `module.boo.foo.from`, + `module.boo.foo.to`, + }, + }, + + "module move collides with resource move": { + []MoveStatement{ + testMoveStatement(t, "", "module.bar[0]", "module.boo"), + testMoveStatement(t, "", "foo.from", "module.boo.foo.from"), + }, + states.BuildState(func(s *states.SyncState) { + s.SetResourceInstanceCurrent( + instAddrs["module.bar[0].foo.from"], + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{}`), + }, + providerAddr, + ) + s.SetResourceInstanceCurrent( + instAddrs["foo.from"], + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{}`), + }, + providerAddr, + ) + }), + MoveResults{ + Changes: map[addrs.UniqueKey]MoveSuccess{ + + instAddrs["module.boo.foo.from"].UniqueKey(): { instAddrs["module.bar[0].foo.from"], - &states.ResourceInstanceObjectSrc{ - Status: states.ObjectReady, - AttrsJSON: []byte(`{}`), - }, - providerAddr, - ) - s.SetResourceInstanceCurrent( - instAddrs["foo.from"], - &states.ResourceInstanceObjectSrc{ - Status: states.ObjectReady, - AttrsJSON: []byte(`{}`), - }, - providerAddr, - ) - }), - MoveResults{ - // FIXME: This result is counter-intuitive, because ApplyMoves - // handled the resource move first and then it was the - // module move that collided, whereas it would arguably have - // been better to let the module take priority and have only - // the one resource move be ignored due to the collision. - // (The arguably-more-intuitive results are commented out - // in the maps below.) 
- Changes: map[addrs.UniqueKey]MoveSuccess{ - //instAddrs["module.boo.foo.from"].UniqueKey(): instAddrs["module.bar[0].foo.from"], - instAddrs["module.boo.foo.from"].UniqueKey(): instAddrs["foo.from"], - }, - Blocked: map[addrs.UniqueKey]MoveBlocked{ - //instAddrs["foo.from"].UniqueKey(): instAddrs["module.bar[0].foo.from"], - instAddrs["module.bar[0].foo.from"].Module.UniqueKey(): instAddrs["module.boo.foo.from"].Module, + instAddrs["module.boo.foo.from"], }, }, - []string{ - //`foo.from`, - //`module.boo.foo.from`, - `module.bar[0].foo.from`, - `module.boo.foo.from`, + Blocked: map[addrs.UniqueKey]MoveBlocked{ + instAddrs["foo.from"].ContainingResource().UniqueKey(): { + Actual: instAddrs["foo.from"].ContainingResource(), + Wanted: instAddrs["module.boo.foo.from"].ContainingResource(), + }, }, }, - */ + []string{ + `foo.from`, + `module.boo.foo.from`, + }, + }, } for name, test := range tests { From f60d55d6adf8a60c7ff58c9056d101e928b2b6de Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Mon, 27 Sep 2021 13:45:51 -0700 Subject: [PATCH 110/644] core: Emit only one warning for move collisions in destroy-plan mode Our current implementation of destroy planning includes secretly running a normal plan first, in order to get its effect of refreshing the state. Previously our warning about colliding moves would betray that implementation detail because we'd return it from both of our planning operations here and thus show the message twice. That would also have happened in theory for any other warnings emitted by both plan operations, but it's the move collision warning that made it immediately visible. We'll now only return warnings from the initial plan if we're also returning errors from that plan, and thus the warnings of both plans can never mix together into the same diags and thus we'll avoid duplicating any warnings. This does mean that we'd lose any warnings which might hypothetically emerge only from the hidden normal plan and not from the subsequent destroy plan, but we'll accept that as an okay tradeoff here because those warnings are likely to not be super relevant to the destroy case anyway, or else we'd emit them from the destroy-plan walk too. --- internal/terraform/context_plan.go | 15 ++- internal/terraform/context_plan2_test.go | 118 +++++++++++++++++++++++ 2 files changed, 131 insertions(+), 2 deletions(-) diff --git a/internal/terraform/context_plan.go b/internal/terraform/context_plan.go index 001a2507ad78..5610a445ba71 100644 --- a/internal/terraform/context_plan.go +++ b/internal/terraform/context_plan.go @@ -262,8 +262,19 @@ func (c *Context) destroyPlan(config *configs.Config, prevRunState *states.State normalOpts := *opts normalOpts.Mode = plans.NormalMode refreshPlan, refreshDiags := c.plan(config, prevRunState, rootVariables, &normalOpts) - diags = diags.Append(refreshDiags) - if diags.HasErrors() { + if refreshDiags.HasErrors() { + // NOTE: Normally we'd append diagnostics regardless of whether + // there are errors, just in case there are warnings we'd want to + // preserve, but we're intentionally _not_ doing that here because + // if the first plan succeeded then we'll be running another plan + // in DestroyMode below, and we don't want to double-up any + // warnings that both plan walks would generate. 
+ // (This does mean we won't show any warnings that would've been + // unique to only this walk, but we're assuming here that if the + // warnings aren't also applicable to a destroy plan then we'd + // rather not show them here, because this non-destroy plan for + // refreshing is largely an implementation detail.) + diags = diags.Append(refreshDiags) return nil, diags } diff --git a/internal/terraform/context_plan2_test.go b/internal/terraform/context_plan2_test.go index e0c57cd0a8bc..6d54e9eef180 100644 --- a/internal/terraform/context_plan2_test.go +++ b/internal/terraform/context_plan2_test.go @@ -887,6 +887,124 @@ Terraform has planned to destroy these objects. If Terraform's proposed changes }) } +func TestContext2Plan_movedResourceCollisionDestroy(t *testing.T) { + // This is like TestContext2Plan_movedResourceCollision but intended to + // ensure we still produce the expected warning (and produce it only once) + // when we're creating a destroy plan, rather than a normal plan. + // (This case is interesting at the time of writing because we happen to + // use a normal plan as a trick to refresh before creating a destroy plan. + // This test will probably become uninteresting if a future change to + // the destroy-time planning behavior handles refreshing in a different + // way, which avoids this pre-processing step of running a normal plan + // first.) + + addrNoKey := mustResourceInstanceAddr("test_object.a") + addrZeroKey := mustResourceInstanceAddr("test_object.a[0]") + m := testModuleInline(t, map[string]string{ + "main.tf": ` + resource "test_object" "a" { + # No "count" set, so test_object.a[0] will want + # to implicitly move to test_object.a, but will get + # blocked by the existing object at that address. + } + `, + }) + + state := states.BuildState(func(s *states.SyncState) { + s.SetResourceInstanceCurrent(addrNoKey, &states.ResourceInstanceObjectSrc{ + AttrsJSON: []byte(`{}`), + Status: states.ObjectReady, + }, mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`)) + s.SetResourceInstanceCurrent(addrZeroKey, &states.ResourceInstanceObjectSrc{ + AttrsJSON: []byte(`{}`), + Status: states.ObjectReady, + }, mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`)) + }) + + p := simpleMockProvider() + ctx := testContext2(t, &ContextOpts{ + Providers: map[addrs.Provider]providers.Factory{ + addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), + }, + }) + + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) + if diags.HasErrors() { + t.Fatalf("unexpected errors\n%s", diags.Err().Error()) + } + + // We should have a warning, though! We'll lightly abuse the "for RPC" + // feature of diagnostics to get some more-readily-comparable diagnostic + // values. + gotDiags := diags.ForRPC() + wantDiags := tfdiags.Diagnostics{ + tfdiags.Sourceless( + tfdiags.Warning, + "Unresolved resource instance address changes", + // NOTE: This message is _lightly_ confusing in the destroy case, + // because it says "Terraform has planned to destroy these objects" + // but this is a plan to destroy all objects, anyway. We expect the + // conflict situation to be pretty rare though, and even rarer in + // a "terraform destroy", so we'll just live with that for now + // unless we see evidence that lots of folks are being confused by + // it in practice. 
+ `Terraform tried to adjust resource instance addresses in the prior state based on change information recorded in the configuration, but some adjustments did not succeed due to existing objects already at the intended addresses: + - test_object.a[0] could not move to test_object.a + +Terraform has planned to destroy these objects. If Terraform's proposed changes aren't appropriate, you must first resolve the conflicts using the "terraform state" subcommands and then create a new plan.`, + ), + }.ForRPC() + if diff := cmp.Diff(wantDiags, gotDiags); diff != "" { + // If we get here with a diff that makes it seem like the above warning + // is being reported twice, the likely cause is not correctly handling + // the warnings from the hidden normal plan we run as part of preparing + // for a destroy plan, unless that strategy has changed in the meantime + // since we originally wrote this test. + t.Errorf("wrong diagnostics\n%s", diff) + } + + t.Run(addrNoKey.String(), func(t *testing.T) { + instPlan := plan.Changes.ResourceInstance(addrNoKey) + if instPlan == nil { + t.Fatalf("no plan for %s at all", addrNoKey) + } + + if got, want := instPlan.Addr, addrNoKey; !got.Equal(want) { + t.Errorf("wrong current address\ngot: %s\nwant: %s", got, want) + } + if got, want := instPlan.PrevRunAddr, addrNoKey; !got.Equal(want) { + t.Errorf("wrong previous run address\ngot: %s\nwant: %s", got, want) + } + if got, want := instPlan.Action, plans.Delete; got != want { + t.Errorf("wrong planned action\ngot: %s\nwant: %s", got, want) + } + if got, want := instPlan.ActionReason, plans.ResourceInstanceChangeNoReason; got != want { + t.Errorf("wrong action reason\ngot: %s\nwant: %s", got, want) + } + }) + t.Run(addrZeroKey.String(), func(t *testing.T) { + instPlan := plan.Changes.ResourceInstance(addrZeroKey) + if instPlan == nil { + t.Fatalf("no plan for %s at all", addrZeroKey) + } + + if got, want := instPlan.Addr, addrZeroKey; !got.Equal(want) { + t.Errorf("wrong current address\ngot: %s\nwant: %s", got, want) + } + if got, want := instPlan.PrevRunAddr, addrZeroKey; !got.Equal(want) { + t.Errorf("wrong previous run address\ngot: %s\nwant: %s", got, want) + } + if got, want := instPlan.Action, plans.Delete; got != want { + t.Errorf("wrong planned action\ngot: %s\nwant: %s", got, want) + } + if got, want := instPlan.ActionReason, plans.ResourceInstanceChangeNoReason; got != want { + t.Errorf("wrong action reason\ngot: %s\nwant: %s", got, want) + } + }) +} + func TestContext2Plan_movedResourceUntargeted(t *testing.T) { addrA := mustResourceInstanceAddr("test_object.a") addrB := mustResourceInstanceAddr("test_object.b") From 625e76867883e41b7bdac50a56952dc3d09f94f7 Mon Sep 17 00:00:00 2001 From: James Bardin Date: Tue, 28 Sep 2021 12:38:40 -0400 Subject: [PATCH 111/644] make sure required_version is checked before diags We must ensure that the terraform required_version is checked as early as possible, so that new configuration constructs don't cause init to fail without indicating the version is incompatible. The loadConfig call before the earlyconfig parsing seems to be unneeded, and we can delay that to de-tangle it from installing the modules which may have their own constraints. TODO: it seems that loadConfig should be able to handle returning the version constraints in the same manner as loadSingleModule. 
--- internal/command/init.go | 62 ++++++++++++------- internal/command/init_test.go | 27 ++++++++ .../init-check-required-version-first/main.tf | 17 +++++ 3 files changed, 82 insertions(+), 24 deletions(-) create mode 100644 internal/command/testdata/init-check-required-version-first/main.tf diff --git a/internal/command/init.go b/internal/command/init.go index 96a383159f04..02f555ee0a8e 100644 --- a/internal/command/init.go +++ b/internal/command/init.go @@ -150,19 +150,7 @@ func (c *InitCommand) Run(args []string) int { // initialization functionality remains built around "earlyconfig" and // so we need to still load the module via that mechanism anyway until we // can do some more invasive refactoring here. - rootMod, confDiags := c.loadSingleModule(path) rootModEarly, earlyConfDiags := c.loadSingleModuleEarly(path) - if confDiags.HasErrors() { - c.Ui.Error(c.Colorize().Color(strings.TrimSpace(errInitConfigError))) - // TODO: It would be nice to check the version constraints in - // rootModEarly.RequiredCore and print out a hint if the module is - // declaring that it's not compatible with this version of Terraform, - // though we're deferring that for now because we're intending to - // refactor our use of "earlyconfig" here anyway and so whatever we - // might do here right now would likely be invalidated by that. - c.showDiagnostics(confDiags) - return 1 - } // If _only_ the early loader encountered errors then that's unusual // (it should generally be a superset of the normal loader) but we'll // return those errors anyway since otherwise we'll probably get @@ -172,7 +160,12 @@ func (c *InitCommand) Run(args []string) int { c.Ui.Error(c.Colorize().Color(strings.TrimSpace(errInitConfigError))) // Errors from the early loader are generally not as high-quality since // it has less context to work with. - diags = diags.Append(confDiags) + + // TODO: It would be nice to check the version constraints in + // rootModEarly.RequiredCore and print out a hint if the module is + // declaring that it's not compatible with this version of Terraform, + // and that may be what caused earlyconfig to fail. + diags = diags.Append(earlyConfDiags) c.showDiagnostics(diags) return 1 } @@ -189,23 +182,44 @@ func (c *InitCommand) Run(args []string) int { } } + // Using loadSingleModule will allow us to get the sniffed required_version + // before trying to build the complete config. + rootMod, _ := c.loadSingleModule(path) + // We can ignore the error, since we are going to reload the full config + // again below once we know the root module constraints are valid. + if rootMod != nil { + rootCfg := &configs.Config{ + Module: rootMod, + } + // If our module version constraints are not valid, then there is no + // need to continue processing. + versionDiags := terraform.CheckCoreVersionRequirements(rootCfg) + if versionDiags.HasErrors() { + c.showDiagnostics(versionDiags) + return 1 + } + } + // With all of the modules (hopefully) installed, we can now try to load the // whole configuration tree. config, confDiags := c.loadConfig(path) - diags = diags.Append(confDiags) - if confDiags.HasErrors() { - c.Ui.Error(strings.TrimSpace(errInitConfigError)) - c.showDiagnostics(diags) - return 1 - } + // configDiags will be handled after the version constraint check, since an + // incorrect version of terraform may be producing errors for configuration + // constructs added in later versions. 
- // Before we go further, we'll check to make sure none of the modules in the - // configuration declare that they don't support this Terraform version, so - // we can produce a version-related error message rather than + // Check again to make sure none of the modules in the configuration + // declare that they don't support this Terraform version, so we can + // produce a version-related error message rather than // potentially-confusing downstream errors. versionDiags := terraform.CheckCoreVersionRequirements(config) - diags = diags.Append(versionDiags) if versionDiags.HasErrors() { + c.showDiagnostics(versionDiags) + return 1 + } + + diags = diags.Append(confDiags) + if confDiags.HasErrors() { + c.Ui.Error(strings.TrimSpace(errInitConfigError)) c.showDiagnostics(diags) return 1 } @@ -213,7 +227,7 @@ func (c *InitCommand) Run(args []string) int { var back backend.Backend if flagBackend { - be, backendOutput, backendDiags := c.initBackend(rootMod, flagConfigExtra) + be, backendOutput, backendDiags := c.initBackend(config.Root.Module, flagConfigExtra) diags = diags.Append(backendDiags) if backendDiags.HasErrors() { c.showDiagnostics(diags) diff --git a/internal/command/init_test.go b/internal/command/init_test.go index b75da1f4f899..2bf1e0e26481 100644 --- a/internal/command/init_test.go +++ b/internal/command/init_test.go @@ -1608,6 +1608,33 @@ func TestInit_checkRequiredVersion(t *testing.T) { } } +// Verify that init will error out with an invalid version constraint, even if +// there are other invalid configuration constructs. +func TestInit_checkRequiredVersionFirst(t *testing.T) { + td := t.TempDir() + testCopyDir(t, testFixturePath("init-check-required-version-first"), td) + defer testChdir(t, td)() + + ui := cli.NewMockUi() + view, _ := testView(t) + c := &InitCommand{ + Meta: Meta{ + testingOverrides: metaOverridesForProvider(testProvider()), + Ui: ui, + View: view, + }, + } + + args := []string{} + if code := c.Run(args); code != 1 { + t.Fatalf("got exit status %d; want 1\nstderr:\n%s\n\nstdout:\n%s", code, ui.ErrorWriter.String(), ui.OutputWriter.String()) + } + errStr := ui.ErrorWriter.String() + if !strings.Contains(errStr, `Unsupported Terraform Core version`) { + t.Fatalf("output should point to unmet version constraint, but is:\n\n%s", errStr) + } +} + func TestInit_providerLockFile(t *testing.T) { // Create a temporary working directory that is empty td := tempDir(t) diff --git a/internal/command/testdata/init-check-required-version-first/main.tf b/internal/command/testdata/init-check-required-version-first/main.tf new file mode 100644 index 000000000000..ab311d066953 --- /dev/null +++ b/internal/command/testdata/init-check-required-version-first/main.tf @@ -0,0 +1,17 @@ +terraform { + required_version = ">200.0.0" + + bad { + block = "false" + } + + required_providers { + bang = { + oops = "boom" + } + } +} + +nope { + boom {} +} From a53faf43f67859a5b72544221336df7ac2eb04ad Mon Sep 17 00:00:00 2001 From: James Bardin Date: Tue, 28 Sep 2021 13:02:26 -0400 Subject: [PATCH 112/644] return partial config from LoadConfig with errors LoadConfig should return any parsed configuration in order for the caller to verify `required_version`. 
--- internal/command/init.go | 2 +- internal/configs/configload/loader_load.go | 8 +++++++- internal/configs/configload/loader_load_test.go | 12 ++++++++++-- 3 files changed, 18 insertions(+), 4 deletions(-) diff --git a/internal/command/init.go b/internal/command/init.go index 02f555ee0a8e..ec6fa5723cc0 100644 --- a/internal/command/init.go +++ b/internal/command/init.go @@ -227,7 +227,7 @@ func (c *InitCommand) Run(args []string) int { var back backend.Backend if flagBackend { - be, backendOutput, backendDiags := c.initBackend(config.Root.Module, flagConfigExtra) + be, backendOutput, backendDiags := c.initBackend(config.Module, flagConfigExtra) diags = diags.Append(backendDiags) if backendDiags.HasErrors() { c.showDiagnostics(diags) diff --git a/internal/configs/configload/loader_load.go b/internal/configs/configload/loader_load.go index 323001de1028..9ae440274035 100644 --- a/internal/configs/configload/loader_load.go +++ b/internal/configs/configload/loader_load.go @@ -21,7 +21,13 @@ import ( func (l *Loader) LoadConfig(rootDir string) (*configs.Config, hcl.Diagnostics) { rootMod, diags := l.parser.LoadConfigDir(rootDir) if rootMod == nil || diags.HasErrors() { - return nil, diags + // Ensure we return any parsed modules here so that required_version + // constraints can be verified even when encountering errors. + cfg := &configs.Config{ + Module: rootMod, + } + + return cfg, diags } cfg, cDiags := configs.BuildConfig(rootMod, configs.ModuleWalkerFunc(l.moduleWalkerLoad)) diff --git a/internal/configs/configload/loader_load_test.go b/internal/configs/configload/loader_load_test.go index 82d8db0cd4fe..ab8dd5dee630 100644 --- a/internal/configs/configload/loader_load_test.go +++ b/internal/configs/configload/loader_load_test.go @@ -91,8 +91,16 @@ func TestLoaderLoadConfig_loadDiags(t *testing.T) { t.Fatalf("unexpected error from NewLoader: %s", err) } - _, diags := loader.LoadConfig(fixtureDir) + cfg, diags := loader.LoadConfig(fixtureDir) if !diags.HasErrors() { - t.Fatalf("success; want error") + t.Fatal("success; want error") + } + + if cfg == nil { + t.Fatal("partial config not returned with diagnostics") + } + + if cfg.Module == nil { + t.Fatal("expected config module") } } From c2e0d265cfe946fca70faf68935f73df178cab30 Mon Sep 17 00:00:00 2001 From: James Bardin Date: Tue, 28 Sep 2021 13:06:22 -0400 Subject: [PATCH 113/644] LoadModule now always returns the module We don't need to load the configuration twice, since configload can return the module for us. --- internal/command/init.go | 24 +------ internal/command/init_test.go | 66 +++++++++++++------ .../main.tf | 3 + .../mod/main.tf | 17 +++++ 4 files changed, 69 insertions(+), 41 deletions(-) create mode 100644 internal/command/testdata/init-check-required-version-first-module/main.tf create mode 100644 internal/command/testdata/init-check-required-version-first-module/mod/main.tf diff --git a/internal/command/init.go b/internal/command/init.go index ec6fa5723cc0..551a4d0b8a5a 100644 --- a/internal/command/init.go +++ b/internal/command/init.go @@ -182,24 +182,6 @@ func (c *InitCommand) Run(args []string) int { } } - // Using loadSingleModule will allow us to get the sniffed required_version - // before trying to build the complete config. - rootMod, _ := c.loadSingleModule(path) - // We can ignore the error, since we are going to reload the full config - // again below once we know the root module constraints are valid. 
- if rootMod != nil { - rootCfg := &configs.Config{ - Module: rootMod, - } - // If our module version constraints are not valid, then there is no - // need to continue processing. - versionDiags := terraform.CheckCoreVersionRequirements(rootCfg) - if versionDiags.HasErrors() { - c.showDiagnostics(versionDiags) - return 1 - } - } - // With all of the modules (hopefully) installed, we can now try to load the // whole configuration tree. config, confDiags := c.loadConfig(path) @@ -207,9 +189,9 @@ func (c *InitCommand) Run(args []string) int { // incorrect version of terraform may be producing errors for configuration // constructs added in later versions. - // Check again to make sure none of the modules in the configuration - // declare that they don't support this Terraform version, so we can - // produce a version-related error message rather than + // Before we go further, we'll check to make sure none of the modules in + // the configuration declare that they don't support this Terraform + // version, so we can produce a version-related error message rather than // potentially-confusing downstream errors. versionDiags := terraform.CheckCoreVersionRequirements(config) if versionDiags.HasErrors() { diff --git a/internal/command/init_test.go b/internal/command/init_test.go index 2bf1e0e26481..2d96b27b4403 100644 --- a/internal/command/init_test.go +++ b/internal/command/init_test.go @@ -1611,28 +1611,54 @@ func TestInit_checkRequiredVersion(t *testing.T) { // Verify that init will error out with an invalid version constraint, even if // there are other invalid configuration constructs. func TestInit_checkRequiredVersionFirst(t *testing.T) { - td := t.TempDir() - testCopyDir(t, testFixturePath("init-check-required-version-first"), td) - defer testChdir(t, td)() + t.Run("root_module", func(t *testing.T) { + td := t.TempDir() + testCopyDir(t, testFixturePath("init-check-required-version-first"), td) + defer testChdir(t, td)() - ui := cli.NewMockUi() - view, _ := testView(t) - c := &InitCommand{ - Meta: Meta{ - testingOverrides: metaOverridesForProvider(testProvider()), - Ui: ui, - View: view, - }, - } + ui := cli.NewMockUi() + view, _ := testView(t) + c := &InitCommand{ + Meta: Meta{ + testingOverrides: metaOverridesForProvider(testProvider()), + Ui: ui, + View: view, + }, + } - args := []string{} - if code := c.Run(args); code != 1 { - t.Fatalf("got exit status %d; want 1\nstderr:\n%s\n\nstdout:\n%s", code, ui.ErrorWriter.String(), ui.OutputWriter.String()) - } - errStr := ui.ErrorWriter.String() - if !strings.Contains(errStr, `Unsupported Terraform Core version`) { - t.Fatalf("output should point to unmet version constraint, but is:\n\n%s", errStr) - } + args := []string{} + if code := c.Run(args); code != 1 { + t.Fatalf("got exit status %d; want 1\nstderr:\n%s\n\nstdout:\n%s", code, ui.ErrorWriter.String(), ui.OutputWriter.String()) + } + errStr := ui.ErrorWriter.String() + if !strings.Contains(errStr, `Unsupported Terraform Core version`) { + t.Fatalf("output should point to unmet version constraint, but is:\n\n%s", errStr) + } + }) + t.Run("sub_module", func(t *testing.T) { + td := t.TempDir() + testCopyDir(t, testFixturePath("init-check-required-version-first-module"), td) + defer testChdir(t, td)() + + ui := cli.NewMockUi() + view, _ := testView(t) + c := &InitCommand{ + Meta: Meta{ + testingOverrides: metaOverridesForProvider(testProvider()), + Ui: ui, + View: view, + }, + } + + args := []string{} + if code := c.Run(args); code != 1 { + t.Fatalf("got exit status %d; want 
1\nstderr:\n%s\n\nstdout:\n%s", code, ui.ErrorWriter.String(), ui.OutputWriter.String()) + } + errStr := ui.ErrorWriter.String() + if !strings.Contains(errStr, `Unsupported Terraform Core version`) { + t.Fatalf("output should point to unmet version constraint, but is:\n\n%s", errStr) + } + }) } func TestInit_providerLockFile(t *testing.T) { diff --git a/internal/command/testdata/init-check-required-version-first-module/main.tf b/internal/command/testdata/init-check-required-version-first-module/main.tf new file mode 100644 index 000000000000..ba846846994e --- /dev/null +++ b/internal/command/testdata/init-check-required-version-first-module/main.tf @@ -0,0 +1,3 @@ +module "mod" { + source = "./mod" +} diff --git a/internal/command/testdata/init-check-required-version-first-module/mod/main.tf b/internal/command/testdata/init-check-required-version-first-module/mod/main.tf new file mode 100644 index 000000000000..ab311d066953 --- /dev/null +++ b/internal/command/testdata/init-check-required-version-first-module/mod/main.tf @@ -0,0 +1,17 @@ +terraform { + required_version = ">200.0.0" + + bad { + block = "false" + } + + required_providers { + bang = { + oops = "boom" + } + } +} + +nope { + boom {} +} From ab0322e406294104adeb86c38216bce2576a730a Mon Sep 17 00:00:00 2001 From: James Bardin Date: Tue, 28 Sep 2021 13:18:09 -0400 Subject: [PATCH 114/644] remove debugging println --- internal/command/show_test.go | 2 -- 1 file changed, 2 deletions(-) diff --git a/internal/command/show_test.go b/internal/command/show_test.go index b70ce14c1cb4..ea266d2cb32c 100644 --- a/internal/command/show_test.go +++ b/internal/command/show_test.go @@ -103,8 +103,6 @@ func TestShow_aliasedProvider(t *testing.T) { }, } - fmt.Println(os.Getwd()) - // the statefile created by testStateFile is named state.tfstate args := []string{"state.tfstate"} if code := c.Run(args); code != 0 { From c9a5fdb36631f306d44e00fd41d224c3e737c035 Mon Sep 17 00:00:00 2001 From: Zach Whaley Date: Wed, 29 Sep 2021 15:36:59 -0500 Subject: [PATCH 115/644] cliconfig: Fix error message about invalid credentials helper type --- internal/command/cliconfig/credentials.go | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/internal/command/cliconfig/credentials.go b/internal/command/cliconfig/credentials.go index 967f719eb9e5..185baf9d5b46 100644 --- a/internal/command/cliconfig/credentials.go +++ b/internal/command/cliconfig/credentials.go @@ -46,7 +46,7 @@ func (c *Config) CredentialsSource(helperPlugins pluginDiscovery.PluginMetaSet) for givenType, givenConfig := range c.CredentialsHelpers { available := helperPlugins.WithName(givenType) if available.Count() == 0 { - log.Printf("[ERROR] Unable to find credentials helper %q; ignoring", helperType) + log.Printf("[ERROR] Unable to find credentials helper %q; ignoring", givenType) break } From 0062e7112aac83304ea906f08c5aeb89b4ebfefa Mon Sep 17 00:00:00 2001 From: Melissa Gurney Greene Date: Wed, 29 Sep 2021 13:42:54 -0700 Subject: [PATCH 116/644] Update publish.html.md (#29671) Updated language around contributing modules with overlapping features in the Publishing Modules section: "We welcome..." 
(all contributions) --- website/docs/language/modules/develop/publish.html.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/website/docs/language/modules/develop/publish.html.md b/website/docs/language/modules/develop/publish.html.md index 93a17e3dfdd7..2f1419dbbaba 100644 --- a/website/docs/language/modules/develop/publish.html.md +++ b/website/docs/language/modules/develop/publish.html.md @@ -28,6 +28,9 @@ If you do not wish to publish your modules in the public registry, you can instead use a [private registry](/docs/registry/private.html) to get the same benefits. +We welcome contributions of Terraform modules from our community members, partners, and customers. Our ecosystem is made richer by each new module created or an existing one updated, as they reflect the wide range of experience and technical requirements of the community that uses them. Our cloud provider partners often seek to develop specific modules for popular or challenging use cases on their platform and utilize them as valuable learning experiences to empathize with their users. Similarly, our community module developers incorporate a variety of opinions and use cases from the broader Terraform community. Both types of modules have their place in the Terraform registry, accessible to practitioners who can decide which modules best fit their requirements. + + ## Distribution via other sources Although the registry is the native mechanism for distributing re-usable From fe671206cc85d6b9c21d8c89cafd4dadcbb6ca3d Mon Sep 17 00:00:00 2001 From: James Bardin Date: Wed, 29 Sep 2021 16:45:29 -0400 Subject: [PATCH 117/644] Add detail about the protocol deprecation Make sure it's clear that the deprecated fields serve no purpose, and should be ignored. --- docs/plugin-protocol/tfplugin6.1.proto | 3 +++ internal/tfplugin6/tfplugin6.pb.go | 3 +++ 2 files changed, 6 insertions(+) diff --git a/docs/plugin-protocol/tfplugin6.1.proto b/docs/plugin-protocol/tfplugin6.1.proto index e8912cd0b799..3f6dead35e18 100644 --- a/docs/plugin-protocol/tfplugin6.1.proto +++ b/docs/plugin-protocol/tfplugin6.1.proto @@ -128,6 +128,9 @@ message Schema { repeated Attribute attributes = 1; NestingMode nesting = 3; + + // MinItems and MaxItems were never used in the protocol, and have no + // effect on validation. int64 min_items = 4 [deprecated = true]; int64 max_items = 5 [deprecated = true]; } diff --git a/internal/tfplugin6/tfplugin6.pb.go b/internal/tfplugin6/tfplugin6.pb.go index d3152b355b58..d73ab55ec009 100644 --- a/internal/tfplugin6/tfplugin6.pb.go +++ b/internal/tfplugin6/tfplugin6.pb.go @@ -1480,6 +1480,9 @@ type Schema_Object struct { Attributes []*Schema_Attribute `protobuf:"bytes,1,rep,name=attributes,proto3" json:"attributes,omitempty"` Nesting Schema_Object_NestingMode `protobuf:"varint,3,opt,name=nesting,proto3,enum=tfplugin6.Schema_Object_NestingMode" json:"nesting,omitempty"` + // MinItems and MaxItems were never used in the protocol, and have no + // effect on validation. + // // Deprecated: Do not use. MinItems int64 `protobuf:"varint,4,opt,name=min_items,json=minItems,proto3" json:"min_items,omitempty"` // Deprecated: Do not use. From 016463ea9c01fbfff0bbd4bb29b29b03d935be45 Mon Sep 17 00:00:00 2001 From: James Bardin Date: Tue, 28 Sep 2021 17:57:10 -0400 Subject: [PATCH 118/644] don't check all ancestors for data depends_on Only depends_on ancestors for transitive dependencies when we're not pointed directly at a resource. 
We can't be much more precise here, since in order to maintain our guarantee that data sources will wait for explicit dependencies, if those dependencies happen to be a module, output, or variable, we have to find some upstream managed resource in order to check for a planned change. --- internal/terraform/transform_reference.go | 18 +++++++++++++----- 1 file changed, 13 insertions(+), 5 deletions(-) diff --git a/internal/terraform/transform_reference.go b/internal/terraform/transform_reference.go index fe64e370770d..bd07161ca02e 100644 --- a/internal/terraform/transform_reference.go +++ b/internal/terraform/transform_reference.go @@ -353,11 +353,19 @@ func (m ReferenceMap) dependsOn(g *Graph, depender graphNodeDependsOn) ([]dag.Ve } res = append(res, rv) - // and check any ancestors for transitive dependencies - ans, _ := g.Ancestors(rv) - for _, v := range ans { - if isDependableResource(v) { - res = append(res, v) + // Check any ancestors for transitive dependencies when we're + // not pointed directly at a resource. We can't be much more + // precise here, since in order to maintain our guarantee that data + // sources will wait for explicit dependencies, if those dependencies + // happen to be a module, output, or variable, we have to find some + // upstream managed resource in order to check for a planned + // change. + if _, ok := rv.(GraphNodeConfigResource); !ok { + ans, _ := g.Ancestors(rv) + for _, v := range ans { + if isDependableResource(v) { + res = append(res, v) + } } } } From 618e9cf8ecd1b02323af7fa3e86a4dc9c4848379 Mon Sep 17 00:00:00 2001 From: James Bardin Date: Thu, 30 Sep 2021 16:00:37 -0400 Subject: [PATCH 119/644] test for unexpected data reads --- internal/terraform/context_plan2_test.go | 108 +++++++++++++++++++++++ 1 file changed, 108 insertions(+) diff --git a/internal/terraform/context_plan2_test.go b/internal/terraform/context_plan2_test.go index 6d54e9eef180..b1034d7fb62e 100644 --- a/internal/terraform/context_plan2_test.go +++ b/internal/terraform/context_plan2_test.go @@ -1971,3 +1971,111 @@ func TestContext2Plan_forceReplaceIncompleteAddr(t *testing.T) { } }) } + +// Verify that adding a module instance does force existing module data sources +// to be deferred +func TestContext2Plan_noChangeDataSourceAddingModuleInstance(t *testing.T) { + m := testModuleInline(t, map[string]string{ + "main.tf": ` +locals { + data = { + a = "a" + b = "b" + } +} + +module "one" { + source = "./mod" + for_each = local.data + input = each.value +} + +module "two" { + source = "./mod" + for_each = module.one + input = each.value.output +} +`, + "mod/main.tf": ` +variable "input" { +} + +resource "test_resource" "x" { + value = var.input +} + +data "test_data_source" "d" { + foo = test_resource.x.id +} + +output "output" { + value = test_resource.x.id +} +`, + }) + + p := testProvider("test") + p.ReadDataSourceResponse = &providers.ReadDataSourceResponse{ + State: cty.ObjectVal(map[string]cty.Value{ + "id": cty.StringVal("data"), + "foo": cty.StringVal("foo"), + }), + } + state := states.NewState() + modOne := addrs.RootModuleInstance.Child("one", addrs.StringKey("a")) + modTwo := addrs.RootModuleInstance.Child("two", addrs.StringKey("a")) + one := state.EnsureModule(modOne) + two := state.EnsureModule(modTwo) + one.SetResourceInstanceCurrent( + mustResourceInstanceAddr(`test_resource.x`).Resource, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"foo","value":"a"}`), + }, + 
mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`), + ) + one.SetResourceInstanceCurrent( + mustResourceInstanceAddr(`data.test_data_source.d`).Resource, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"data"}`), + }, + mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`), + ) + two.SetResourceInstanceCurrent( + mustResourceInstanceAddr(`test_resource.x`).Resource, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"foo","value":"foo"}`), + }, + mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`), + ) + two.SetResourceInstanceCurrent( + mustResourceInstanceAddr(`data.test_data_source.d`).Resource, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectReady, + AttrsJSON: []byte(`{"id":"data"}`), + }, + mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`), + ) + + ctx := testContext2(t, &ContextOpts{ + Providers: map[addrs.Provider]providers.Factory{ + addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), + }, + }) + + plan, diags := ctx.Plan(m, state, DefaultPlanOpts) + assertNoErrors(t, diags) + + for _, res := range plan.Changes.Resources { + // both existing data sources should be read during plan + if res.Addr.Module[0].InstanceKey == addrs.StringKey("b") { + continue + } + + if res.Addr.Resource.Resource.Mode == addrs.DataResourceMode && res.Action != plans.NoOp { + t.Errorf("unexpected %s plan for %s", res.Action, res.Addr) + } + } +} From 8d193ad268c6601ac917194e24f581ad36bb91d0 Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Wed, 29 Sep 2021 15:51:09 -0700 Subject: [PATCH 120/644] core: Simplify and centralize plugin availability checks Historically the responsibility for making sure that all of the available providers are of suitable versions and match the appropriate checksums has been split rather inexplicably over multiple different layers, with some of the checks happening as late as creating a terraform.Context. We're gradually iterating towards making that all be handled in one place, but in this step we're just cleaning up some old remnants from the main "terraform" package, which is now no longer responsible for any version or checksum verification and instead just assumes it's been provided with suitable factory functions by its caller. We do still have a pre-check here to make sure that we at least have a factory function for each plugin the configuration seems to depend on, because if we don't do that up front then it ends up getting caught instead deep inside the Terraform runtime, often inside a concurrent graph walk and thus it's not deterministic which codepath will happen to catch it on a particular run. As of this commit, this actually does leave some holes in our checks: the command package is using the dependency lock file to make sure we have exactly the provider packages we expect (exact versions and checksums), which is the most crucial part, but we don't yet have any spot where we make sure that the lock file is consistent with the current configuration, and we are no longer preserving the provider checksums as part of a saved plan. Both of those will come in subsequent commits. 
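For context, the dependency lock entries being checked look roughly like the following (an illustrative sketch with placeholder values, in the same shape as the lock file fixtures that appear later in this patch; the provider address and hash here are stand-ins). The command package requires each installed provider to match its entry exactly, by both version and checksum:

provider "registry.terraform.io/hashicorp/test" {
  version     = "1.0.0"
  constraints = ">= 1.0.0"
  hashes = [
    # Placeholder checksum; real entries carry one or more hashes recorded
    # by "terraform init".
    "h1:placeholder",
  ]
}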
While it's unusual to have a series of commits that briefly subtracts functionality and then adds back in equivalent functionality later, the lock file checking is the only part that's crucial for security reasons, with everything else mainly just being to give better feedback when folks seem to be using Terraform incorrectly. The other bits are therefore mostly cosmetic and okay to be absent briefly as we work towards a better design that is clearer about where that responsibility belongs. --- internal/backend/local/backend_local.go | 5 - internal/command/meta.go | 24 -- internal/command/plan_test.go | 2 +- internal/command/plugins_lock.go | 7 - .../plans/internal/planproto/planfile.pb.go | 369 +++++++----------- .../plans/internal/planproto/planfile.proto | 16 - internal/plans/plan.go | 1 - internal/plans/planfile/planfile_test.go | 1 - internal/plans/planfile/tfplan.go | 17 - internal/plans/planfile/tfplan_test.go | 8 - internal/terraform/context.go | 183 +++++---- internal/terraform/context_plan.go | 7 +- internal/terraform/context_plan_test.go | 7 - internal/terraform/context_plugins.go | 10 + internal/terraform/context_refresh_test.go | 5 +- internal/terraform/context_test.go | 266 +++++-------- internal/terraform/context_validate.go | 6 +- 17 files changed, 364 insertions(+), 570 deletions(-) diff --git a/internal/backend/local/backend_local.go b/internal/backend/local/backend_local.go index 0ba6e66fc681..5ce5929ee310 100644 --- a/internal/backend/local/backend_local.go +++ b/internal/backend/local/backend_local.go @@ -252,11 +252,6 @@ func (b *Local) localRunForPlanFile(pf *planfile.Reader, run *backend.LocalRun, // we need to apply the plan. run.Plan = plan - // When we're applying a saved plan, our context must verify that all of - // the providers it ends up using are identical to those which created - // the plan. - coreOpts.ProviderSHA256s = plan.ProviderSHA256s - tfCtx, moreDiags := terraform.NewContext(coreOpts) diags = diags.Append(moreDiags) if moreDiags.HasErrors() { diff --git a/internal/command/meta.go b/internal/command/meta.go index 79e1b6f843a2..0a4029ef0151 100644 --- a/internal/command/meta.go +++ b/internal/command/meta.go @@ -478,32 +478,8 @@ func (m *Meta) contextOpts() (*terraform.ContextOpts, error) { } opts.Providers = providerFactories opts.Provisioners = m.provisionerFactories() - - // Read the dependency locks so that they can be verified against the - // provider requirements in the configuration - lockedDependencies, diags := m.lockedDependencies() - - // If the locks file is invalid, we should fail early rather than - // ignore it. A missing locks file will return no error. 
- if diags.HasErrors() { - return nil, diags.Err() - } - opts.LockedDependencies = lockedDependencies - - // If any unmanaged providers or dev overrides are enabled, they must - // be listed in the context so that they can be ignored when verifying - // the locks against the configuration - opts.ProvidersInDevelopment = make(map[addrs.Provider]struct{}) - for provider := range m.UnmanagedProviders { - opts.ProvidersInDevelopment[provider] = struct{}{} - } - for provider := range m.ProviderDevOverrides { - opts.ProvidersInDevelopment[provider] = struct{}{} - } } - opts.ProviderSHA256s = m.providerPluginsLock().Read() - opts.Meta = &terraform.ContextMeta{ Env: workspace, OriginalWorkingDir: m.WorkingDir.OriginalWorkingDir(), diff --git a/internal/command/plan_test.go b/internal/command/plan_test.go index 880f2e971e98..68f051f55532 100644 --- a/internal/command/plan_test.go +++ b/internal/command/plan_test.go @@ -1051,7 +1051,7 @@ func TestPlan_init_required(t *testing.T) { t.Fatalf("expected error, got success") } got := output.Stderr() - if !strings.Contains(got, `failed to read schema for test_instance.foo in registry.terraform.io/hashicorp/test`) { + if !strings.Contains(got, "Error: Missing required provider") { t.Fatal("wrong error message in output:", got) } } diff --git a/internal/command/plugins_lock.go b/internal/command/plugins_lock.go index b7f3c6b4e9db..03f11b71d286 100644 --- a/internal/command/plugins_lock.go +++ b/internal/command/plugins_lock.go @@ -5,15 +5,8 @@ import ( "fmt" "io/ioutil" "log" - "path/filepath" ) -func (m *Meta) providerPluginsLock() *pluginSHA256LockFile { - return &pluginSHA256LockFile{ - Filename: filepath.Join(m.pluginDir(), "lock.json"), - } -} - type pluginSHA256LockFile struct { Filename string } diff --git a/internal/plans/internal/planproto/planfile.pb.go b/internal/plans/internal/planproto/planfile.pb.go index beb50852ab15..a8810e0d746a 100644 --- a/internal/plans/internal/planproto/planfile.pb.go +++ b/internal/plans/internal/planproto/planfile.pb.go @@ -257,9 +257,6 @@ type Plan struct { ForceReplaceAddrs []string `protobuf:"bytes,16,rep,name=force_replace_addrs,json=forceReplaceAddrs,proto3" json:"force_replace_addrs,omitempty"` // The version string for the Terraform binary that created this plan. TerraformVersion string `protobuf:"bytes,14,opt,name=terraform_version,json=terraformVersion,proto3" json:"terraform_version,omitempty"` - // SHA256 digests of all of the provider plugin binaries that were used - // in the creation of this plan. - ProviderHashes map[string]*Hash `protobuf:"bytes,15,rep,name=provider_hashes,json=providerHashes,proto3" json:"provider_hashes,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"` // Backend is a description of the backend configuration and other related // settings at the time the plan was created. Backend *Backend `protobuf:"bytes,13,opt,name=backend,proto3" json:"backend,omitempty"` @@ -360,13 +357,6 @@ func (x *Plan) GetTerraformVersion() string { return "" } -func (x *Plan) GetProviderHashes() map[string]*Hash { - if x != nil { - return x.ProviderHashes - } - return nil -} - func (x *Plan) GetBackend() *Backend { if x != nil { return x.Backend @@ -785,61 +775,6 @@ func (x *DynamicValue) GetMsgpack() []byte { return nil } -// Hash represents a hash value. -// -// At present hashes always use the SHA256 algorithm. In future other hash -// algorithms may be used, possibly with a transitional period of including -// both as separate attributes of this type. 
Consumers must ignore attributes -// they don't support and fail if no supported attribute is present. The -// top-level format version will not be incremented for changes to the set of -// hash algorithms. -type Hash struct { - state protoimpl.MessageState - sizeCache protoimpl.SizeCache - unknownFields protoimpl.UnknownFields - - Sha256 []byte `protobuf:"bytes,1,opt,name=sha256,proto3" json:"sha256,omitempty"` -} - -func (x *Hash) Reset() { - *x = Hash{} - if protoimpl.UnsafeEnabled { - mi := &file_planfile_proto_msgTypes[6] - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - ms.StoreMessageInfo(mi) - } -} - -func (x *Hash) String() string { - return protoimpl.X.MessageStringOf(x) -} - -func (*Hash) ProtoMessage() {} - -func (x *Hash) ProtoReflect() protoreflect.Message { - mi := &file_planfile_proto_msgTypes[6] - if protoimpl.UnsafeEnabled && x != nil { - ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) - if ms.LoadMessageInfo() == nil { - ms.StoreMessageInfo(mi) - } - return ms - } - return mi.MessageOf(x) -} - -// Deprecated: Use Hash.ProtoReflect.Descriptor instead. -func (*Hash) Descriptor() ([]byte, []int) { - return file_planfile_proto_rawDescGZIP(), []int{6} -} - -func (x *Hash) GetSha256() []byte { - if x != nil { - return x.Sha256 - } - return nil -} - // Path represents a set of steps to traverse into a data structure. It is // used to refer to a sub-structure within a dynamic data structure presented // separately. @@ -854,7 +789,7 @@ type Path struct { func (x *Path) Reset() { *x = Path{} if protoimpl.UnsafeEnabled { - mi := &file_planfile_proto_msgTypes[7] + mi := &file_planfile_proto_msgTypes[6] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -867,7 +802,7 @@ func (x *Path) String() string { func (*Path) ProtoMessage() {} func (x *Path) ProtoReflect() protoreflect.Message { - mi := &file_planfile_proto_msgTypes[7] + mi := &file_planfile_proto_msgTypes[6] if protoimpl.UnsafeEnabled && x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -880,7 +815,7 @@ func (x *Path) ProtoReflect() protoreflect.Message { // Deprecated: Use Path.ProtoReflect.Descriptor instead. func (*Path) Descriptor() ([]byte, []int) { - return file_planfile_proto_rawDescGZIP(), []int{7} + return file_planfile_proto_rawDescGZIP(), []int{6} } func (x *Path) GetSteps() []*Path_Step { @@ -904,7 +839,7 @@ type Path_Step struct { func (x *Path_Step) Reset() { *x = Path_Step{} if protoimpl.UnsafeEnabled { - mi := &file_planfile_proto_msgTypes[10] + mi := &file_planfile_proto_msgTypes[8] ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) ms.StoreMessageInfo(mi) } @@ -917,7 +852,7 @@ func (x *Path_Step) String() string { func (*Path_Step) ProtoMessage() {} func (x *Path_Step) ProtoReflect() protoreflect.Message { - mi := &file_planfile_proto_msgTypes[10] + mi := &file_planfile_proto_msgTypes[8] if protoimpl.UnsafeEnabled && x != nil { ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x)) if ms.LoadMessageInfo() == nil { @@ -930,7 +865,7 @@ func (x *Path_Step) ProtoReflect() protoreflect.Message { // Deprecated: Use Path_Step.ProtoReflect.Descriptor instead. 
func (*Path_Step) Descriptor() ([]byte, []int) { - return file_planfile_proto_rawDescGZIP(), []int{7, 0} + return file_planfile_proto_rawDescGZIP(), []int{6, 0} } func (m *Path_Step) GetSelector() isPath_Step_Selector { @@ -978,7 +913,7 @@ var File_planfile_proto protoreflect.FileDescriptor var file_planfile_proto_rawDesc = []byte{ 0x0a, 0x0e, 0x70, 0x6c, 0x61, 0x6e, 0x66, 0x69, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, - 0x12, 0x06, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x22, 0xec, 0x05, 0x0a, 0x04, 0x50, 0x6c, 0x61, + 0x12, 0x06, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x22, 0xd0, 0x04, 0x0a, 0x04, 0x50, 0x6c, 0x61, 0x6e, 0x12, 0x18, 0x0a, 0x07, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x18, 0x01, 0x20, 0x01, 0x28, 0x04, 0x52, 0x07, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x12, 0x25, 0x0a, 0x07, 0x75, 0x69, 0x5f, 0x6d, 0x6f, 0x64, 0x65, 0x18, 0x11, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x0c, 0x2e, 0x74, @@ -1007,123 +942,111 @@ var file_planfile_proto_rawDesc = []byte{ 0x72, 0x63, 0x65, 0x52, 0x65, 0x70, 0x6c, 0x61, 0x63, 0x65, 0x41, 0x64, 0x64, 0x72, 0x73, 0x12, 0x2b, 0x0a, 0x11, 0x74, 0x65, 0x72, 0x72, 0x61, 0x66, 0x6f, 0x72, 0x6d, 0x5f, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x18, 0x0e, 0x20, 0x01, 0x28, 0x09, 0x52, 0x10, 0x74, 0x65, 0x72, 0x72, - 0x61, 0x66, 0x6f, 0x72, 0x6d, 0x56, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x12, 0x49, 0x0a, 0x0f, - 0x70, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x5f, 0x68, 0x61, 0x73, 0x68, 0x65, 0x73, 0x18, - 0x0f, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x20, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x50, - 0x6c, 0x61, 0x6e, 0x2e, 0x50, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x48, 0x61, 0x73, 0x68, - 0x65, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x52, 0x0e, 0x70, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, - 0x72, 0x48, 0x61, 0x73, 0x68, 0x65, 0x73, 0x12, 0x29, 0x0a, 0x07, 0x62, 0x61, 0x63, 0x6b, 0x65, - 0x6e, 0x64, 0x18, 0x0d, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x0f, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, - 0x6e, 0x2e, 0x42, 0x61, 0x63, 0x6b, 0x65, 0x6e, 0x64, 0x52, 0x07, 0x62, 0x61, 0x63, 0x6b, 0x65, - 0x6e, 0x64, 0x1a, 0x52, 0x0a, 0x0e, 0x56, 0x61, 0x72, 0x69, 0x61, 0x62, 0x6c, 0x65, 0x73, 0x45, - 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a, 0x03, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, - 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12, 0x2a, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, - 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x14, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x44, - 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x52, 0x05, 0x76, 0x61, 0x6c, - 0x75, 0x65, 0x3a, 0x02, 0x38, 0x01, 0x1a, 0x4f, 0x0a, 0x13, 0x50, 0x72, 0x6f, 0x76, 0x69, 0x64, - 0x65, 0x72, 0x48, 0x61, 0x73, 0x68, 0x65, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a, - 0x03, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12, - 0x22, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x0c, - 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x48, 0x61, 0x73, 0x68, 0x52, 0x05, 0x76, 0x61, - 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38, 0x01, 0x22, 0x69, 0x0a, 0x07, 0x42, 0x61, 0x63, 0x6b, 0x65, - 0x6e, 0x64, 0x12, 0x12, 0x0a, 0x04, 0x74, 0x79, 0x70, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, - 0x52, 0x04, 0x74, 0x79, 0x70, 0x65, 0x12, 0x2c, 0x0a, 0x06, 0x63, 0x6f, 0x6e, 0x66, 0x69, 0x67, - 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x14, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, - 0x44, 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x52, 0x06, 0x63, 0x6f, - 0x6e, 0x66, 0x69, 0x67, 0x12, 
0x1c, 0x0a, 0x09, 0x77, 0x6f, 0x72, 0x6b, 0x73, 0x70, 0x61, 0x63, - 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x77, 0x6f, 0x72, 0x6b, 0x73, 0x70, 0x61, - 0x63, 0x65, 0x22, 0xe4, 0x01, 0x0a, 0x06, 0x43, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x12, 0x26, 0x0a, - 0x06, 0x61, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x0e, 0x2e, - 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x41, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x06, 0x61, - 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x2c, 0x0a, 0x06, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x73, 0x18, - 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x14, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x44, - 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x52, 0x06, 0x76, 0x61, 0x6c, - 0x75, 0x65, 0x73, 0x12, 0x42, 0x0a, 0x16, 0x62, 0x65, 0x66, 0x6f, 0x72, 0x65, 0x5f, 0x73, 0x65, - 0x6e, 0x73, 0x69, 0x74, 0x69, 0x76, 0x65, 0x5f, 0x70, 0x61, 0x74, 0x68, 0x73, 0x18, 0x03, 0x20, - 0x03, 0x28, 0x0b, 0x32, 0x0c, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x50, 0x61, 0x74, - 0x68, 0x52, 0x14, 0x62, 0x65, 0x66, 0x6f, 0x72, 0x65, 0x53, 0x65, 0x6e, 0x73, 0x69, 0x74, 0x69, - 0x76, 0x65, 0x50, 0x61, 0x74, 0x68, 0x73, 0x12, 0x40, 0x0a, 0x15, 0x61, 0x66, 0x74, 0x65, 0x72, - 0x5f, 0x73, 0x65, 0x6e, 0x73, 0x69, 0x74, 0x69, 0x76, 0x65, 0x5f, 0x70, 0x61, 0x74, 0x68, 0x73, - 0x18, 0x04, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x0c, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, - 0x50, 0x61, 0x74, 0x68, 0x52, 0x13, 0x61, 0x66, 0x74, 0x65, 0x72, 0x53, 0x65, 0x6e, 0x73, 0x69, - 0x74, 0x69, 0x76, 0x65, 0x50, 0x61, 0x74, 0x68, 0x73, 0x22, 0xd3, 0x02, 0x0a, 0x16, 0x52, 0x65, - 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x49, 0x6e, 0x73, 0x74, 0x61, 0x6e, 0x63, 0x65, 0x43, 0x68, - 0x61, 0x6e, 0x67, 0x65, 0x12, 0x12, 0x0a, 0x04, 0x61, 0x64, 0x64, 0x72, 0x18, 0x0d, 0x20, 0x01, - 0x28, 0x09, 0x52, 0x04, 0x61, 0x64, 0x64, 0x72, 0x12, 0x22, 0x0a, 0x0d, 0x70, 0x72, 0x65, 0x76, - 0x5f, 0x72, 0x75, 0x6e, 0x5f, 0x61, 0x64, 0x64, 0x72, 0x18, 0x0e, 0x20, 0x01, 0x28, 0x09, 0x52, - 0x0b, 0x70, 0x72, 0x65, 0x76, 0x52, 0x75, 0x6e, 0x41, 0x64, 0x64, 0x72, 0x12, 0x1f, 0x0a, 0x0b, - 0x64, 0x65, 0x70, 0x6f, 0x73, 0x65, 0x64, 0x5f, 0x6b, 0x65, 0x79, 0x18, 0x07, 0x20, 0x01, 0x28, - 0x09, 0x52, 0x0a, 0x64, 0x65, 0x70, 0x6f, 0x73, 0x65, 0x64, 0x4b, 0x65, 0x79, 0x12, 0x1a, 0x0a, - 0x08, 0x70, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x18, 0x08, 0x20, 0x01, 0x28, 0x09, 0x52, - 0x08, 0x70, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x12, 0x26, 0x0a, 0x06, 0x63, 0x68, 0x61, - 0x6e, 0x67, 0x65, 0x18, 0x09, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x0e, 0x2e, 0x74, 0x66, 0x70, 0x6c, - 0x61, 0x6e, 0x2e, 0x43, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x52, 0x06, 0x63, 0x68, 0x61, 0x6e, 0x67, - 0x65, 0x12, 0x18, 0x0a, 0x07, 0x70, 0x72, 0x69, 0x76, 0x61, 0x74, 0x65, 0x18, 0x0a, 0x20, 0x01, - 0x28, 0x0c, 0x52, 0x07, 0x70, 0x72, 0x69, 0x76, 0x61, 0x74, 0x65, 0x12, 0x37, 0x0a, 0x10, 0x72, - 0x65, 0x71, 0x75, 0x69, 0x72, 0x65, 0x64, 0x5f, 0x72, 0x65, 0x70, 0x6c, 0x61, 0x63, 0x65, 0x18, - 0x0b, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x0c, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x50, - 0x61, 0x74, 0x68, 0x52, 0x0f, 0x72, 0x65, 0x71, 0x75, 0x69, 0x72, 0x65, 0x64, 0x52, 0x65, 0x70, - 0x6c, 0x61, 0x63, 0x65, 0x12, 0x49, 0x0a, 0x0d, 0x61, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x5f, 0x72, - 0x65, 0x61, 0x73, 0x6f, 0x6e, 0x18, 0x0c, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x24, 0x2e, 0x74, 0x66, - 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x49, 0x6e, 0x73, - 0x74, 0x61, 0x6e, 0x63, 0x65, 0x41, 0x63, 0x74, 0x69, 
0x6f, 0x6e, 0x52, 0x65, 0x61, 0x73, 0x6f, - 0x6e, 0x52, 0x0c, 0x61, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x65, 0x61, 0x73, 0x6f, 0x6e, 0x22, - 0x68, 0x0a, 0x0c, 0x4f, 0x75, 0x74, 0x70, 0x75, 0x74, 0x43, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x12, - 0x12, 0x0a, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x6e, - 0x61, 0x6d, 0x65, 0x12, 0x26, 0x0a, 0x06, 0x63, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x18, 0x02, 0x20, - 0x01, 0x28, 0x0b, 0x32, 0x0e, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x43, 0x68, 0x61, - 0x6e, 0x67, 0x65, 0x52, 0x06, 0x63, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x12, 0x1c, 0x0a, 0x09, 0x73, - 0x65, 0x6e, 0x73, 0x69, 0x74, 0x69, 0x76, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x08, 0x52, 0x09, - 0x73, 0x65, 0x6e, 0x73, 0x69, 0x74, 0x69, 0x76, 0x65, 0x22, 0x28, 0x0a, 0x0c, 0x44, 0x79, 0x6e, - 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x12, 0x18, 0x0a, 0x07, 0x6d, 0x73, 0x67, - 0x70, 0x61, 0x63, 0x6b, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x07, 0x6d, 0x73, 0x67, 0x70, - 0x61, 0x63, 0x6b, 0x22, 0x1e, 0x0a, 0x04, 0x48, 0x61, 0x73, 0x68, 0x12, 0x16, 0x0a, 0x06, 0x73, - 0x68, 0x61, 0x32, 0x35, 0x36, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x06, 0x73, 0x68, 0x61, - 0x32, 0x35, 0x36, 0x22, 0xa5, 0x01, 0x0a, 0x04, 0x50, 0x61, 0x74, 0x68, 0x12, 0x27, 0x0a, 0x05, - 0x73, 0x74, 0x65, 0x70, 0x73, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x11, 0x2e, 0x74, 0x66, - 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x50, 0x61, 0x74, 0x68, 0x2e, 0x53, 0x74, 0x65, 0x70, 0x52, 0x05, - 0x73, 0x74, 0x65, 0x70, 0x73, 0x1a, 0x74, 0x0a, 0x04, 0x53, 0x74, 0x65, 0x70, 0x12, 0x27, 0x0a, - 0x0e, 0x61, 0x74, 0x74, 0x72, 0x69, 0x62, 0x75, 0x74, 0x65, 0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x18, - 0x01, 0x20, 0x01, 0x28, 0x09, 0x48, 0x00, 0x52, 0x0d, 0x61, 0x74, 0x74, 0x72, 0x69, 0x62, 0x75, - 0x74, 0x65, 0x4e, 0x61, 0x6d, 0x65, 0x12, 0x37, 0x0a, 0x0b, 0x65, 0x6c, 0x65, 0x6d, 0x65, 0x6e, - 0x74, 0x5f, 0x6b, 0x65, 0x79, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x14, 0x2e, 0x74, 0x66, - 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x44, 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, - 0x65, 0x48, 0x00, 0x52, 0x0a, 0x65, 0x6c, 0x65, 0x6d, 0x65, 0x6e, 0x74, 0x4b, 0x65, 0x79, 0x42, - 0x0a, 0x0a, 0x08, 0x73, 0x65, 0x6c, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x2a, 0x31, 0x0a, 0x04, 0x4d, - 0x6f, 0x64, 0x65, 0x12, 0x0a, 0x0a, 0x06, 0x4e, 0x4f, 0x52, 0x4d, 0x41, 0x4c, 0x10, 0x00, 0x12, - 0x0b, 0x0a, 0x07, 0x44, 0x45, 0x53, 0x54, 0x52, 0x4f, 0x59, 0x10, 0x01, 0x12, 0x10, 0x0a, 0x0c, - 0x52, 0x45, 0x46, 0x52, 0x45, 0x53, 0x48, 0x5f, 0x4f, 0x4e, 0x4c, 0x59, 0x10, 0x02, 0x2a, 0x70, - 0x0a, 0x06, 0x41, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x08, 0x0a, 0x04, 0x4e, 0x4f, 0x4f, 0x50, - 0x10, 0x00, 0x12, 0x0a, 0x0a, 0x06, 0x43, 0x52, 0x45, 0x41, 0x54, 0x45, 0x10, 0x01, 0x12, 0x08, - 0x0a, 0x04, 0x52, 0x45, 0x41, 0x44, 0x10, 0x02, 0x12, 0x0a, 0x0a, 0x06, 0x55, 0x50, 0x44, 0x41, - 0x54, 0x45, 0x10, 0x03, 0x12, 0x0a, 0x0a, 0x06, 0x44, 0x45, 0x4c, 0x45, 0x54, 0x45, 0x10, 0x05, - 0x12, 0x16, 0x0a, 0x12, 0x44, 0x45, 0x4c, 0x45, 0x54, 0x45, 0x5f, 0x54, 0x48, 0x45, 0x4e, 0x5f, - 0x43, 0x52, 0x45, 0x41, 0x54, 0x45, 0x10, 0x06, 0x12, 0x16, 0x0a, 0x12, 0x43, 0x52, 0x45, 0x41, - 0x54, 0x45, 0x5f, 0x54, 0x48, 0x45, 0x4e, 0x5f, 0x44, 0x45, 0x4c, 0x45, 0x54, 0x45, 0x10, 0x07, - 0x2a, 0xa7, 0x02, 0x0a, 0x1c, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x49, 0x6e, 0x73, - 0x74, 0x61, 0x6e, 0x63, 0x65, 0x41, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x65, 0x61, 0x73, 0x6f, - 0x6e, 0x12, 0x08, 0x0a, 0x04, 0x4e, 0x4f, 0x4e, 0x45, 0x10, 0x00, 0x12, 0x1b, 
0x0a, 0x17, 0x52, - 0x45, 0x50, 0x4c, 0x41, 0x43, 0x45, 0x5f, 0x42, 0x45, 0x43, 0x41, 0x55, 0x53, 0x45, 0x5f, 0x54, - 0x41, 0x49, 0x4e, 0x54, 0x45, 0x44, 0x10, 0x01, 0x12, 0x16, 0x0a, 0x12, 0x52, 0x45, 0x50, 0x4c, - 0x41, 0x43, 0x45, 0x5f, 0x42, 0x59, 0x5f, 0x52, 0x45, 0x51, 0x55, 0x45, 0x53, 0x54, 0x10, 0x02, - 0x12, 0x21, 0x0a, 0x1d, 0x52, 0x45, 0x50, 0x4c, 0x41, 0x43, 0x45, 0x5f, 0x42, 0x45, 0x43, 0x41, - 0x55, 0x53, 0x45, 0x5f, 0x43, 0x41, 0x4e, 0x4e, 0x4f, 0x54, 0x5f, 0x55, 0x50, 0x44, 0x41, 0x54, - 0x45, 0x10, 0x03, 0x12, 0x25, 0x0a, 0x21, 0x44, 0x45, 0x4c, 0x45, 0x54, 0x45, 0x5f, 0x42, 0x45, - 0x43, 0x41, 0x55, 0x53, 0x45, 0x5f, 0x4e, 0x4f, 0x5f, 0x52, 0x45, 0x53, 0x4f, 0x55, 0x52, 0x43, - 0x45, 0x5f, 0x43, 0x4f, 0x4e, 0x46, 0x49, 0x47, 0x10, 0x04, 0x12, 0x23, 0x0a, 0x1f, 0x44, 0x45, - 0x4c, 0x45, 0x54, 0x45, 0x5f, 0x42, 0x45, 0x43, 0x41, 0x55, 0x53, 0x45, 0x5f, 0x57, 0x52, 0x4f, - 0x4e, 0x47, 0x5f, 0x52, 0x45, 0x50, 0x45, 0x54, 0x49, 0x54, 0x49, 0x4f, 0x4e, 0x10, 0x05, 0x12, - 0x1e, 0x0a, 0x1a, 0x44, 0x45, 0x4c, 0x45, 0x54, 0x45, 0x5f, 0x42, 0x45, 0x43, 0x41, 0x55, 0x53, - 0x45, 0x5f, 0x43, 0x4f, 0x55, 0x4e, 0x54, 0x5f, 0x49, 0x4e, 0x44, 0x45, 0x58, 0x10, 0x06, 0x12, - 0x1b, 0x0a, 0x17, 0x44, 0x45, 0x4c, 0x45, 0x54, 0x45, 0x5f, 0x42, 0x45, 0x43, 0x41, 0x55, 0x53, - 0x45, 0x5f, 0x45, 0x41, 0x43, 0x48, 0x5f, 0x4b, 0x45, 0x59, 0x10, 0x07, 0x12, 0x1c, 0x0a, 0x18, - 0x44, 0x45, 0x4c, 0x45, 0x54, 0x45, 0x5f, 0x42, 0x45, 0x43, 0x41, 0x55, 0x53, 0x45, 0x5f, 0x4e, - 0x4f, 0x5f, 0x4d, 0x4f, 0x44, 0x55, 0x4c, 0x45, 0x10, 0x08, 0x42, 0x42, 0x5a, 0x40, 0x67, 0x69, - 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x68, 0x61, 0x73, 0x68, 0x69, 0x63, 0x6f, - 0x72, 0x70, 0x2f, 0x74, 0x65, 0x72, 0x72, 0x61, 0x66, 0x6f, 0x72, 0x6d, 0x2f, 0x69, 0x6e, 0x74, - 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x2f, 0x70, 0x6c, 0x61, 0x6e, 0x73, 0x2f, 0x69, 0x6e, 0x74, 0x65, - 0x72, 0x6e, 0x61, 0x6c, 0x2f, 0x70, 0x6c, 0x61, 0x6e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x06, - 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33, + 0x61, 0x66, 0x6f, 0x72, 0x6d, 0x56, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x12, 0x29, 0x0a, 0x07, + 0x62, 0x61, 0x63, 0x6b, 0x65, 0x6e, 0x64, 0x18, 0x0d, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x0f, 0x2e, + 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x42, 0x61, 0x63, 0x6b, 0x65, 0x6e, 0x64, 0x52, 0x07, + 0x62, 0x61, 0x63, 0x6b, 0x65, 0x6e, 0x64, 0x1a, 0x52, 0x0a, 0x0e, 0x56, 0x61, 0x72, 0x69, 0x61, + 0x62, 0x6c, 0x65, 0x73, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a, 0x03, 0x6b, 0x65, 0x79, + 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12, 0x2a, 0x0a, 0x05, 0x76, + 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x14, 0x2e, 0x74, 0x66, 0x70, + 0x6c, 0x61, 0x6e, 0x2e, 0x44, 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, 0x65, + 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38, 0x01, 0x22, 0x69, 0x0a, 0x07, 0x42, + 0x61, 0x63, 0x6b, 0x65, 0x6e, 0x64, 0x12, 0x12, 0x0a, 0x04, 0x74, 0x79, 0x70, 0x65, 0x18, 0x01, + 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x74, 0x79, 0x70, 0x65, 0x12, 0x2c, 0x0a, 0x06, 0x63, 0x6f, + 0x6e, 0x66, 0x69, 0x67, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x14, 0x2e, 0x74, 0x66, 0x70, + 0x6c, 0x61, 0x6e, 0x2e, 0x44, 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, 0x65, + 0x52, 0x06, 0x63, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x12, 0x1c, 0x0a, 0x09, 0x77, 0x6f, 0x72, 0x6b, + 0x73, 0x70, 0x61, 0x63, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x77, 0x6f, 0x72, + 0x6b, 0x73, 0x70, 0x61, 0x63, 0x65, 0x22, 0xe4, 0x01, 0x0a, 0x06, 
0x43, 0x68, 0x61, 0x6e, 0x67, + 0x65, 0x12, 0x26, 0x0a, 0x06, 0x61, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x18, 0x01, 0x20, 0x01, 0x28, + 0x0e, 0x32, 0x0e, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x41, 0x63, 0x74, 0x69, 0x6f, + 0x6e, 0x52, 0x06, 0x61, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x2c, 0x0a, 0x06, 0x76, 0x61, 0x6c, + 0x75, 0x65, 0x73, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x14, 0x2e, 0x74, 0x66, 0x70, 0x6c, + 0x61, 0x6e, 0x2e, 0x44, 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x52, + 0x06, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x73, 0x12, 0x42, 0x0a, 0x16, 0x62, 0x65, 0x66, 0x6f, 0x72, + 0x65, 0x5f, 0x73, 0x65, 0x6e, 0x73, 0x69, 0x74, 0x69, 0x76, 0x65, 0x5f, 0x70, 0x61, 0x74, 0x68, + 0x73, 0x18, 0x03, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x0c, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, + 0x2e, 0x50, 0x61, 0x74, 0x68, 0x52, 0x14, 0x62, 0x65, 0x66, 0x6f, 0x72, 0x65, 0x53, 0x65, 0x6e, + 0x73, 0x69, 0x74, 0x69, 0x76, 0x65, 0x50, 0x61, 0x74, 0x68, 0x73, 0x12, 0x40, 0x0a, 0x15, 0x61, + 0x66, 0x74, 0x65, 0x72, 0x5f, 0x73, 0x65, 0x6e, 0x73, 0x69, 0x74, 0x69, 0x76, 0x65, 0x5f, 0x70, + 0x61, 0x74, 0x68, 0x73, 0x18, 0x04, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x0c, 0x2e, 0x74, 0x66, 0x70, + 0x6c, 0x61, 0x6e, 0x2e, 0x50, 0x61, 0x74, 0x68, 0x52, 0x13, 0x61, 0x66, 0x74, 0x65, 0x72, 0x53, + 0x65, 0x6e, 0x73, 0x69, 0x74, 0x69, 0x76, 0x65, 0x50, 0x61, 0x74, 0x68, 0x73, 0x22, 0xd3, 0x02, + 0x0a, 0x16, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x49, 0x6e, 0x73, 0x74, 0x61, 0x6e, + 0x63, 0x65, 0x43, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x12, 0x12, 0x0a, 0x04, 0x61, 0x64, 0x64, 0x72, + 0x18, 0x0d, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x61, 0x64, 0x64, 0x72, 0x12, 0x22, 0x0a, 0x0d, + 0x70, 0x72, 0x65, 0x76, 0x5f, 0x72, 0x75, 0x6e, 0x5f, 0x61, 0x64, 0x64, 0x72, 0x18, 0x0e, 0x20, + 0x01, 0x28, 0x09, 0x52, 0x0b, 0x70, 0x72, 0x65, 0x76, 0x52, 0x75, 0x6e, 0x41, 0x64, 0x64, 0x72, + 0x12, 0x1f, 0x0a, 0x0b, 0x64, 0x65, 0x70, 0x6f, 0x73, 0x65, 0x64, 0x5f, 0x6b, 0x65, 0x79, 0x18, + 0x07, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0a, 0x64, 0x65, 0x70, 0x6f, 0x73, 0x65, 0x64, 0x4b, 0x65, + 0x79, 0x12, 0x1a, 0x0a, 0x08, 0x70, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x18, 0x08, 0x20, + 0x01, 0x28, 0x09, 0x52, 0x08, 0x70, 0x72, 0x6f, 0x76, 0x69, 0x64, 0x65, 0x72, 0x12, 0x26, 0x0a, + 0x06, 0x63, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x18, 0x09, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x0e, 0x2e, + 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x43, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x52, 0x06, 0x63, + 0x68, 0x61, 0x6e, 0x67, 0x65, 0x12, 0x18, 0x0a, 0x07, 0x70, 0x72, 0x69, 0x76, 0x61, 0x74, 0x65, + 0x18, 0x0a, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x07, 0x70, 0x72, 0x69, 0x76, 0x61, 0x74, 0x65, 0x12, + 0x37, 0x0a, 0x10, 0x72, 0x65, 0x71, 0x75, 0x69, 0x72, 0x65, 0x64, 0x5f, 0x72, 0x65, 0x70, 0x6c, + 0x61, 0x63, 0x65, 0x18, 0x0b, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x0c, 0x2e, 0x74, 0x66, 0x70, 0x6c, + 0x61, 0x6e, 0x2e, 0x50, 0x61, 0x74, 0x68, 0x52, 0x0f, 0x72, 0x65, 0x71, 0x75, 0x69, 0x72, 0x65, + 0x64, 0x52, 0x65, 0x70, 0x6c, 0x61, 0x63, 0x65, 0x12, 0x49, 0x0a, 0x0d, 0x61, 0x63, 0x74, 0x69, + 0x6f, 0x6e, 0x5f, 0x72, 0x65, 0x61, 0x73, 0x6f, 0x6e, 0x18, 0x0c, 0x20, 0x01, 0x28, 0x0e, 0x32, + 0x24, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, + 0x65, 0x49, 0x6e, 0x73, 0x74, 0x61, 0x6e, 0x63, 0x65, 0x41, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x52, + 0x65, 0x61, 0x73, 0x6f, 0x6e, 0x52, 0x0c, 0x61, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x65, 0x61, + 0x73, 0x6f, 0x6e, 0x22, 0x68, 0x0a, 0x0c, 0x4f, 0x75, 0x74, 0x70, 0x75, 0x74, 0x43, 0x68, 
0x61, + 0x6e, 0x67, 0x65, 0x12, 0x12, 0x0a, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, + 0x09, 0x52, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x12, 0x26, 0x0a, 0x06, 0x63, 0x68, 0x61, 0x6e, 0x67, + 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x0e, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, + 0x2e, 0x43, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x52, 0x06, 0x63, 0x68, 0x61, 0x6e, 0x67, 0x65, 0x12, + 0x1c, 0x0a, 0x09, 0x73, 0x65, 0x6e, 0x73, 0x69, 0x74, 0x69, 0x76, 0x65, 0x18, 0x03, 0x20, 0x01, + 0x28, 0x08, 0x52, 0x09, 0x73, 0x65, 0x6e, 0x73, 0x69, 0x74, 0x69, 0x76, 0x65, 0x22, 0x28, 0x0a, + 0x0c, 0x44, 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x12, 0x18, 0x0a, + 0x07, 0x6d, 0x73, 0x67, 0x70, 0x61, 0x63, 0x6b, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x07, + 0x6d, 0x73, 0x67, 0x70, 0x61, 0x63, 0x6b, 0x22, 0xa5, 0x01, 0x0a, 0x04, 0x50, 0x61, 0x74, 0x68, + 0x12, 0x27, 0x0a, 0x05, 0x73, 0x74, 0x65, 0x70, 0x73, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b, 0x32, + 0x11, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x50, 0x61, 0x74, 0x68, 0x2e, 0x53, 0x74, + 0x65, 0x70, 0x52, 0x05, 0x73, 0x74, 0x65, 0x70, 0x73, 0x1a, 0x74, 0x0a, 0x04, 0x53, 0x74, 0x65, + 0x70, 0x12, 0x27, 0x0a, 0x0e, 0x61, 0x74, 0x74, 0x72, 0x69, 0x62, 0x75, 0x74, 0x65, 0x5f, 0x6e, + 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x48, 0x00, 0x52, 0x0d, 0x61, 0x74, 0x74, + 0x72, 0x69, 0x62, 0x75, 0x74, 0x65, 0x4e, 0x61, 0x6d, 0x65, 0x12, 0x37, 0x0a, 0x0b, 0x65, 0x6c, + 0x65, 0x6d, 0x65, 0x6e, 0x74, 0x5f, 0x6b, 0x65, 0x79, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, + 0x14, 0x2e, 0x74, 0x66, 0x70, 0x6c, 0x61, 0x6e, 0x2e, 0x44, 0x79, 0x6e, 0x61, 0x6d, 0x69, 0x63, + 0x56, 0x61, 0x6c, 0x75, 0x65, 0x48, 0x00, 0x52, 0x0a, 0x65, 0x6c, 0x65, 0x6d, 0x65, 0x6e, 0x74, + 0x4b, 0x65, 0x79, 0x42, 0x0a, 0x0a, 0x08, 0x73, 0x65, 0x6c, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x2a, + 0x31, 0x0a, 0x04, 0x4d, 0x6f, 0x64, 0x65, 0x12, 0x0a, 0x0a, 0x06, 0x4e, 0x4f, 0x52, 0x4d, 0x41, + 0x4c, 0x10, 0x00, 0x12, 0x0b, 0x0a, 0x07, 0x44, 0x45, 0x53, 0x54, 0x52, 0x4f, 0x59, 0x10, 0x01, + 0x12, 0x10, 0x0a, 0x0c, 0x52, 0x45, 0x46, 0x52, 0x45, 0x53, 0x48, 0x5f, 0x4f, 0x4e, 0x4c, 0x59, + 0x10, 0x02, 0x2a, 0x70, 0x0a, 0x06, 0x41, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x08, 0x0a, 0x04, + 0x4e, 0x4f, 0x4f, 0x50, 0x10, 0x00, 0x12, 0x0a, 0x0a, 0x06, 0x43, 0x52, 0x45, 0x41, 0x54, 0x45, + 0x10, 0x01, 0x12, 0x08, 0x0a, 0x04, 0x52, 0x45, 0x41, 0x44, 0x10, 0x02, 0x12, 0x0a, 0x0a, 0x06, + 0x55, 0x50, 0x44, 0x41, 0x54, 0x45, 0x10, 0x03, 0x12, 0x0a, 0x0a, 0x06, 0x44, 0x45, 0x4c, 0x45, + 0x54, 0x45, 0x10, 0x05, 0x12, 0x16, 0x0a, 0x12, 0x44, 0x45, 0x4c, 0x45, 0x54, 0x45, 0x5f, 0x54, + 0x48, 0x45, 0x4e, 0x5f, 0x43, 0x52, 0x45, 0x41, 0x54, 0x45, 0x10, 0x06, 0x12, 0x16, 0x0a, 0x12, + 0x43, 0x52, 0x45, 0x41, 0x54, 0x45, 0x5f, 0x54, 0x48, 0x45, 0x4e, 0x5f, 0x44, 0x45, 0x4c, 0x45, + 0x54, 0x45, 0x10, 0x07, 0x2a, 0xa7, 0x02, 0x0a, 0x1c, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, + 0x65, 0x49, 0x6e, 0x73, 0x74, 0x61, 0x6e, 0x63, 0x65, 0x41, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x52, + 0x65, 0x61, 0x73, 0x6f, 0x6e, 0x12, 0x08, 0x0a, 0x04, 0x4e, 0x4f, 0x4e, 0x45, 0x10, 0x00, 0x12, + 0x1b, 0x0a, 0x17, 0x52, 0x45, 0x50, 0x4c, 0x41, 0x43, 0x45, 0x5f, 0x42, 0x45, 0x43, 0x41, 0x55, + 0x53, 0x45, 0x5f, 0x54, 0x41, 0x49, 0x4e, 0x54, 0x45, 0x44, 0x10, 0x01, 0x12, 0x16, 0x0a, 0x12, + 0x52, 0x45, 0x50, 0x4c, 0x41, 0x43, 0x45, 0x5f, 0x42, 0x59, 0x5f, 0x52, 0x45, 0x51, 0x55, 0x45, + 0x53, 0x54, 0x10, 0x02, 0x12, 0x21, 0x0a, 0x1d, 0x52, 0x45, 0x50, 0x4c, 0x41, 0x43, 0x45, 0x5f, + 0x42, 0x45, 0x43, 
0x41, 0x55, 0x53, 0x45, 0x5f, 0x43, 0x41, 0x4e, 0x4e, 0x4f, 0x54, 0x5f, 0x55, + 0x50, 0x44, 0x41, 0x54, 0x45, 0x10, 0x03, 0x12, 0x25, 0x0a, 0x21, 0x44, 0x45, 0x4c, 0x45, 0x54, + 0x45, 0x5f, 0x42, 0x45, 0x43, 0x41, 0x55, 0x53, 0x45, 0x5f, 0x4e, 0x4f, 0x5f, 0x52, 0x45, 0x53, + 0x4f, 0x55, 0x52, 0x43, 0x45, 0x5f, 0x43, 0x4f, 0x4e, 0x46, 0x49, 0x47, 0x10, 0x04, 0x12, 0x23, + 0x0a, 0x1f, 0x44, 0x45, 0x4c, 0x45, 0x54, 0x45, 0x5f, 0x42, 0x45, 0x43, 0x41, 0x55, 0x53, 0x45, + 0x5f, 0x57, 0x52, 0x4f, 0x4e, 0x47, 0x5f, 0x52, 0x45, 0x50, 0x45, 0x54, 0x49, 0x54, 0x49, 0x4f, + 0x4e, 0x10, 0x05, 0x12, 0x1e, 0x0a, 0x1a, 0x44, 0x45, 0x4c, 0x45, 0x54, 0x45, 0x5f, 0x42, 0x45, + 0x43, 0x41, 0x55, 0x53, 0x45, 0x5f, 0x43, 0x4f, 0x55, 0x4e, 0x54, 0x5f, 0x49, 0x4e, 0x44, 0x45, + 0x58, 0x10, 0x06, 0x12, 0x1b, 0x0a, 0x17, 0x44, 0x45, 0x4c, 0x45, 0x54, 0x45, 0x5f, 0x42, 0x45, + 0x43, 0x41, 0x55, 0x53, 0x45, 0x5f, 0x45, 0x41, 0x43, 0x48, 0x5f, 0x4b, 0x45, 0x59, 0x10, 0x07, + 0x12, 0x1c, 0x0a, 0x18, 0x44, 0x45, 0x4c, 0x45, 0x54, 0x45, 0x5f, 0x42, 0x45, 0x43, 0x41, 0x55, + 0x53, 0x45, 0x5f, 0x4e, 0x4f, 0x5f, 0x4d, 0x4f, 0x44, 0x55, 0x4c, 0x45, 0x10, 0x08, 0x42, 0x42, + 0x5a, 0x40, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x68, 0x61, 0x73, + 0x68, 0x69, 0x63, 0x6f, 0x72, 0x70, 0x2f, 0x74, 0x65, 0x72, 0x72, 0x61, 0x66, 0x6f, 0x72, 0x6d, + 0x2f, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x2f, 0x70, 0x6c, 0x61, 0x6e, 0x73, 0x2f, + 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x2f, 0x70, 0x6c, 0x61, 0x6e, 0x70, 0x72, 0x6f, + 0x74, 0x6f, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33, } var ( @@ -1139,7 +1062,7 @@ func file_planfile_proto_rawDescGZIP() []byte { } var file_planfile_proto_enumTypes = make([]protoimpl.EnumInfo, 3) -var file_planfile_proto_msgTypes = make([]protoimpl.MessageInfo, 11) +var file_planfile_proto_msgTypes = make([]protoimpl.MessageInfo, 9) var file_planfile_proto_goTypes = []interface{}{ (Mode)(0), // 0: tfplan.Mode (Action)(0), // 1: tfplan.Action @@ -1150,38 +1073,34 @@ var file_planfile_proto_goTypes = []interface{}{ (*ResourceInstanceChange)(nil), // 6: tfplan.ResourceInstanceChange (*OutputChange)(nil), // 7: tfplan.OutputChange (*DynamicValue)(nil), // 8: tfplan.DynamicValue - (*Hash)(nil), // 9: tfplan.Hash - (*Path)(nil), // 10: tfplan.Path - nil, // 11: tfplan.Plan.VariablesEntry - nil, // 12: tfplan.Plan.ProviderHashesEntry - (*Path_Step)(nil), // 13: tfplan.Path.Step + (*Path)(nil), // 9: tfplan.Path + nil, // 10: tfplan.Plan.VariablesEntry + (*Path_Step)(nil), // 11: tfplan.Path.Step } var file_planfile_proto_depIdxs = []int32{ 0, // 0: tfplan.Plan.ui_mode:type_name -> tfplan.Mode - 11, // 1: tfplan.Plan.variables:type_name -> tfplan.Plan.VariablesEntry + 10, // 1: tfplan.Plan.variables:type_name -> tfplan.Plan.VariablesEntry 6, // 2: tfplan.Plan.resource_changes:type_name -> tfplan.ResourceInstanceChange 6, // 3: tfplan.Plan.resource_drift:type_name -> tfplan.ResourceInstanceChange 7, // 4: tfplan.Plan.output_changes:type_name -> tfplan.OutputChange - 12, // 5: tfplan.Plan.provider_hashes:type_name -> tfplan.Plan.ProviderHashesEntry - 4, // 6: tfplan.Plan.backend:type_name -> tfplan.Backend - 8, // 7: tfplan.Backend.config:type_name -> tfplan.DynamicValue - 1, // 8: tfplan.Change.action:type_name -> tfplan.Action - 8, // 9: tfplan.Change.values:type_name -> tfplan.DynamicValue - 10, // 10: tfplan.Change.before_sensitive_paths:type_name -> tfplan.Path - 10, // 11: tfplan.Change.after_sensitive_paths:type_name -> tfplan.Path - 5, // 12: 
tfplan.ResourceInstanceChange.change:type_name -> tfplan.Change - 10, // 13: tfplan.ResourceInstanceChange.required_replace:type_name -> tfplan.Path - 2, // 14: tfplan.ResourceInstanceChange.action_reason:type_name -> tfplan.ResourceInstanceActionReason - 5, // 15: tfplan.OutputChange.change:type_name -> tfplan.Change - 13, // 16: tfplan.Path.steps:type_name -> tfplan.Path.Step - 8, // 17: tfplan.Plan.VariablesEntry.value:type_name -> tfplan.DynamicValue - 9, // 18: tfplan.Plan.ProviderHashesEntry.value:type_name -> tfplan.Hash - 8, // 19: tfplan.Path.Step.element_key:type_name -> tfplan.DynamicValue - 20, // [20:20] is the sub-list for method output_type - 20, // [20:20] is the sub-list for method input_type - 20, // [20:20] is the sub-list for extension type_name - 20, // [20:20] is the sub-list for extension extendee - 0, // [0:20] is the sub-list for field type_name + 4, // 5: tfplan.Plan.backend:type_name -> tfplan.Backend + 8, // 6: tfplan.Backend.config:type_name -> tfplan.DynamicValue + 1, // 7: tfplan.Change.action:type_name -> tfplan.Action + 8, // 8: tfplan.Change.values:type_name -> tfplan.DynamicValue + 9, // 9: tfplan.Change.before_sensitive_paths:type_name -> tfplan.Path + 9, // 10: tfplan.Change.after_sensitive_paths:type_name -> tfplan.Path + 5, // 11: tfplan.ResourceInstanceChange.change:type_name -> tfplan.Change + 9, // 12: tfplan.ResourceInstanceChange.required_replace:type_name -> tfplan.Path + 2, // 13: tfplan.ResourceInstanceChange.action_reason:type_name -> tfplan.ResourceInstanceActionReason + 5, // 14: tfplan.OutputChange.change:type_name -> tfplan.Change + 11, // 15: tfplan.Path.steps:type_name -> tfplan.Path.Step + 8, // 16: tfplan.Plan.VariablesEntry.value:type_name -> tfplan.DynamicValue + 8, // 17: tfplan.Path.Step.element_key:type_name -> tfplan.DynamicValue + 18, // [18:18] is the sub-list for method output_type + 18, // [18:18] is the sub-list for method input_type + 18, // [18:18] is the sub-list for extension type_name + 18, // [18:18] is the sub-list for extension extendee + 0, // [0:18] is the sub-list for field type_name } func init() { file_planfile_proto_init() } @@ -1263,18 +1182,6 @@ func file_planfile_proto_init() { } } file_planfile_proto_msgTypes[6].Exporter = func(v interface{}, i int) interface{} { - switch v := v.(*Hash); i { - case 0: - return &v.state - case 1: - return &v.sizeCache - case 2: - return &v.unknownFields - default: - return nil - } - } - file_planfile_proto_msgTypes[7].Exporter = func(v interface{}, i int) interface{} { switch v := v.(*Path); i { case 0: return &v.state @@ -1286,7 +1193,7 @@ func file_planfile_proto_init() { return nil } } - file_planfile_proto_msgTypes[10].Exporter = func(v interface{}, i int) interface{} { + file_planfile_proto_msgTypes[8].Exporter = func(v interface{}, i int) interface{} { switch v := v.(*Path_Step); i { case 0: return &v.state @@ -1299,7 +1206,7 @@ func file_planfile_proto_init() { } } } - file_planfile_proto_msgTypes[10].OneofWrappers = []interface{}{ + file_planfile_proto_msgTypes[8].OneofWrappers = []interface{}{ (*Path_Step_AttributeName)(nil), (*Path_Step_ElementKey)(nil), } @@ -1309,7 +1216,7 @@ func file_planfile_proto_init() { GoPackagePath: reflect.TypeOf(x{}).PkgPath(), RawDescriptor: file_planfile_proto_rawDesc, NumEnums: 3, - NumMessages: 11, + NumMessages: 9, NumExtensions: 0, NumServices: 0, }, diff --git a/internal/plans/internal/planproto/planfile.proto b/internal/plans/internal/planproto/planfile.proto index 1bbe7425e502..752abd77ac5a 100644 --- 
a/internal/plans/internal/planproto/planfile.proto +++ b/internal/plans/internal/planproto/planfile.proto @@ -58,10 +58,6 @@ message Plan { // The version string for the Terraform binary that created this plan. string terraform_version = 14; - // SHA256 digests of all of the provider plugin binaries that were used - // in the creation of this plan. - map provider_hashes = 15; - // Backend is a description of the backend configuration and other related // settings at the time the plan was created. Backend backend = 13; @@ -218,18 +214,6 @@ message DynamicValue { bytes msgpack = 1; } -// Hash represents a hash value. -// -// At present hashes always use the SHA256 algorithm. In future other hash -// algorithms may be used, possibly with a transitional period of including -// both as separate attributes of this type. Consumers must ignore attributes -// they don't support and fail if no supported attribute is present. The -// top-level format version will not be incremented for changes to the set of -// hash algorithms. -message Hash { - bytes sha256 = 1; -} - // Path represents a set of steps to traverse into a data structure. It is // used to refer to a sub-structure within a dynamic data structure presented // separately. diff --git a/internal/plans/plan.go b/internal/plans/plan.go index a96a056480e1..da824f24ab3e 100644 --- a/internal/plans/plan.go +++ b/internal/plans/plan.go @@ -34,7 +34,6 @@ type Plan struct { DriftedResources []*ResourceInstanceChangeSrc TargetAddrs []addrs.Targetable ForceReplaceAddrs []addrs.AbsResourceInstance - ProviderSHA256s map[string][]byte Backend Backend // PrevRunState and PriorState both describe the situation that the plan diff --git a/internal/plans/planfile/planfile_test.go b/internal/plans/planfile/planfile_test.go index 14d23c87a67b..13e456666ae4 100644 --- a/internal/plans/planfile/planfile_test.go +++ b/internal/plans/planfile/planfile_test.go @@ -52,7 +52,6 @@ func TestRoundtrip(t *testing.T) { Outputs: []*plans.OutputChangeSrc{}, }, DriftedResources: []*plans.ResourceInstanceChangeSrc{}, - ProviderSHA256s: map[string][]byte{}, VariableValues: map[string]plans.DynamicValue{ "foo": plans.DynamicValue([]byte("foo placeholder")), }, diff --git a/internal/plans/planfile/tfplan.go b/internal/plans/planfile/tfplan.go index 87d21822efc8..47f95a2a3260 100644 --- a/internal/plans/planfile/tfplan.go +++ b/internal/plans/planfile/tfplan.go @@ -57,8 +57,6 @@ func readTfplan(r io.Reader) (*plans.Plan, error) { Resources: []*plans.ResourceInstanceChangeSrc{}, }, DriftedResources: []*plans.ResourceInstanceChangeSrc{}, - - ProviderSHA256s: map[string][]byte{}, } switch rawPlan.UiMode { @@ -125,14 +123,6 @@ func readTfplan(r io.Reader) (*plans.Plan, error) { plan.ForceReplaceAddrs = append(plan.ForceReplaceAddrs, addr) } - for name, rawHashObj := range rawPlan.ProviderHashes { - if len(rawHashObj.Sha256) == 0 { - return nil, fmt.Errorf("no SHA256 hash for provider %q plugin", name) - } - - plan.ProviderSHA256s[name] = rawHashObj.Sha256 - } - for name, rawVal := range rawPlan.Variables { val, err := valueFromTfplan(rawVal) if err != nil { @@ -358,7 +348,6 @@ func writeTfplan(plan *plans.Plan, w io.Writer) error { rawPlan := &planproto.Plan{ Version: tfplanFormatVersion, TerraformVersion: version.String(), - ProviderHashes: map[string]*planproto.Hash{}, Variables: map[string]*planproto.DynamicValue{}, OutputChanges: []*planproto.OutputChange{}, @@ -426,12 +415,6 @@ func writeTfplan(plan *plans.Plan, w io.Writer) error { rawPlan.ForceReplaceAddrs = 
append(rawPlan.ForceReplaceAddrs, replaceAddr.String()) } - for name, hash := range plan.ProviderSHA256s { - rawPlan.ProviderHashes[name] = &planproto.Hash{ - Sha256: hash, - } - } - for name, val := range plan.VariableValues { rawPlan.Variables[name] = valueToTfplan(val) } diff --git a/internal/plans/planfile/tfplan_test.go b/internal/plans/planfile/tfplan_test.go index 7ab62de532d7..7d5be4dc5713 100644 --- a/internal/plans/planfile/tfplan_test.go +++ b/internal/plans/planfile/tfplan_test.go @@ -165,14 +165,6 @@ func TestTFPlanRoundTrip(t *testing.T) { Name: "woot", }.Absolute(addrs.RootModuleInstance), }, - ProviderSHA256s: map[string][]byte{ - "test": []byte{ - 0xba, 0x5e, 0x1e, 0x55, 0xb0, 0x1d, 0xfa, 0xce, - 0xef, 0xfe, 0xc7, 0xed, 0x1a, 0xbe, 0x11, 0xed, - 0x5c, 0xa1, 0xab, 0x1e, 0xda, 0x7a, 0xba, 0x5e, - 0x70, 0x7a, 0x11, 0xed, 0xb0, 0x07, 0xab, 0x1e, - }, - }, Backend: plans.Backend{ Type: "local", Config: mustNewDynamicValue( diff --git a/internal/terraform/context.go b/internal/terraform/context.go index e05200e1e527..2174b0fe0e78 100644 --- a/internal/terraform/context.go +++ b/internal/terraform/context.go @@ -4,10 +4,9 @@ import ( "context" "fmt" "log" - "strings" + "sort" "sync" - "github.com/apparentlymart/go-versions/versions" "github.com/hashicorp/terraform/internal/addrs" "github.com/hashicorp/terraform/internal/configs" "github.com/hashicorp/terraform/internal/providers" @@ -16,8 +15,6 @@ import ( "github.com/hashicorp/terraform/internal/tfdiags" "github.com/zclconf/go-cty/cty" - "github.com/hashicorp/terraform/internal/depsfile" - "github.com/hashicorp/terraform/internal/getproviders" _ "github.com/hashicorp/terraform/internal/logging" ) @@ -43,18 +40,6 @@ type ContextOpts struct { Providers map[addrs.Provider]providers.Factory Provisioners map[string]provisioners.Factory - // If non-nil, will apply as additional constraints on the provider - // plugins that will be requested from the provider resolver. - ProviderSHA256s map[string][]byte - - // If non-nil, will be verified to ensure that provider requirements from - // configuration can be satisfied by the set of locked dependencies. - LockedDependencies *depsfile.Locks - - // Set of providers to exclude from the requirements check process, as they - // are marked as in local development. - ProvidersInDevelopment map[addrs.Provider]struct{} - UIInput UIInput } @@ -88,9 +73,7 @@ type Context struct { // operations. 
meta *ContextMeta - plugins *contextPlugins - dependencyLocks *depsfile.Locks - providersInDevelopment map[addrs.Provider]struct{} + plugins *contextPlugins hooks []Hook sh *stopHook @@ -99,7 +82,6 @@ type Context struct { l sync.Mutex // Lock acquired during any task parallelSem Semaphore providerInputConfig map[string]map[string]cty.Value - providerSHA256s map[string][]byte runCond *sync.Cond runContext context.Context runContextCancel context.CancelFunc @@ -153,13 +135,10 @@ func NewContext(opts *ContextOpts) (*Context, tfdiags.Diagnostics) { meta: opts.Meta, uiInput: opts.UIInput, - plugins: plugins, - dependencyLocks: opts.LockedDependencies, - providersInDevelopment: opts.ProvidersInDevelopment, + plugins: plugins, parallelSem: NewSemaphore(par), providerInputConfig: make(map[string]map[string]cty.Value), - providerSHA256s: opts.ProviderSHA256s, sh: sh, }, diags } @@ -174,50 +153,6 @@ func (c *Context) Schemas(config *configs.Config, state *states.State) (*Schemas var diags tfdiags.Diagnostics - // If we have a configuration and a set of locked dependencies, verify that - // the provider requirements from the configuration can be satisfied by the - // locked dependencies. - if c.dependencyLocks != nil && config != nil { - reqs, providerDiags := config.ProviderRequirements() - diags = diags.Append(providerDiags) - - locked := c.dependencyLocks.AllProviders() - unmetReqs := make(getproviders.Requirements) - for provider, versionConstraints := range reqs { - // Builtin providers are not listed in the locks file - if provider.IsBuiltIn() { - continue - } - // Development providers must be excluded from this check - if _, ok := c.providersInDevelopment[provider]; ok { - continue - } - // If the required provider doesn't exist in the lock, or the - // locked version doesn't meet the constraints, mark the - // requirement unmet - acceptable := versions.MeetingConstraints(versionConstraints) - if lock, ok := locked[provider]; !ok || !acceptable.Has(lock.Version()) { - unmetReqs[provider] = versionConstraints - } - } - - if len(unmetReqs) > 0 { - var buf strings.Builder - for provider, versionConstraints := range unmetReqs { - fmt.Fprintf(&buf, "\n- %s", provider) - if len(versionConstraints) > 0 { - fmt.Fprintf(&buf, " (%s)", getproviders.VersionConstraintsString(versionConstraints)) - } - } - diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, - "Provider requirements cannot be satisfied by locked dependencies", - fmt.Sprintf("The following required providers are not installed:\n%s\n\nPlease run \"terraform init\".", buf.String()), - )) - return nil, diags - } - } - ret, err := loadSchemas(config, state, c.plugins) if err != nil { diags = diags.Append(tfdiags.Sourceless( @@ -381,3 +316,115 @@ func (c *Context) watchStop(walker *ContextGraphWalker) (chan struct{}, <-chan s return stop, wait } + +// checkConfigDependencies checks whether the recieving context is able to +// support the given configuration, returning error diagnostics if not. +// +// Currently this function checks whether the current Terraform CLI version +// matches the version requirements of all of the modules, and whether our +// plugin library contains all of the plugin names/addresses needed. 
+// +// This function does *not* check that external modules are installed (that's +// the responsibility of the configuration loader) and doesn't check that the +// plugins are of suitable versions to match any version constraints (which is +// the responsibility of the code which installed the plugins and then +// constructed the Providers/Provisioners maps passed in to NewContext). +// +// In most cases we should typically catch the problems this function detects +// before we reach this point, but this function can come into play in some +// unusual cases outside of the main workflow, and can avoid some +// potentially-more-confusing errors from later operations. +func (c *Context) checkConfigDependencies(config *configs.Config) tfdiags.Diagnostics { + var diags tfdiags.Diagnostics + + // This checks the Terraform CLI version constraints specified in all of + // the modules. + diags = diags.Append(CheckCoreVersionRequirements(config)) + + // We only check that we have a factory for each required provider, and + // assume the caller already assured that any separately-installed + // plugins are of a suitable version, match expected checksums, etc. + providerReqs, hclDiags := config.ProviderRequirements() + diags = diags.Append(hclDiags) + if hclDiags.HasErrors() { + return diags + } + for providerAddr := range providerReqs { + if !c.plugins.HasProvider(providerAddr) { + if !providerAddr.IsBuiltIn() { + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Missing required provider", + fmt.Sprintf( + "This configuration requires provider %s, but that provider isn't available. You may be able to install it automatically by running:\n terraform init", + providerAddr, + ), + )) + } else { + // Built-in providers can never be installed by "terraform init", + // so no point in confusing the user by suggesting that. + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Missing required provider", + fmt.Sprintf( + "This configuration requires built-in provider %s, but that provider isn't available in this Terraform version.", + providerAddr, + ), + )) + } + } + } + + // Our handling of provisioners is much less sophisticated than providers + // because they are in many ways a legacy system. We need to go hunting + // for them more directly in the configuration. + config.DeepEach(func(modCfg *configs.Config) { + if modCfg == nil || modCfg.Module == nil { + return // should not happen, but we'll be robust + } + for _, rc := range modCfg.Module.ManagedResources { + if rc.Managed == nil { + continue // should not happen, but we'll be robust + } + for _, pc := range rc.Managed.Provisioners { + if !c.plugins.HasProvisioner(pc.Type) { + // This is not a very high-quality error, because really + // the caller of terraform.NewContext should've already + // done equivalent checks when doing plugin discovery. + // This is just to make sure we return a predictable + // error in a central place, rather than failing somewhere + // later in the non-deterministically-ordered graph walk. + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Missing required provisioner plugin", + fmt.Sprintf( + "This configuration requires provisioner plugin %q, which isn't available. 
If you're intending to use an external provisioner plugin, you must install it manually into one of the plugin search directories before running Terraform.", + pc.Type, + ), + )) + } + } + } + }) + + // Because we were doing a lot of map iteration above, and we're only + // generating sourceless diagnostics anyway, our diagnostics will not be + // in a deterministic order. To ensure stable output when there are + // multiple errors to report, we'll sort these particular diagnostics + // so they are at least always consistent alone. This ordering is + // arbitrary and not a compatibility constraint. + sort.Slice(diags, func(i, j int) bool { + // Because these are sourcelss diagnostics and we know they are all + // errors, we know they'll only differ in their description fields. + descI := diags[i].Description() + descJ := diags[j].Description() + switch { + case descI.Summary != descJ.Summary: + return descI.Summary < descJ.Summary + default: + return descI.Detail < descJ.Detail + } + }) + + return diags +} diff --git a/internal/terraform/context_plan.go b/internal/terraform/context_plan.go index 5610a445ba71..3f860ef1bd1f 100644 --- a/internal/terraform/context_plan.go +++ b/internal/terraform/context_plan.go @@ -55,10 +55,10 @@ func (c *Context) Plan(config *configs.Config, prevRunState *states.State, opts } } - moreDiags := CheckCoreVersionRequirements(config) + moreDiags := c.checkConfigDependencies(config) diags = diags.Append(moreDiags) - // If version constraints are not met then we'll bail early since otherwise - // we're likely to just see a bunch of other errors related to + // If required dependencies are not available then we'll bail early since + // otherwise we're likely to just see a bunch of other errors related to // incompatibilities, which could be overwhelming for the user. 
if diags.HasErrors() { return nil, diags @@ -161,7 +161,6 @@ The -target option is not for routine use, and is provided only for exceptional if plan != nil { plan.VariableValues = varVals plan.TargetAddrs = opts.Targets - plan.ProviderSHA256s = c.providerSHA256s } else if !diags.HasErrors() { panic("nil plan but no errors") } diff --git a/internal/terraform/context_plan_test.go b/internal/terraform/context_plan_test.go index 9cb4c8925573..cfd51da8cc43 100644 --- a/internal/terraform/context_plan_test.go +++ b/internal/terraform/context_plan_test.go @@ -33,9 +33,6 @@ func TestContext2Plan_basic(t *testing.T) { Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), }, - ProviderSHA256s: map[string][]byte{ - "aws": []byte("placeholder"), - }, }) plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) @@ -47,10 +44,6 @@ func TestContext2Plan_basic(t *testing.T) { t.Fatalf("wrong number of resources %d; want fewer than two\n%s", l, spew.Sdump(plan.Changes.Resources)) } - if !reflect.DeepEqual(plan.ProviderSHA256s, ctx.providerSHA256s) { - t.Errorf("wrong ProviderSHA256s %#v; want %#v", plan.ProviderSHA256s, ctx.providerSHA256s) - } - schema := p.GetProviderSchemaResponse.ResourceTypes["aws_instance"].Block ty := schema.ImpliedType() for _, r := range plan.Changes.Resources { diff --git a/internal/terraform/context_plugins.go b/internal/terraform/context_plugins.go index 4fbdf84d0a44..4b3071cf6d84 100644 --- a/internal/terraform/context_plugins.go +++ b/internal/terraform/context_plugins.go @@ -43,6 +43,11 @@ func (cp *contextPlugins) init() { cp.provisionerSchemas = make(map[string]*configschema.Block, len(cp.provisionerFactories)) } +func (cp *contextPlugins) HasProvider(addr addrs.Provider) bool { + _, ok := cp.providerFactories[addr] + return ok +} + func (cp *contextPlugins) NewProviderInstance(addr addrs.Provider) (providers.Interface, error) { f, ok := cp.providerFactories[addr] if !ok { @@ -53,6 +58,11 @@ func (cp *contextPlugins) NewProviderInstance(addr addrs.Provider) (providers.In } +func (cp *contextPlugins) HasProvisioner(typ string) bool { + _, ok := cp.provisionerFactories[typ] + return ok +} + func (cp *contextPlugins) NewProvisionerInstance(typ string) (provisioners.Interface, error) { f, ok := cp.provisionerFactories[typ] if !ok { diff --git a/internal/terraform/context_refresh_test.go b/internal/terraform/context_refresh_test.go index 49cd02e0ea5a..aa2239dbeddf 100644 --- a/internal/terraform/context_refresh_test.go +++ b/internal/terraform/context_refresh_test.go @@ -2,7 +2,6 @@ package terraform import ( "reflect" - "regexp" "sort" "strings" "sync" @@ -1051,8 +1050,8 @@ func TestContext2Refresh_unknownProvider(t *testing.T) { t.Fatal("successfully refreshed; want error") } - if !regexp.MustCompile(`failed to instantiate provider ".+"`).MatchString(diags.Err().Error()) { - t.Fatalf("wrong error: %s", diags.Err()) + if got, want := diags.Err().Error(), "Missing required provider"; !strings.Contains(got, want) { + t.Errorf("missing expected error\nwant substring: %s\ngot:\n%s", want, got) } } diff --git a/internal/terraform/context_test.go b/internal/terraform/context_test.go index 02addb4dbf68..9ef2603db4c3 100644 --- a/internal/terraform/context_test.go +++ b/internal/terraform/context_test.go @@ -15,12 +15,10 @@ import ( "github.com/google/go-cmp/cmp" "github.com/google/go-cmp/cmp/cmpopts" "github.com/hashicorp/go-version" - "github.com/hashicorp/terraform/internal/addrs" 
"github.com/hashicorp/terraform/internal/configs" "github.com/hashicorp/terraform/internal/configs/configload" "github.com/hashicorp/terraform/internal/configs/configschema" "github.com/hashicorp/terraform/internal/configs/hcl2shim" - "github.com/hashicorp/terraform/internal/depsfile" "github.com/hashicorp/terraform/internal/plans" "github.com/hashicorp/terraform/internal/plans/planfile" "github.com/hashicorp/terraform/internal/providers" @@ -122,184 +120,82 @@ func TestNewContextRequiredVersion(t *testing.T) { } } -func TestNewContext_lockedDependencies(t *testing.T) { - // TODO: Remove this test altogether once we've factored out the version - // and checksum verification to be exclusively the caller's responsibility. - t.Skip("only one step away from locked dependencies being the caller's responsibility") +func TestContext_missingPlugins(t *testing.T) { + ctx, diags := NewContext(&ContextOpts{}) + assertNoDiagnostics(t, diags) - configBeepGreaterThanOne := ` + configSrc := ` terraform { - required_providers { - beep = { - source = "example.com/foo/beep" - version = ">= 1.0.0" - } - } -} -` - configBeepLessThanOne := ` -terraform { - required_providers { - beep = { - source = "example.com/foo/beep" - version = "< 1.0.0" - } - } -} -` - configBuiltin := ` -terraform { - required_providers { - terraform = { - source = "terraform.io/builtin/terraform" + required_providers { + explicit = { + source = "example.com/foo/beep" + } + builtin = { + source = "terraform.io/builtin/nonexist" + } } - } -} -` - locksBeepGreaterThanOne := ` -provider "example.com/foo/beep" { - version = "1.0.0" - constraints = ">= 1.0.0" - hashes = [ - "h1:does-not-match", - ] -} -` - configBeepBoop := ` -terraform { - required_providers { - beep = { - source = "example.com/foo/beep" - version = "< 1.0.0" # different from locks - } - boop = { - source = "example.com/foo/boop" - version = ">= 2.0.0" - } - } } -` - locksBeepBoop := ` -provider "example.com/foo/beep" { - version = "1.0.0" - constraints = ">= 1.0.0" - hashes = [ - "h1:does-not-match", - ] -} -provider "example.com/foo/boop" { - version = "2.3.4" - constraints = ">= 2.0.0" - hashes = [ - "h1:does-not-match", - ] -} -` - beepAddr := addrs.MustParseProviderSourceString("example.com/foo/beep") - boopAddr := addrs.MustParseProviderSourceString("example.com/foo/boop") - - testCases := map[string]struct { - Config string - LockFile string - DevProviders []addrs.Provider - WantErr string - }{ - "dependencies met": { - Config: configBeepGreaterThanOne, - LockFile: locksBeepGreaterThanOne, - }, - "no locks given": { - Config: configBeepGreaterThanOne, - }, - "builtin provider with empty locks": { - Config: configBuiltin, - LockFile: `# This file is maintained automatically by "terraform init".`, - }, - "multiple providers, one in development": { - Config: configBeepBoop, - LockFile: locksBeepBoop, - DevProviders: []addrs.Provider{beepAddr}, - }, - "development provider with empty locks": { - Config: configBeepGreaterThanOne, - LockFile: `# This file is maintained automatically by "terraform init".`, - DevProviders: []addrs.Provider{beepAddr}, - }, - "multiple providers, one in development, one missing": { - Config: configBeepBoop, - LockFile: locksBeepGreaterThanOne, - DevProviders: []addrs.Provider{beepAddr}, - WantErr: `Provider requirements cannot be satisfied by locked dependencies: The following required providers are not installed: - -- example.com/foo/boop (>= 2.0.0) - -Please run "terraform init".`, - }, - "wrong provider version": { - Config: 
configBeepLessThanOne, - LockFile: locksBeepGreaterThanOne, - WantErr: `Provider requirements cannot be satisfied by locked dependencies: The following required providers are not installed: -- example.com/foo/beep (< 1.0.0) - -Please run "terraform init".`, - }, - "empty locks": { - Config: configBeepGreaterThanOne, - LockFile: `# This file is maintained automatically by "terraform init".`, - WantErr: `Provider requirements cannot be satisfied by locked dependencies: The following required providers are not installed: - -- example.com/foo/beep (>= 1.0.0) - -Please run "terraform init".`, - }, +resource "implicit_thing" "a" { + provisioner "nonexist" { } - for name, tc := range testCases { - t.Run(name, func(t *testing.T) { - var locks *depsfile.Locks - if tc.LockFile != "" { - var diags tfdiags.Diagnostics - locks, diags = depsfile.LoadLocksFromBytes([]byte(tc.LockFile), "test.lock.hcl") - if len(diags) > 0 { - t.Fatalf("unexpected error loading locks file: %s", diags.Err()) - } - } - devProviders := make(map[addrs.Provider]struct{}) - for _, provider := range tc.DevProviders { - devProviders[provider] = struct{}{} - } - opts := &ContextOpts{ - LockedDependencies: locks, - ProvidersInDevelopment: devProviders, - Providers: map[addrs.Provider]providers.Factory{ - beepAddr: testProviderFuncFixed(testProvider("beep")), - boopAddr: testProviderFuncFixed(testProvider("boop")), - addrs.NewBuiltInProvider("terraform"): testProviderFuncFixed(testProvider("terraform")), - }, - } +} - m := testModuleInline(t, map[string]string{ - "main.tf": tc.Config, - }) +resource "implicit_thing" "b" { + provider = implicit2 +} +` - c, diags := NewContext(opts) - if diags.HasErrors() { - t.Fatalf("unexpected NewContext error: %s", diags.Err()) - } + cfg := testModuleInline(t, map[string]string{ + "main.tf": configSrc, + }) - diags = c.Validate(m) - if tc.WantErr != "" { - if len(diags) == 0 { - t.Fatal("expected diags but none returned") - } - if got, want := diags.Err().Error(), tc.WantErr; got != want { - t.Errorf("wrong diags\n got: %s\nwant: %s", got, want) - } - } else { - if len(diags) > 0 { - t.Errorf("unexpected diags: %s", diags.Err()) - } - } + // Validate and Plan are the two entry points where we explicitly verify + // the available plugins match what the configuration needs. For other + // operations we typically fail more deeply in Terraform Core, with + // potentially-less-helpful error messages, because getting there would + // require doing some pretty weird things that aren't common enough to + // be worth the complexity to check for them. + + validateDiags := ctx.Validate(cfg) + _, planDiags := ctx.Plan(cfg, nil, DefaultPlanOpts) + + tests := map[string]tfdiags.Diagnostics{ + "validate": validateDiags, + "plan": planDiags, + } + + for testName, gotDiags := range tests { + t.Run(testName, func(t *testing.T) { + var wantDiags tfdiags.Diagnostics + wantDiags = wantDiags.Append( + tfdiags.Sourceless( + tfdiags.Error, + "Missing required provider", + "This configuration requires built-in provider terraform.io/builtin/nonexist, but that provider isn't available in this Terraform version.", + ), + tfdiags.Sourceless( + tfdiags.Error, + "Missing required provider", + "This configuration requires provider example.com/foo/beep, but that provider isn't available. 
You may be able to install it automatically by running:\n terraform init", + ), + tfdiags.Sourceless( + tfdiags.Error, + "Missing required provider", + "This configuration requires provider registry.terraform.io/hashicorp/implicit, but that provider isn't available. You may be able to install it automatically by running:\n terraform init", + ), + tfdiags.Sourceless( + tfdiags.Error, + "Missing required provider", + "This configuration requires provider registry.terraform.io/hashicorp/implicit2, but that provider isn't available. You may be able to install it automatically by running:\n terraform init", + ), + tfdiags.Sourceless( + tfdiags.Error, + "Missing required provisioner plugin", + `This configuration requires provisioner plugin "nonexist", which isn't available. If you're intending to use an external provisioner plugin, you must install it manually into one of the plugin search directories before running Terraform.`, + ), + ) + assertDiagnosticsMatch(t, gotDiags, wantDiags) }) } } @@ -779,9 +675,13 @@ func contextOptsForPlanViaFile(configSnap *configload.Snapshot, plan *plans.Plan return nil, nil, nil, err } - return &ContextOpts{ - ProviderSHA256s: plan.ProviderSHA256s, - }, config, plan, nil + // Note: This has grown rather silly over the course of ongoing refactoring, + // because ContextOpts is no longer actually responsible for carrying + // any information from a plan file and instead all of the information + // lives inside the config and plan objects. We continue to return a + // silly empty ContextOpts here just to keep all of the calling tests + // working. + return &ContextOpts{}, config, plan, nil } // legacyPlanComparisonString produces a string representation of the changes @@ -994,6 +894,24 @@ func assertNoErrors(t *testing.T, diags tfdiags.Diagnostics) { t.FailNow() } +// assertDiagnosticsMatch fails the test in progress (using t.Fatal) if the +// two sets of diagnostics don't match after being normalized using the +// "ForRPC" processing step, which eliminates the specific type information +// and HCL expression information of each diagnostic. +// +// assertDiagnosticsMatch sorts the two sets of diagnostics in the usual way +// before comparing them, though diagnostics only have a partial order so that +// will not totally normalize the ordering of all diagnostics sets. +func assertDiagnosticsMatch(t *testing.T, got, want tfdiags.Diagnostics) { + got = got.ForRPC() + want = want.ForRPC() + got.Sort() + want.Sort() + if diff := cmp.Diff(want, got); diff != "" { + t.Fatalf("wrong diagnostics\n%s", diff) + } +} + // logDiagnostics is a test helper that logs the given diagnostics to to the // given testing.T using t.Log, in a way that is hopefully useful in debugging // a test. It does not generate any errors or fail the test. 
See diff --git a/internal/terraform/context_validate.go b/internal/terraform/context_validate.go index b079477bafee..fb54be420221 100644 --- a/internal/terraform/context_validate.go +++ b/internal/terraform/context_validate.go @@ -26,10 +26,10 @@ func (c *Context) Validate(config *configs.Config) tfdiags.Diagnostics { var diags tfdiags.Diagnostics - moreDiags := CheckCoreVersionRequirements(config) + moreDiags := c.checkConfigDependencies(config) diags = diags.Append(moreDiags) - // If version constraints are not met then we'll bail early since otherwise - // we're likely to just see a bunch of other errors related to + // If required dependencies are not available then we'll bail early since + // otherwise we're likely to just see a bunch of other errors related to // incompatibilities, which could be overwhelming for the user. if diags.HasErrors() { return diags From 6a98e4720c5c2ef179c67d00ef771c2464ba4f7f Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Wed, 29 Sep 2021 16:03:10 -0700 Subject: [PATCH 121/644] plans/planfile: Create takes most arguments via a struct type Previously the planfile.Create function had accumulated probably already too many positional arguments, and I'm intending to add another one in a subsequent commit and so this is preparation to make the callsites more readable (subjectively) and make it clearer how we can extend this function's arguments to include further components in a plan file. There's no difference in observable functionality here. This is just passing the same set of arguments in a slightly different way. --- internal/backend/local/backend_local_test.go | 8 ++++- internal/backend/local/backend_plan.go | 7 +++- internal/command/command_test.go | 7 +++- internal/plans/planfile/planfile_test.go | 7 +++- internal/plans/planfile/writer.go | 34 +++++++++++++++++--- internal/terraform/context_test.go | 7 +++- 6 files changed, 60 insertions(+), 10 deletions(-) diff --git a/internal/backend/local/backend_local_test.go b/internal/backend/local/backend_local_test.go index 13070c2d3c8d..32675e0949e4 100644 --- a/internal/backend/local/backend_local_test.go +++ b/internal/backend/local/backend_local_test.go @@ -132,7 +132,13 @@ func TestLocalRun_stalePlan(t *testing.T) { outDir := t.TempDir() defer os.RemoveAll(outDir) planPath := filepath.Join(outDir, "plan.tfplan") - if err := planfile.Create(planPath, configload.NewEmptySnapshot(), prevStateFile, stateFile, plan); err != nil { + planfileArgs := planfile.CreateArgs{ + ConfigSnapshot: configload.NewEmptySnapshot(), + PreviousRunStateFile: prevStateFile, + StateFile: stateFile, + Plan: plan, + } + if err := planfile.Create(planPath, planfileArgs); err != nil { t.Fatalf("unexpected error writing planfile: %s", err) } planFile, err := planfile.Open(planPath) diff --git a/internal/backend/local/backend_plan.go b/internal/backend/local/backend_plan.go index e25ab3ef6415..655ece1028e2 100644 --- a/internal/backend/local/backend_plan.go +++ b/internal/backend/local/backend_plan.go @@ -133,7 +133,12 @@ func (b *Local) opPlan( } log.Printf("[INFO] backend/local: writing plan output to: %s", path) - err := planfile.Create(path, configSnap, prevStateFile, plannedStateFile, plan) + err := planfile.Create(path, planfile.CreateArgs{ + ConfigSnapshot: configSnap, + PreviousRunStateFile: prevStateFile, + StateFile: plannedStateFile, + Plan: plan, + }) if err != nil { diags = diags.Append(tfdiags.Sourceless( tfdiags.Error, diff --git a/internal/command/command_test.go b/internal/command/command_test.go index 
929bfb43975b..a0bc4710d6f9 100644 --- a/internal/command/command_test.go +++ b/internal/command/command_test.go @@ -242,7 +242,12 @@ func testPlanFile(t *testing.T, configSnap *configload.Snapshot, state *states.S } path := testTempFile(t) - err := planfile.Create(path, configSnap, prevStateFile, stateFile, plan) + err := planfile.Create(path, planfile.CreateArgs{ + ConfigSnapshot: configSnap, + PreviousRunStateFile: prevStateFile, + StateFile: stateFile, + Plan: plan, + }) if err != nil { t.Fatalf("failed to create temporary plan file: %s", err) } diff --git a/internal/plans/planfile/planfile_test.go b/internal/plans/planfile/planfile_test.go index 13e456666ae4..37e38080aec5 100644 --- a/internal/plans/planfile/planfile_test.go +++ b/internal/plans/planfile/planfile_test.go @@ -77,7 +77,12 @@ func TestRoundtrip(t *testing.T) { } planFn := filepath.Join(workDir, "tfplan") - err = Create(planFn, snapIn, prevStateFileIn, stateFileIn, planIn) + err = Create(planFn, CreateArgs{ + ConfigSnapshot: snapIn, + PreviousRunStateFile: prevStateFileIn, + StateFile: stateFileIn, + Plan: planIn, + }) if err != nil { t.Fatalf("failed to create plan file: %s", err) } diff --git a/internal/plans/planfile/writer.go b/internal/plans/planfile/writer.go index 358c4d785ee3..3be2516cb4fa 100644 --- a/internal/plans/planfile/writer.go +++ b/internal/plans/planfile/writer.go @@ -11,6 +11,30 @@ import ( "github.com/hashicorp/terraform/internal/states/statefile" ) +type CreateArgs struct { + // ConfigSnapshot is a snapshot of the configuration that the plan + // was created from. + ConfigSnapshot *configload.Snapshot + + // PreviousRunStateFile is a representation of the state snapshot we used + // as the original input when creating this plan, containing the same + // information as recorded at the end of the previous apply except for + // upgrading managed resource instance data to the provider's latest + // schema versions. + PreviousRunStateFile *statefile.File + + // BaseStateFile is a representation of the state snapshot we used to + // create the plan, which is the result of asking the providers to refresh + // all previously-stored objects to match the current situation in the + // remote system. (If this plan was created with refreshing disabled, + // this should be the same as PreviousRunStateFile.) + StateFile *statefile.File + + // Plan records the plan itself, which is the main artifact inside a + // saved plan file. + Plan *plans.Plan +} + // Create creates a new plan file with the given filename, overwriting any // file that might already exist there. // @@ -18,7 +42,7 @@ import ( // state file in addition to the plan itself, so that Terraform can detect // if the world has changed since the plan was created and thus refuse to // apply it. 
-func Create(filename string, configSnap *configload.Snapshot, prevStateFile, stateFile *statefile.File, plan *plans.Plan) error { +func Create(filename string, args CreateArgs) error { f, err := os.Create(filename) if err != nil { return err @@ -38,7 +62,7 @@ func Create(filename string, configSnap *configload.Snapshot, prevStateFile, sta if err != nil { return fmt.Errorf("failed to create tfplan file: %s", err) } - err = writeTfplan(plan, w) + err = writeTfplan(args.Plan, w) if err != nil { return fmt.Errorf("failed to write plan: %s", err) } @@ -54,7 +78,7 @@ func Create(filename string, configSnap *configload.Snapshot, prevStateFile, sta if err != nil { return fmt.Errorf("failed to create embedded tfstate file: %s", err) } - err = statefile.Write(stateFile, w) + err = statefile.Write(args.StateFile, w) if err != nil { return fmt.Errorf("failed to write state snapshot: %s", err) } @@ -70,7 +94,7 @@ func Create(filename string, configSnap *configload.Snapshot, prevStateFile, sta if err != nil { return fmt.Errorf("failed to create embedded tfstate-prev file: %s", err) } - err = statefile.Write(prevStateFile, w) + err = statefile.Write(args.PreviousRunStateFile, w) if err != nil { return fmt.Errorf("failed to write previous state snapshot: %s", err) } @@ -78,7 +102,7 @@ func Create(filename string, configSnap *configload.Snapshot, prevStateFile, sta // tfconfig directory { - err := writeConfigSnapshot(configSnap, zw) + err := writeConfigSnapshot(args.ConfigSnapshot, zw) if err != nil { return fmt.Errorf("failed to write config snapshot: %s", err) } diff --git a/internal/terraform/context_test.go b/internal/terraform/context_test.go index 9ef2603db4c3..e40a95058a99 100644 --- a/internal/terraform/context_test.go +++ b/internal/terraform/context_test.go @@ -655,7 +655,12 @@ func contextOptsForPlanViaFile(configSnap *configload.Snapshot, plan *plans.Plan } filename := filepath.Join(dir, "tfplan") - err = planfile.Create(filename, configSnap, prevStateFile, stateFile, plan) + err = planfile.Create(filename, planfile.CreateArgs{ + ConfigSnapshot: configSnap, + PreviousRunStateFile: prevStateFile, + StateFile: stateFile, + Plan: plan, + }) if err != nil { return nil, nil, nil, err } From 3f8559199806771940e6e74004a9b814040125c4 Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Wed, 29 Sep 2021 16:13:20 -0700 Subject: [PATCH 122/644] depsfile: SaveLocksToBytes function In a future commit we'll use this to include dependency lock information in full fidelity inside a saved plan file. --- internal/depsfile/locks_file.go | 41 ++++++++++++++++++++++----------- 1 file changed, 28 insertions(+), 13 deletions(-) diff --git a/internal/depsfile/locks_file.go b/internal/depsfile/locks_file.go index add88ea8d0a2..e619e06703c6 100644 --- a/internal/depsfile/locks_file.go +++ b/internal/depsfile/locks_file.go @@ -43,6 +43,12 @@ func LoadLocksFromFile(filename string) (*Locks, tfdiags.Diagnostics) { // integration testing (avoiding creating temporary files on disk); if you // are writing non-test code, consider whether LoadLocksFromFile might be // more appropriate to call. +// +// It is valid to use this with dependency lock information recorded as part of +// a plan file, in which case the given filename will typically be a +// placeholder that will only be seen in the unusual case that the plan file +// contains an invalid lock file, which should only be possible if the user +// edited it directly (Terraform bugs notwithstanding). 
func LoadLocksFromBytes(src []byte, filename string) (*Locks, tfdiags.Diagnostics) { return loadLocks(func(parser *hclparse.Parser) (*hcl.File, hcl.Diagnostics) { return parser.ParseHCL(src, filename) @@ -80,6 +86,27 @@ func loadLocks(loadParse func(*hclparse.Parser) (*hcl.File, hcl.Diagnostics)) (* // temporary files may be temporarily created in the same directory as the // given filename during the operation. func SaveLocksToFile(locks *Locks, filename string) tfdiags.Diagnostics { + src, diags := SaveLocksToBytes(locks) + if diags.HasErrors() { + return diags + } + + err := replacefile.AtomicWriteFile(filename, src, 0644) + if err != nil { + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Failed to update dependency lock file", + fmt.Sprintf("Error while writing new dependency lock information to %s: %s.", filename, err), + )) + return diags + } + + return diags +} + +// SaveLocksToBytes writes the given locks object into a byte array, +// using the same syntax that LoadLocksFromBytes expects to parse. +func SaveLocksToBytes(locks *Locks) ([]byte, tfdiags.Diagnostics) { var diags tfdiags.Diagnostics // In other uses of the "hclwrite" package we typically try to make @@ -131,19 +158,7 @@ func SaveLocksToFile(locks *Locks, filename string) tfdiags.Diagnostics { } } - newContent := f.Bytes() - - err := replacefile.AtomicWriteFile(filename, newContent, 0644) - if err != nil { - diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, - "Failed to update dependency lock file", - fmt.Sprintf("Error while writing new dependency lock information to %s: %s.", filename, err), - )) - return diags - } - - return diags + return f.Bytes(), diags } func decodeLocksFromHCL(locks *Locks, body hcl.Body) tfdiags.Diagnostics { From 702413702cfb8cd2fc365572639d874a4f9778d0 Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Wed, 29 Sep 2021 17:03:38 -0700 Subject: [PATCH 123/644] plans/planfile: Include dependency locks in saved plan files We recently removed the legacy way we used to track the SHA256 hashes of individual provider executables as part of a plans.Plan, because these days we want to track the checksums of entire provider packages rather than just the executable. In order to achieve that new goal, we can save a copy of the dependency lock information inside the plan file. This follows our existing precedent of using exactly the same serialization formats we'd normally use for this information, and thus we can reuse the existing models and serializers and be confident we won't lose any detail in the round-trip. As of this commit there's not yet anything actually making use of this mechanism. In a subsequent commit we'll teach the main callers that write and read plan files to include and expect (respectively) dependency information, verifying that the available providers still match by the time we're applying the plan. 
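As a rough sketch of the intended round trip (not part of this change: the helper name and the specific provider address, version, constraint, and hash below are invented for illustration, and the imports from internal/addrs, internal/depsfile, internal/getproviders, and internal/plans/planfile are assumed), a caller holding a depsfile.Locks value would pass it to planfile.Create and read it back with the new ReadDependencyLocks method:

    // roundTripLocksSketch shows how dependency lock information travels
    // alongside the plan, config snapshot, and state snapshots in a saved
    // plan file. It is illustrative only, not a real call site.
    func roundTripLocksSketch(path string, args planfile.CreateArgs) (*depsfile.Locks, error) {
        locks := depsfile.NewLocks()
        // Record one placeholder provider selection, mirroring what
        // "terraform init" would normally have written to .terraform.lock.hcl.
        locks.SetProvider(
            addrs.NewDefaultProvider("null"),
            getproviders.MustParseVersion("1.0.0"),
            getproviders.MustParseVersionConstraints(">= 1.0.0"),
            []getproviders.Hash{getproviders.MustParseHash("fake:hello")},
        )
        args.DependencyLocks = locks

        // Write the plan file, now including the lock information...
        if err := planfile.Create(path, args); err != nil {
            return nil, err
        }

        // ...and read the same locks back out of it.
        rd, err := planfile.Open(path)
        if err != nil {
            return nil, err
        }
        defer rd.Close()

        locksOut, diags := rd.ReadDependencyLocks()
        if diags.HasErrors() {
            return nil, diags.Err()
        }
        return locksOut, nil
    }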
--- internal/plans/planfile/planfile_test.go | 26 ++++++++++++++ internal/plans/planfile/reader.go | 46 ++++++++++++++++++++++++ internal/plans/planfile/writer.go | 27 ++++++++++++++ 3 files changed, 99 insertions(+) diff --git a/internal/plans/planfile/planfile_test.go b/internal/plans/planfile/planfile_test.go index 37e38080aec5..d5d1365c8eeb 100644 --- a/internal/plans/planfile/planfile_test.go +++ b/internal/plans/planfile/planfile_test.go @@ -7,7 +7,10 @@ import ( "github.com/google/go-cmp/cmp" + "github.com/hashicorp/terraform/internal/addrs" "github.com/hashicorp/terraform/internal/configs/configload" + "github.com/hashicorp/terraform/internal/depsfile" + "github.com/hashicorp/terraform/internal/getproviders" "github.com/hashicorp/terraform/internal/plans" "github.com/hashicorp/terraform/internal/states" "github.com/hashicorp/terraform/internal/states/statefile" @@ -71,6 +74,16 @@ func TestRoundtrip(t *testing.T) { PriorState: stateFileIn.State, } + locksIn := depsfile.NewLocks() + locksIn.SetProvider( + addrs.NewDefaultProvider("boop"), + getproviders.MustParseVersion("1.0.0"), + getproviders.MustParseVersionConstraints(">= 1.0.0"), + []getproviders.Hash{ + getproviders.MustParseHash("fake:hello"), + }, + ) + workDir, err := ioutil.TempDir("", "tf-planfile") if err != nil { t.Fatal(err) @@ -82,6 +95,7 @@ func TestRoundtrip(t *testing.T) { PreviousRunStateFile: prevStateFileIn, StateFile: stateFileIn, Plan: planIn, + DependencyLocks: locksIn, }) if err != nil { t.Fatalf("failed to create plan file: %s", err) @@ -141,4 +155,16 @@ func TestRoundtrip(t *testing.T) { t.Errorf("when reading config: %s", diags.Err()) } }) + + t.Run("ReadDependencyLocks", func(t *testing.T) { + locksOut, diags := pr.ReadDependencyLocks() + if diags.HasErrors() { + t.Fatalf("when reading config: %s", diags.Err()) + } + got := locksOut.AllProviders() + want := locksIn.AllProviders() + if diff := cmp.Diff(want, got, cmp.AllowUnexported(depsfile.ProviderLock{})); diff != "" { + t.Errorf("provider locks did not survive round-trip\n%s", diff) + } + }) } diff --git a/internal/plans/planfile/reader.go b/internal/plans/planfile/reader.go index 7175eda27499..ff6e129e0bac 100644 --- a/internal/plans/planfile/reader.go +++ b/internal/plans/planfile/reader.go @@ -8,6 +8,7 @@ import ( "github.com/hashicorp/terraform/internal/configs" "github.com/hashicorp/terraform/internal/configs/configload" + "github.com/hashicorp/terraform/internal/depsfile" "github.com/hashicorp/terraform/internal/plans" "github.com/hashicorp/terraform/internal/states/statefile" "github.com/hashicorp/terraform/internal/tfdiags" @@ -15,6 +16,7 @@ import ( const tfstateFilename = "tfstate" const tfstatePreviousFilename = "tfstate-prev" +const dependencyLocksFilename = ".terraform.lock.hcl" // matches the conventional name in an input configuration // Reader is the main type used to read plan files. Create a Reader by calling // Open. @@ -190,6 +192,50 @@ func (r *Reader) ReadConfig() (*configs.Config, tfdiags.Diagnostics) { return config, diags } +// ReadDependencyLocks reads the dependency lock information embedded in +// the plan file. +// +// Some test codepaths create plan files without dependency lock information, +// but all main codepaths should populate this. If reading a file without +// the dependency information, this will return error diagnostics. 
+func (r *Reader) ReadDependencyLocks() (*depsfile.Locks, tfdiags.Diagnostics) { + var diags tfdiags.Diagnostics + + for _, file := range r.zip.File { + if file.Name == dependencyLocksFilename { + r, err := file.Open() + if err != nil { + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Failed to read dependency locks from plan file", + fmt.Sprintf("Couldn't read the dependency lock information embedded in the plan file: %s.", err), + )) + return nil, diags + } + src, err := ioutil.ReadAll(r) + if err != nil { + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Failed to read dependency locks from plan file", + fmt.Sprintf("Couldn't read the dependency lock information embedded in the plan file: %s.", err), + )) + return nil, diags + } + locks, moreDiags := depsfile.LoadLocksFromBytes(src, "") + diags = diags.Append(moreDiags) + return locks, diags + } + } + + // If we fall out here then this is a file without dependency information. + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Saved plan has no dependency lock information", + "The specified saved plan file does not include any dependency lock information. This is a bug in the previous run of Terraform that created this file.", + )) + return nil, diags +} + // Close closes the file, after which no other operations may be performed. func (r *Reader) Close() error { return r.zip.Close() diff --git a/internal/plans/planfile/writer.go b/internal/plans/planfile/writer.go index 3be2516cb4fa..bdf84c86db44 100644 --- a/internal/plans/planfile/writer.go +++ b/internal/plans/planfile/writer.go @@ -7,6 +7,7 @@ import ( "time" "github.com/hashicorp/terraform/internal/configs/configload" + "github.com/hashicorp/terraform/internal/depsfile" "github.com/hashicorp/terraform/internal/plans" "github.com/hashicorp/terraform/internal/states/statefile" ) @@ -33,6 +34,11 @@ type CreateArgs struct { // Plan records the plan itself, which is the main artifact inside a // saved plan file. Plan *plans.Plan + + // DependencyLocks records the dependency lock information that we + // checked prior to creating the plan, so we can make sure that all of the + // same dependencies are still available when applying the plan. 
+ DependencyLocks *depsfile.Locks } // Create creates a new plan file with the given filename, overwriting any @@ -108,5 +114,26 @@ func Create(filename string, args CreateArgs) error { } } + // .terraform.lock.hcl file, containing dependency lock information + if args.DependencyLocks != nil { // (this was a later addition, so not all callers set it, but main callers should) + src, diags := depsfile.SaveLocksToBytes(args.DependencyLocks) + if diags.HasErrors() { + return fmt.Errorf("failed to write embedded dependency lock file: %s", diags.Err().Error()) + } + + w, err := zw.CreateHeader(&zip.FileHeader{ + Name: dependencyLocksFilename, + Method: zip.Deflate, + Modified: time.Now(), + }) + if err != nil { + return fmt.Errorf("failed to create embedded dependency lock file: %s", err) + } + _, err = w.Write(src) + if err != nil { + return fmt.Errorf("failed to write embedded dependency lock file: %s", err) + } + } + return nil } From df578afd7e34ae32345d86840edd05b4c05559e8 Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Wed, 29 Sep 2021 17:31:43 -0700 Subject: [PATCH 124/644] backend/local: Check dependency lock consistency before any operations In historical versions of Terraform the responsibility to check this was inside the terraform.NewContext function, along with various other assorted concerns that made that function particularly complicated. More recently, we reduced the responsibility of the "terraform" package only to instantiating particular named plugins, assuming that its caller is responsible for selecting appropriate versions of any providers that _are_ external. However, until this commit we were just assuming that "terraform init" had correctly selected appropriate plugins and recorded them in the lock file, and so nothing was dealing with the problem of ensuring that there haven't been any changes to the lock file or config since the most recent "terraform init" which would cause us to need to re-evaluate those decisions. Part of the game here is to slightly extend the role of the dependency locks object to also carry information about a subset of provider addresses whose lock entries we're intentionally disregarding as part of the various little edge-case features we have for overridding providers: dev_overrides, "unmanaged providers", and the testing overrides in our own unit tests. This is an in-memory-only annotation, never included in the serialized plan files on disk. I had originally intended to create a new package to encapsulate all of this plugin-selection logic, including both the version constraint checking here and also the handling of the provider factory functions, but as an interim step I've just made version constraint consistency checks the responsibility of the backend/local package, which means that we'll always catch problems as part of preparing for local operations, while not imposing these additional checks on commands that _don't_ run local operations, such as "terraform apply" when in remote operations mode. 
--- internal/backend/backend.go | 11 ++ internal/backend/local/backend_apply_test.go | 18 ++- internal/backend/local/backend_local.go | 71 +++++++++- internal/backend/local/backend_plan.go | 1 + internal/backend/local/backend_plan_test.go | 17 ++- .../backend/local/backend_refresh_test.go | 17 ++- internal/backend/remote/backend.go | 1 + internal/backend/remote/backend_apply_test.go | 19 ++- internal/backend/remote/backend_plan_test.go | 19 ++- internal/command/command_test.go | 2 + internal/command/import_test.go | 2 +- internal/command/meta_backend.go | 23 +++- internal/command/meta_dependencies.go | 35 ++++- internal/command/meta_providers.go | 8 +- internal/command/plan_test.go | 2 +- internal/configs/config.go | 85 ++++++++++++ internal/configs/config_test.go | 123 ++++++++++++++++++ internal/depsfile/locks.go | 56 ++++++++ 18 files changed, 469 insertions(+), 41 deletions(-) diff --git a/internal/backend/backend.go b/internal/backend/backend.go index d8b28b9435f4..9f1855eb291f 100644 --- a/internal/backend/backend.go +++ b/internal/backend/backend.go @@ -17,6 +17,7 @@ import ( "github.com/hashicorp/terraform/internal/configs" "github.com/hashicorp/terraform/internal/configs/configload" "github.com/hashicorp/terraform/internal/configs/configschema" + "github.com/hashicorp/terraform/internal/depsfile" "github.com/hashicorp/terraform/internal/plans" "github.com/hashicorp/terraform/internal/plans/planfile" "github.com/hashicorp/terraform/internal/states" @@ -237,6 +238,16 @@ type Operation struct { // configuration from ConfigDir. ConfigLoader *configload.Loader + // DependencyLocks represents the locked dependencies associated with + // the configuration directory given in ConfigDir. + // + // Note that if field PlanFile is set then the plan file should contain + // its own dependency locks. The backend is responsible for correctly + // selecting between these two sets of locks depending on whether it + // will be using ConfigDir or PlanFile to get the configuration for + // this operation. + DependencyLocks *depsfile.Locks + // Hooks can be used to perform actions triggered by various events during // the operation's lifecycle. Hooks []terraform.Hook diff --git a/internal/backend/local/backend_apply_test.go b/internal/backend/local/backend_apply_test.go index 4ffc0fa0a871..1f9514ad5410 100644 --- a/internal/backend/local/backend_apply_test.go +++ b/internal/backend/local/backend_apply_test.go @@ -11,11 +11,13 @@ import ( "github.com/zclconf/go-cty/cty" + "github.com/hashicorp/terraform/internal/addrs" "github.com/hashicorp/terraform/internal/backend" "github.com/hashicorp/terraform/internal/command/arguments" "github.com/hashicorp/terraform/internal/command/clistate" "github.com/hashicorp/terraform/internal/command/views" "github.com/hashicorp/terraform/internal/configs/configschema" + "github.com/hashicorp/terraform/internal/depsfile" "github.com/hashicorp/terraform/internal/initwd" "github.com/hashicorp/terraform/internal/plans" "github.com/hashicorp/terraform/internal/providers" @@ -319,12 +321,18 @@ func testOperationApply(t *testing.T, configDir string) (*backend.Operation, fun streams, done := terminal.StreamsForTesting(t) view := views.NewOperation(arguments.ViewHuman, false, views.NewView(streams)) + // Many of our tests use an overridden "test" provider that's just in-memory + // inside the test process, not a separate plugin on disk. 
+ depLocks := depsfile.NewLocks() + depLocks.SetProviderOverridden(addrs.MustParseProviderSourceString("registry.terraform.io/hashicorp/test")) + return &backend.Operation{ - Type: backend.OperationTypeApply, - ConfigDir: configDir, - ConfigLoader: configLoader, - StateLocker: clistate.NewNoopLocker(), - View: view, + Type: backend.OperationTypeApply, + ConfigDir: configDir, + ConfigLoader: configLoader, + StateLocker: clistate.NewNoopLocker(), + View: view, + DependencyLocks: depLocks, }, configCleanup, done } diff --git a/internal/backend/local/backend_local.go b/internal/backend/local/backend_local.go index 5ce5929ee310..209757e43375 100644 --- a/internal/backend/local/backend_local.go +++ b/internal/backend/local/backend_local.go @@ -5,6 +5,7 @@ import ( "fmt" "log" "sort" + "strings" "github.com/hashicorp/terraform/internal/backend" "github.com/hashicorp/terraform/internal/configs" @@ -83,7 +84,7 @@ func (b *Local) localRun(op *backend.Operation) (*backend.LocalRun, *configload. stateMeta = &m } log.Printf("[TRACE] backend/local: populating backend.LocalRun from plan file") - ret, configSnap, ctxDiags = b.localRunForPlanFile(op.PlanFile, ret, &coreOpts, stateMeta) + ret, configSnap, ctxDiags = b.localRunForPlanFile(op, op.PlanFile, ret, &coreOpts, stateMeta) if ctxDiags.HasErrors() { diags = diags.Append(ctxDiags) return nil, nil, nil, diags @@ -138,6 +139,32 @@ func (b *Local) localRunDirect(op *backend.Operation, run *backend.LocalRun, cor } run.Config = config + if errs := config.VerifyDependencySelections(op.DependencyLocks); len(errs) > 0 { + var buf strings.Builder + for _, err := range errs { + fmt.Fprintf(&buf, "\n - %s", err.Error()) + } + var suggestion string + switch { + case op.DependencyLocks == nil: + // If we get here then it suggests that there's a caller that we + // didn't yet update to populate DependencyLocks, which is a bug. + suggestion = "This run has no dependency lock information provided at all, which is a bug in Terraform; please report it!" 
+ case op.DependencyLocks.Empty(): + suggestion = "To make the initial dependency selections that will initialize the dependency lock file, run:\n terraform init" + default: + suggestion = "To update the locked dependency selections to match a changed configuration, run:\n terraform init -upgrade" + } + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Inconsistent dependency lock file", + fmt.Sprintf( + "The following dependency selections recorded in the lock file are inconsistent with the current configuration:%s\n\n%s", + buf.String(), suggestion, + ), + )) + } + var rawVariables map[string]backend.UnparsedVariableValue if op.AllowUnsetVariables { // Rather than prompting for input, we'll just stub out the required @@ -181,7 +208,7 @@ func (b *Local) localRunDirect(op *backend.Operation, run *backend.LocalRun, cor return run, configSnap, diags } -func (b *Local) localRunForPlanFile(pf *planfile.Reader, run *backend.LocalRun, coreOpts *terraform.ContextOpts, currentStateMeta *statemgr.SnapshotMeta) (*backend.LocalRun, *configload.Snapshot, tfdiags.Diagnostics) { +func (b *Local) localRunForPlanFile(op *backend.Operation, pf *planfile.Reader, run *backend.LocalRun, coreOpts *terraform.ContextOpts, currentStateMeta *statemgr.SnapshotMeta) (*backend.LocalRun, *configload.Snapshot, tfdiags.Diagnostics) { var diags tfdiags.Diagnostics const errSummary = "Invalid plan file" @@ -206,6 +233,46 @@ func (b *Local) localRunForPlanFile(pf *planfile.Reader, run *backend.LocalRun, } run.Config = config + // NOTE: We're intentionally comparing the current locks with the + // configuration snapshot, rather than the lock snapshot in the plan file, + // because it's the current locks which dictate our plugin selections + // in coreOpts below. However, we'll also separately check that the + // plan file has identical locked plugins below, and thus we're effectively + // checking consistency with both here. + if errs := config.VerifyDependencySelections(op.DependencyLocks); len(errs) > 0 { + var buf strings.Builder + for _, err := range errs { + fmt.Fprintf(&buf, "\n - %s", err.Error()) + } + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Inconsistent dependency lock file", + fmt.Sprintf( + "The following dependency selections recorded in the lock file are inconsistent with the configuration in the saved plan:%s\n\nA saved plan can be applied only to the same configuration it was created from. Create a new plan from the updated configuration.", + buf.String(), + ), + )) + } + + // This check is an important complement to the check above: the locked + // dependencies in the configuration must match the configuration, and + // the locked dependencies in the plan must match the locked dependencies + // in the configuration, and so transitively we ensure that the locked + // dependencies in the plan match the configuration too. However, this + // additionally catches any inconsistency between the two sets of locks + // even if they both happen to be valid per the current configuration, + // which is one of several ways we try to catch the mistake of applying + // a saved plan file in a different place than where we created it. 
+ depLocksFromPlan, moreDiags := pf.ReadDependencyLocks() + diags = diags.Append(moreDiags) + if depLocksFromPlan != nil && !op.DependencyLocks.Equal(depLocksFromPlan) { + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Inconsistent dependency lock file", + "The given plan file was created with a different set of external dependency selections than the current configuration. A saved plan can be applied only to the same configuration it was created from.\n\nCreate a new plan from the updated configuration.", + )) + } + // A plan file also contains a snapshot of the prior state the changes // are intended to apply to. priorStateFile, err := pf.ReadStateFile() diff --git a/internal/backend/local/backend_plan.go b/internal/backend/local/backend_plan.go index 655ece1028e2..c721074b4a8e 100644 --- a/internal/backend/local/backend_plan.go +++ b/internal/backend/local/backend_plan.go @@ -138,6 +138,7 @@ func (b *Local) opPlan( PreviousRunStateFile: prevStateFile, StateFile: plannedStateFile, Plan: plan, + DependencyLocks: op.DependencyLocks, }) if err != nil { diags = diags.Append(tfdiags.Sourceless( diff --git a/internal/backend/local/backend_plan_test.go b/internal/backend/local/backend_plan_test.go index 2a9f3f8287b0..5121d45e70ed 100644 --- a/internal/backend/local/backend_plan_test.go +++ b/internal/backend/local/backend_plan_test.go @@ -13,6 +13,7 @@ import ( "github.com/hashicorp/terraform/internal/command/clistate" "github.com/hashicorp/terraform/internal/command/views" "github.com/hashicorp/terraform/internal/configs/configschema" + "github.com/hashicorp/terraform/internal/depsfile" "github.com/hashicorp/terraform/internal/initwd" "github.com/hashicorp/terraform/internal/plans" "github.com/hashicorp/terraform/internal/plans/planfile" @@ -716,12 +717,18 @@ func testOperationPlan(t *testing.T, configDir string) (*backend.Operation, func streams, done := terminal.StreamsForTesting(t) view := views.NewOperation(arguments.ViewHuman, false, views.NewView(streams)) + // Many of our tests use an overridden "test" provider that's just in-memory + // inside the test process, not a separate plugin on disk. 
+ depLocks := depsfile.NewLocks() + depLocks.SetProviderOverridden(addrs.MustParseProviderSourceString("registry.terraform.io/hashicorp/test")) + return &backend.Operation{ - Type: backend.OperationTypePlan, - ConfigDir: configDir, - ConfigLoader: configLoader, - StateLocker: clistate.NewNoopLocker(), - View: view, + Type: backend.OperationTypePlan, + ConfigDir: configDir, + ConfigLoader: configLoader, + StateLocker: clistate.NewNoopLocker(), + View: view, + DependencyLocks: depLocks, }, configCleanup, done } diff --git a/internal/backend/local/backend_refresh_test.go b/internal/backend/local/backend_refresh_test.go index 0e502267cc64..886c184181c2 100644 --- a/internal/backend/local/backend_refresh_test.go +++ b/internal/backend/local/backend_refresh_test.go @@ -12,6 +12,7 @@ import ( "github.com/hashicorp/terraform/internal/command/clistate" "github.com/hashicorp/terraform/internal/command/views" "github.com/hashicorp/terraform/internal/configs/configschema" + "github.com/hashicorp/terraform/internal/depsfile" "github.com/hashicorp/terraform/internal/initwd" "github.com/hashicorp/terraform/internal/providers" "github.com/hashicorp/terraform/internal/states" @@ -260,12 +261,18 @@ func testOperationRefresh(t *testing.T, configDir string) (*backend.Operation, f streams, done := terminal.StreamsForTesting(t) view := views.NewOperation(arguments.ViewHuman, false, views.NewView(streams)) + // Many of our tests use an overridden "test" provider that's just in-memory + // inside the test process, not a separate plugin on disk. + depLocks := depsfile.NewLocks() + depLocks.SetProviderOverridden(addrs.MustParseProviderSourceString("registry.terraform.io/hashicorp/test")) + return &backend.Operation{ - Type: backend.OperationTypeRefresh, - ConfigDir: configDir, - ConfigLoader: configLoader, - StateLocker: clistate.NewNoopLocker(), - View: view, + Type: backend.OperationTypeRefresh, + ConfigDir: configDir, + ConfigLoader: configLoader, + StateLocker: clistate.NewNoopLocker(), + View: view, + DependencyLocks: depLocks, }, configCleanup, done } diff --git a/internal/backend/remote/backend.go b/internal/backend/remote/backend.go index 5e09e7207d3f..e427befa0391 100644 --- a/internal/backend/remote/backend.go +++ b/internal/backend/remote/backend.go @@ -710,6 +710,7 @@ func (b *Remote) Operation(ctx context.Context, op *backend.Operation) (*backend // Record that we're forced to run operations locally to allow the // command package UI to operate correctly b.forceLocal = true + log.Printf("[DEBUG] Remote backend is delegating %s to the local backend", op.Type) return b.local.Operation(ctx, op) } diff --git a/internal/backend/remote/backend_apply_test.go b/internal/backend/remote/backend_apply_test.go index 4bc2a909f288..71c819b9e505 100644 --- a/internal/backend/remote/backend_apply_test.go +++ b/internal/backend/remote/backend_apply_test.go @@ -17,6 +17,7 @@ import ( "github.com/hashicorp/terraform/internal/command/arguments" "github.com/hashicorp/terraform/internal/command/clistate" "github.com/hashicorp/terraform/internal/command/views" + "github.com/hashicorp/terraform/internal/depsfile" "github.com/hashicorp/terraform/internal/initwd" "github.com/hashicorp/terraform/internal/plans" "github.com/hashicorp/terraform/internal/plans/planfile" @@ -43,13 +44,19 @@ func testOperationApplyWithTimeout(t *testing.T, configDir string, timeout time. 
stateLockerView := views.NewStateLocker(arguments.ViewHuman, view) operationView := views.NewOperation(arguments.ViewHuman, false, view) + // Many of our tests use an overridden "null" provider that's just in-memory + // inside the test process, not a separate plugin on disk. + depLocks := depsfile.NewLocks() + depLocks.SetProviderOverridden(addrs.MustParseProviderSourceString("registry.terraform.io/hashicorp/null")) + return &backend.Operation{ - ConfigDir: configDir, - ConfigLoader: configLoader, - PlanRefresh: true, - StateLocker: clistate.NewLocker(timeout, stateLockerView), - Type: backend.OperationTypeApply, - View: operationView, + ConfigDir: configDir, + ConfigLoader: configLoader, + PlanRefresh: true, + StateLocker: clistate.NewLocker(timeout, stateLockerView), + Type: backend.OperationTypeApply, + View: operationView, + DependencyLocks: depLocks, }, configCleanup, done } diff --git a/internal/backend/remote/backend_plan_test.go b/internal/backend/remote/backend_plan_test.go index 6d4ced7b87a7..147c68e9dbe2 100644 --- a/internal/backend/remote/backend_plan_test.go +++ b/internal/backend/remote/backend_plan_test.go @@ -16,6 +16,7 @@ import ( "github.com/hashicorp/terraform/internal/command/arguments" "github.com/hashicorp/terraform/internal/command/clistate" "github.com/hashicorp/terraform/internal/command/views" + "github.com/hashicorp/terraform/internal/depsfile" "github.com/hashicorp/terraform/internal/initwd" "github.com/hashicorp/terraform/internal/plans" "github.com/hashicorp/terraform/internal/plans/planfile" @@ -41,13 +42,19 @@ func testOperationPlanWithTimeout(t *testing.T, configDir string, timeout time.D stateLockerView := views.NewStateLocker(arguments.ViewHuman, view) operationView := views.NewOperation(arguments.ViewHuman, false, view) + // Many of our tests use an overridden "null" provider that's just in-memory + // inside the test process, not a separate plugin on disk. 
+ depLocks := depsfile.NewLocks() + depLocks.SetProviderOverridden(addrs.MustParseProviderSourceString("registry.terraform.io/hashicorp/null")) + return &backend.Operation{ - ConfigDir: configDir, - ConfigLoader: configLoader, - PlanRefresh: true, - StateLocker: clistate.NewLocker(timeout, stateLockerView), - Type: backend.OperationTypePlan, - View: operationView, + ConfigDir: configDir, + ConfigLoader: configLoader, + PlanRefresh: true, + StateLocker: clistate.NewLocker(timeout, stateLockerView), + Type: backend.OperationTypePlan, + View: operationView, + DependencyLocks: depLocks, }, configCleanup, done } diff --git a/internal/command/command_test.go b/internal/command/command_test.go index a0bc4710d6f9..a9ba2a044e0a 100644 --- a/internal/command/command_test.go +++ b/internal/command/command_test.go @@ -29,6 +29,7 @@ import ( "github.com/hashicorp/terraform/internal/configs/configload" "github.com/hashicorp/terraform/internal/configs/configschema" "github.com/hashicorp/terraform/internal/copy" + "github.com/hashicorp/terraform/internal/depsfile" "github.com/hashicorp/terraform/internal/getproviders" "github.com/hashicorp/terraform/internal/initwd" legacy "github.com/hashicorp/terraform/internal/legacy/terraform" @@ -247,6 +248,7 @@ func testPlanFile(t *testing.T, configSnap *configload.Snapshot, state *states.S PreviousRunStateFile: prevStateFile, StateFile: stateFile, Plan: plan, + DependencyLocks: depsfile.NewLocks(), }) if err != nil { t.Fatalf("failed to create temporary plan file: %s", err) diff --git a/internal/command/import_test.go b/internal/command/import_test.go index 1469ea81d224..091646ecc246 100644 --- a/internal/command/import_test.go +++ b/internal/command/import_test.go @@ -332,7 +332,7 @@ func TestImport_initializationErrorShouldUnlock(t *testing.T) { // specifically, it should fail due to a missing provider msg := strings.ReplaceAll(ui.ErrorWriter.String(), "\n", " ") - if want := `unavailable provider "registry.terraform.io/hashicorp/unknown"`; !strings.Contains(msg, want) { + if want := `provider registry.terraform.io/hashicorp/unknown: required by this configuration but no version is selected`; !strings.Contains(msg, want) { t.Errorf("incorrect message\nwant substring: %s\ngot:\n%s", want, msg) } diff --git a/internal/command/meta_backend.go b/internal/command/meta_backend.go index f467579ba973..5d2aa5ed3b91 100644 --- a/internal/command/meta_backend.go +++ b/internal/command/meta_backend.go @@ -355,13 +355,24 @@ func (m *Meta) Operation(b backend.Backend) *backend.Operation { stateLocker = clistate.NewLocker(m.stateLockTimeout, view) } + depLocks, diags := m.lockedDependencies() + if diags.HasErrors() { + // We can't actually report errors from here, but m.lockedDependencies + // should always have been called earlier to prepare the "ContextOpts" + // for the backend anyway, so we should never actually get here in + // a real situation. If we do get here then the backend will inevitably + // fail downstream somwhere if it tries to use the empty depLocks. 
+ log.Printf("[WARN] Failed to load dependency locks while preparing backend operation (ignored): %s", diags.Err().Error()) + } + return &backend.Operation{ - PlanOutBackend: planOutBackend, - Targets: m.targets, - UIIn: m.UIInput(), - UIOut: m.Ui, - Workspace: workspace, - StateLocker: stateLocker, + PlanOutBackend: planOutBackend, + Targets: m.targets, + UIIn: m.UIInput(), + UIOut: m.Ui, + Workspace: workspace, + StateLocker: stateLocker, + DependencyLocks: depLocks, } } diff --git a/internal/command/meta_dependencies.go b/internal/command/meta_dependencies.go index efda3f10321d..1b0cb97f8df8 100644 --- a/internal/command/meta_dependencies.go +++ b/internal/command/meta_dependencies.go @@ -1,6 +1,7 @@ package command import ( + "log" "os" "github.com/hashicorp/terraform/internal/depsfile" @@ -48,10 +49,11 @@ func (m *Meta) lockedDependencies() (*depsfile.Locks, tfdiags.Diagnostics) { // promising to support two concurrent dependency installation processes. _, err := os.Stat(dependencyLockFilename) if os.IsNotExist(err) { - return depsfile.NewLocks(), nil + return m.annotateDependencyLocksWithOverrides(depsfile.NewLocks()), nil } - return depsfile.LoadLocksFromFile(dependencyLockFilename) + ret, diags := depsfile.LoadLocksFromFile(dependencyLockFilename) + return m.annotateDependencyLocksWithOverrides(ret), diags } // replaceLockedDependencies creates or overwrites the lock file in the @@ -60,3 +62,32 @@ func (m *Meta) lockedDependencies() (*depsfile.Locks, tfdiags.Diagnostics) { func (m *Meta) replaceLockedDependencies(new *depsfile.Locks) tfdiags.Diagnostics { return depsfile.SaveLocksToFile(new, dependencyLockFilename) } + +// annotateDependencyLocksWithOverrides modifies the given Locks object in-place +// to track as overridden any provider address that's subject to testing +// overrides, development overrides, or "unmanaged provider" status. +// +// This is just an implementation detail of the lockedDependencies method, +// not intended for use anywhere else. +func (m *Meta) annotateDependencyLocksWithOverrides(ret *depsfile.Locks) *depsfile.Locks { + if ret == nil { + return ret + } + + for addr := range m.ProviderDevOverrides { + log.Printf("[DEBUG] Provider %s is overridden by dev_overrides", addr) + ret.SetProviderOverridden(addr) + } + for addr := range m.UnmanagedProviders { + log.Printf("[DEBUG] Provider %s is overridden as an \"unmanaged provider\"", addr) + ret.SetProviderOverridden(addr) + } + if m.testingOverrides != nil { + for addr := range m.testingOverrides.Providers { + log.Printf("[DEBUG] Provider %s is overridden in Meta.testingOverrides", addr) + ret.SetProviderOverridden(addr) + } + } + + return ret +} diff --git a/internal/command/meta_providers.go b/internal/command/meta_providers.go index 84ccc89d87a4..ffe52ffde6c2 100644 --- a/internal/command/meta_providers.go +++ b/internal/command/meta_providers.go @@ -282,6 +282,12 @@ func (m *Meta) providerFactories() (map[addrs.Provider]providers.Factory, error) factories[provider] = providerFactoryError(thisErr) } + if locks.ProviderIsOverridden(provider) { + // Overridden providers we'll handle with the other separate + // loops below, for dev overrides etc. 
+ continue + } + + version := lock.Version() + cached := cacheDir.ProviderVersion(provider, version) + if cached == nil { @@ -313,8 +319,6 @@ func (m *Meta) providerFactories() (map[addrs.Provider]providers.Factory, error) factories[provider] = providerFactory(cached) } for provider, localDir := range devOverrideProviders { - // It's likely that providers in this map will conflict with providers - // in providerLocks factories[provider] = devOverrideProviderFactory(provider, localDir) } for provider, reattach := range unmanagedProviders { diff --git a/internal/command/plan_test.go b/internal/command/plan_test.go index 68f051f55532..84950d7e0d9c 100644 --- a/internal/command/plan_test.go +++ b/internal/command/plan_test.go @@ -1051,7 +1051,7 @@ func TestPlan_init_required(t *testing.T) { t.Fatalf("expected error, got success") } got := output.Stderr() - if !strings.Contains(got, "Error: Missing required provider") { + if !(strings.Contains(got, "terraform init") && strings.Contains(got, "provider registry.terraform.io/hashicorp/test: required by this configuration but no version is selected")) { t.Fatal("wrong error message in output:", got) } } diff --git a/internal/configs/config.go b/internal/configs/config.go index cc3f06e798b1..f38d3cd85daa 100644 --- a/internal/configs/config.go +++ b/internal/configs/config.go @@ -2,11 +2,13 @@ package configs import ( "fmt" + "log" "sort" version "github.com/hashicorp/go-version" "github.com/hashicorp/hcl/v2" "github.com/hashicorp/terraform/internal/addrs" + "github.com/hashicorp/terraform/internal/depsfile" "github.com/hashicorp/terraform/internal/getproviders" ) @@ -194,6 +196,89 @@ func (c *Config) EntersNewPackage() bool { return moduleSourceAddrEntersNewPackage(c.SourceAddr) } +// VerifyDependencySelections checks whether the given locked dependencies +// are acceptable for all of the version constraints reported in the +// configuration tree represented by the receiver. +// +// This function will return errors only if any of the locked dependencies are out of +// range for corresponding constraints in the configuration. If there are +// multiple inconsistencies then it will attempt to describe as many of them +// as possible, rather than stopping at the first problem. +// +// It's typically the responsibility of "terraform init" to change the locked +// dependencies to conform with the configuration, and so +// VerifyDependencySelections is intended for other commands to check whether +// it did so correctly and to catch if anything has changed in configuration +// since the last "terraform init" which requires re-initialization. However, +// it's up to the caller to decide how to advise users to recover from these +// errors, because the advice can vary depending on what operation the user +// is attempting. +func (c *Config) VerifyDependencySelections(depLocks *depsfile.Locks) []error { + var errs []error + + reqs, diags := c.ProviderRequirements() + if diags.HasErrors() { + // It should be very unusual to get here, but unfortunately we can + // end up here in some edge cases where the config loader doesn't + // process version constraint strings in exactly the same way as + // the requirements resolver. (See the addProviderRequirements method + // for more information.)
+ errs = append(errs, fmt.Errorf("failed to determine the configuration's provider requirements: %s", diags.Error())) + } + + for providerAddr, constraints := range reqs { + if !depsfile.ProviderIsLockable(providerAddr) { + continue // disregard builtin providers, and such + } + if depLocks != nil && depLocks.ProviderIsOverridden(providerAddr) { + // The "overridden" case is for unusual special situations like + // dev overrides, so we'll explicitly note it in the logs just in + // case we see bug reports with these active and it helps us + // understand why we ended up using the "wrong" plugin. + log.Printf("[DEBUG] Config.VerifyDependencySelections: skipping %s because it's overridden by a special configuration setting", providerAddr) + continue + } + + var lock *depsfile.ProviderLock + if depLocks != nil { // Should always be true in main code, but unfortunately sometimes not true in old tests that don't fill out arguments completely + lock = depLocks.Provider(providerAddr) + } + if lock == nil { + log.Printf("[TRACE] Config.VerifyDependencySelections: provider %s has no lock file entry to satisfy %q", providerAddr, getproviders.VersionConstraintsString(constraints)) + errs = append(errs, fmt.Errorf("provider %s: required by this configuration but no version is selected", providerAddr)) + continue + } + + selectedVersion := lock.Version() + allowedVersions := getproviders.MeetingConstraints(constraints) + log.Printf("[TRACE] Config.VerifyDependencySelections: provider %s has %s to satisfy %q", providerAddr, selectedVersion.String(), getproviders.VersionConstraintsString(constraints)) + if !allowedVersions.Has(selectedVersion) { + // The most likely cause of this is that the author of a module + // has changed its constraints, but this could also happen in + // some other unusual situations, such as the user directly + // editing the lock file to record something invalid. We'll + // distinguish those cases here in order to avoid the more + // specific error message potentially being a red herring in + // the edge-cases. + currentConstraints := getproviders.VersionConstraintsString(constraints) + lockedConstraints := getproviders.VersionConstraintsString(lock.VersionConstraints()) + switch { + case currentConstraints != lockedConstraints: + errs = append(errs, fmt.Errorf("provider %s: locked version selection %s doesn't match the updated version constraints %q", providerAddr, selectedVersion.String(), currentConstraints)) + default: + errs = append(errs, fmt.Errorf("provider %s: version constraints %q don't match the locked version selection %s", providerAddr, currentConstraints, selectedVersion.String())) + } + } + } + + // Return multiple errors in an arbitrary-but-deterministic order. + sort.Slice(errs, func(i, j int) bool { + return errs[i].Error() < errs[j].Error() + }) + + return errs +} + // ProviderRequirements searches the full tree of modules under the receiver // for both explicit and implicit dependencies on providers. 
// diff --git a/internal/configs/config_test.go b/internal/configs/config_test.go index 4677f3416a69..21c400a85a54 100644 --- a/internal/configs/config_test.go +++ b/internal/configs/config_test.go @@ -12,6 +12,7 @@ import ( "github.com/hashicorp/hcl/v2/hclsyntax" svchost "github.com/hashicorp/terraform-svchost" "github.com/hashicorp/terraform/internal/addrs" + "github.com/hashicorp/terraform/internal/depsfile" "github.com/hashicorp/terraform/internal/getproviders" ) @@ -262,6 +263,128 @@ func TestConfigProviderRequirementsByModule(t *testing.T) { } } +func TestVerifyDependencySelections(t *testing.T) { + cfg, diags := testNestedModuleConfigFromDir(t, "testdata/provider-reqs") + // TODO: Version Constraint Deprecation. + // Once we've removed the version argument from provider configuration + // blocks, this can go back to expected 0 diagnostics. + // assertNoDiagnostics(t, diags) + assertDiagnosticCount(t, diags, 1) + assertDiagnosticSummary(t, diags, "Version constraints inside provider configuration blocks are deprecated") + + tlsProvider := addrs.NewProvider( + addrs.DefaultProviderRegistryHost, + "hashicorp", "tls", + ) + happycloudProvider := addrs.NewProvider( + svchost.Hostname("tf.example.com"), + "awesomecorp", "happycloud", + ) + nullProvider := addrs.NewDefaultProvider("null") + randomProvider := addrs.NewDefaultProvider("random") + impliedProvider := addrs.NewDefaultProvider("implied") + configuredProvider := addrs.NewDefaultProvider("configured") + grandchildProvider := addrs.NewDefaultProvider("grandchild") + + tests := map[string]struct { + PrepareLocks func(*depsfile.Locks) + WantErrs []string + }{ + "empty locks": { + func(*depsfile.Locks) { + // Intentionally blank + }, + []string{ + `provider registry.terraform.io/hashicorp/configured: required by this configuration but no version is selected`, + `provider registry.terraform.io/hashicorp/grandchild: required by this configuration but no version is selected`, + `provider registry.terraform.io/hashicorp/implied: required by this configuration but no version is selected`, + `provider registry.terraform.io/hashicorp/null: required by this configuration but no version is selected`, + `provider registry.terraform.io/hashicorp/random: required by this configuration but no version is selected`, + `provider registry.terraform.io/hashicorp/tls: required by this configuration but no version is selected`, + `provider tf.example.com/awesomecorp/happycloud: required by this configuration but no version is selected`, + }, + }, + "suitable locks": { + func(locks *depsfile.Locks) { + locks.SetProvider(configuredProvider, getproviders.MustParseVersion("1.4.0"), nil, nil) + locks.SetProvider(grandchildProvider, getproviders.MustParseVersion("0.1.0"), nil, nil) + locks.SetProvider(impliedProvider, getproviders.MustParseVersion("0.2.0"), nil, nil) + locks.SetProvider(nullProvider, getproviders.MustParseVersion("2.0.1"), nil, nil) + locks.SetProvider(randomProvider, getproviders.MustParseVersion("1.2.2"), nil, nil) + locks.SetProvider(tlsProvider, getproviders.MustParseVersion("3.0.1"), nil, nil) + locks.SetProvider(happycloudProvider, getproviders.MustParseVersion("0.0.1"), nil, nil) + }, + nil, + }, + "null provider constraints changed": { + func(locks *depsfile.Locks) { + locks.SetProvider(configuredProvider, getproviders.MustParseVersion("1.4.0"), nil, nil) + locks.SetProvider(grandchildProvider, getproviders.MustParseVersion("0.1.0"), nil, nil) + locks.SetProvider(impliedProvider, getproviders.MustParseVersion("0.2.0"), nil, nil) + 
locks.SetProvider(nullProvider, getproviders.MustParseVersion("3.0.0"), nil, nil) + locks.SetProvider(randomProvider, getproviders.MustParseVersion("1.2.2"), nil, nil) + locks.SetProvider(tlsProvider, getproviders.MustParseVersion("3.0.1"), nil, nil) + locks.SetProvider(happycloudProvider, getproviders.MustParseVersion("0.0.1"), nil, nil) + }, + []string{ + `provider registry.terraform.io/hashicorp/null: locked version selection 3.0.0 doesn't match the updated version constraints "~> 2.0.0, 2.0.1"`, + }, + }, + "null provider lock changed": { + func(locks *depsfile.Locks) { + // In this case, we set the lock file version constraints to + // match the configuration, and so our error message changes + // to not assume the configuration changed anymore. + locks.SetProvider(nullProvider, getproviders.MustParseVersion("3.0.0"), getproviders.MustParseVersionConstraints("~> 2.0.0, 2.0.1"), nil) + + locks.SetProvider(configuredProvider, getproviders.MustParseVersion("1.4.0"), nil, nil) + locks.SetProvider(grandchildProvider, getproviders.MustParseVersion("0.1.0"), nil, nil) + locks.SetProvider(impliedProvider, getproviders.MustParseVersion("0.2.0"), nil, nil) + locks.SetProvider(randomProvider, getproviders.MustParseVersion("1.2.2"), nil, nil) + locks.SetProvider(tlsProvider, getproviders.MustParseVersion("3.0.1"), nil, nil) + locks.SetProvider(happycloudProvider, getproviders.MustParseVersion("0.0.1"), nil, nil) + }, + []string{ + `provider registry.terraform.io/hashicorp/null: version constraints "~> 2.0.0, 2.0.1" don't match the locked version selection 3.0.0`, + }, + }, + "overridden provider": { + func(locks *depsfile.Locks) { + locks.SetProviderOverridden(happycloudProvider) + }, + []string{ + // We still catch all of the other ones, because only happycloud was overridden + `provider registry.terraform.io/hashicorp/configured: required by this configuration but no version is selected`, + `provider registry.terraform.io/hashicorp/grandchild: required by this configuration but no version is selected`, + `provider registry.terraform.io/hashicorp/implied: required by this configuration but no version is selected`, + `provider registry.terraform.io/hashicorp/null: required by this configuration but no version is selected`, + `provider registry.terraform.io/hashicorp/random: required by this configuration but no version is selected`, + `provider registry.terraform.io/hashicorp/tls: required by this configuration but no version is selected`, + }, + }, + } + + for name, test := range tests { + t.Run(name, func(t *testing.T) { + depLocks := depsfile.NewLocks() + test.PrepareLocks(depLocks) + gotErrs := cfg.VerifyDependencySelections(depLocks) + + var gotErrsStr []string + if gotErrs != nil { + gotErrsStr = make([]string, len(gotErrs)) + for i, err := range gotErrs { + gotErrsStr[i] = err.Error() + } + } + + if diff := cmp.Diff(test.WantErrs, gotErrsStr); diff != "" { + t.Errorf("wrong errors\n%s", diff) + } + }) + } +} + func TestConfigProviderForConfigAddr(t *testing.T) { cfg, diags := testModuleConfigFromDir("testdata/valid-modules/providers-fqns") assertNoDiagnostics(t, diags) diff --git a/internal/depsfile/locks.go b/internal/depsfile/locks.go index 751399070303..0dec32a1deb4 100644 --- a/internal/depsfile/locks.go +++ b/internal/depsfile/locks.go @@ -18,6 +18,20 @@ import ( type Locks struct { providers map[addrs.Provider]*ProviderLock + // overriddenProviders is a subset of providers which we might be tracking + // in field providers but whose lock information we're disregarding for + // this 
particular run due to some feature that forces Terraform to not
+	// use a normally-installed plugin for it. For example, the "provider dev
+	// overrides" feature means that we'll be using an arbitrary directory on
+	// disk as the package, regardless of what might be selected in "providers".
+	//
+	// overriddenProviders is an in-memory-only annotation, never stored as
+	// part of a lock file and thus not persistent between Terraform runs.
+	// The CLI layer is generally the one responsible for populating this,
+	// by calling SetProviderOverridden in response to CLI Configuration
+	// settings, environment variables, or whatever similar sources.
+	overriddenProviders map[addrs.Provider]struct{}
+
 	// TODO: In future we'll also have module locks, but the design of that
 	// still needs some more work and we're deferring that to get the
 	// provider locking capability out sooner, because it's more common to
@@ -84,6 +98,48 @@ func (l *Locks) SetProvider(addr addrs.Provider, version getproviders.Version, c
 	return new
 }
+// SetProviderOverridden records that this particular Terraform process will
+// not pay attention to the recorded lock entry for the given provider, and
+// will instead access that provider's functionality in some other special
+// way that isn't sensitive to provider version selections or checksums.
+//
+// This is an in-memory-only annotation which lives only inside a particular
+// Locks object, and is never persisted as part of a saved lock file on disk.
+// It's valid to still use other methods of the receiver to access
+// already-stored lock information and to update lock information for an
+// overridden provider, but some callers may need to use ProviderIsOverridden
+// to selectively disregard stored lock information for overridden providers,
+// depending on what they intended to use the lock information for.
+func (l *Locks) SetProviderOverridden(addr addrs.Provider) {
+	if l.overriddenProviders == nil {
+		l.overriddenProviders = make(map[addrs.Provider]struct{})
+	}
+	l.overriddenProviders[addr] = struct{}{}
+}
+
+// ProviderIsOverridden returns true only if the given provider address was
+// previously registered as overridden by calling SetProviderOverridden.
+func (l *Locks) ProviderIsOverridden(addr addrs.Provider) bool {
+	_, ret := l.overriddenProviders[addr]
+	return ret
+}
+
+// SetSameOverriddenProviders updates the receiver to mark as overridden all
+// of the same providers already marked as overridden in the other given locks.
+//
+// This allows propagating override information between different lock objects,
+// as if calling SetProviderOverridden for each address already overridden
+// in the other given locks. If the receiver already has overridden providers,
+// SetSameOverriddenProviders will preserve them.
+func (l *Locks) SetSameOverriddenProviders(other *Locks) {
+	if other == nil {
+		return
+	}
+	for addr := range other.overriddenProviders {
+		l.SetProviderOverridden(addr)
+	}
+}
+
 // NewProviderLock creates a new ProviderLock object that isn't associated
 // with any Locks object.
// From 9e0de1c46483f4bcacbe71b79925dde58408382c Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Fri, 1 Oct 2021 14:50:36 -0700 Subject: [PATCH 125/644] Update CHANGELOG.md --- CHANGELOG.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index f5b11e243e4a..15da0516165b 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -4,6 +4,9 @@ UPGRADE NOTES: * Terraform on macOS now requires macOS 10.13 High Sierra or later; Older macOS versions are no longer supported. * The `terraform graph` command no longer supports `-type=validate` and `-type=eval` options. The validate graph is always the same as the plan graph anyway, and the "eval" graph was just an implementation detail of the `terraform console` command. The default behavior of creating a plan graph should be a reasonable replacement for both of the removed graph modes. (Please note that `terraform graph` is not covered by the Terraform v1.0 compatibility promises, because its behavior inherently exposes Terraform Core implementation details, so we recommend it only for interactive debugging tasks and not for use in automation.) +* `terraform apply` with a previously-saved plan file will now verify that the provider plugin packages used to create the plan fully match the ones used during apply, using the same checksum scheme that Terraform normally uses for the dependency lock file. Previously Terraform was checking consistency of plugins from a plan file using a legacy mechanism which covered only the main plugin executable, not any other files that might be distributed alongside in the plugin package. + + This additional check should not affect typical plugins that conform to the expectation that a plugin package's contents are immutable once released, but may affect a hypothetical in-house plugin that intentionally modifies extra files in its package directory somehow between plan and apply. If you have such a plugin, you'll need to change its approach to store those files in some other location separate from the package directory. This is a minor compatibility break motivated by increasing the assurance that plugins have not been inadvertently or maliciously modified between plan and apply. NEW FEATURES: From 45dab1b956c4828a0b144ad1db9c1bc4564c53bf Mon Sep 17 00:00:00 2001 From: Ram Date: Sat, 2 Oct 2021 17:15:51 +0800 Subject: [PATCH 126/644] replace an with a --- website/docs/cli/commands/state/rm.html.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/docs/cli/commands/state/rm.html.md b/website/docs/cli/commands/state/rm.html.md index 3cc9f297eac7..131c31a815f3 100644 --- a/website/docs/cli/commands/state/rm.html.md +++ b/website/docs/cli/commands/state/rm.html.md @@ -12,7 +12,7 @@ The main function of [Terraform state](/docs/language/state/index.html) is to track the bindings between resource instance addresses in your configuration and the remote objects they represent. Normally Terraform automatically updates the state in response to actions taken when applying a plan, such as -removing a binding for an remote object that has now been deleted. +removing a binding for a remote object that has now been deleted. 
 You can use `terraform state rm` in the less common situation where you wish
 to remove a binding to an existing remote object without first destroying it,

From 18d354223e5a4b039fdf2d10c2c7c6dba6269992 Mon Sep 17 00:00:00 2001
From: James Bardin
Date: Mon, 4 Oct 2021 14:21:16 -0400
Subject: [PATCH 127/644] objchange: fix ProposedNew from null objects

The codepath for AllAttributesNull was not correct for any nested object
types with collections, and should create single null values for the correct
NestingMode rather than a single object with null attributes.

Since there is no reason to descend into nested object types to create null
values, we can drop the AllAttributesNull function altogether and create
null values as needed during ProposedNew.

The corresponding AllBlockAttributesNull was only called internally in 1
location, and simply delegated to schema.EmptyValue. We can reduce the
package surface area by dropping that function too and calling EmptyValue
directly.
---
 internal/plans/objchange/all_null.go       | 33 -------
 internal/plans/objchange/objchange.go      | 16 +++-
 internal/plans/objchange/objchange_test.go | 104 +++++++++++++++
 3 files changed, 115 insertions(+), 38 deletions(-)
 delete mode 100644 internal/plans/objchange/all_null.go

diff --git a/internal/plans/objchange/all_null.go b/internal/plans/objchange/all_null.go
deleted file mode 100644
index fb7ec4cfb35c..000000000000
--- a/internal/plans/objchange/all_null.go
+++ /dev/null
@@ -1,33 +0,0 @@
-package objchange
-
-import (
-	"github.com/hashicorp/terraform/internal/configs/configschema"
-	"github.com/zclconf/go-cty/cty"
-)
-
-// AllBlockAttributesNull constructs a non-null cty.Value of the object type implied
-// by the given schema that has all of its leaf attributes set to null and all
-// of its nested block collections set to zero-length.
-//
-// This simulates what would result from decoding an empty configuration block
-// with the given schema, except that it does not produce errors
-func AllBlockAttributesNull(schema *configschema.Block) cty.Value {
-	// "All attributes null" happens to be the definition of EmptyValue for
-	// a Block, so we can just delegate to that.
-	return schema.EmptyValue()
-}
-
-// AllAttributesNull returns a cty.Value of the object type implied by the given
-// attriubutes that has all of its leaf attributes set to null.
-func AllAttributesNull(attrs map[string]*configschema.Attribute) cty.Value {
-	newAttrs := make(map[string]cty.Value, len(attrs))
-
-	for name, attr := range attrs {
-		if attr.NestedType != nil {
-			newAttrs[name] = AllAttributesNull(attr.NestedType.Attributes)
-		} else {
-			newAttrs[name] = cty.NullVal(attr.Type)
-		}
-	}
-	return cty.ObjectVal(newAttrs)
-}
diff --git a/internal/plans/objchange/objchange.go b/internal/plans/objchange/objchange.go
index 739d666480bc..63e8464510d9 100644
--- a/internal/plans/objchange/objchange.go
+++ b/internal/plans/objchange/objchange.go
@@ -37,7 +37,10 @@ func ProposedNew(schema *configschema.Block, prior, config cty.Value) cty.Value
 	// similar to the result of decoding an empty configuration block,
 	// which simplifies our handling of the top-level attributes/blocks
 	// below by giving us one non-null level of object to pull values from.
- prior = AllBlockAttributesNull(schema) + // + // "All attributes null" happens to be the definition of EmptyValue for + // a Block, so we can just delegate to that + prior = schema.EmptyValue() } return proposedNew(schema, prior, config) } @@ -258,12 +261,15 @@ func proposedNewNestedBlock(schema *configschema.NestedBlock, prior, config cty. } func proposedNewAttributes(attrs map[string]*configschema.Attribute, prior, config cty.Value) map[string]cty.Value { - if prior.IsNull() { - prior = AllAttributesNull(attrs) - } newAttrs := make(map[string]cty.Value, len(attrs)) for name, attr := range attrs { - priorV := prior.GetAttr(name) + var priorV cty.Value + if prior.IsNull() { + priorV = cty.NullVal(prior.Type().AttributeType(name)) + } else { + priorV = prior.GetAttr(name) + } + configV := config.GetAttr(name) var newV cty.Value switch { diff --git a/internal/plans/objchange/objchange_test.go b/internal/plans/objchange/objchange_test.go index b0021fb14b20..32a56320cf50 100644 --- a/internal/plans/objchange/objchange_test.go +++ b/internal/plans/objchange/objchange_test.go @@ -1536,6 +1536,110 @@ func TestProposedNew(t *testing.T) { }), }), }, + "prior null nested objects": { + &configschema.Block{ + Attributes: map[string]*configschema.Attribute{ + "single": { + NestedType: &configschema.Object{ + Nesting: configschema.NestingSingle, + Attributes: map[string]*configschema.Attribute{ + "list": { + NestedType: &configschema.Object{ + Nesting: configschema.NestingList, + Attributes: map[string]*configschema.Attribute{ + "foo": { + Type: cty.String, + }, + }, + }, + Optional: true, + }, + }, + }, + Optional: true, + }, + "map": { + NestedType: &configschema.Object{ + Nesting: configschema.NestingMap, + Attributes: map[string]*configschema.Attribute{ + "map": { + NestedType: &configschema.Object{ + Nesting: configschema.NestingList, + Attributes: map[string]*configschema.Attribute{ + "foo": { + Type: cty.String, + }, + }, + }, + Optional: true, + }, + }, + }, + Optional: true, + }, + }, + }, + cty.NullVal(cty.Object(map[string]cty.Type{ + "single": cty.Object(map[string]cty.Type{ + "list": cty.List(cty.Object(map[string]cty.Type{ + "foo": cty.String, + })), + }), + "map": cty.Map(cty.Object(map[string]cty.Type{ + "list": cty.List(cty.Object(map[string]cty.Type{ + "foo": cty.String, + })), + })), + })), + cty.ObjectVal(map[string]cty.Value{ + "single": cty.ObjectVal(map[string]cty.Value{ + "list": cty.ListVal([]cty.Value{ + cty.ObjectVal(map[string]cty.Value{ + "foo": cty.StringVal("a"), + }), + cty.ObjectVal(map[string]cty.Value{ + "foo": cty.StringVal("b"), + }), + }), + }), + "map": cty.MapVal(map[string]cty.Value{ + "one": cty.ObjectVal(map[string]cty.Value{ + "list": cty.ListVal([]cty.Value{ + cty.ObjectVal(map[string]cty.Value{ + "foo": cty.StringVal("a"), + }), + cty.ObjectVal(map[string]cty.Value{ + "foo": cty.StringVal("b"), + }), + }), + }), + }), + }), + cty.ObjectVal(map[string]cty.Value{ + "single": cty.ObjectVal(map[string]cty.Value{ + "list": cty.ListVal([]cty.Value{ + cty.ObjectVal(map[string]cty.Value{ + "foo": cty.StringVal("a"), + }), + cty.ObjectVal(map[string]cty.Value{ + "foo": cty.StringVal("b"), + }), + }), + }), + "map": cty.MapVal(map[string]cty.Value{ + "one": cty.ObjectVal(map[string]cty.Value{ + "list": cty.ListVal([]cty.Value{ + cty.ObjectVal(map[string]cty.Value{ + "foo": cty.StringVal("a"), + }), + cty.ObjectVal(map[string]cty.Value{ + "foo": cty.StringVal("b"), + }), + }), + }), + }), + }), + }, } for name, test := range tests { From 
58d85fcc2e21c2b68d4b0c391422f5b001be0a9b Mon Sep 17 00:00:00 2001 From: James Bardin Date: Tue, 5 Oct 2021 12:31:23 -0400 Subject: [PATCH 128/644] add test for planing unknown data source values --- internal/plans/objchange/objchange_test.go | 48 ++++++++++++++++++++++ 1 file changed, 48 insertions(+) diff --git a/internal/plans/objchange/objchange_test.go b/internal/plans/objchange/objchange_test.go index 32a56320cf50..4c434d3e42ee 100644 --- a/internal/plans/objchange/objchange_test.go +++ b/internal/plans/objchange/objchange_test.go @@ -1640,6 +1640,54 @@ func TestProposedNew(t *testing.T) { }), }), }, + + // data sources are planned with an unknown value + "unknown prior nested objects": { + &configschema.Block{ + Attributes: map[string]*configschema.Attribute{ + "list": { + NestedType: &configschema.Object{ + Nesting: configschema.NestingList, + Attributes: map[string]*configschema.Attribute{ + "list": { + NestedType: &configschema.Object{ + Nesting: configschema.NestingList, + Attributes: map[string]*configschema.Attribute{ + "foo": { + Type: cty.String, + }, + }, + }, + Computed: true, + }, + }, + }, + Computed: true, + }, + }, + }, + cty.UnknownVal(cty.Object(map[string]cty.Type{ + "List": cty.List(cty.Object(map[string]cty.Type{ + "list": cty.List(cty.Object(map[string]cty.Type{ + "foo": cty.String, + })), + })), + })), + cty.NullVal(cty.Object(map[string]cty.Type{ + "List": cty.List(cty.Object(map[string]cty.Type{ + "list": cty.List(cty.Object(map[string]cty.Type{ + "foo": cty.String, + })), + })), + })), + cty.UnknownVal(cty.Object(map[string]cty.Type{ + "List": cty.List(cty.Object(map[string]cty.Type{ + "list": cty.List(cty.Object(map[string]cty.Type{ + "foo": cty.String, + })), + })), + })), + }, } for name, test := range tests { From d09510a8fbcb4121d5b67d160d3d1ee4811eac9f Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Fri, 1 Oct 2021 14:51:06 -0700 Subject: [PATCH 129/644] command: Early error message for missing cache entries of locked providers In the original incarnation of Meta.providerFactories we were returning into a Meta.contextOpts whose signature didn't allow it to return an error directly, and so we had compromised by making the provider factory functions themselves return errors once called. Subsequent work made Meta.contextOpts need to return an error anyway, but at the time we neglected to update our handling of the providerFactories result, having it still defer the error handling until we finally instantiate a provider. Although that did ultimately get the expected result anyway, the error ended up being reported from deep in the guts of a Terraform Core graph walk, in whichever concurrently-visited graph node happened to try to instantiate the plugin first. This meant that the exact phrasing of the error message would vary between runs and the reporting codepath didn't have enough context to given an actionable suggestion on how to proceed. In this commit we make Meta.contextOpts pass through directly any error that Meta.providerFactories produces, and then make Meta.providerFactories produce a special error type so that Meta.Backend can ultimately return a user-friendly diagnostic message containing a specific suggestion to run "terraform init", along with a short explanation of what a provider plugin is. The reliance here on an implied contract between two functions that are not directly connected in the callstack is non-ideal, and so hopefully we'll revisit this further in future work on the overall architecture of the CLI layer. 
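As a rough sketch of that pattern (simplified for illustration only; the type
names, messages, and helper functions below are stand-ins rather than the
exact code in this change), a named error type can be detected by a caller
closer to the UI using errors.As and translated into an actionable message:

    package main

    import (
        "errors"
        "fmt"
    )

    // pluginErrors collects per-provider problems behind a single error value.
    type pluginErrors map[string]error

    func (errs pluginErrors) Error() string {
        return fmt.Sprintf("%d provider plugin problem(s)", len(errs))
    }

    // loadPlugins stands in for the factory-building step in this sketch.
    func loadPlugins() error {
        return pluginErrors{"hashicorp/null": errors.New("not installed")}
    }

    func main() {
        err := loadPlugins()
        var errs pluginErrors
        if errors.As(err, &errs) {
            // A UI-level caller can now print a specific, actionable message.
            fmt.Println("Required plugins are not installed; run: terraform init")
            return
        }
        fmt.Println("unexpected error:", err)
    }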
To try to make this robust in the meantime though, I wrote it to use the errors.As function to potentially unwrap a wrapped version of our special error type, in case one of the intervening layers is changed at some point to wrap the downstream error before returning it. --- internal/command/meta.go | 15 +------------- internal/command/meta_backend.go | 29 +++++++++++++++++++++++++- internal/command/meta_providers.go | 33 +++++++++++++++++++++++++++--- 3 files changed, 59 insertions(+), 18 deletions(-) diff --git a/internal/command/meta.go b/internal/command/meta.go index 0a4029ef0151..cd9387173f0d 100644 --- a/internal/command/meta.go +++ b/internal/command/meta.go @@ -461,20 +461,7 @@ func (m *Meta) contextOpts() (*terraform.ContextOpts, error) { } else { providerFactories, err := m.providerFactories() if err != nil { - // providerFactories can fail if the plugin selections file is - // invalid in some way, but we don't have any way to report that - // from here so we'll just behave as if no providers are available - // in that case. However, we will produce a warning in case this - // shows up unexpectedly and prompts a bug report. - // This situation shouldn't arise commonly in practice because - // the selections file is generated programmatically. - log.Printf("[WARN] Failed to determine selected providers: %s", err) - - // variable providerFactories may now be incomplete, which could - // lead to errors reported downstream from here. providerFactories - // tries to populate as many providers as possible even in an - // error case, so that operations not using problematic providers - // can still succeed. + return nil, err } opts.Providers = providerFactories opts.Provisioners = m.provisionerFactories() diff --git a/internal/command/meta_backend.go b/internal/command/meta_backend.go index 5d2aa5ed3b91..3828708175ae 100644 --- a/internal/command/meta_backend.go +++ b/internal/command/meta_backend.go @@ -4,8 +4,10 @@ package command // exported and private. import ( + "bytes" "context" "encoding/json" + "errors" "fmt" "log" "path/filepath" @@ -105,7 +107,32 @@ func (m *Meta) Backend(opts *BackendOpts) (backend.Enhanced, tfdiags.Diagnostics // Set up the CLI opts we pass into backends that support it. cliOpts, err := m.backendCLIOpts() if err != nil { - diags = diags.Append(err) + if errs := providerPluginErrors(nil); errors.As(err, &errs) { + // This is a special type returned by m.providerFactories, which + // indicates one or more inconsistencies between the dependency + // lock file and the provider plugins actually available in the + // local cache directory. + var buf bytes.Buffer + for addr, err := range errs { + fmt.Fprintf(&buf, "\n - %s: %s", addr, err) + } + suggestion := "To download the plugins required for this configuration, run:\n terraform init" + if m.RunningInAutomation { + // Don't mention "terraform init" specifically if we're running in an automation wrapper + suggestion = "You must install the required plugins before running Terraform operations." + } + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Required plugins are not installed", + fmt.Sprintf( + "The installed provider plugins are not consistent with the packages selected in the dependency lock file:%s\n\nTerraform uses external plugins to integrate with a variety of different infrastructure services. %s", + buf.String(), suggestion, + ), + )) + } else { + // All other errors just get generic handling. 
+			diags = diags.Append(err)
+		}
 		return nil, diags
 	}
 	cliOpts.Validation = true
diff --git a/internal/command/meta_providers.go b/internal/command/meta_providers.go
index ffe52ffde6c2..c406e4745f23 100644
--- a/internal/command/meta_providers.go
+++ b/internal/command/meta_providers.go
@@ -1,6 +1,7 @@ package command
 import (
+	"bytes"
 	"errors"
 	"fmt"
 	"log"
@@ -8,7 +9,6 @@ import (
 	"os/exec"
 	"strings"
-	"github.com/hashicorp/go-multierror"
 	plugin "github.com/hashicorp/go-plugin"
 	"github.com/hashicorp/terraform/internal/addrs"
@@ -236,7 +236,7 @@ func (m *Meta) providerFactories() (map[addrs.Provider]providers.Factory, error)
 	// where appropriate and so that callers can potentially make use of the
 	// partial result we return if e.g. they want to enumerate which providers
 	// are available, or call into one of the providers that didn't fail.
-	var err error
+	errs := make(map[addrs.Provider]error)
 	// For the providers from the lock file, we expect them to be already
 	// available in the provider cache because "terraform init" should already
@@ -274,7 +274,7 @@ func (m *Meta) providerFactories() (map[addrs.Provider]providers.Factory, error)
 	}
 	for provider, lock := range providerLocks {
 		reportError := func(thisErr error) {
-			err = multierror.Append(err, thisErr)
+			errs[provider] = thisErr
 			// We'll populate a provider factory that just echoes our error
 			// again if called, which allows us to still report a helpful
 			// error even if it gets detected downstream somewhere from the
@@ -324,6 +324,11 @@ func (m *Meta) providerFactories() (map[addrs.Provider]providers.Factory, error)
 	for provider, reattach := range unmanagedProviders {
 		factories[provider] = unmanagedProviderFactory(provider, reattach)
 	}
+
+	var err error
+	if len(errs) > 0 {
+		err = providerPluginErrors(errs)
+	}
 	return factories, err
 }
@@ -475,3 +480,25 @@ func providerFactoryError(err error) providers.Factory {
 		return nil, err
 	}
 }
+
+// providerPluginErrors is an error implementation we can return from
+// Meta.providerFactories to capture potentially multiple errors about the
+// locally-cached plugins (or lack thereof) for particular external providers.
+//
+// Some functions closer to the UI layer can sniff for this error type in order
+// to return a more helpful error message.
+type providerPluginErrors map[addrs.Provider]error
+
+func (errs providerPluginErrors) Error() string {
+	if len(errs) == 1 {
+		for addr, err := range errs {
+			return fmt.Sprintf("%s: %s", addr, err)
+		}
+	}
+	var buf bytes.Buffer
+	fmt.Fprintf(&buf, "missing or corrupted provider plugins:")
+	for addr, err := range errs {
+		fmt.Fprintf(&buf, "\n  - %s: %s", addr, err)
+	}
+	return buf.String()
+}

From 01b22f4b7688c9cf084214ef1dfe51ed7719fe42 Mon Sep 17 00:00:00 2001
From: Martin Atkins
Date: Fri, 1 Oct 2021 16:19:36 -0700
Subject: [PATCH 130/644] command/e2etest: TestProviderTampering

We have various mechanisms that aim to ensure that the installed provider
plugins are consistent with the lock file and that the lock file is
consistent with the provider requirements, and we do have existing unit
tests for them, but all of those cases mock or fake out at least part of the
process and in the past that's caused us to miss usability regressions,
where we still catch the error but do so at the wrong layer and thus
generate error messages lacking useful additional context.

Here we'll add some new end-to-end tests to supplement the existing unit
tests, making sure things work as expected when we assemble the system
together as we would in a release.
These tests cover a number of different ways in which the plugin selections can grow inconsistent. These new tests all run only when we're in a context where we're allowed to access the network, because they exercise the real plugin installer codepath. We could technically build this to use a local filesystem mirror or other such override to avoid that, but the point here is to make sure we see the expected behavior in the main case, and so it's worth the small additional cost of downloading the null provider from the real registry. --- .../command/e2etest/providers_tamper_test.go | 221 ++++++++++++++++++ .../provider-tampering-base.tf | 12 + 2 files changed, 233 insertions(+) create mode 100644 internal/command/e2etest/providers_tamper_test.go create mode 100644 internal/command/e2etest/testdata/provider-tampering-base/provider-tampering-base.tf diff --git a/internal/command/e2etest/providers_tamper_test.go b/internal/command/e2etest/providers_tamper_test.go new file mode 100644 index 000000000000..ac5612192050 --- /dev/null +++ b/internal/command/e2etest/providers_tamper_test.go @@ -0,0 +1,221 @@ +package e2etest + +import ( + "io/ioutil" + "os" + "path/filepath" + "strings" + "testing" + + "github.com/hashicorp/terraform/internal/e2e" + "github.com/hashicorp/terraform/internal/getproviders" +) + +// TestProviderTampering tests various ways that the provider plugins in the +// local cache directory might be modified after an initial "terraform init", +// which other Terraform commands which use those plugins should catch and +// report early. +func TestProviderTampering(t *testing.T) { + // General setup: we'll do a one-off init of a test directory as our + // starting point, and then we'll clone that result for each test so + // that we can save the cost of a repeated re-init with the same + // provider. + t.Parallel() + + // This test reaches out to releases.hashicorp.com to download the + // null provider, so it can only run if network access is allowed. 
+ skipIfCannotAccessNetwork(t) + + fixturePath := filepath.Join("testdata", "provider-tampering-base") + tf := e2e.NewBinary(terraformBin, fixturePath) + defer tf.Close() + + stdout, stderr, err := tf.Run("init") + if err != nil { + t.Fatalf("unexpected init error: %s\nstderr:\n%s", err, stderr) + } + if !strings.Contains(stdout, "Installing hashicorp/null v") { + t.Errorf("null provider download message is missing from init output:\n%s", stdout) + t.Logf("(this can happen if you have a copy of the plugin in one of the global plugin search dirs)") + } + + seedDir := tf.WorkDir() + const providerVersion = "3.1.0" // must match the version in the fixture config + pluginDir := ".terraform/providers/registry.terraform.io/hashicorp/null/" + providerVersion + "/" + getproviders.CurrentPlatform.String() + pluginExe := pluginDir + "/terraform-provider-null_v" + providerVersion + "_x5" + if getproviders.CurrentPlatform.OS == "windows" { + pluginExe += ".exe" // ugh + } + + t.Run("cache dir totally gone", func(t *testing.T) { + tf := e2e.NewBinary(terraformBin, seedDir) + defer tf.Close() + workDir := tf.WorkDir() + + err := os.RemoveAll(filepath.Join(workDir, ".terraform")) + if err != nil { + t.Fatal(err) + } + + _, stderr, err := tf.Run("plan") + if err == nil { + t.Fatalf("unexpected plan success\nstdout:\n%s", stdout) + } + if want := `registry.terraform.io/hashicorp/null: there is no package for registry.terraform.io/hashicorp/null 3.1.0 cached in .terraform/providers`; !strings.Contains(stderr, want) { + t.Errorf("missing expected error message\nwant substring: %s\ngot:\n%s", want, stderr) + } + if want := `terraform init`; !strings.Contains(stderr, want) { + t.Errorf("missing expected error message\nwant substring: %s\ngot:\n%s", want, stderr) + } + }) + t.Run("null plugin package modified before plan", func(t *testing.T) { + tf := e2e.NewBinary(terraformBin, seedDir) + defer tf.Close() + workDir := tf.WorkDir() + + err := ioutil.WriteFile(filepath.Join(workDir, pluginExe), []byte("tamper"), 0600) + if err != nil { + t.Fatal(err) + } + + stdout, stderr, err := tf.Run("plan") + if err == nil { + t.Fatalf("unexpected plan success\nstdout:\n%s", stdout) + } + if want := `registry.terraform.io/hashicorp/null: the cached package for registry.terraform.io/hashicorp/null 3.1.0 (in .terraform/providers) does not match any of the checksums recorded in the dependency lock file`; !strings.Contains(stderr, want) { + t.Errorf("missing expected error message\nwant substring: %s\ngot:\n%s", want, stderr) + } + if want := `terraform init`; !strings.Contains(stderr, want) { + t.Errorf("missing expected error message\nwant substring: %s\ngot:\n%s", want, stderr) + } + }) + t.Run("version constraint changed in config before plan", func(t *testing.T) { + tf := e2e.NewBinary(terraformBin, seedDir) + defer tf.Close() + workDir := tf.WorkDir() + + err := ioutil.WriteFile(filepath.Join(workDir, "provider-tampering-base.tf"), []byte(` + terraform { + required_providers { + null = { + source = "hashicorp/null" + version = "1.0.0" + } + } + } + `), 0600) + if err != nil { + t.Fatal(err) + } + + stdout, stderr, err := tf.Run("plan") + if err == nil { + t.Fatalf("unexpected plan success\nstdout:\n%s", stdout) + } + if want := `provider registry.terraform.io/hashicorp/null: locked version selection 3.1.0 doesn't match the updated version constraints "1.0.0"`; !strings.Contains(stderr, want) { + t.Errorf("missing expected error message\nwant substring: %s\ngot:\n%s", want, stderr) + } + if want := `terraform init 
-upgrade`; !strings.Contains(stderr, want) { + t.Errorf("missing expected error message\nwant substring: %s\ngot:\n%s", want, stderr) + } + }) + t.Run("lock file modified before plan", func(t *testing.T) { + tf := e2e.NewBinary(terraformBin, seedDir) + defer tf.Close() + workDir := tf.WorkDir() + + // NOTE: We're just emptying out the lock file here because that's + // good enough for what we're trying to assert. The leaf codepath + // that generates this family of errors has some different variations + // of this error message for otehr sorts of inconsistency, but those + // are tested more thoroughly over in the "configs" package, which is + // ultimately responsible for that logic. + err := ioutil.WriteFile(filepath.Join(workDir, ".terraform.lock.hcl"), []byte(``), 0600) + if err != nil { + t.Fatal(err) + } + + stdout, stderr, err := tf.Run("plan") + if err == nil { + t.Fatalf("unexpected plan success\nstdout:\n%s", stdout) + } + if want := `provider registry.terraform.io/hashicorp/null: required by this configuration but no version is selected`; !strings.Contains(stderr, want) { + t.Errorf("missing expected error message\nwant substring: %s\ngot:\n%s", want, stderr) + } + if want := `terraform init`; !strings.Contains(stderr, want) { + t.Errorf("missing expected error message\nwant substring: %s\ngot:\n%s", want, stderr) + } + }) + t.Run("lock file modified after plan", func(t *testing.T) { + tf := e2e.NewBinary(terraformBin, seedDir) + defer tf.Close() + workDir := tf.WorkDir() + + _, stderr, err := tf.Run("plan", "-out", "tfplan") + if err != nil { + t.Fatalf("unexpected plan failure\nstderr:\n%s", stderr) + } + + err = os.Remove(filepath.Join(workDir, ".terraform.lock.hcl")) + if err != nil { + t.Fatal(err) + } + + stdout, stderr, err := tf.Run("apply", "tfplan") + if err == nil { + t.Fatalf("unexpected apply success\nstdout:\n%s", stdout) + } + if want := `provider registry.terraform.io/hashicorp/null: required by this configuration but no version is selected`; !strings.Contains(stderr, want) { + t.Errorf("missing expected error message\nwant substring: %s\ngot:\n%s", want, stderr) + } + if want := `Create a new plan from the updated configuration.`; !strings.Contains(stderr, want) { + t.Errorf("missing expected error message\nwant substring: %s\ngot:\n%s", want, stderr) + } + }) + t.Run("plugin cache dir entirely removed after plan", func(t *testing.T) { + tf := e2e.NewBinary(terraformBin, seedDir) + defer tf.Close() + workDir := tf.WorkDir() + + _, stderr, err := tf.Run("plan", "-out", "tfplan") + if err != nil { + t.Fatalf("unexpected plan failure\nstderr:\n%s", stderr) + } + + err = os.RemoveAll(filepath.Join(workDir, ".terraform")) + if err != nil { + t.Fatal(err) + } + + stdout, stderr, err := tf.Run("apply", "tfplan") + if err == nil { + t.Fatalf("unexpected apply success\nstdout:\n%s", stdout) + } + if want := `registry.terraform.io/hashicorp/null: there is no package for registry.terraform.io/hashicorp/null 3.1.0 cached in .terraform/providers`; !strings.Contains(stderr, want) { + t.Errorf("missing expected error message\nwant substring: %s\ngot:\n%s", want, stderr) + } + }) + t.Run("null plugin package modified after plan", func(t *testing.T) { + tf := e2e.NewBinary(terraformBin, seedDir) + defer tf.Close() + workDir := tf.WorkDir() + + _, stderr, err := tf.Run("plan", "-out", "tfplan") + if err != nil { + t.Fatalf("unexpected plan failure\nstderr:\n%s", stderr) + } + + err = ioutil.WriteFile(filepath.Join(workDir, pluginExe), []byte("tamper"), 0600) + if err != nil { + 
t.Fatal(err) + } + + stdout, stderr, err := tf.Run("apply", "tfplan") + if err == nil { + t.Fatalf("unexpected apply success\nstdout:\n%s", stdout) + } + if want := `registry.terraform.io/hashicorp/null: the cached package for registry.terraform.io/hashicorp/null 3.1.0 (in .terraform/providers) does not match any of the checksums recorded in the dependency lock file`; !strings.Contains(stderr, want) { + t.Errorf("missing expected error message\nwant substring: %s\ngot:\n%s", want, stderr) + } + }) +} diff --git a/internal/command/e2etest/testdata/provider-tampering-base/provider-tampering-base.tf b/internal/command/e2etest/testdata/provider-tampering-base/provider-tampering-base.tf new file mode 100644 index 000000000000..87bd9ac20058 --- /dev/null +++ b/internal/command/e2etest/testdata/provider-tampering-base/provider-tampering-base.tf @@ -0,0 +1,12 @@ +terraform { + required_providers { + null = { + # Our version is intentionally fixed so that we have a fixed + # test case here, though we might have to update this in future + # if e.g. Terraform stops supporting plugin protocol 5, or if + # the null provider is yanked from the registry for some reason. + source = "hashicorp/null" + version = "3.1.0" + } + } +} From 44764ffdee24e2081f939ca14d1dfab2d2a33418 Mon Sep 17 00:00:00 2001 From: Jeff Escalante Date: Tue, 5 Oct 2021 16:31:02 -0400 Subject: [PATCH 131/644] fix broken logo in readme (#29705) --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index c11e38babf79..4f235995f237 100644 --- a/README.md +++ b/README.md @@ -7,7 +7,7 @@ Terraform - Tutorials: [HashiCorp's Learn Platform](https://learn.hashicorp.com/terraform) - Certification Exam: [HashiCorp Certified: Terraform Associate](https://www.hashicorp.com/certification/#hashicorp-certified-terraform-associate) -Terraform +Terraform Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions. 
From bea7e3ebcebe86144d5d07acf7c0e222d3d2ab1f Mon Sep 17 00:00:00 2001 From: Omar Ismail Date: Tue, 5 Oct 2021 16:36:50 -0400 Subject: [PATCH 132/644] Backend State Migration: change variable names from one/two to source/destination (#29699) --- internal/command/meta_backend.go | 24 +-- internal/command/meta_backend_migrate.go | 190 +++++++++++------------ 2 files changed, 107 insertions(+), 107 deletions(-) diff --git a/internal/command/meta_backend.go b/internal/command/meta_backend.go index 3828708175ae..4d9b115d417a 100644 --- a/internal/command/meta_backend.go +++ b/internal/command/meta_backend.go @@ -753,10 +753,10 @@ func (m *Meta) backend_c_r_S(c *configs.Backend, cHash int, sMgr *clistate.Local // Perform the migration err := m.backendMigrateState(&backendMigrateOpts{ - OneType: s.Backend.Type, - TwoType: "local", - One: b, - Two: localB, + SourceType: s.Backend.Type, + DestinationType: "local", + Source: b, + Destination: localB, }) if err != nil { diags = diags.Append(err) @@ -829,10 +829,10 @@ func (m *Meta) backend_C_r_s(c *configs.Backend, cHash int, sMgr *clistate.Local if len(localStates) > 0 { // Perform the migration err = m.backendMigrateState(&backendMigrateOpts{ - OneType: "local", - TwoType: c.Type, - One: localB, - Two: b, + SourceType: "local", + DestinationType: c.Type, + Source: localB, + Destination: b, }) if err != nil { diags = diags.Append(err) @@ -944,10 +944,10 @@ func (m *Meta) backend_C_r_S_changed(c *configs.Backend, cHash int, sMgr *clista // Perform the migration err := m.backendMigrateState(&backendMigrateOpts{ - OneType: s.Backend.Type, - TwoType: c.Type, - One: oldB, - Two: b, + SourceType: s.Backend.Type, + DestinationType: c.Type, + Source: oldB, + Destination: b, }) if err != nil { diags = diags.Append(err) diff --git a/internal/command/meta_backend_migrate.go b/internal/command/meta_backend_migrate.go index 0021b5850a53..d868200f1bda 100644 --- a/internal/command/meta_backend_migrate.go +++ b/internal/command/meta_backend_migrate.go @@ -21,14 +21,14 @@ import ( ) type backendMigrateOpts struct { - OneType, TwoType string - One, Two backend.Backend + SourceType, DestinationType string + Source, Destination backend.Backend // Fields below are set internally when migrate is called - oneEnv string // source env - twoEnv string // dest env - force bool // if true, won't ask for confirmation + sourceWorkspace string + destinationWorkspace string + force bool // if true, won't ask for confirmation } // backendMigrateState handles migrating (copying) state from one backend @@ -43,45 +43,45 @@ type backendMigrateOpts struct { // // This will attempt to lock both states for the migration. func (m *Meta) backendMigrateState(opts *backendMigrateOpts) error { - log.Printf("[TRACE] backendMigrateState: need to migrate from %q to %q backend config", opts.OneType, opts.TwoType) + log.Printf("[TRACE] backendMigrateState: need to migrate from %q to %q backend config", opts.SourceType, opts.DestinationType) // We need to check what the named state status is. If we're converting // from multi-state to single-state for example, we need to handle that. 
- var oneSingle, twoSingle bool - oneStates, err := opts.One.Workspaces() + var sourceSingleState, destinationSingleState bool + sourceWorkspaces, err := opts.Source.Workspaces() if err == backend.ErrWorkspacesNotSupported { - oneSingle = true + sourceSingleState = true err = nil } if err != nil { return fmt.Errorf(strings.TrimSpace( - errMigrateLoadStates), opts.OneType, err) + errMigrateLoadStates), opts.SourceType, err) } - twoWorkspaces, err := opts.Two.Workspaces() + destinationWorkspaces, err := opts.Destination.Workspaces() if err == backend.ErrWorkspacesNotSupported { - twoSingle = true + destinationSingleState = true err = nil } if err != nil { return fmt.Errorf(strings.TrimSpace( - errMigrateLoadStates), opts.TwoType, err) + errMigrateLoadStates), opts.DestinationType, err) } // Set up defaults - opts.oneEnv = backend.DefaultStateName - opts.twoEnv = backend.DefaultStateName + opts.sourceWorkspace = backend.DefaultStateName + opts.destinationWorkspace = backend.DefaultStateName opts.force = m.forceInitCopy // Disregard remote Terraform version for the state source backend. If it's a // Terraform Cloud remote backend, we don't care about the remote version, // as we are migrating away and will not break a remote workspace. - m.ignoreRemoteBackendVersionConflict(opts.One) + m.ignoreRemoteBackendVersionConflict(opts.Source) - for _, twoWorkspace := range twoWorkspaces { + for _, workspace := range destinationWorkspaces { // Check the remote Terraform version for the state destination backend. If // it's a Terraform Cloud remote backend, we want to ensure that we don't // break the workspace by uploading an incompatible state file. - diags := m.remoteBackendVersionCheck(opts.Two, twoWorkspace) + diags := m.remoteBackendVersionCheck(opts.Destination, workspace) if diags.HasErrors() { return diags.Err() } @@ -92,20 +92,20 @@ func (m *Meta) backendMigrateState(opts *backendMigrateOpts) error { switch { // Single-state to single-state. This is the easiest case: we just // copy the default state directly. - case oneSingle && twoSingle: + case sourceSingleState && destinationSingleState: return m.backendMigrateState_s_s(opts) // Single-state to multi-state. This is easy since we just copy // the default state and ignore the rest in the destination. - case oneSingle && !twoSingle: + case sourceSingleState && !destinationSingleState: return m.backendMigrateState_s_s(opts) // Multi-state to single-state. If the source has more than the default // state this is complicated since we have to ask the user what to do. - case !oneSingle && twoSingle: + case !sourceSingleState && destinationSingleState: // If the source only has one state and it is the default, // treat it as if it doesn't support multi-state. - if len(oneStates) == 1 && oneStates[0] == backend.DefaultStateName { + if len(sourceWorkspaces) == 1 && sourceWorkspaces[0] == backend.DefaultStateName { return m.backendMigrateState_s_s(opts) } @@ -113,10 +113,10 @@ func (m *Meta) backendMigrateState(opts *backendMigrateOpts) error { // Multi-state to multi-state. We merge the states together (migrating // each from the source to the destination one by one). - case !oneSingle && !twoSingle: + case !sourceSingleState && !destinationSingleState: // If the source only has one state and it is the default, // treat it as if it doesn't support multi-state. 
- if len(oneStates) == 1 && oneStates[0] == backend.DefaultStateName { + if len(sourceWorkspaces) == 1 && sourceWorkspaces[0] == backend.DefaultStateName { return m.backendMigrateState_s_s(opts) } @@ -154,10 +154,10 @@ func (m *Meta) backendMigrateState_S_S(opts *backendMigrateOpts) error { Id: "backend-migrate-multistate-to-multistate", Query: fmt.Sprintf( "Do you want to migrate all workspaces to %q?", - opts.TwoType), + opts.DestinationType), Description: fmt.Sprintf( strings.TrimSpace(inputBackendMigrateMultiToMulti), - opts.OneType, opts.TwoType), + opts.SourceType, opts.DestinationType), }) if err != nil { return fmt.Errorf( @@ -169,20 +169,20 @@ func (m *Meta) backendMigrateState_S_S(opts *backendMigrateOpts) error { } // Read all the states - oneStates, err := opts.One.Workspaces() + sourceWorkspaces, err := opts.Source.Workspaces() if err != nil { return fmt.Errorf(strings.TrimSpace( - errMigrateLoadStates), opts.OneType, err) + errMigrateLoadStates), opts.SourceType, err) } // Sort the states so they're always copied alphabetically - sort.Strings(oneStates) + sort.Strings(sourceWorkspaces) // Go through each and migrate - for _, name := range oneStates { + for _, name := range sourceWorkspaces { // Copy the same names - opts.oneEnv = name - opts.twoEnv = name + opts.sourceWorkspace = name + opts.destinationWorkspace = name // Force it, we confirmed above opts.force = true @@ -190,7 +190,7 @@ func (m *Meta) backendMigrateState_S_S(opts *backendMigrateOpts) error { // Perform the migration if err := m.backendMigrateState_s_s(opts); err != nil { return fmt.Errorf(strings.TrimSpace( - errMigrateMulti), name, opts.OneType, opts.TwoType, err) + errMigrateMulti), name, opts.SourceType, opts.DestinationType, err) } } @@ -199,7 +199,7 @@ func (m *Meta) backendMigrateState_S_S(opts *backendMigrateOpts) error { // Multi-state to single state. func (m *Meta) backendMigrateState_S_s(opts *backendMigrateOpts) error { - log.Printf("[TRACE] backendMigrateState: target backend type %q does not support named workspaces", opts.TwoType) + log.Printf("[TRACE] backendMigrateState: destination backend type %q does not support named workspaces", opts.DestinationType) currentEnv, err := m.Workspace() if err != nil { @@ -215,10 +215,10 @@ func (m *Meta) backendMigrateState_S_s(opts *backendMigrateOpts) error { Query: fmt.Sprintf( "Destination state %q doesn't support workspaces.\n"+ "Do you want to copy only your current workspace?", - opts.TwoType), + opts.DestinationType), Description: fmt.Sprintf( strings.TrimSpace(inputBackendMigrateMultiToSingle), - opts.OneType, opts.TwoType, currentEnv), + opts.SourceType, opts.DestinationType, currentEnv), }) if err != nil { return fmt.Errorf( @@ -231,7 +231,7 @@ func (m *Meta) backendMigrateState_S_s(opts *backendMigrateOpts) error { } // Copy the default state - opts.oneEnv = currentEnv + opts.sourceWorkspace = currentEnv // now switch back to the default env so we can acccess the new backend m.SetWorkspace(backend.DefaultStateName) @@ -241,46 +241,46 @@ func (m *Meta) backendMigrateState_S_s(opts *backendMigrateOpts) error { // Single state to single state, assumed default state name. 
func (m *Meta) backendMigrateState_s_s(opts *backendMigrateOpts) error { - log.Printf("[TRACE] backendMigrateState: migrating %q workspace to %q workspace", opts.oneEnv, opts.twoEnv) + log.Printf("[TRACE] backendMigrateState: migrating %q workspace to %q workspace", opts.sourceWorkspace, opts.destinationWorkspace) - stateOne, err := opts.One.StateMgr(opts.oneEnv) + sourceState, err := opts.Source.StateMgr(opts.sourceWorkspace) if err != nil { return fmt.Errorf(strings.TrimSpace( - errMigrateSingleLoadDefault), opts.OneType, err) + errMigrateSingleLoadDefault), opts.SourceType, err) } - if err := stateOne.RefreshState(); err != nil { + if err := sourceState.RefreshState(); err != nil { return fmt.Errorf(strings.TrimSpace( - errMigrateSingleLoadDefault), opts.OneType, err) + errMigrateSingleLoadDefault), opts.SourceType, err) } // Do not migrate workspaces without state. - if stateOne.State().Empty() { + if sourceState.State().Empty() { log.Print("[TRACE] backendMigrateState: source workspace has empty state, so nothing to migrate") return nil } - stateTwo, err := opts.Two.StateMgr(opts.twoEnv) + destinationState, err := opts.Destination.StateMgr(opts.destinationWorkspace) if err == backend.ErrDefaultWorkspaceNotSupported { // If the backend doesn't support using the default state, we ask the user // for a new name and migrate the default state to the given named state. - stateTwo, err = func() (statemgr.Full, error) { - log.Print("[TRACE] backendMigrateState: target doesn't support a default workspace, so we must prompt for a new name") + destinationState, err = func() (statemgr.Full, error) { + log.Print("[TRACE] backendMigrateState: destination doesn't support a default workspace, so we must prompt for a new name") name, err := m.UIInput().Input(context.Background(), &terraform.InputOpts{ Id: "new-state-name", Query: fmt.Sprintf( "[reset][bold][yellow]The %q backend configuration only allows "+ "named workspaces![reset]", - opts.TwoType), + opts.DestinationType), Description: strings.TrimSpace(inputBackendNewWorkspaceName), }) if err != nil { return nil, fmt.Errorf("Error asking for new state name: %s", err) } - // Update the name of the target state. - opts.twoEnv = name + // Update the name of the destination state. + opts.destinationWorkspace = name - stateTwo, err := opts.Two.StateMgr(opts.twoEnv) + destinationState, err := opts.Destination.StateMgr(opts.destinationWorkspace) if err != nil { return nil, err } @@ -291,34 +291,34 @@ func (m *Meta) backendMigrateState_s_s(opts *backendMigrateOpts) error { // If the currently selected workspace is the default workspace, then set // the named workspace as the new selected workspace. if workspace == backend.DefaultStateName { - if err := m.SetWorkspace(opts.twoEnv); err != nil { + if err := m.SetWorkspace(opts.destinationWorkspace); err != nil { return nil, fmt.Errorf("Failed to set new workspace: %s", err) } } - return stateTwo, nil + return destinationState, nil }() } if err != nil { return fmt.Errorf(strings.TrimSpace( - errMigrateSingleLoadDefault), opts.TwoType, err) + errMigrateSingleLoadDefault), opts.DestinationType, err) } - if err := stateTwo.RefreshState(); err != nil { + if err := destinationState.RefreshState(); err != nil { return fmt.Errorf(strings.TrimSpace( - errMigrateSingleLoadDefault), opts.TwoType, err) + errMigrateSingleLoadDefault), opts.DestinationType, err) } // Check if we need migration at all. // This is before taking a lock, because they may also correspond to the same lock. 
- one := stateOne.State() - two := stateTwo.State() + source := sourceState.State() + destination := destinationState.State() // no reason to migrate if the state is already there - if one.Equal(two) { + if source.Equal(destination) { // Equal isn't identical; it doesn't check lineage. - sm1, _ := stateOne.(statemgr.PersistentMeta) - sm2, _ := stateTwo.(statemgr.PersistentMeta) - if one != nil && two != nil { + sm1, _ := sourceState.(statemgr.PersistentMeta) + sm2, _ := destinationState.(statemgr.PersistentMeta) + if source != nil && destination != nil { if sm1 == nil || sm2 == nil { log.Print("[TRACE] backendMigrateState: both source and destination workspaces have no state, so no migration is needed") return nil @@ -336,56 +336,56 @@ func (m *Meta) backendMigrateState_s_s(opts *backendMigrateOpts) error { view := views.NewStateLocker(arguments.ViewHuman, m.View) locker := clistate.NewLocker(m.stateLockTimeout, view) - lockerOne := locker.WithContext(lockCtx) - if diags := lockerOne.Lock(stateOne, "migration source state"); diags.HasErrors() { + lockerSource := locker.WithContext(lockCtx) + if diags := lockerSource.Lock(sourceState, "migration source state"); diags.HasErrors() { return diags.Err() } - defer lockerOne.Unlock() + defer lockerSource.Unlock() - lockerTwo := locker.WithContext(lockCtx) - if diags := lockerTwo.Lock(stateTwo, "migration destination state"); diags.HasErrors() { + lockerDestination := locker.WithContext(lockCtx) + if diags := lockerDestination.Lock(destinationState, "migration destination state"); diags.HasErrors() { return diags.Err() } - defer lockerTwo.Unlock() + defer lockerDestination.Unlock() // We now own a lock, so double check that we have the version // corresponding to the lock. log.Print("[TRACE] backendMigrateState: refreshing source workspace state") - if err := stateOne.RefreshState(); err != nil { + if err := sourceState.RefreshState(); err != nil { return fmt.Errorf(strings.TrimSpace( - errMigrateSingleLoadDefault), opts.OneType, err) + errMigrateSingleLoadDefault), opts.SourceType, err) } - log.Print("[TRACE] backendMigrateState: refreshing target workspace state") - if err := stateTwo.RefreshState(); err != nil { + log.Print("[TRACE] backendMigrateState: refreshing destination workspace state") + if err := destinationState.RefreshState(); err != nil { return fmt.Errorf(strings.TrimSpace( - errMigrateSingleLoadDefault), opts.OneType, err) + errMigrateSingleLoadDefault), opts.SourceType, err) } - one = stateOne.State() - two = stateTwo.State() + source = sourceState.State() + destination = destinationState.State() } var confirmFunc func(statemgr.Full, statemgr.Full, *backendMigrateOpts) (bool, error) switch { // No migration necessary - case one.Empty() && two.Empty(): + case source.Empty() && destination.Empty(): log.Print("[TRACE] backendMigrateState: both source and destination workspaces have empty state, so no migration is required") return nil // No migration necessary if we're inheriting state. - case one.Empty() && !two.Empty(): + case source.Empty() && !destination.Empty(): log.Print("[TRACE] backendMigrateState: source workspace has empty state, so no migration is required") return nil // We have existing state moving into no state. Ask the user if // they'd like to do this. 
- case !one.Empty() && two.Empty(): - log.Print("[TRACE] backendMigrateState: target workspace has empty state, so might copy source workspace state") + case !source.Empty() && destination.Empty(): + log.Print("[TRACE] backendMigrateState: destination workspace has empty state, so might copy source workspace state") confirmFunc = m.backendMigrateEmptyConfirm // Both states are non-empty, meaning we need to determine which // state should be used and update accordingly. - case !one.Empty() && !two.Empty(): + case !source.Empty() && !destination.Empty(): log.Print("[TRACE] backendMigrateState: both source and destination workspaces have states, so might overwrite destination with source") confirmFunc = m.backendMigrateNonEmptyConfirm } @@ -402,7 +402,7 @@ func (m *Meta) backendMigrateState_s_s(opts *backendMigrateOpts) error { } // Confirm with the user whether we want to copy state over - confirm, err := confirmFunc(stateOne, stateTwo, opts) + confirm, err := confirmFunc(sourceState, destinationState, opts) if err != nil { log.Print("[TRACE] backendMigrateState: error reading input, so aborting migration") return err @@ -417,36 +417,36 @@ func (m *Meta) backendMigrateState_s_s(opts *backendMigrateOpts) error { // includes preserving any lineage/serial information where possible, if // both managers support such metadata. log.Print("[TRACE] backendMigrateState: migration confirmed, so migrating") - if err := statemgr.Migrate(stateTwo, stateOne); err != nil { + if err := statemgr.Migrate(destinationState, sourceState); err != nil { return fmt.Errorf(strings.TrimSpace(errBackendStateCopy), - opts.OneType, opts.TwoType, err) + opts.SourceType, opts.DestinationType, err) } - if err := stateTwo.PersistState(); err != nil { + if err := destinationState.PersistState(); err != nil { return fmt.Errorf(strings.TrimSpace(errBackendStateCopy), - opts.OneType, opts.TwoType, err) + opts.SourceType, opts.DestinationType, err) } // And we're done. 
return nil } -func (m *Meta) backendMigrateEmptyConfirm(one, two statemgr.Full, opts *backendMigrateOpts) (bool, error) { +func (m *Meta) backendMigrateEmptyConfirm(source, destination statemgr.Full, opts *backendMigrateOpts) (bool, error) { inputOpts := &terraform.InputOpts{ Id: "backend-migrate-copy-to-empty", Query: "Do you want to copy existing state to the new backend?", Description: fmt.Sprintf( strings.TrimSpace(inputBackendMigrateEmpty), - opts.OneType, opts.TwoType), + opts.SourceType, opts.DestinationType), } return m.confirm(inputOpts) } func (m *Meta) backendMigrateNonEmptyConfirm( - stateOne, stateTwo statemgr.Full, opts *backendMigrateOpts) (bool, error) { + sourceState, destinationState statemgr.Full, opts *backendMigrateOpts) (bool, error) { // We need to grab both states so we can write them to a file - one := stateOne.State() - two := stateTwo.State() + source := sourceState.State() + destination := destinationState.State() // Save both to a temporary td, err := ioutil.TempDir("", "terraform") @@ -462,12 +462,12 @@ func (m *Meta) backendMigrateNonEmptyConfirm( } // Write the states - onePath := filepath.Join(td, fmt.Sprintf("1-%s.tfstate", opts.OneType)) - twoPath := filepath.Join(td, fmt.Sprintf("2-%s.tfstate", opts.TwoType)) - if err := saveHelper(opts.OneType, onePath, one); err != nil { + sourcePath := filepath.Join(td, fmt.Sprintf("1-%s.tfstate", opts.SourceType)) + destinationPath := filepath.Join(td, fmt.Sprintf("2-%s.tfstate", opts.DestinationType)) + if err := saveHelper(opts.SourceType, sourcePath, source); err != nil { return false, fmt.Errorf("Error saving temporary state: %s", err) } - if err := saveHelper(opts.TwoType, twoPath, two); err != nil { + if err := saveHelper(opts.DestinationType, destinationPath, destination); err != nil { return false, fmt.Errorf("Error saving temporary state: %s", err) } @@ -477,7 +477,7 @@ func (m *Meta) backendMigrateNonEmptyConfirm( Query: "Do you want to copy existing state to the new backend?", Description: fmt.Sprintf( strings.TrimSpace(inputBackendMigrateNonEmpty), - opts.OneType, opts.TwoType, onePath, twoPath), + opts.SourceType, opts.DestinationType, sourcePath, destinationPath), } // Confirm with the user that the copy should occur @@ -553,7 +553,7 @@ const inputBackendMigrateMultiToSingle = ` The existing %[1]q backend supports workspaces and you currently are using more than one. The newly configured %[2]q backend doesn't support workspaces. If you continue, Terraform will copy your current workspace %[3]q -to the default workspace in the target backend. Your existing workspaces in the +to the default workspace in the new backend. Your existing workspaces in the source backend won't be modified. If you want to switch workspaces, back them up, or cancel altogether, answer "no" and Terraform will abort. 
` From b9f3dab03579a89225ed03c692f31902719948ae Mon Sep 17 00:00:00 2001 From: Alisdair McDiarmid Date: Wed, 1 Sep 2021 11:17:13 -0400 Subject: [PATCH 133/644] json-output: Release format version 1.0 --- internal/command/jsonplan/plan.go | 2 +- internal/command/jsonprovider/provider.go | 2 +- internal/command/jsonstate/state.go | 2 +- .../testdata/providers-schema/basic/output.json | 2 +- .../testdata/providers-schema/empty/output.json | 4 ++-- .../providers-schema/required/output.json | 2 +- .../testdata/show-json-sensitive/output.json | 4 ++-- .../testdata/show-json-state/basic/output.json | 2 +- .../testdata/show-json-state/empty/output.json | 4 ++-- .../show-json-state/modules/output.json | 2 +- .../sensitive-variables/output.json | 2 +- .../testdata/show-json/basic-create/output.json | 4 ++-- .../testdata/show-json/basic-delete/output.json | 4 ++-- .../testdata/show-json/basic-update/output.json | 4 ++-- .../testdata/show-json/drift/output.json | 4 ++-- .../show-json/module-depends-on/output.json | 2 +- .../testdata/show-json/modules/output.json | 4 ++-- .../testdata/show-json/moved-drift/output.json | 4 ++-- .../testdata/show-json/moved/output.json | 4 ++-- .../show-json/multi-resource-update/output.json | 4 ++-- .../show-json/nested-modules/output.json | 2 +- .../provider-version-no-config/output.json | 4 ++-- .../show-json/provider-version/output.json | 4 ++-- .../show-json/requires-replace/output.json | 4 ++-- .../show-json/sensitive-values/output.json | 4 ++-- .../incorrectmodulename/output.json | 2 +- .../validate-invalid/interpolation/output.json | 2 +- .../missing_defined_var/output.json | 2 +- .../validate-invalid/missing_quote/output.json | 2 +- .../validate-invalid/missing_var/output.json | 2 +- .../multiple_modules/output.json | 2 +- .../multiple_providers/output.json | 2 +- .../multiple_resources/output.json | 2 +- .../testdata/validate-invalid/output.json | 2 +- .../validate-invalid/outputs/output.json | 2 +- .../command/testdata/validate-valid/output.json | 2 +- internal/command/views/json_view.go | 2 +- internal/command/views/validate.go | 2 +- .../docs/cli/commands/providers/schema.html.md | 15 +++++++++++++-- website/docs/cli/commands/validate.html.md | 17 ++++++++++++----- website/docs/internals/json-format.html.md | 15 +++++++++++++-- .../docs/internals/machine-readable-ui.html.md | 13 ++++++++++++- 42 files changed, 103 insertions(+), 63 deletions(-) diff --git a/internal/command/jsonplan/plan.go b/internal/command/jsonplan/plan.go index c0b726422871..64d77c05a54e 100644 --- a/internal/command/jsonplan/plan.go +++ b/internal/command/jsonplan/plan.go @@ -22,7 +22,7 @@ import ( // FormatVersion represents the version of the json format and will be // incremented for any change to this format that requires changes to a // consuming parser. -const FormatVersion = "0.2" +const FormatVersion = "1.0" // Plan is the top-level representation of the json format of a plan. It includes // the complete config and current state. diff --git a/internal/command/jsonprovider/provider.go b/internal/command/jsonprovider/provider.go index b507bc242e9e..4487db4987ae 100644 --- a/internal/command/jsonprovider/provider.go +++ b/internal/command/jsonprovider/provider.go @@ -9,7 +9,7 @@ import ( // FormatVersion represents the version of the json format and will be // incremented for any change to this format that requires changes to a // consuming parser. 
-const FormatVersion = "0.2" +const FormatVersion = "1.0" // providers is the top-level object returned when exporting provider schemas type providers struct { diff --git a/internal/command/jsonstate/state.go b/internal/command/jsonstate/state.go index 341040d2d1c9..46532875c334 100644 --- a/internal/command/jsonstate/state.go +++ b/internal/command/jsonstate/state.go @@ -18,7 +18,7 @@ import ( // FormatVersion represents the version of the json format and will be // incremented for any change to this format that requires changes to a // consuming parser. -const FormatVersion = "0.2" +const FormatVersion = "1.0" // state is the top-level representation of the json format of a terraform // state. diff --git a/internal/command/testdata/providers-schema/basic/output.json b/internal/command/testdata/providers-schema/basic/output.json index f14786c3e31e..dfac55b38c35 100644 --- a/internal/command/testdata/providers-schema/basic/output.json +++ b/internal/command/testdata/providers-schema/basic/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.2", + "format_version": "1.0", "provider_schemas": { "registry.terraform.io/hashicorp/test": { "provider": { diff --git a/internal/command/testdata/providers-schema/empty/output.json b/internal/command/testdata/providers-schema/empty/output.json index 12d30d201356..381450cade5c 100644 --- a/internal/command/testdata/providers-schema/empty/output.json +++ b/internal/command/testdata/providers-schema/empty/output.json @@ -1,3 +1,3 @@ { - "format_version": "0.2" -} \ No newline at end of file + "format_version": "1.0" +} diff --git a/internal/command/testdata/providers-schema/required/output.json b/internal/command/testdata/providers-schema/required/output.json index f14786c3e31e..dfac55b38c35 100644 --- a/internal/command/testdata/providers-schema/required/output.json +++ b/internal/command/testdata/providers-schema/required/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.2", + "format_version": "1.0", "provider_schemas": { "registry.terraform.io/hashicorp/test": { "provider": { diff --git a/internal/command/testdata/show-json-sensitive/output.json b/internal/command/testdata/show-json-sensitive/output.json index 5f22c4ccf3a2..206fbb7f6e60 100644 --- a/internal/command/testdata/show-json-sensitive/output.json +++ b/internal/command/testdata/show-json-sensitive/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.2", + "format_version": "1.0", "variables": { "test_var": { "value": "bar" @@ -66,7 +66,7 @@ } }, "prior_state": { - "format_version": "0.2", + "format_version": "1.0", "values": { "outputs": { "test": { diff --git a/internal/command/testdata/show-json-state/basic/output.json b/internal/command/testdata/show-json-state/basic/output.json index 3087ad118050..229fa00e7262 100644 --- a/internal/command/testdata/show-json-state/basic/output.json +++ b/internal/command/testdata/show-json-state/basic/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.2", + "format_version": "1.0", "terraform_version": "0.12.0", "values": { "root_module": { diff --git a/internal/command/testdata/show-json-state/empty/output.json b/internal/command/testdata/show-json-state/empty/output.json index 12d30d201356..381450cade5c 100644 --- a/internal/command/testdata/show-json-state/empty/output.json +++ b/internal/command/testdata/show-json-state/empty/output.json @@ -1,3 +1,3 @@ { - "format_version": "0.2" -} \ No newline at end of file + "format_version": "1.0" +} diff --git a/internal/command/testdata/show-json-state/modules/output.json 
b/internal/command/testdata/show-json-state/modules/output.json index eeee8f6cffbc..eba163bdbb52 100644 --- a/internal/command/testdata/show-json-state/modules/output.json +++ b/internal/command/testdata/show-json-state/modules/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.2", + "format_version": "1.0", "terraform_version": "0.12.0", "values": { "outputs": { diff --git a/internal/command/testdata/show-json-state/sensitive-variables/output.json b/internal/command/testdata/show-json-state/sensitive-variables/output.json index b133aeef13bf..60503cd3ad80 100644 --- a/internal/command/testdata/show-json-state/sensitive-variables/output.json +++ b/internal/command/testdata/show-json-state/sensitive-variables/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.2", + "format_version": "1.0", "terraform_version": "0.14.0", "values": { "root_module": { diff --git a/internal/command/testdata/show-json/basic-create/output.json b/internal/command/testdata/show-json/basic-create/output.json index 3474443ed386..d1b8aae5361b 100644 --- a/internal/command/testdata/show-json/basic-create/output.json +++ b/internal/command/testdata/show-json/basic-create/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.2", + "format_version": "1.0", "variables": { "test_var": { "value": "bar" @@ -57,7 +57,7 @@ } }, "prior_state": { - "format_version": "0.2", + "format_version": "1.0", "values": { "outputs": { "test": { diff --git a/internal/command/testdata/show-json/basic-delete/output.json b/internal/command/testdata/show-json/basic-delete/output.json index ae6e67f760b3..e1779c04cc2a 100644 --- a/internal/command/testdata/show-json/basic-delete/output.json +++ b/internal/command/testdata/show-json/basic-delete/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.2", + "format_version": "1.0", "variables": { "test_var": { "value": "bar" @@ -89,7 +89,7 @@ } }, "prior_state": { - "format_version": "0.2", + "format_version": "1.0", "values": { "outputs": { "test": { diff --git a/internal/command/testdata/show-json/basic-update/output.json b/internal/command/testdata/show-json/basic-update/output.json index 2b8bc25e3034..e4b4731426a1 100644 --- a/internal/command/testdata/show-json/basic-update/output.json +++ b/internal/command/testdata/show-json/basic-update/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.2", + "format_version": "1.0", "variables": { "test_var": { "value": "bar" @@ -68,7 +68,7 @@ } }, "prior_state": { - "format_version": "0.2", + "format_version": "1.0", "values": { "outputs": { "test": { diff --git a/internal/command/testdata/show-json/drift/output.json b/internal/command/testdata/show-json/drift/output.json index 79e702161576..2d5c071b4300 100644 --- a/internal/command/testdata/show-json/drift/output.json +++ b/internal/command/testdata/show-json/drift/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.2", + "format_version": "1.0", "planned_values": { "root_module": { "resources": [ @@ -106,7 +106,7 @@ } ], "prior_state": { - "format_version": "0.2", + "format_version": "1.0", "values": { "root_module": { "resources": [ diff --git a/internal/command/testdata/show-json/module-depends-on/output.json b/internal/command/testdata/show-json/module-depends-on/output.json index cc7ed679f0cc..d02efaa22f0d 100644 --- a/internal/command/testdata/show-json/module-depends-on/output.json +++ b/internal/command/testdata/show-json/module-depends-on/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.2", + "format_version": "1.0", "terraform_version": "0.13.1-dev", "planned_values": { "root_module": { 
diff --git a/internal/command/testdata/show-json/modules/output.json b/internal/command/testdata/show-json/modules/output.json index 440bebbff891..4ed0ea45d692 100644 --- a/internal/command/testdata/show-json/modules/output.json +++ b/internal/command/testdata/show-json/modules/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.2", + "format_version": "1.0", "planned_values": { "outputs": { "test": { @@ -74,7 +74,7 @@ } }, "prior_state": { - "format_version": "0.2", + "format_version": "1.0", "values": { "outputs": { "test": { diff --git a/internal/command/testdata/show-json/moved-drift/output.json b/internal/command/testdata/show-json/moved-drift/output.json index 0d151808fa87..ad6b0564135e 100644 --- a/internal/command/testdata/show-json/moved-drift/output.json +++ b/internal/command/testdata/show-json/moved-drift/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.2", + "format_version": "1.0", "planned_values": { "root_module": { "resources": [ @@ -108,7 +108,7 @@ } ], "prior_state": { - "format_version": "0.2", + "format_version": "1.0", "values": { "root_module": { "resources": [ diff --git a/internal/command/testdata/show-json/moved/output.json b/internal/command/testdata/show-json/moved/output.json index 3ce28198342e..3e74a4ddb4e4 100644 --- a/internal/command/testdata/show-json/moved/output.json +++ b/internal/command/testdata/show-json/moved/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.2", + "format_version": "1.0", "planned_values": { "root_module": { "resources": [ @@ -46,7 +46,7 @@ } ], "prior_state": { - "format_version": "0.2", + "format_version": "1.0", "values": { "root_module": { "resources": [ diff --git a/internal/command/testdata/show-json/multi-resource-update/output.json b/internal/command/testdata/show-json/multi-resource-update/output.json index 262b6194b9bf..6da29965e29c 100644 --- a/internal/command/testdata/show-json/multi-resource-update/output.json +++ b/internal/command/testdata/show-json/multi-resource-update/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.2", + "format_version": "1.0", "terraform_version": "0.13.0", "variables": { "test_var": { @@ -107,7 +107,7 @@ } }, "prior_state": { - "format_version": "0.2", + "format_version": "1.0", "terraform_version": "0.13.0", "values": { "outputs": { diff --git a/internal/command/testdata/show-json/nested-modules/output.json b/internal/command/testdata/show-json/nested-modules/output.json index 80e7ae3588e9..359ea9ae181c 100644 --- a/internal/command/testdata/show-json/nested-modules/output.json +++ b/internal/command/testdata/show-json/nested-modules/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.2", + "format_version": "1.0", "planned_values": { "root_module": { "child_modules": [ diff --git a/internal/command/testdata/show-json/provider-version-no-config/output.json b/internal/command/testdata/show-json/provider-version-no-config/output.json index 64b93ec751c0..6a8b1f451dc0 100644 --- a/internal/command/testdata/show-json/provider-version-no-config/output.json +++ b/internal/command/testdata/show-json/provider-version-no-config/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.2", + "format_version": "1.0", "variables": { "test_var": { "value": "bar" @@ -57,7 +57,7 @@ } }, "prior_state": { - "format_version": "0.2", + "format_version": "1.0", "values": { "outputs": { "test": { diff --git a/internal/command/testdata/show-json/provider-version/output.json b/internal/command/testdata/show-json/provider-version/output.json index b5369806e933..11fd3bd64c15 100644 --- 
a/internal/command/testdata/show-json/provider-version/output.json +++ b/internal/command/testdata/show-json/provider-version/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.2", + "format_version": "1.0", "variables": { "test_var": { "value": "bar" @@ -57,7 +57,7 @@ } }, "prior_state": { - "format_version": "0.2", + "format_version": "1.0", "values": { "outputs": { "test": { diff --git a/internal/command/testdata/show-json/requires-replace/output.json b/internal/command/testdata/show-json/requires-replace/output.json index 077d900b13b0..e71df784f4f7 100644 --- a/internal/command/testdata/show-json/requires-replace/output.json +++ b/internal/command/testdata/show-json/requires-replace/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.2", + "format_version": "1.0", "planned_values": { "root_module": { "resources": [ @@ -48,7 +48,7 @@ } ], "prior_state": { - "format_version": "0.2", + "format_version": "1.0", "values": { "root_module": { "resources": [ diff --git a/internal/command/testdata/show-json/sensitive-values/output.json b/internal/command/testdata/show-json/sensitive-values/output.json index 7cbc9ccf0e75..d7e4719c71f5 100644 --- a/internal/command/testdata/show-json/sensitive-values/output.json +++ b/internal/command/testdata/show-json/sensitive-values/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.2", + "format_version": "1.0", "variables": { "test_var": { "value": "boop" @@ -69,7 +69,7 @@ } }, "prior_state": { - "format_version": "0.2", + "format_version": "1.0", "values": { "outputs": { "test": { diff --git a/internal/command/testdata/validate-invalid/incorrectmodulename/output.json b/internal/command/testdata/validate-invalid/incorrectmodulename/output.json index 0c2ce68abd37..f144313fa455 100644 --- a/internal/command/testdata/validate-invalid/incorrectmodulename/output.json +++ b/internal/command/testdata/validate-invalid/incorrectmodulename/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.1", + "format_version": "1.0", "valid": false, "error_count": 4, "warning_count": 0, diff --git a/internal/command/testdata/validate-invalid/interpolation/output.json b/internal/command/testdata/validate-invalid/interpolation/output.json index 7845ec0f4e81..2843b19121fc 100644 --- a/internal/command/testdata/validate-invalid/interpolation/output.json +++ b/internal/command/testdata/validate-invalid/interpolation/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.1", + "format_version": "1.0", "valid": false, "error_count": 2, "warning_count": 0, diff --git a/internal/command/testdata/validate-invalid/missing_defined_var/output.json b/internal/command/testdata/validate-invalid/missing_defined_var/output.json index c2a57c5e6a98..40258a98cd27 100644 --- a/internal/command/testdata/validate-invalid/missing_defined_var/output.json +++ b/internal/command/testdata/validate-invalid/missing_defined_var/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.1", + "format_version": "1.0", "valid": true, "error_count": 0, "warning_count": 0, diff --git a/internal/command/testdata/validate-invalid/missing_quote/output.json b/internal/command/testdata/validate-invalid/missing_quote/output.json index cdf99d8b2a2b..87aeca8b7817 100644 --- a/internal/command/testdata/validate-invalid/missing_quote/output.json +++ b/internal/command/testdata/validate-invalid/missing_quote/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.1", + "format_version": "1.0", "valid": false, "error_count": 1, "warning_count": 0, diff --git a/internal/command/testdata/validate-invalid/missing_var/output.json 
b/internal/command/testdata/validate-invalid/missing_var/output.json index 2a4e0be71ebd..6f0b9d5d4c8d 100644 --- a/internal/command/testdata/validate-invalid/missing_var/output.json +++ b/internal/command/testdata/validate-invalid/missing_var/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.1", + "format_version": "1.0", "valid": false, "error_count": 1, "warning_count": 0, diff --git a/internal/command/testdata/validate-invalid/multiple_modules/output.json b/internal/command/testdata/validate-invalid/multiple_modules/output.json index 4cd6dfb9f0ad..1aeaf929a913 100644 --- a/internal/command/testdata/validate-invalid/multiple_modules/output.json +++ b/internal/command/testdata/validate-invalid/multiple_modules/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.1", + "format_version": "1.0", "valid": false, "error_count": 1, "warning_count": 0, diff --git a/internal/command/testdata/validate-invalid/multiple_providers/output.json b/internal/command/testdata/validate-invalid/multiple_providers/output.json index 63eb2d193820..309cf0ea7c34 100644 --- a/internal/command/testdata/validate-invalid/multiple_providers/output.json +++ b/internal/command/testdata/validate-invalid/multiple_providers/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.1", + "format_version": "1.0", "valid": false, "error_count": 1, "warning_count": 0, diff --git a/internal/command/testdata/validate-invalid/multiple_resources/output.json b/internal/command/testdata/validate-invalid/multiple_resources/output.json index 33d5052284e9..ded584e6846c 100644 --- a/internal/command/testdata/validate-invalid/multiple_resources/output.json +++ b/internal/command/testdata/validate-invalid/multiple_resources/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.1", + "format_version": "1.0", "valid": false, "error_count": 1, "warning_count": 0, diff --git a/internal/command/testdata/validate-invalid/output.json b/internal/command/testdata/validate-invalid/output.json index 663fe0153071..73254853932f 100644 --- a/internal/command/testdata/validate-invalid/output.json +++ b/internal/command/testdata/validate-invalid/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.1", + "format_version": "1.0", "valid": false, "error_count": 1, "warning_count": 0, diff --git a/internal/command/testdata/validate-invalid/outputs/output.json b/internal/command/testdata/validate-invalid/outputs/output.json index d05ed4b77173..f774b458be4c 100644 --- a/internal/command/testdata/validate-invalid/outputs/output.json +++ b/internal/command/testdata/validate-invalid/outputs/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.1", + "format_version": "1.0", "valid": false, "error_count": 2, "warning_count": 0, diff --git a/internal/command/testdata/validate-valid/output.json b/internal/command/testdata/validate-valid/output.json index c2a57c5e6a98..40258a98cd27 100644 --- a/internal/command/testdata/validate-valid/output.json +++ b/internal/command/testdata/validate-valid/output.json @@ -1,5 +1,5 @@ { - "format_version": "0.1", + "format_version": "1.0", "valid": true, "error_count": 0, "warning_count": 0, diff --git a/internal/command/views/json_view.go b/internal/command/views/json_view.go index e1c3db6d74e4..f92036d5c0d7 100644 --- a/internal/command/views/json_view.go +++ b/internal/command/views/json_view.go @@ -13,7 +13,7 @@ import ( // This version describes the schema of JSON UI messages. This version must be // updated after making any changes to this view, the jsonHook, or any of the // command/views/json package. 
-const JSON_UI_VERSION = "0.1.0" +const JSON_UI_VERSION = "1.0" func NewJSONView(view *View) *JSONView { log := hclog.New(&hclog.LoggerOptions{ diff --git a/internal/command/views/validate.go b/internal/command/views/validate.go index 1e597277a478..08ce913f82ce 100644 --- a/internal/command/views/validate.go +++ b/internal/command/views/validate.go @@ -81,7 +81,7 @@ func (v *ValidateJSON) Results(diags tfdiags.Diagnostics) int { // FormatVersion represents the version of the json format and will be // incremented for any change to this format that requires changes to a // consuming parser. - const FormatVersion = "0.1" + const FormatVersion = "1.0" type Output struct { FormatVersion string `json:"format_version"` diff --git a/website/docs/cli/commands/providers/schema.html.md b/website/docs/cli/commands/providers/schema.html.md index e97e50f2305e..2a3dddc1338e 100644 --- a/website/docs/cli/commands/providers/schema.html.md +++ b/website/docs/cli/commands/providers/schema.html.md @@ -23,7 +23,18 @@ The list of available flags are: Please note that, at this time, the `-json` flag is a _required_ option. In future releases, this command will be extended to allow for additional options. --> **Note:** The output includes a `format_version` key, which currently has major version zero to indicate that the format is experimental and subject to change. A future version will assign a non-zero major version and make stronger promises about compatibility. We do not anticipate any significant breaking changes to the format before its first major version, however. +The output includes a `format_version` key, which as of Terraform 1.1.0 has +value `"1.0"`. The semantics of this version are: + +- We will increment the minor version, e.g. `"1.1"`, for backward-compatible + changes or additions. Ignore any object properties with unrecognized names to + remain forward-compatible with future minor versions. +- We will increment the major version, e.g. `"2.0"`, for changes that are not + backward-compatible. Reject any input which reports an unsupported major + version. + +We will introduce new major versions only within the bounds of +[the Terraform 1.0 Compatibility Promises](https://www.terraform.io/docs/language/v1-compatibility-promises.html). ## Format Summary @@ -41,7 +52,7 @@ The JSON output format consists of the following objects and sub-objects: ```javascript { - "format_version": "0.1", + "format_version": "1.0", // "provider_schemas" describes the provider schemas for all // providers throughout the configuration tree. diff --git a/website/docs/cli/commands/validate.html.md b/website/docs/cli/commands/validate.html.md index 583186e3d069..e81da01b27a4 100644 --- a/website/docs/cli/commands/validate.html.md +++ b/website/docs/cli/commands/validate.html.md @@ -57,11 +57,18 @@ to the JSON output setting. For that reason, external software consuming Terraform's output should be prepared to find data on stdout that _isn't_ valid JSON, which it should then treat as a generic error case. -**Note:** The output includes a `format_version` key, which currently has major -version zero to indicate that the format is experimental and subject to change. -A future version will assign a non-zero major version and make stronger -promises about compatibility. We do not anticipate any significant breaking -changes to the format before its first major version, however. +The output includes a `format_version` key, which as of Terraform 1.1.0 has +value `"1.0"`. 
The semantics of this version are: + +- We will increment the minor version, e.g. `"1.1"`, for backward-compatible + changes or additions. Ignore any object properties with unrecognized names to + remain forward-compatible with future minor versions. +- We will increment the major version, e.g. `"2.0"`, for changes that are not + backward-compatible. Reject any input which reports an unsupported major + version. + +We will introduce new major versions only within the bounds of +[the Terraform 1.0 Compatibility Promises](https://www.terraform.io/docs/language/v1-compatibility-promises.html). In the normal case, Terraform will print a JSON object to the standard output stream. The top-level JSON object will have the following properties: diff --git a/website/docs/internals/json-format.html.md b/website/docs/internals/json-format.html.md index aa4df209af18..cd4e2213a481 100644 --- a/website/docs/internals/json-format.html.md +++ b/website/docs/internals/json-format.html.md @@ -16,7 +16,18 @@ Since the format of plan files isn't suited for use with external tools (and lik Use `terraform show -json ` to generate a JSON representation of a plan or state file. See [the `terraform show` documentation](/docs/cli/commands/show.html) for more details. --> **Note:** The output includes a `format_version` key, which currently has major version zero to indicate that the format is experimental and subject to change. A future version will assign a non-zero major version and make stronger promises about compatibility. We do not anticipate any significant breaking changes to the format before its first major version, however. +The output includes a `format_version` key, which as of Terraform 1.1.0 has +value `"1.0"`. The semantics of this version are: + +- We will increment the minor version, e.g. `"1.1"`, for backward-compatible + changes or additions. Ignore any object properties with unrecognized names to + remain forward-compatible with future minor versions. +- We will increment the major version, e.g. `"2.0"`, for changes that are not + backward-compatible. Reject any input which reports an unsupported major + version. + +We will introduce new major versions only within the bounds of +[the Terraform 1.0 Compatibility Promises](https://www.terraform.io/docs/language/v1-compatibility-promises.html). ## Format Summary @@ -60,7 +71,7 @@ For ease of consumption by callers, the plan representation includes a partial r ```javascript { - "format_version": "0.2", + "format_version": "1.0", // "prior_state" is a representation of the state that the configuration is // being applied to, using the state representation described above. diff --git a/website/docs/internals/machine-readable-ui.html.md b/website/docs/internals/machine-readable-ui.html.md index 94427cc6d38e..35f06049b85f 100644 --- a/website/docs/internals/machine-readable-ui.html.md +++ b/website/docs/internals/machine-readable-ui.html.md @@ -14,7 +14,18 @@ By default, many Terraform commands display UI output as unstructured text, inte For long-running commands such as `plan`, `apply`, and `refresh`, the `-json` flag outputs a stream of JSON UI messages, one per line. These can be processed one message at a time, with integrating software filtering, combining, or modifying the output as desired. --> **Note:** The first message output has type `version`, and includes a `ui` key, which currently has major version zero to indicate that the format is experimental and subject to change. 
A future version will assign a non-zero major version and make stronger promises about compatibility. We do not anticipate any significant breaking changes to the format before its first major version, however. +The first message output has type `version`, and includes a `ui` key, which as of Terraform 1.1.0 has +value `"1.0"`. The semantics of this version are: + +- We will increment the minor version, e.g. `"1.1"`, for backward-compatible + changes or additions. Ignore any object properties with unrecognized names to + remain forward-compatible with future minor versions. +- We will increment the major version, e.g. `"2.0"`, for changes that are not + backward-compatible. Reject any input which reports an unsupported major + version. + +We will introduce new major versions only within the bounds of +[the Terraform 1.0 Compatibility Promises](https://www.terraform.io/docs/language/v1-compatibility-promises.html). ## Sample JSON Output From db1a97f9af1a8a67f3b9ebc296bb202cc538b011 Mon Sep 17 00:00:00 2001 From: hc-github-team-tf-core Date: Wed, 6 Oct 2021 16:36:49 +0000 Subject: [PATCH 134/644] Release v1.1.0-alpha20211006 --- version/version.go | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/version/version.go b/version/version.go index 86f22153dde3..7cd6b8f0bab1 100644 --- a/version/version.go +++ b/version/version.go @@ -16,7 +16,7 @@ var Version = "1.1.0" // A pre-release marker for the version. If this is "" (empty string) // then it means that it is a final release. Otherwise, this is a pre-release // such as "dev" (in development), "beta", "rc1", etc. -var Prerelease = "dev" +var Prerelease = "alpha20211006" // SemVer is an instance of version.Version. This has the secondary // benefit of verifying during tests and init time that our version is a From d05fa3049ed5f52914848394fefc956e8eb0b9fd Mon Sep 17 00:00:00 2001 From: hc-github-team-tf-core Date: Wed, 6 Oct 2021 16:52:28 +0000 Subject: [PATCH 135/644] Cleanup after v1.1.0-alpha20211006 release --- version/version.go | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/version/version.go b/version/version.go index 7cd6b8f0bab1..86f22153dde3 100644 --- a/version/version.go +++ b/version/version.go @@ -16,7 +16,7 @@ var Version = "1.1.0" // A pre-release marker for the version. If this is "" (empty string) // then it means that it is a final release. Otherwise, this is a pre-release // such as "dev" (in development), "beta", "rc1", etc. -var Prerelease = "alpha20211006" +var Prerelease = "dev" // SemVer is an instance of version.Version. This has the secondary // benefit of verifying during tests and init time that our version is a From ad9944e523f404cb9d54be8f69697e4512057abb Mon Sep 17 00:00:00 2001 From: James Bardin Date: Thu, 7 Oct 2021 16:48:56 -0400 Subject: [PATCH 136/644] test that providers are configured for calls Have the MockProvider ensure that Configure is always called before any methods that may require a configured provider. Ensure the MockProvider *Called fields are zeroed out when re-using the provider instance. 
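For illustration, a test along these lines now fails before the provider is configured and passes afterwards. This is only a sketch: it assumes the package's existing simpleMockProvider helper and the "test_object" resource type it defines, as used by the tests in the diffs below.

```go
package terraform

import (
	"testing"

	"github.com/hashicorp/terraform/internal/providers"
)

// Sketch of the new MockProvider behaviour; simpleMockProvider and the
// "test_object" schema come from this package's existing test helpers.
func TestMockProviderMustBeConfigured(t *testing.T) {
	p := simpleMockProvider()

	// Before ConfigureProvider is called, resource methods return an error
	// diagnostic such as "Configure not called before ReadResource".
	resp := p.ReadResource(providers.ReadResourceRequest{TypeName: "test_object"})
	if !resp.Diagnostics.HasErrors() {
		t.Fatal("expected an error from the unconfigured mock provider")
	}

	// Configuring the mock first satisfies the new check.
	p.ConfigureProvider(providers.ConfigureProviderRequest{})
	resp = p.ReadResource(providers.ReadResourceRequest{TypeName: "test_object"})
	if resp.Diagnostics.HasErrors() {
		t.Fatal(resp.Diagnostics.Err())
	}
}
```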
--- internal/terraform/provider_mock.go | 25 +++++++++++++++++++++++++ internal/terraform/terraform_test.go | 17 +++++++++++++++++ 2 files changed, 42 insertions(+) diff --git a/internal/terraform/provider_mock.go b/internal/terraform/provider_mock.go index 47227759b21d..b6988f6eb644 100644 --- a/internal/terraform/provider_mock.go +++ b/internal/terraform/provider_mock.go @@ -297,6 +297,11 @@ func (p *MockProvider) ReadResource(r providers.ReadResourceRequest) (resp provi p.ReadResourceCalled = true p.ReadResourceRequest = r + if !p.ConfigureProviderCalled { + resp.Diagnostics = resp.Diagnostics.Append(fmt.Errorf("Configure not called before ReadResource %q", r.TypeName)) + return resp + } + if p.ReadResourceFn != nil { return p.ReadResourceFn(r) } @@ -330,6 +335,11 @@ func (p *MockProvider) PlanResourceChange(r providers.PlanResourceChangeRequest) p.Lock() defer p.Unlock() + if !p.ConfigureProviderCalled { + resp.Diagnostics = resp.Diagnostics.Append(fmt.Errorf("Configure not called before PlanResourceChange %q", r.TypeName)) + return resp + } + p.PlanResourceChangeCalled = true p.PlanResourceChangeRequest = r @@ -400,6 +410,11 @@ func (p *MockProvider) ApplyResourceChange(r providers.ApplyResourceChangeReques p.ApplyResourceChangeRequest = r p.Unlock() + if !p.ConfigureProviderCalled { + resp.Diagnostics = resp.Diagnostics.Append(fmt.Errorf("Configure not called before ApplyResourceChange %q", r.TypeName)) + return resp + } + if p.ApplyResourceChangeFn != nil { return p.ApplyResourceChangeFn(r) } @@ -460,6 +475,11 @@ func (p *MockProvider) ImportResourceState(r providers.ImportResourceStateReques p.Lock() defer p.Unlock() + if !p.ConfigureProviderCalled { + resp.Diagnostics = resp.Diagnostics.Append(fmt.Errorf("Configure not called before ImportResourceState %q", r.TypeName)) + return resp + } + p.ImportResourceStateCalled = true p.ImportResourceStateRequest = r if p.ImportResourceStateFn != nil { @@ -494,6 +514,11 @@ func (p *MockProvider) ReadDataSource(r providers.ReadDataSourceRequest) (resp p p.Lock() defer p.Unlock() + if !p.ConfigureProviderCalled { + resp.Diagnostics = resp.Diagnostics.Append(fmt.Errorf("Configure not called before ReadDataSource %q", r.TypeName)) + return resp + } + p.ReadDataSourceCalled = true p.ReadDataSourceRequest = r diff --git a/internal/terraform/terraform_test.go b/internal/terraform/terraform_test.go index 95ab317632a3..7eccb51c2b5f 100644 --- a/internal/terraform/terraform_test.go +++ b/internal/terraform/terraform_test.go @@ -166,6 +166,23 @@ func testSetResourceInstanceTainted(module *states.Module, resource, attrsJson, func testProviderFuncFixed(rp providers.Interface) providers.Factory { return func() (providers.Interface, error) { + if p, ok := rp.(*MockProvider); ok { + // make sure none of the methods were "called" on this new instance + p.GetProviderSchemaCalled = false + p.ValidateProviderConfigCalled = false + p.ValidateResourceConfigCalled = false + p.ValidateDataResourceConfigCalled = false + p.UpgradeResourceStateCalled = false + p.ConfigureProviderCalled = false + p.StopCalled = false + p.ReadResourceCalled = false + p.PlanResourceChangeCalled = false + p.ApplyResourceChangeCalled = false + p.ImportResourceStateCalled = false + p.ReadDataSourceCalled = false + p.CloseCalled = false + } + return rp, nil } } From 22b400b8dea4d44329699efe6fbb8e54b767ffd1 Mon Sep 17 00:00:00 2001 From: James Bardin Date: Thu, 7 Oct 2021 16:51:24 -0400 Subject: [PATCH 137/644] skip refreshing deposed during destroy plan The destroy plan should not require a 
configured provider (the complete configuration is not evaluated, so they cannot be configured). Deposed instances were being refreshed during the destroy plan, because this instance type is only ever destroyed and shares the same implementation between plan and walkPlanDestroy. Skip refreshing during walkPlanDestroy. --- internal/terraform/context_apply2_test.go | 47 +++++++++++++++++++ .../node_resource_destroy_deposed.go | 7 ++- 2 files changed, 53 insertions(+), 1 deletion(-) diff --git a/internal/terraform/context_apply2_test.go b/internal/terraform/context_apply2_test.go index 82a48e2d7e56..c405f30aaff2 100644 --- a/internal/terraform/context_apply2_test.go +++ b/internal/terraform/context_apply2_test.go @@ -549,3 +549,50 @@ resource "test_object" "y" { } } } + +func TestContext2Apply_destroyWithDeposed(t *testing.T) { + m := testModuleInline(t, map[string]string{ + "main.tf": ` +resource "test_object" "x" { + test_string = "ok" + lifecycle { + create_before_destroy = true + } +}`, + }) + + p := simpleMockProvider() + + deposedKey := states.NewDeposedKey() + + state := states.NewState() + root := state.EnsureModule(addrs.RootModuleInstance) + root.SetResourceInstanceDeposed( + mustResourceInstanceAddr("test_object.x").Resource, + deposedKey, + &states.ResourceInstanceObjectSrc{ + Status: states.ObjectTainted, + AttrsJSON: []byte(`{"test_string":"deposed"}`), + }, + mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`), + ) + + ctx := testContext2(t, &ContextOpts{ + Providers: map[addrs.Provider]providers.Factory{ + addrs.NewDefaultProvider("test"): testProviderFuncFixed(p), + }, + }) + + plan, diags := ctx.Plan(m, state, &PlanOpts{ + Mode: plans.DestroyMode, + }) + if diags.HasErrors() { + t.Fatalf("plan: %s", diags.Err()) + } + + _, diags = ctx.Apply(plan, m) + if diags.HasErrors() { + t.Fatalf("apply: %s", diags.Err()) + } + +} diff --git a/internal/terraform/node_resource_destroy_deposed.go b/internal/terraform/node_resource_destroy_deposed.go index ceb8739d5954..7d639d135c66 100644 --- a/internal/terraform/node_resource_destroy_deposed.go +++ b/internal/terraform/node_resource_destroy_deposed.go @@ -95,7 +95,12 @@ func (n *NodePlanDeposedResourceInstanceObject) Execute(ctx EvalContext, op walk return diags } - if !n.skipRefresh { + // We don't refresh during the planDestroy walk, since that is only adding + // the destroy changes to the plan and the provider will not be configured + // at this point. The other nodes use separate types for plan and destroy, + // while deposed instances are always a destroy operation, so the logic + // here is a bit overloaded. + if !n.skipRefresh && op != walkPlanDestroy { // Refresh this object even though it is going to be destroyed, in // case it's already been deleted outside of Terraform. 
If this is a // normal plan, providers expect a Read request to remove missing From 348c761bea08f1c9b94de23452560a58b1462cfc Mon Sep 17 00:00:00 2001 From: Megan Bang Date: Thu, 7 Oct 2021 16:28:47 -0500 Subject: [PATCH 138/644] add better error if credentials are invalid --- internal/backend/remote-state/gcs/backend.go | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/internal/backend/remote-state/gcs/backend.go b/internal/backend/remote-state/gcs/backend.go index af2a667eb184..44060049189e 100644 --- a/internal/backend/remote-state/gcs/backend.go +++ b/internal/backend/remote-state/gcs/backend.go @@ -141,6 +141,10 @@ func (b *Backend) configure(ctx context.Context) error { return fmt.Errorf("Error loading credentials: %s", err) } + if !strings.HasPrefix(contents, "{") { + return fmt.Errorf("contents of credentials are invalid") + } + credOptions = append(credOptions, option.WithCredentialsJSON([]byte(contents))) } From 81201d69a34973f108e3e624c3023cab27d52508 Mon Sep 17 00:00:00 2001 From: Megan Bang Date: Thu, 7 Oct 2021 16:33:21 -0500 Subject: [PATCH 139/644] check valid json --- internal/backend/remote-state/gcs/backend.go | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/internal/backend/remote-state/gcs/backend.go b/internal/backend/remote-state/gcs/backend.go index 44060049189e..bb13137cfb6e 100644 --- a/internal/backend/remote-state/gcs/backend.go +++ b/internal/backend/remote-state/gcs/backend.go @@ -4,6 +4,7 @@ package gcs import ( "context" "encoding/base64" + "encoding/json" "fmt" "os" "strings" @@ -141,7 +142,7 @@ func (b *Backend) configure(ctx context.Context) error { return fmt.Errorf("Error loading credentials: %s", err) } - if !strings.HasPrefix(contents, "{") { + if !json.Valid([]byte(contents)) { return fmt.Errorf("contents of credentials are invalid") } From 03f71c2f062233957c7d9b1543c5b75094f2d030 Mon Sep 17 00:00:00 2001 From: James Bardin Date: Fri, 8 Oct 2021 08:41:58 -0400 Subject: [PATCH 140/644] fixup tests for MockProvider changes Resetting the *Called fields and enforcing configuration broke a few tests. --- internal/terraform/context_plan2_test.go | 15 ++++++++++++--- .../node_resource_destroy_deposed_test.go | 2 ++ .../terraform/node_resource_plan_orphan_test.go | 2 ++ internal/terraform/transform_import_state_test.go | 2 ++ 4 files changed, 18 insertions(+), 3 deletions(-) diff --git a/internal/terraform/context_plan2_test.go b/internal/terraform/context_plan2_test.go index b1034d7fb62e..9552a3ca7206 100644 --- a/internal/terraform/context_plan2_test.go +++ b/internal/terraform/context_plan2_test.go @@ -3,6 +3,7 @@ package terraform import ( "bytes" "errors" + "fmt" "strings" "testing" @@ -419,7 +420,12 @@ resource "test_object" "a" { }, }, } + + // This is called from the first instance of this provider, so we can't + // check p.ReadResourceCalled after plan. 
+ readResourceCalled := false p.ReadResourceFn = func(req providers.ReadResourceRequest) (resp providers.ReadResourceResponse) { + readResourceCalled = true newVal, err := cty.Transform(req.PriorState, func(path cty.Path, v cty.Value) (cty.Value, error) { if len(path) == 1 && path[0] == (cty.GetAttrStep{Name: "arg"}) { return cty.StringVal("current"), nil @@ -435,7 +441,10 @@ resource "test_object" "a" { NewState: newVal, } } + + upgradeResourceStateCalled := false p.UpgradeResourceStateFn = func(req providers.UpgradeResourceStateRequest) (resp providers.UpgradeResourceStateResponse) { + upgradeResourceStateCalled = true t.Logf("UpgradeResourceState %s", req.RawStateJSON) // In the destroy-with-refresh codepath we end up calling @@ -479,10 +488,10 @@ resource "test_object" "a" { }) assertNoErrors(t, diags) - if !p.UpgradeResourceStateCalled { + if !upgradeResourceStateCalled { t.Errorf("Provider's UpgradeResourceState wasn't called; should've been") } - if !p.ReadResourceCalled { + if !readResourceCalled { t.Errorf("Provider's ReadResource wasn't called; should've been") } @@ -682,7 +691,7 @@ func TestContext2Plan_destroyNoProviderConfig(t *testing.T) { p.ValidateProviderConfigFn = func(req providers.ValidateProviderConfigRequest) (resp providers.ValidateProviderConfigResponse) { v := req.Config.GetAttr("test_string") if v.IsNull() || !v.IsKnown() || v.AsString() != "ok" { - resp.Diagnostics = resp.Diagnostics.Append(errors.New("invalid provider configuration")) + resp.Diagnostics = resp.Diagnostics.Append(fmt.Errorf("invalid provider configuration: %#v", req.Config)) } return resp } diff --git a/internal/terraform/node_resource_destroy_deposed_test.go b/internal/terraform/node_resource_destroy_deposed_test.go index 2a2fe9981b29..f173002a28df 100644 --- a/internal/terraform/node_resource_destroy_deposed_test.go +++ b/internal/terraform/node_resource_destroy_deposed_test.go @@ -26,6 +26,7 @@ func TestNodePlanDeposedResourceInstanceObject_Execute(t *testing.T) { ) p := testProvider("test") + p.ConfigureProvider(providers.ConfigureProviderRequest{}) p.UpgradeResourceStateResponse = &providers.UpgradeResourceStateResponse{ UpgradedState: cty.ObjectVal(map[string]cty.Value{ "id": cty.StringVal("bar"), @@ -106,6 +107,7 @@ func TestNodeDestroyDeposedResourceInstanceObject_Execute(t *testing.T) { } p := testProvider("test") + p.ConfigureProvider(providers.ConfigureProviderRequest{}) p.GetProviderSchemaResponse = getProviderSchemaResponseFromProviderSchema(schema) p.UpgradeResourceStateResponse = &providers.UpgradeResourceStateResponse{ diff --git a/internal/terraform/node_resource_plan_orphan_test.go b/internal/terraform/node_resource_plan_orphan_test.go index 3758fe399440..f46c7a7091c1 100644 --- a/internal/terraform/node_resource_plan_orphan_test.go +++ b/internal/terraform/node_resource_plan_orphan_test.go @@ -33,6 +33,7 @@ func TestNodeResourcePlanOrphanExecute(t *testing.T) { ) p := simpleMockProvider() + p.ConfigureProvider(providers.ConfigureProviderRequest{}) ctx := &MockEvalContext{ StateState: state.SyncWrapper(), RefreshStateState: state.DeepCopy().SyncWrapper(), @@ -93,6 +94,7 @@ func TestNodeResourcePlanOrphanExecute_alreadyDeleted(t *testing.T) { changes := plans.NewChanges() p := simpleMockProvider() + p.ConfigureProvider(providers.ConfigureProviderRequest{}) p.ReadResourceResponse = &providers.ReadResourceResponse{ NewState: cty.NullVal(p.GetProviderSchemaResponse.ResourceTypes["test_string"].Block.ImpliedType()), } diff --git a/internal/terraform/transform_import_state_test.go 
b/internal/terraform/transform_import_state_test.go index 6e3245bd7be2..919f09d84b8a 100644 --- a/internal/terraform/transform_import_state_test.go +++ b/internal/terraform/transform_import_state_test.go @@ -24,6 +24,7 @@ func TestGraphNodeImportStateExecute(t *testing.T) { }, }, } + provider.ConfigureProvider(providers.ConfigureProviderRequest{}) ctx := &MockEvalContext{ StateState: state.SyncWrapper(), @@ -64,6 +65,7 @@ func TestGraphNodeImportStateExecute(t *testing.T) { func TestGraphNodeImportStateSubExecute(t *testing.T) { state := states.NewState() provider := testProvider("aws") + provider.ConfigureProvider(providers.ConfigureProviderRequest{}) ctx := &MockEvalContext{ StateState: state.SyncWrapper(), ProviderProvider: provider, From 7dda3366a672507d8fcd480a22990271d6ee26ab Mon Sep 17 00:00:00 2001 From: megan07 Date: Fri, 8 Oct 2021 10:02:05 -0500 Subject: [PATCH 141/644] Update internal/backend/remote-state/gcs/backend.go Co-authored-by: appilon --- internal/backend/remote-state/gcs/backend.go | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/internal/backend/remote-state/gcs/backend.go b/internal/backend/remote-state/gcs/backend.go index bb13137cfb6e..1f157779967e 100644 --- a/internal/backend/remote-state/gcs/backend.go +++ b/internal/backend/remote-state/gcs/backend.go @@ -143,7 +143,7 @@ func (b *Backend) configure(ctx context.Context) error { } if !json.Valid([]byte(contents)) { - return fmt.Errorf("contents of credentials are invalid") + return fmt.Errorf("contents of credentials are invalid json") } credOptions = append(credOptions, option.WithCredentialsJSON([]byte(contents))) From a036109bc19b67f8b0ccb3e18e386a8c1f94496a Mon Sep 17 00:00:00 2001 From: James Bardin Date: Fri, 8 Oct 2021 15:23:13 -0400 Subject: [PATCH 142/644] add comment about when we call ConfigureProvider --- internal/terraform/node_provider.go | 3 +++ 1 file changed, 3 insertions(+) diff --git a/internal/terraform/node_provider.go b/internal/terraform/node_provider.go index 33cad198abf8..2e611d5660e4 100644 --- a/internal/terraform/node_provider.go +++ b/internal/terraform/node_provider.go @@ -38,6 +38,9 @@ func (n *NodeApplyableProvider) Execute(ctx EvalContext, op walkOperation) (diag log.Printf("[TRACE] NodeApplyableProvider: validating configuration for %s", n.Addr) return diags.Append(n.ValidateProvider(ctx, provider)) case walkPlan, walkApply, walkDestroy: + // walkPlanDestroy is purposely skipped here, since the config is not + // evaluated, and the provider is not needed to create delete actions + // for all instances. log.Printf("[TRACE] NodeApplyableProvider: configuring %s", n.Addr) return diags.Append(n.ConfigureProvider(ctx, provider, false)) case walkImport: From 05954a328df269e0e31de86e078f06d97c4f8458 Mon Sep 17 00:00:00 2001 From: James Bardin Date: Fri, 8 Oct 2021 15:53:08 -0400 Subject: [PATCH 143/644] update CI go version Since we have to specify the minor release for `.go-version`, make sure the CI version doesn't automatically update.
--- .circleci/config.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.circleci/config.yml b/.circleci/config.yml index b9132f634ac8..e462189b31b7 100644 --- a/.circleci/config.yml +++ b/.circleci/config.yml @@ -10,7 +10,7 @@ references: executors: go: docker: - - image: docker.mirror.hashicorp.services/cimg/go:1.17 + - image: docker.mirror.hashicorp.services/cimg/go:1.17.2 environment: CONSUL_VERSION: 1.7.2 GOMAXPROCS: 4 From 4c2dbec5b179a258da253f308723cc43c00ead7e Mon Sep 17 00:00:00 2001 From: James Bardin Date: Fri, 8 Oct 2021 15:54:02 -0400 Subject: [PATCH 144/644] update to go1.17.2 --- .go-version | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.go-version b/.go-version index 511a76e6faf8..06fb41b6322f 100644 --- a/.go-version +++ b/.go-version @@ -1 +1 @@ -1.17.1 +1.17.2 From 94ed6a0c846a9041ebb83d18c3a6d7005ca2252e Mon Sep 17 00:00:00 2001 From: James Bardin Date: Fri, 8 Oct 2021 15:54:13 -0400 Subject: [PATCH 145/644] update the go.mod format for go1.17.2 --- go.mod | 165 +++++++++++++++++++++++++++++---------------------------- 1 file changed, 84 insertions(+), 81 deletions(-) diff --git a/go.mod b/go.mod index bc3fae7b74e8..756e51bef18d 100644 --- a/go.mod +++ b/go.mod @@ -1,70 +1,33 @@ module github.com/hashicorp/terraform require ( - cloud.google.com/go v0.79.0 // indirect cloud.google.com/go/storage v1.10.0 github.com/Azure/azure-sdk-for-go v52.5.0+incompatible - github.com/Azure/go-autorest v14.2.0+incompatible // indirect github.com/Azure/go-autorest/autorest v0.11.18 - github.com/Azure/go-autorest/autorest/adal v0.9.13 // indirect - github.com/Azure/go-autorest/autorest/azure/cli v0.4.2 // indirect - github.com/Azure/go-autorest/autorest/date v0.3.0 // indirect - github.com/Azure/go-autorest/autorest/to v0.4.0 // indirect - github.com/Azure/go-autorest/autorest/validation v0.3.1 // indirect - github.com/Azure/go-autorest/logger v0.2.1 // indirect - github.com/Azure/go-autorest/tracing v0.6.0 // indirect - github.com/Azure/go-ntlmssp v0.0.0-20200615164410-66371956d46c // indirect - github.com/BurntSushi/toml v0.3.1 // indirect - github.com/ChrisTrenkamp/goxpath v0.0.0-20190607011252-c5096ec8773d // indirect - github.com/Masterminds/goutils v1.1.0 // indirect - github.com/Masterminds/semver v1.5.0 // indirect - github.com/Masterminds/sprig v2.22.0+incompatible // indirect - github.com/abdullin/seq v0.0.0-20160510034733-d5467c17e7af // indirect github.com/agext/levenshtein v1.2.2 github.com/aliyun/alibaba-cloud-sdk-go v0.0.0-20190329064014-6e358769c32a github.com/aliyun/aliyun-oss-go-sdk v0.0.0-20190103054945-8205d1f41e70 github.com/aliyun/aliyun-tablestore-go-sdk v4.1.2+incompatible - github.com/antchfx/xpath v0.0.0-20190129040759-c8489ed3251e // indirect - github.com/antchfx/xquery v0.0.0-20180515051857-ad5b8c7a47b0 // indirect github.com/apparentlymart/go-cidr v1.1.0 github.com/apparentlymart/go-dump v0.0.0-20190214190832-042adf3cf4a0 github.com/apparentlymart/go-shquot v0.0.1 - github.com/apparentlymart/go-textseg/v13 v13.0.0 // indirect github.com/apparentlymart/go-userdirs v0.0.0-20200915174352-b0c018a67c13 github.com/apparentlymart/go-versions v1.0.1 github.com/armon/circbuf v0.0.0-20190214190532-5111143e8da2 - github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da // indirect - github.com/armon/go-radix v1.0.0 // indirect github.com/aws/aws-sdk-go v1.40.25 - github.com/baiyubin/aliyun-sts-go-sdk v0.0.0-20180326062324-cfa1a18b161f // indirect - github.com/bgentry/go-netrc v0.0.0-20140422174119-9fd32a8b3d3d // indirect 
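This is the layout that `go mod tidy` produces once the module's go directive is at 1.17 (for example via `go mod tidy -go=1.17`): direct requirements stay in the first `require` block, and indirect requirements move into a second block of their own. Roughly, using a few of the modules from the diff below:

```
require (
	github.com/hashicorp/hcl/v2 v2.10.1
	github.com/zclconf/go-cty v1.9.1
	// ... remaining direct dependencies ...
)

require (
	github.com/fatih/color v1.9.0 // indirect
	github.com/mattn/go-colorable v0.1.6 // indirect
	// ... remaining indirect dependencies ...
)
```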
github.com/bgentry/speakeasy v0.1.0 github.com/bmatcuk/doublestar v1.1.5 github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e - github.com/coreos/go-semver v0.2.0 // indirect - github.com/coreos/go-systemd v0.0.0-20181012123002-c6f51f82210d // indirect github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f github.com/davecgh/go-spew v1.1.1 - github.com/dimchansky/utfbom v1.1.1 // indirect - github.com/dylanmei/iso8601 v0.1.0 // indirect github.com/dylanmei/winrmtest v0.0.0-20190225150635-99b7fe2fddf1 - github.com/fatih/color v1.9.0 // indirect - github.com/form3tech-oss/jwt-go v3.2.2+incompatible // indirect github.com/go-test/deep v1.0.3 - github.com/gofrs/uuid v3.3.0+incompatible // indirect - github.com/gogo/protobuf v1.2.2-0.20190723190241-65acae22fc9d // indirect - github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e // indirect github.com/golang/mock v1.5.0 github.com/golang/protobuf v1.4.3 github.com/google/go-cmp v0.5.5 - github.com/google/go-querystring v1.1.0 // indirect - github.com/google/gofuzz v1.0.0 // indirect github.com/google/uuid v1.2.0 - github.com/googleapis/gax-go/v2 v2.0.5 // indirect - github.com/googleapis/gnostic v0.0.0-20170729233727-0c5108395e2d // indirect github.com/gophercloud/gophercloud v0.10.1-0.20200424014253-c3bfe50899e5 github.com/gophercloud/utils v0.0.0-20200423144003-7c72efc7435d - github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1 // indirect github.com/hashicorp/aws-sdk-go-base v0.7.1 github.com/hashicorp/consul/api v1.9.1 github.com/hashicorp/consul/sdk v0.8.0 @@ -74,40 +37,23 @@ require ( github.com/hashicorp/go-cleanhttp v0.5.2 github.com/hashicorp/go-getter v1.5.2 github.com/hashicorp/go-hclog v0.15.0 - github.com/hashicorp/go-immutable-radix v1.0.0 // indirect - github.com/hashicorp/go-msgpack v0.5.4 // indirect github.com/hashicorp/go-multierror v1.1.1 github.com/hashicorp/go-plugin v1.4.1 github.com/hashicorp/go-retryablehttp v0.5.2 - github.com/hashicorp/go-rootcerts v1.0.2 // indirect - github.com/hashicorp/go-safetemp v1.0.0 // indirect - github.com/hashicorp/go-slug v0.4.1 // indirect github.com/hashicorp/go-tfe v0.15.0 github.com/hashicorp/go-uuid v1.0.1 github.com/hashicorp/go-version v1.2.1 - github.com/hashicorp/golang-lru v0.5.1 // indirect github.com/hashicorp/hcl v0.0.0-20170504190234-a4b07c25de5f github.com/hashicorp/hcl/v2 v2.10.1 - github.com/hashicorp/jsonapi v0.0.0-20210518035559-1e50d74c8db3 // indirect - github.com/hashicorp/serf v0.9.5 // indirect github.com/hashicorp/terraform-config-inspect v0.0.0-20210209133302-4fd17a0faac2 github.com/hashicorp/terraform-svchost v0.0.0-20200729002733-f050f53b9734 - github.com/hashicorp/yamux v0.0.0-20181012175058-2f1d1f20f75d // indirect - github.com/huandu/xstrings v1.3.2 // indirect - github.com/imdario/mergo v0.3.11 // indirect github.com/jmespath/go-jmespath v0.4.0 github.com/joyent/triton-go v0.0.0-20180313100802-d8f9c0314926 - github.com/json-iterator/go v1.1.7 // indirect - github.com/jstemmer/go-junit-report v0.9.1 // indirect - github.com/jtolds/gls v4.2.1+incompatible // indirect github.com/kardianos/osext v0.0.0-20190222173326-2bc1f35cddc0 - github.com/klauspost/compress v1.11.2 // indirect github.com/lib/pq v1.8.0 github.com/likexian/gokit v0.20.15 github.com/lusis/go-artifactory v0.0.0-20160115162124-7e4ce345df82 - github.com/masterzen/simplexml v0.0.0-20190410153822-31eea3082786 // indirect github.com/masterzen/winrm v0.0.0-20200615185753-c42b5136ff88 - github.com/mattn/go-colorable v0.1.6 // indirect github.com/mattn/go-isatty v0.0.12 
github.com/mattn/go-shellwords v1.0.4 github.com/mitchellh/cli v1.1.2 @@ -115,71 +61,128 @@ require ( github.com/mitchellh/copystructure v1.0.0 github.com/mitchellh/go-homedir v1.1.0 github.com/mitchellh/go-linereader v0.0.0-20190213213312-1b945b3263eb - github.com/mitchellh/go-testing-interface v1.0.0 // indirect github.com/mitchellh/go-wordwrap v1.0.0 github.com/mitchellh/gox v1.0.1 - github.com/mitchellh/iochan v1.0.0 // indirect github.com/mitchellh/mapstructure v1.1.2 github.com/mitchellh/panicwrap v1.0.0 github.com/mitchellh/reflectwalk v1.0.1 - github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect - github.com/modern-go/reflect2 v1.0.1 // indirect - github.com/mozillazg/go-httpheader v0.3.0 // indirect github.com/nishanths/exhaustive v0.2.3 - github.com/nu7hatch/gouuid v0.0.0-20131221200532-179d4d0c4d8d // indirect - github.com/oklog/run v1.0.0 // indirect github.com/packer-community/winrmcp v0.0.0-20180921211025-c76d91c1e7db github.com/pkg/browser v0.0.0-20201207095918-0426ae3fba23 github.com/pkg/errors v0.9.1 github.com/posener/complete v1.2.3 - github.com/satori/go.uuid v1.2.0 // indirect - github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d // indirect - github.com/smartystreets/goconvey v0.0.0-20180222194500-ef6db91d284a // indirect github.com/spf13/afero v1.2.2 - github.com/spf13/pflag v1.0.3 // indirect github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common v1.0.232 github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/tag v1.0.233 github.com/tencentyun/cos-go-sdk-v5 v0.7.29 github.com/tombuildsstuff/giovanni v0.15.1 - github.com/ulikunitz/xz v0.5.8 // indirect - github.com/vmihailenco/msgpack/v4 v4.3.12 // indirect - github.com/vmihailenco/tagparser v0.1.1 // indirect github.com/xanzy/ssh-agent v0.2.1 github.com/xlab/treeprint v0.0.0-20161029104018-1d6e34225557 github.com/zclconf/go-cty v1.9.1 github.com/zclconf/go-cty-debug v0.0.0-20191215020915-b22d67c1ba0b github.com/zclconf/go-cty-yaml v1.0.2 go.etcd.io/etcd v0.5.0-alpha.5.0.20210428180535-15715dcf1ace - go.opencensus.io v0.23.0 // indirect - go.uber.org/atomic v1.3.2 // indirect - go.uber.org/multierr v1.1.0 // indirect - go.uber.org/zap v1.10.0 // indirect golang.org/x/crypto v0.0.0-20210322153248-0c34fe9e7dc2 - golang.org/x/lint v0.0.0-20201208152925-83fdc39ff7b5 // indirect golang.org/x/mod v0.4.2 golang.org/x/net v0.0.0-20210614182718-04defd469f4e golang.org/x/oauth2 v0.0.0-20210313182246-cd4f82c27b84 golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c golang.org/x/term v0.0.0-20201210144234-2321bbc49cbf golang.org/x/text v0.3.6 - golang.org/x/time v0.0.0-20191024005414-555d28b269f0 // indirect golang.org/x/tools v0.1.4 - golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 // indirect google.golang.org/api v0.44.0-impersonate-preview - google.golang.org/appengine v1.6.7 // indirect - google.golang.org/genproto v0.0.0-20210319143718-93e7006c17a6 // indirect google.golang.org/grpc v1.36.0 google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.1.0 google.golang.org/protobuf v1.25.0 - gopkg.in/inf.v0 v0.9.0 // indirect - gopkg.in/ini.v1 v1.42.0 // indirect - gopkg.in/yaml.v2 v2.3.0 // indirect honnef.co/go/tools v0.0.1-2020.1.4 k8s.io/api v0.0.0-20190620084959-7cf5895f2711 k8s.io/apimachinery v0.0.0-20190913080033-27d36303b655 k8s.io/client-go v10.0.0+incompatible - k8s.io/klog v0.4.0 // indirect k8s.io/utils v0.0.0-20200411171748-3d5a2fe318e4 +) + +require ( + cloud.google.com/go v0.79.0 // indirect + github.com/Azure/go-autorest v14.2.0+incompatible // indirect + 
github.com/Azure/go-autorest/autorest/adal v0.9.13 // indirect + github.com/Azure/go-autorest/autorest/azure/cli v0.4.2 // indirect + github.com/Azure/go-autorest/autorest/date v0.3.0 // indirect + github.com/Azure/go-autorest/autorest/to v0.4.0 // indirect + github.com/Azure/go-autorest/autorest/validation v0.3.1 // indirect + github.com/Azure/go-autorest/logger v0.2.1 // indirect + github.com/Azure/go-autorest/tracing v0.6.0 // indirect + github.com/Azure/go-ntlmssp v0.0.0-20200615164410-66371956d46c // indirect + github.com/BurntSushi/toml v0.3.1 // indirect + github.com/ChrisTrenkamp/goxpath v0.0.0-20190607011252-c5096ec8773d // indirect + github.com/Masterminds/goutils v1.1.0 // indirect + github.com/Masterminds/semver v1.5.0 // indirect + github.com/Masterminds/sprig v2.22.0+incompatible // indirect + github.com/abdullin/seq v0.0.0-20160510034733-d5467c17e7af // indirect + github.com/antchfx/xpath v0.0.0-20190129040759-c8489ed3251e // indirect + github.com/antchfx/xquery v0.0.0-20180515051857-ad5b8c7a47b0 // indirect + github.com/apparentlymart/go-textseg/v13 v13.0.0 // indirect + github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da // indirect + github.com/armon/go-radix v1.0.0 // indirect + github.com/baiyubin/aliyun-sts-go-sdk v0.0.0-20180326062324-cfa1a18b161f // indirect + github.com/bgentry/go-netrc v0.0.0-20140422174119-9fd32a8b3d3d // indirect + github.com/coreos/go-semver v0.2.0 // indirect + github.com/coreos/go-systemd v0.0.0-20181012123002-c6f51f82210d // indirect + github.com/dimchansky/utfbom v1.1.1 // indirect + github.com/dylanmei/iso8601 v0.1.0 // indirect + github.com/fatih/color v1.9.0 // indirect + github.com/form3tech-oss/jwt-go v3.2.2+incompatible // indirect + github.com/gofrs/uuid v3.3.0+incompatible // indirect + github.com/gogo/protobuf v1.2.2-0.20190723190241-65acae22fc9d // indirect + github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e // indirect + github.com/google/go-querystring v1.1.0 // indirect + github.com/google/gofuzz v1.0.0 // indirect + github.com/googleapis/gax-go/v2 v2.0.5 // indirect + github.com/googleapis/gnostic v0.0.0-20170729233727-0c5108395e2d // indirect + github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1 // indirect + github.com/hashicorp/go-immutable-radix v1.0.0 // indirect + github.com/hashicorp/go-msgpack v0.5.4 // indirect + github.com/hashicorp/go-rootcerts v1.0.2 // indirect + github.com/hashicorp/go-safetemp v1.0.0 // indirect + github.com/hashicorp/go-slug v0.4.1 // indirect + github.com/hashicorp/golang-lru v0.5.1 // indirect + github.com/hashicorp/jsonapi v0.0.0-20210518035559-1e50d74c8db3 // indirect + github.com/hashicorp/serf v0.9.5 // indirect + github.com/hashicorp/yamux v0.0.0-20181012175058-2f1d1f20f75d // indirect + github.com/huandu/xstrings v1.3.2 // indirect + github.com/imdario/mergo v0.3.11 // indirect + github.com/json-iterator/go v1.1.7 // indirect + github.com/jstemmer/go-junit-report v0.9.1 // indirect + github.com/jtolds/gls v4.2.1+incompatible // indirect + github.com/klauspost/compress v1.11.2 // indirect + github.com/masterzen/simplexml v0.0.0-20190410153822-31eea3082786 // indirect + github.com/mattn/go-colorable v0.1.6 // indirect + github.com/mitchellh/go-testing-interface v1.0.0 // indirect + github.com/mitchellh/iochan v1.0.0 // indirect + github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect + github.com/modern-go/reflect2 v1.0.1 // indirect + github.com/mozillazg/go-httpheader v0.3.0 // indirect + github.com/nu7hatch/gouuid 
v0.0.0-20131221200532-179d4d0c4d8d // indirect + github.com/oklog/run v1.0.0 // indirect + github.com/satori/go.uuid v1.2.0 // indirect + github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d // indirect + github.com/smartystreets/goconvey v0.0.0-20180222194500-ef6db91d284a // indirect + github.com/spf13/pflag v1.0.3 // indirect + github.com/ulikunitz/xz v0.5.8 // indirect + github.com/vmihailenco/msgpack/v4 v4.3.12 // indirect + github.com/vmihailenco/tagparser v0.1.1 // indirect + go.opencensus.io v0.23.0 // indirect + go.uber.org/atomic v1.3.2 // indirect + go.uber.org/multierr v1.1.0 // indirect + go.uber.org/zap v1.10.0 // indirect + golang.org/x/lint v0.0.0-20201208152925-83fdc39ff7b5 // indirect + golang.org/x/time v0.0.0-20191024005414-555d28b269f0 // indirect + golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 // indirect + google.golang.org/appengine v1.6.7 // indirect + google.golang.org/genproto v0.0.0-20210319143718-93e7006c17a6 // indirect + gopkg.in/inf.v0 v0.9.0 // indirect + gopkg.in/ini.v1 v1.42.0 // indirect + gopkg.in/yaml.v2 v2.3.0 // indirect + k8s.io/klog v0.4.0 // indirect sigs.k8s.io/yaml v1.1.0 // indirect ) From 3f4b680f1cb121568e6f6943b56ee754165857ee Mon Sep 17 00:00:00 2001 From: James Bardin Date: Wed, 6 Oct 2021 12:51:03 -0400 Subject: [PATCH 146/644] Check for nil change during apply Because NodeAbstractResourceInstance.readDiff can return a nil change, we must check for that in all callers. --- internal/terraform/node_resource_destroy.go | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/internal/terraform/node_resource_destroy.go b/internal/terraform/node_resource_destroy.go index c4f4d4b015e5..8dd9e21b946e 100644 --- a/internal/terraform/node_resource_destroy.go +++ b/internal/terraform/node_resource_destroy.go @@ -166,7 +166,7 @@ func (n *NodeDestroyResourceInstance) managedResourceExecute(ctx EvalContext) (d changeApply, err = n.readDiff(ctx, providerSchema) diags = diags.Append(err) - if diags.HasErrors() { + if changeApply == nil || diags.HasErrors() { return diags } From 2e6b6e9a6b7d5802663d9eb6c578bf0900512600 Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Mon, 11 Oct 2021 16:37:11 -0700 Subject: [PATCH 147/644] go.mod: go get google.golang.org/protobuf@v1.27.1 --- go.mod | 4 ++-- go.sum | 7 +++++-- internal/plans/internal/planproto/planfile.pb.go | 7 +------ internal/tfplugin5/tfplugin5.pb.go | 7 +------ internal/tfplugin6/tfplugin6.pb.go | 7 +------ 5 files changed, 10 insertions(+), 22 deletions(-) diff --git a/go.mod b/go.mod index 756e51bef18d..65282e52fa96 100644 --- a/go.mod +++ b/go.mod @@ -23,7 +23,7 @@ require ( github.com/dylanmei/winrmtest v0.0.0-20190225150635-99b7fe2fddf1 github.com/go-test/deep v1.0.3 github.com/golang/mock v1.5.0 - github.com/golang/protobuf v1.4.3 + github.com/golang/protobuf v1.5.0 github.com/google/go-cmp v0.5.5 github.com/google/uuid v1.2.0 github.com/gophercloud/gophercloud v0.10.1-0.20200424014253-c3bfe50899e5 @@ -93,7 +93,7 @@ require ( google.golang.org/api v0.44.0-impersonate-preview google.golang.org/grpc v1.36.0 google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.1.0 - google.golang.org/protobuf v1.25.0 + google.golang.org/protobuf v1.27.1 honnef.co/go/tools v0.0.1-2020.1.4 k8s.io/api v0.0.0-20190620084959-7cf5895f2711 k8s.io/apimachinery v0.0.0-20190913080033-27d36303b655 diff --git a/go.sum b/go.sum index b5c72603f3a3..7d2209ad1619 100644 --- a/go.sum +++ b/go.sum @@ -256,8 +256,9 @@ github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:W 
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0= github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8= github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI= -github.com/golang/protobuf v1.4.3 h1:JjCZWpVbqXDqFVmTfYWEVTMIYrL/NPdPSCHPJ0T/raM= github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI= +github.com/golang/protobuf v1.5.0 h1:LUVKkCeviFUMKqHa4tXIIij/lbhnMbP7Fn5wKdKkRh4= +github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk= github.com/google/btree v0.0.0-20160524151835-7d79101e329e/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ= github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ= github.com/google/btree v1.0.0 h1:0udJVsspx3VBr5FwtLhQQtuAsVc79tTq0ocGIPAU6qo= @@ -1037,8 +1038,10 @@ google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2 google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU= google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU= google.golang.org/protobuf v1.24.0/go.mod h1:r/3tXBNzIEhYS9I1OUVjXDlt8tc493IdKGjtUeSXeh4= -google.golang.org/protobuf v1.25.0 h1:Ejskq+SyPohKW+1uil0JJMtmHCgJPJ/qWTxr8qp+R4c= google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c= +google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw= +google.golang.org/protobuf v1.27.1 h1:SnqbnDw1V7RiZcXPx5MEeqPv2s79L9i7BJUlG/+RurQ= +google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc= gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw= gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= diff --git a/internal/plans/internal/planproto/planfile.pb.go b/internal/plans/internal/planproto/planfile.pb.go index a8810e0d746a..756cfdabeb06 100644 --- a/internal/plans/internal/planproto/planfile.pb.go +++ b/internal/plans/internal/planproto/planfile.pb.go @@ -1,13 +1,12 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.25.0 +// protoc-gen-go v1.27.1 // protoc v3.15.6 // source: planfile.proto package planproto import ( - proto "github.com/golang/protobuf/proto" protoreflect "google.golang.org/protobuf/reflect/protoreflect" protoimpl "google.golang.org/protobuf/runtime/protoimpl" reflect "reflect" @@ -21,10 +20,6 @@ const ( _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20) ) -// This is a compile-time assertion that a sufficiently up-to-date version -// of the legacy proto package is being used. -const _ = proto.ProtoPackageIsVersion4 - // Mode describes the planning mode that created the plan. type Mode int32 diff --git a/internal/tfplugin5/tfplugin5.pb.go b/internal/tfplugin5/tfplugin5.pb.go index d7db5a4f6448..ae5ec5e9b9ae 100644 --- a/internal/tfplugin5/tfplugin5.pb.go +++ b/internal/tfplugin5/tfplugin5.pb.go @@ -19,7 +19,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.25.0 +// protoc-gen-go v1.27.1 // protoc v3.15.6 // source: tfplugin5.proto @@ -27,7 +27,6 @@ package tfplugin5 import ( context "context" - proto "github.com/golang/protobuf/proto" grpc "google.golang.org/grpc" codes "google.golang.org/grpc/codes" status "google.golang.org/grpc/status" @@ -44,10 +43,6 @@ const ( _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20) ) -// This is a compile-time assertion that a sufficiently up-to-date version -// of the legacy proto package is being used. -const _ = proto.ProtoPackageIsVersion4 - type StringKind int32 const ( diff --git a/internal/tfplugin6/tfplugin6.pb.go b/internal/tfplugin6/tfplugin6.pb.go index afd77a0eedbb..848aed807392 100644 --- a/internal/tfplugin6/tfplugin6.pb.go +++ b/internal/tfplugin6/tfplugin6.pb.go @@ -19,7 +19,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.25.0 +// protoc-gen-go v1.27.1 // protoc v3.15.6 // source: tfplugin6.proto @@ -27,7 +27,6 @@ package tfplugin6 import ( context "context" - proto "github.com/golang/protobuf/proto" grpc "google.golang.org/grpc" codes "google.golang.org/grpc/codes" status "google.golang.org/grpc/status" @@ -44,10 +43,6 @@ const ( _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20) ) -// This is a compile-time assertion that a sufficiently up-to-date version -// of the legacy proto package is being used. -const _ = proto.ProtoPackageIsVersion4 - type StringKind int32 const ( From 2dd15caf871df59869fb3ec3eb828b530b455c6c Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Fri, 8 Oct 2021 20:35:55 +0000 Subject: [PATCH 148/644] Bump github.com/hashicorp/go-plugin from 1.4.1 to 1.4.3 Bumps [github.com/hashicorp/go-plugin](https://github.com/hashicorp/go-plugin) from 1.4.1 to 1.4.3. - [Release notes](https://github.com/hashicorp/go-plugin/releases) - [Commits](https://github.com/hashicorp/go-plugin/compare/v1.4.1...v1.4.3) --- updated-dependencies: - dependency-name: github.com/hashicorp/go-plugin dependency-type: direct:production update-type: version-update:semver-patch ... 
Signed-off-by: dependabot[bot] --- go.mod | 2 +- go.sum | 4 ++-- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/go.mod b/go.mod index 65282e52fa96..87c8b4bbf5d4 100644 --- a/go.mod +++ b/go.mod @@ -38,7 +38,7 @@ require ( github.com/hashicorp/go-getter v1.5.2 github.com/hashicorp/go-hclog v0.15.0 github.com/hashicorp/go-multierror v1.1.1 - github.com/hashicorp/go-plugin v1.4.1 + github.com/hashicorp/go-plugin v1.4.3 github.com/hashicorp/go-retryablehttp v0.5.2 github.com/hashicorp/go-tfe v0.15.0 github.com/hashicorp/go-uuid v1.0.1 diff --git a/go.sum b/go.sum index 7d2209ad1619..e2505ab34d6a 100644 --- a/go.sum +++ b/go.sum @@ -359,8 +359,8 @@ github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHh github.com/hashicorp/go-multierror v1.1.0/go.mod h1:spPvp8C1qA32ftKqdAHm4hHTbPw+vmowP0z+KUhOZdA= github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo= github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM= -github.com/hashicorp/go-plugin v1.4.1 h1:6UltRQlLN9iZO513VveELp5xyaFxVD2+1OVylE+2E+w= -github.com/hashicorp/go-plugin v1.4.1/go.mod h1:5fGEH17QVwTTcR0zV7yhDPLLmFX9YSZ38b18Udy6vYQ= +github.com/hashicorp/go-plugin v1.4.3 h1:DXmvivbWD5qdiBts9TpBC7BYL1Aia5sxbRgQB+v6UZM= +github.com/hashicorp/go-plugin v1.4.3/go.mod h1:5fGEH17QVwTTcR0zV7yhDPLLmFX9YSZ38b18Udy6vYQ= github.com/hashicorp/go-retryablehttp v0.5.2 h1:AoISa4P4IsW0/m4T6St8Yw38gTl5GtBAgfkhYh1xAz4= github.com/hashicorp/go-retryablehttp v0.5.2/go.mod h1:9B5zBasrRhHXnJnui7y6sL7es7NDiJgTc6Er0maI1Xs= github.com/hashicorp/go-rootcerts v1.0.2 h1:jzhAVGtqPKbwpyCPELlgNWhE1znq+qwJtW5Oi2viEzc= From eec4a838e0a8ccd3192010903a21a1d818eeaa31 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Fri, 8 Oct 2021 20:35:14 +0000 Subject: [PATCH 149/644] Bump github.com/mitchellh/copystructure from 1.0.0 to 1.2.0 Bumps [github.com/mitchellh/copystructure](https://github.com/mitchellh/copystructure) from 1.0.0 to 1.2.0. - [Release notes](https://github.com/mitchellh/copystructure/releases) - [Commits](https://github.com/mitchellh/copystructure/compare/v1.0.0...v1.2.0) --- updated-dependencies: - dependency-name: github.com/mitchellh/copystructure dependency-type: direct:production update-type: version-update:semver-minor ... 
Signed-off-by: dependabot[bot] --- go.mod | 4 ++-- go.sum | 7 ++++--- 2 files changed, 6 insertions(+), 5 deletions(-) diff --git a/go.mod b/go.mod index 87c8b4bbf5d4..d090111e0d76 100644 --- a/go.mod +++ b/go.mod @@ -58,14 +58,14 @@ require ( github.com/mattn/go-shellwords v1.0.4 github.com/mitchellh/cli v1.1.2 github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db - github.com/mitchellh/copystructure v1.0.0 + github.com/mitchellh/copystructure v1.2.0 github.com/mitchellh/go-homedir v1.1.0 github.com/mitchellh/go-linereader v0.0.0-20190213213312-1b945b3263eb github.com/mitchellh/go-wordwrap v1.0.0 github.com/mitchellh/gox v1.0.1 github.com/mitchellh/mapstructure v1.1.2 github.com/mitchellh/panicwrap v1.0.0 - github.com/mitchellh/reflectwalk v1.0.1 + github.com/mitchellh/reflectwalk v1.0.2 github.com/nishanths/exhaustive v0.2.3 github.com/packer-community/winrmcp v0.0.0-20180921211025-c76d91c1e7db github.com/pkg/browser v0.0.0-20201207095918-0426ae3fba23 diff --git a/go.sum b/go.sum index e2505ab34d6a..908cae84a14b 100644 --- a/go.sum +++ b/go.sum @@ -499,8 +499,9 @@ github.com/mitchellh/cli v1.1.2 h1:PvH+lL2B7IQ101xQL63Of8yFS2y+aDlsFcsqNc+u/Kw= github.com/mitchellh/cli v1.1.2/go.mod h1:6iaV0fGdElS6dPBx0EApTxHrcWvmJphyh2n8YBLPPZ4= github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db h1:62I3jR2EmQ4l5rM/4FEfDWcRD+abF5XlKShorW5LRoQ= github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db/go.mod h1:l0dey0ia/Uv7NcFFVbCLtqEBQbrT4OCwCSKTEv6enCw= -github.com/mitchellh/copystructure v1.0.0 h1:Laisrj+bAB6b/yJwB5Bt3ITZhGJdqmxquMKeZ+mmkFQ= github.com/mitchellh/copystructure v1.0.0/go.mod h1:SNtv71yrdKgLRyLFxmLdkAbkKEFWgYaq1OVrnRcwhnw= +github.com/mitchellh/copystructure v1.2.0 h1:vpKXTN4ewci03Vljg/q9QvCGUDttBOGBIa15WveJJGw= +github.com/mitchellh/copystructure v1.2.0/go.mod h1:qLl+cE2AmVv+CoeAwDPye/v+N2HKCj9FbZEVFJRxO9s= github.com/mitchellh/go-homedir v1.0.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0= github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y= github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0= @@ -522,8 +523,8 @@ github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh github.com/mitchellh/panicwrap v1.0.0 h1:67zIyVakCIvcs69A0FGfZjBdPleaonSgGlXRSRlb6fE= github.com/mitchellh/panicwrap v1.0.0/go.mod h1:pKvZHwWrZowLUzftuFq7coarnxbBXU4aQh3N0BJOeeA= github.com/mitchellh/reflectwalk v1.0.0/go.mod h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw= -github.com/mitchellh/reflectwalk v1.0.1 h1:FVzMWA5RllMAKIdUSC8mdWo3XtwoecrH79BY70sEEpE= -github.com/mitchellh/reflectwalk v1.0.1/go.mod h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw= +github.com/mitchellh/reflectwalk v1.0.2 h1:G2LzWKi524PWgd3mLHV8Y5k7s6XUvT0Gef6zxSIeXaQ= +github.com/mitchellh/reflectwalk v1.0.2/go.mod h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw= github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg= github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= From 55f0d063146eee6be5918587e5eff2fd42d01c2f Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Tue, 12 Oct 2021 10:23:56 -0700 Subject: [PATCH 150/644] go.mod: go get github.com/lib/pq@v1.10.3 This is just a routine upgrade to the latest v1 release. 
--- go.mod | 2 +- go.sum | 4 ++-- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/go.mod b/go.mod index d090111e0d76..11b19549a46b 100644 --- a/go.mod +++ b/go.mod @@ -50,7 +50,7 @@ require ( github.com/jmespath/go-jmespath v0.4.0 github.com/joyent/triton-go v0.0.0-20180313100802-d8f9c0314926 github.com/kardianos/osext v0.0.0-20190222173326-2bc1f35cddc0 - github.com/lib/pq v1.8.0 + github.com/lib/pq v1.10.3 github.com/likexian/gokit v0.20.15 github.com/lusis/go-artifactory v0.0.0-20160115162124-7e4ce345df82 github.com/masterzen/winrm v0.0.0-20200615185753-c42b5136ff88 diff --git a/go.sum b/go.sum index 908cae84a14b..dc15ae95e8a9 100644 --- a/go.sum +++ b/go.sum @@ -456,8 +456,8 @@ github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= github.com/kylelemons/godebug v0.0.0-20170820004349-d65d576e9348/go.mod h1:B69LEHPfb2qLo0BaaOLcbitczOKLWTsrBG9LczfCD4k= github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc= github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw= -github.com/lib/pq v1.8.0 h1:9xohqzkUwzR4Ga4ivdTcawVS89YSDVxXMa3xJX3cGzg= -github.com/lib/pq v1.8.0/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o= +github.com/lib/pq v1.10.3 h1:v9QZf2Sn6AmjXtQeFpdoq/eaNtYP6IN+7lcrygsIAtg= +github.com/lib/pq v1.10.3/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o= github.com/likexian/gokit v0.0.0-20190309162924-0a377eecf7aa/go.mod h1:QdfYv6y6qPA9pbBA2qXtoT8BMKha6UyNbxWGWl/9Jfk= github.com/likexian/gokit v0.0.0-20190418170008-ace88ad0983b/go.mod h1:KKqSnk/VVSW8kEyO2vVCXoanzEutKdlBAPohmGXkxCk= github.com/likexian/gokit v0.0.0-20190501133040-e77ea8b19cdc/go.mod h1:3kvONayqCaj+UgrRZGpgfXzHdMYCAO0KAt4/8n0L57Y= From 2fd5ca37674dd869bf784586556ec0ad4bc2d7eb Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Tue, 12 Oct 2021 17:24:56 +0000 Subject: [PATCH 151/644] build(deps): bump honnef.co/go/tools from 0.0.1-2020.1.4 to 0.3.0-0.dev Bumps [honnef.co/go/tools](https://github.com/dominikh/go-tools) from 0.0.1-2020.1.4 to 0.3.0-0.dev. - [Release notes](https://github.com/dominikh/go-tools/releases) - [Commits](https://github.com/dominikh/go-tools/compare/v0.0.1-2020.1.4...v0.3.0-0.dev) --- updated-dependencies: - dependency-name: honnef.co/go/tools dependency-type: direct:production update-type: version-update:semver-minor ... 
Signed-off-by: dependabot[bot] --- go.mod | 2 +- go.sum | 3 ++- 2 files changed, 3 insertions(+), 2 deletions(-) diff --git a/go.mod b/go.mod index 11b19549a46b..db77246a2e02 100644 --- a/go.mod +++ b/go.mod @@ -94,7 +94,7 @@ require ( google.golang.org/grpc v1.36.0 google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.1.0 google.golang.org/protobuf v1.27.1 - honnef.co/go/tools v0.0.1-2020.1.4 + honnef.co/go/tools v0.3.0-0.dev k8s.io/api v0.0.0-20190620084959-7cf5895f2711 k8s.io/apimachinery v0.0.0-20190913080033-27d36303b655 k8s.io/client-go v10.0.0+incompatible diff --git a/go.sum b/go.sum index dc15ae95e8a9..bc61ec030ba9 100644 --- a/go.sum +++ b/go.sum @@ -1074,8 +1074,9 @@ honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWh honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg= honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k= -honnef.co/go/tools v0.0.1-2020.1.4 h1:UoveltGrhghAA7ePc+e+QYDHXrBps2PqFZiHkGR/xK8= honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k= +honnef.co/go/tools v0.3.0-0.dev h1:6vzhjcOJu1nJRa2G8QLXf3DPeg601NuerY16vrb01GY= +honnef.co/go/tools v0.3.0-0.dev/go.mod h1:lPVVZ2BS5TfnjLyizF7o7hv7j9/L+8cZY2hLyjP9cGY= k8s.io/api v0.0.0-20190620084959-7cf5895f2711 h1:BblVYz/wE5WtBsD/Gvu54KyBUTJMflolzc5I2DTvh50= k8s.io/api v0.0.0-20190620084959-7cf5895f2711/go.mod h1:TBhBqb1AWbBQbW3XRusr7n7E4v2+5ZY8r8sAMnyFC5A= k8s.io/apimachinery v0.0.0-20190612205821-1799e75a0719/go.mod h1:I4A+glKBHiTgiEjQiCCQfCAIcIMFGt291SmsvcrFzJA= From 02ca4e970c44e2ec6cbe7dab6ab90f7866aacb38 Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Tue, 12 Oct 2021 09:59:02 -0700 Subject: [PATCH 152/644] go.mod: replace github.com/dgrijalva/jwt-go with .../golang-jwt/jwt CVE-2020-26160 is a high-severity advisory reported against this module. The dgrijalva package is no longer maintained but our legacy etcv2 backend depends on it indirectly, via go.etcd.io/etcd/client. The golang-jwt package is the blessed successor of the original, and has a v3 line which is compatible with the v3 line of dgrijalva, and so through this replace we can get a fix for the advisory without other significant behavior change. We've preserved the etcdv2 backend as-is on a best-effort basis in order to support anyone who is already using it, but recommend that users switch to etcdv3 or to some other backend for ongoing use. We also have future plans to make state storage be a matter for provider plugins rather than built in to Terraform CLI, at which point this backend will either become obsolete or be factored out into its own plugin, at which point we can remove this "replace" directive and the associated dependency altogether. --- go.mod | 6 ++++++ go.sum | 5 ++--- 2 files changed, 8 insertions(+), 3 deletions(-) diff --git a/go.mod b/go.mod index db77246a2e02..dfcde125f7e5 100644 --- a/go.mod +++ b/go.mod @@ -192,4 +192,10 @@ replace github.com/golang/mock v1.5.0 => github.com/golang/mock v1.4.4 replace k8s.io/client-go => k8s.io/client-go v0.0.0-20190620085101-78d2af792bab +// github.com/dgrijalva/jwt-go is no longer maintained but is an indirect +// dependency of the old etcdv2 backend, and so we need to keep this working +// until that backend is removed. github.com/golang-jwt/jwt/v3 is a drop-in +// replacement that includes a fix for CVE-2020-26160. 
+replace github.com/dgrijalva/jwt-go => github.com/golang-jwt/jwt v3.2.1+incompatible + go 1.17 diff --git a/go.sum b/go.sum index bc61ec030ba9..bec4551f8e45 100644 --- a/go.sum +++ b/go.sum @@ -172,9 +172,6 @@ github.com/davecgh/go-spew v0.0.0-20151105211317-5215b55f46b2/go.mod h1:J7Y8YcW2 github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/dgrijalva/jwt-go v0.0.0-20160705203006-01aeca54ebda/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ= -github.com/dgrijalva/jwt-go v3.2.0+incompatible h1:7qlOGliEKZXTDg6OTjfoBKDXWrumCAMpl/TFQ4/5kLM= -github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ= github.com/dimchansky/utfbom v1.1.0/go.mod h1:rO41eb7gLfo8SF1jd9F8HplJm1Fewwi4mQvIirEdv+8= github.com/dimchansky/utfbom v1.1.1 h1:vV6w1AhK4VMnhBno/TPVCoK9U/LP0PkLCS9tbxHdi/U= github.com/dimchansky/utfbom v1.1.1/go.mod h1:SxdoEBH5qIqFocHMyGOXVAybYJdr71b1Q/j0mACtrfE= @@ -226,6 +223,8 @@ github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7a github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4= github.com/gogo/protobuf v1.2.2-0.20190723190241-65acae22fc9d h1:3PaI8p3seN09VjbTYC/QWlUZdZ1qS1zGjy7LH2Wt07I= github.com/gogo/protobuf v1.2.2-0.20190723190241-65acae22fc9d/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o= +github.com/golang-jwt/jwt v3.2.1+incompatible h1:73Z+4BJcrTC+KczS6WvTPvRGOp1WmfEP4Q1lOd9Z/+c= +github.com/golang-jwt/jwt v3.2.1+incompatible/go.mod h1:8pz2t5EyA70fFQQSrl6XZXzqecmYZeUEB8OUGHkxJ+I= github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b h1:VKtxabqXZkF25pY9ekfRL6a582T4P37/31XEstQ5p58= github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q= github.com/golang/groupcache v0.0.0-20160516000752-02826c3e7903/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= From 051629d74a508024f06fef640aba395e3ce4a911 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Tue, 12 Oct 2021 17:33:17 +0000 Subject: [PATCH 153/644] build(deps): bump github.com/agext/levenshtein from 1.2.2 to 1.2.3 Bumps [github.com/agext/levenshtein](https://github.com/agext/levenshtein) from 1.2.2 to 1.2.3. - [Release notes](https://github.com/agext/levenshtein/releases) - [Commits](https://github.com/agext/levenshtein/compare/v1.2.2...v1.2.3) --- updated-dependencies: - dependency-name: github.com/agext/levenshtein dependency-type: direct:production update-type: version-update:semver-patch ... 
Signed-off-by: dependabot[bot] --- go.mod | 2 +- go.sum | 3 ++- 2 files changed, 3 insertions(+), 2 deletions(-) diff --git a/go.mod b/go.mod index dfcde125f7e5..35b4b645a789 100644 --- a/go.mod +++ b/go.mod @@ -4,7 +4,7 @@ require ( cloud.google.com/go/storage v1.10.0 github.com/Azure/azure-sdk-for-go v52.5.0+incompatible github.com/Azure/go-autorest/autorest v0.11.18 - github.com/agext/levenshtein v1.2.2 + github.com/agext/levenshtein v1.2.3 github.com/aliyun/alibaba-cloud-sdk-go v0.0.0-20190329064014-6e358769c32a github.com/aliyun/aliyun-oss-go-sdk v0.0.0-20190103054945-8205d1f41e70 github.com/aliyun/aliyun-tablestore-go-sdk v4.1.2+incompatible diff --git a/go.sum b/go.sum index bec4551f8e45..fa2a319b96f1 100644 --- a/go.sum +++ b/go.sum @@ -93,8 +93,9 @@ github.com/QcloudApi/qcloud_sign_golang v0.0.0-20141224014652-e4130a326409/go.mo github.com/abdullin/seq v0.0.0-20160510034733-d5467c17e7af h1:DBNMBMuMiWYu0b+8KMJuWmfCkcxl09JwdlqwDZZ6U14= github.com/abdullin/seq v0.0.0-20160510034733-d5467c17e7af/go.mod h1:5Jv4cbFiHJMsVxt52+i0Ha45fjshj6wxYr1r19tB9bw= github.com/agext/levenshtein v1.2.1/go.mod h1:JEDfjyjHDjOF/1e4FlBE/PkbqA9OfWu2ki2W0IB5558= -github.com/agext/levenshtein v1.2.2 h1:0S/Yg6LYmFJ5stwQeRp6EeOcCbj7xiqQSdNelsXvaqE= github.com/agext/levenshtein v1.2.2/go.mod h1:JEDfjyjHDjOF/1e4FlBE/PkbqA9OfWu2ki2W0IB5558= +github.com/agext/levenshtein v1.2.3 h1:YB2fHEn0UJagG8T1rrWknE3ZQzWM06O8AMAatNn7lmo= +github.com/agext/levenshtein v1.2.3/go.mod h1:JEDfjyjHDjOF/1e4FlBE/PkbqA9OfWu2ki2W0IB5558= github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc= github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0= github.com/aliyun/alibaba-cloud-sdk-go v0.0.0-20190329064014-6e358769c32a h1:APorzFpCcv6wtD5vmRWYqNm4N55kbepL7c7kTq9XI6A= From 656f03b250a8ce532c9ab6c7d9f75f6746b090d9 Mon Sep 17 00:00:00 2001 From: James Bardin Date: Tue, 12 Oct 2021 13:47:47 -0400 Subject: [PATCH 154/644] fix test fixture had the instance in the wrong mod Make the state match the fixture config. The old test was not technically invalid, but because it caused multiple instances of the provider to be created, they were backed by the same MockProvider value resulting in the `*Called` fields interfering. --- internal/terraform/context_apply_test.go | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/internal/terraform/context_apply_test.go b/internal/terraform/context_apply_test.go index a525ab52d19c..8bacd2d9184a 100644 --- a/internal/terraform/context_apply_test.go +++ b/internal/terraform/context_apply_test.go @@ -1543,8 +1543,8 @@ func TestContext2Apply_destroyModuleVarProviderConfig(t *testing.T) { p := testProvider("aws") p.PlanResourceChangeFn = testDiffFn state := states.NewState() - root := state.EnsureModule(addrs.RootModuleInstance) - root.SetResourceInstanceCurrent( + child := state.EnsureModule(addrs.RootModuleInstance.Child("child", addrs.NoKey)) + child.SetResourceInstanceCurrent( mustResourceInstanceAddr("aws_instance.foo").Resource, &states.ResourceInstanceObjectSrc{ Status: states.ObjectReady, From 903ae5edfd1ed00db91d233ed9823b5c8069ba23 Mon Sep 17 00:00:00 2001 From: James Bardin Date: Tue, 12 Oct 2021 14:08:37 -0400 Subject: [PATCH 155/644] fix test fixtures with multiple providers Allow these to share the same backing MockProvider. 
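The mechanics behind this are simple: a provider factory is just a function that returns a provider, so a closure over one existing mock hands the same backing value back on every call, as the diff below does. A minimal, self-contained sketch of that behavior — the `Interface`, `Factory`, and `mockProvider` names here are illustrative stand-ins, not the real types from the internal providers package:

    package main

    import "fmt"

    // Illustrative stand-ins for providers.Interface and providers.Factory;
    // the real types live in Terraform's internal providers package.
    type Interface interface{ Configure() }
    type Factory func() (Interface, error)

    // mockProvider counts configure calls, much like the test mock's *Called fields.
    type mockProvider struct{ configureCalled int }

    func (m *mockProvider) Configure() { m.configureCalled++ }

    func main() {
        p := &mockProvider{}

        // A closure over p: every call returns the same backing mock, so calls
        // made through either returned instance land on one shared value.
        var factory Factory = func() (Interface, error) { return p, nil }

        a, _ := factory()
        b, _ := factory()
        a.Configure()
        b.Configure()

        fmt.Println(p.configureCalled) // 2: both instances share the same mock
    }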
--- internal/terraform/context_apply_test.go | 12 +++++++++--- internal/terraform/context_input_test.go | 4 +++- 2 files changed, 12 insertions(+), 4 deletions(-) diff --git a/internal/terraform/context_apply_test.go b/internal/terraform/context_apply_test.go index a525ab52d19c..06ff0de10be0 100644 --- a/internal/terraform/context_apply_test.go +++ b/internal/terraform/context_apply_test.go @@ -582,7 +582,9 @@ func TestContext2Apply_providerAlias(t *testing.T) { p.ApplyResourceChangeFn = testApplyFn ctx := testContext2(t, &ContextOpts{ Providers: map[addrs.Provider]providers.Factory{ - addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), + addrs.NewDefaultProvider("aws"): func() (providers.Interface, error) { + return p, nil + }, }, }) @@ -616,7 +618,9 @@ func TestContext2Apply_providerAliasConfigure(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Providers: map[addrs.Provider]providers.Factory{ - addrs.NewDefaultProvider("another"): testProviderFuncFixed(p2), + addrs.NewDefaultProvider("another"): func() (providers.Interface, error) { + return p2, nil + }, }, }) @@ -1554,7 +1558,9 @@ func TestContext2Apply_destroyModuleVarProviderConfig(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ Providers: map[addrs.Provider]providers.Factory{ - addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), + addrs.NewDefaultProvider("aws"): func() (providers.Interface, error) { + return p, nil + }, }, }) diff --git a/internal/terraform/context_input_test.go b/internal/terraform/context_input_test.go index 819856a6c7d6..a9ec11b38fc6 100644 --- a/internal/terraform/context_input_test.go +++ b/internal/terraform/context_input_test.go @@ -117,7 +117,9 @@ func TestContext2Input_providerMulti(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Providers: map[addrs.Provider]providers.Factory{ - addrs.NewDefaultProvider("aws"): testProviderFuncFixed(p), + addrs.NewDefaultProvider("aws"): func() (providers.Interface, error) { + return p, nil + }, }, UIInput: inp, }) From d76759a6a961b1b20886d6b6528436257117ecf4 Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Tue, 12 Oct 2021 11:04:01 -0700 Subject: [PATCH 156/644] configs/configload: snapshotDir must be used via pointer A snapshotDir tracks its current position as part of its state, so we need to use it via pointer rather than value so that Readdirnames can actually update that position, or else we'll just get stuck at position zero. In practice this wasn't hurting anything because we only call Readdir once on our snapshots, to read the whole directory at once. Still nice to fix to avoid a gotcha for future maintenence, though. 
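A minimal sketch of the Go behavior involved, using a hypothetical `cursor` type in place of snapshotDir: a method with a value receiver mutates only a copy, so the stored position never moves, while a pointer receiver mutates the caller's value.

    package main

    import "fmt"

    // cursor stands in for any type that keeps its read position in a field,
    // as snapshotDir does with its "at" index.
    type cursor struct{ at int }

    // Value receiver: the method works on a copy, so the caller's position
    // is never updated and stays stuck at zero.
    func (c cursor) advanceByValue() { c.at++ }

    // Pointer receiver: the method updates the caller's value, so repeated
    // reads actually make progress through the entries.
    func (c *cursor) advanceByPointer() { c.at++ }

    func main() {
        v := cursor{}
        v.advanceByValue()
        v.advanceByValue()
        fmt.Println(v.at) // 0: still at the start

        v.advanceByPointer()
        v.advanceByPointer()
        fmt.Println(v.at) // 2: the position finally moves
    }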
--- internal/configs/configload/loader_snapshot.go | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/internal/configs/configload/loader_snapshot.go b/internal/configs/configload/loader_snapshot.go index c1b54b8c2d6a..915665f833ba 100644 --- a/internal/configs/configload/loader_snapshot.go +++ b/internal/configs/configload/loader_snapshot.go @@ -258,7 +258,7 @@ func (fs snapshotFS) Open(name string) (afero.File, error) { filenames = append(filenames, n) } sort.Strings(filenames) - return snapshotDir{ + return &snapshotDir{ filenames: filenames, }, nil } @@ -310,7 +310,7 @@ func (fs snapshotFS) Stat(name string) (os.FileInfo, error) { if err != nil { return nil, err } - _, isDir := f.(snapshotDir) + _, isDir := f.(*snapshotDir) return snapshotFileInfo{ name: filepath.Base(name), isDir: isDir, @@ -377,9 +377,9 @@ type snapshotDir struct { at int } -var _ afero.File = snapshotDir{} +var _ afero.File = (*snapshotDir)(nil) -func (f snapshotDir) Readdir(count int) ([]os.FileInfo, error) { +func (f *snapshotDir) Readdir(count int) ([]os.FileInfo, error) { names, err := f.Readdirnames(count) if err != nil { return nil, err @@ -394,7 +394,7 @@ func (f snapshotDir) Readdir(count int) ([]os.FileInfo, error) { return ret, nil } -func (f snapshotDir) Readdirnames(count int) ([]string, error) { +func (f *snapshotDir) Readdirnames(count int) ([]string, error) { var outLen int names := f.filenames[f.at:] if count > 0 { From 965c0f3f919b006321339ca3358913f96b4cc046 Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Tue, 12 Oct 2021 11:09:00 -0700 Subject: [PATCH 157/644] build: Run staticcheck with "go run" Running the tool this way ensures that we'll always run the version selected by our go.mod file, rather than whatever happened to be available in $GOPATH/bin on the system where we're running this. This change caused some contexts to now be using a newer version of staticcheck with additional checks, and so this commit also includes some changes to quiet the new warnings without any change in overall behavior. --- internal/lang/funcs/cidr_test.go | 2 +- internal/plugin/discovery/find.go | 4 +--- scripts/staticcheck.sh | 2 +- 3 files changed, 3 insertions(+), 5 deletions(-) diff --git a/internal/lang/funcs/cidr_test.go b/internal/lang/funcs/cidr_test.go index 5246531111e9..cb8d810a9392 100644 --- a/internal/lang/funcs/cidr_test.go +++ b/internal/lang/funcs/cidr_test.go @@ -246,7 +246,7 @@ func TestCidrSubnet(t *testing.T) { }, { // fractions are Not Ok cty.StringVal("10.256.0.0/8"), - cty.NumberFloatVal(2 / 3), + cty.NumberFloatVal(2.0 / 3.0), cty.NumberFloatVal(.75), cty.UnknownVal(cty.String), true, diff --git a/internal/plugin/discovery/find.go b/internal/plugin/discovery/find.go index f053312b00de..027a887ebf61 100644 --- a/internal/plugin/discovery/find.go +++ b/internal/plugin/discovery/find.go @@ -154,9 +154,7 @@ func ResolvePluginPaths(paths []string) PluginMetaSet { // Trim the .exe suffix used on Windows before we start wrangling // the remainder of the path. - if strings.HasSuffix(baseName, ".exe") { - baseName = baseName[:len(baseName)-4] - } + baseName = strings.TrimSuffix(baseName, ".exe") parts := strings.SplitN(baseName, "_v", 2) name := parts[0] diff --git a/scripts/staticcheck.sh b/scripts/staticcheck.sh index 66c47092f9f9..2dd08309a6f1 100755 --- a/scripts/staticcheck.sh +++ b/scripts/staticcheck.sh @@ -13,4 +13,4 @@ packages=$(go list ./... | egrep -v ${skip}) # We are skipping style-related checks, since terraform intentionally breaks # some of these. 
The goal here is to find issues that reduce code clarity, or # may result in bugs. -staticcheck -checks 'all,-ST*' ${packages} +go run honnef.co/go/tools/cmd/staticcheck -checks 'all,-ST*' ${packages} From 73b1263a869a00d1751ba38564b5c7e05d163467 Mon Sep 17 00:00:00 2001 From: James Bardin Date: Tue, 12 Oct 2021 17:41:04 -0400 Subject: [PATCH 158/644] wrap multiple provider creations into a factory fn When a test uses multiple instances of the same provider, we may need to have separate objects to prevent overwriting of the MockProvider state. Create a completely new MockProvider in each factory function call rather than re-using the original provider value. --- internal/terraform/context_apply_test.go | 68 +++++++++++++++--------- internal/terraform/context_input_test.go | 32 ++++++++--- 2 files changed, 68 insertions(+), 32 deletions(-) diff --git a/internal/terraform/context_apply_test.go b/internal/terraform/context_apply_test.go index 06ff0de10be0..09babdbe0a4a 100644 --- a/internal/terraform/context_apply_test.go +++ b/internal/terraform/context_apply_test.go @@ -577,14 +577,18 @@ func TestContext2Apply_refCount(t *testing.T) { func TestContext2Apply_providerAlias(t *testing.T) { m := testModule(t, "apply-provider-alias") - p := testProvider("aws") - p.PlanResourceChangeFn = testDiffFn - p.ApplyResourceChangeFn = testApplyFn + + // Each provider instance must be completely independent to ensure that we + // are verifying the correct state of each. + p := func() (providers.Interface, error) { + p := testProvider("aws") + p.PlanResourceChangeFn = testDiffFn + p.ApplyResourceChangeFn = testApplyFn + return p, nil + } ctx := testContext2(t, &ContextOpts{ Providers: map[addrs.Provider]providers.Factory{ - addrs.NewDefaultProvider("aws"): func() (providers.Interface, error) { - return p, nil - }, + addrs.NewDefaultProvider("aws"): p, }, }) @@ -612,15 +616,18 @@ func TestContext2Apply_providerAlias(t *testing.T) { func TestContext2Apply_providerAliasConfigure(t *testing.T) { m := testModule(t, "apply-provider-alias-configure") - p2 := testProvider("another") - p2.ApplyResourceChangeFn = testApplyFn - p2.PlanResourceChangeFn = testDiffFn + // Each provider instance must be completely independent to ensure that we + // are verifying the correct state of each. 
+ p := func() (providers.Interface, error) { + p := testProvider("another") + p.ApplyResourceChangeFn = testApplyFn + p.PlanResourceChangeFn = testDiffFn + return p, nil + } ctx := testContext2(t, &ContextOpts{ Providers: map[addrs.Provider]providers.Factory{ - addrs.NewDefaultProvider("another"): func() (providers.Interface, error) { - return p2, nil - }, + addrs.NewDefaultProvider("another"): p, }, }) @@ -633,17 +640,29 @@ func TestContext2Apply_providerAliasConfigure(t *testing.T) { // Configure to record calls AFTER Plan above var configCount int32 - p2.ConfigureProviderFn = func(req providers.ConfigureProviderRequest) (resp providers.ConfigureProviderResponse) { - atomic.AddInt32(&configCount, 1) + p = func() (providers.Interface, error) { + p := testProvider("another") + p.ApplyResourceChangeFn = testApplyFn + p.PlanResourceChangeFn = testDiffFn + p.ConfigureProviderFn = func(req providers.ConfigureProviderRequest) (resp providers.ConfigureProviderResponse) { + atomic.AddInt32(&configCount, 1) - foo := req.Config.GetAttr("foo").AsString() - if foo != "bar" { - resp.Diagnostics = resp.Diagnostics.Append(fmt.Errorf("foo: %#v", foo)) - } + foo := req.Config.GetAttr("foo").AsString() + if foo != "bar" { + resp.Diagnostics = resp.Diagnostics.Append(fmt.Errorf("foo: %#v", foo)) + } - return + return + } + return p, nil } + ctx = testContext2(t, &ContextOpts{ + Providers: map[addrs.Provider]providers.Factory{ + addrs.NewDefaultProvider("another"): p, + }, + }) + state, diags := ctx.Apply(plan, m) if diags.HasErrors() { t.Fatalf("diags: %s", diags.Err()) @@ -1544,8 +1563,11 @@ func TestContext2Apply_destroySkipsCBD(t *testing.T) { func TestContext2Apply_destroyModuleVarProviderConfig(t *testing.T) { m := testModule(t, "apply-destroy-mod-var-provider-config") - p := testProvider("aws") - p.PlanResourceChangeFn = testDiffFn + p := func() (providers.Interface, error) { + p := testProvider("aws") + p.PlanResourceChangeFn = testDiffFn + return p, nil + } state := states.NewState() root := state.EnsureModule(addrs.RootModuleInstance) root.SetResourceInstanceCurrent( @@ -1558,9 +1580,7 @@ func TestContext2Apply_destroyModuleVarProviderConfig(t *testing.T) { ) ctx := testContext2(t, &ContextOpts{ Providers: map[addrs.Provider]providers.Factory{ - addrs.NewDefaultProvider("aws"): func() (providers.Interface, error) { - return p, nil - }, + addrs.NewDefaultProvider("aws"): p, }, }) diff --git a/internal/terraform/context_input_test.go b/internal/terraform/context_input_test.go index a9ec11b38fc6..5216efb5965f 100644 --- a/internal/terraform/context_input_test.go +++ b/internal/terraform/context_input_test.go @@ -85,8 +85,7 @@ func TestContext2Input_provider(t *testing.T) { func TestContext2Input_providerMulti(t *testing.T) { m := testModule(t, "input-provider-multi") - p := testProvider("aws") - p.GetProviderSchemaResponse = getProviderSchemaResponseFromProviderSchema(&ProviderSchema{ + getProviderSchemaResponse := getProviderSchemaResponseFromProviderSchema(&ProviderSchema{ Provider: &configschema.Block{ Attributes: map[string]*configschema.Attribute{ "foo": { @@ -108,6 +107,17 @@ func TestContext2Input_providerMulti(t *testing.T) { }, }) + // In order to update the provider to check only the configure calls during + // apply, we will need to inject a new factory function after plan. We must + // use a closure around the factory, because in order for the inputs to + // work during apply we need to maintain the same context value, preventing + // us from assigning a new Providers map. 
+ providerFactory := func() (providers.Interface, error) { + p := testProvider("aws") + p.GetProviderSchemaResponse = getProviderSchemaResponse + return p, nil + } + inp := &MockUIInput{ InputReturnMap: map[string]string{ "provider.aws.foo": "bar", @@ -118,7 +128,7 @@ func TestContext2Input_providerMulti(t *testing.T) { ctx := testContext2(t, &ContextOpts{ Providers: map[addrs.Provider]providers.Factory{ addrs.NewDefaultProvider("aws"): func() (providers.Interface, error) { - return p, nil + return providerFactory() }, }, UIInput: inp, @@ -134,12 +144,18 @@ func TestContext2Input_providerMulti(t *testing.T) { plan, diags := ctx.Plan(m, states.NewState(), DefaultPlanOpts) assertNoErrors(t, diags) - p.ConfigureProviderFn = func(req providers.ConfigureProviderRequest) (resp providers.ConfigureProviderResponse) { - lock.Lock() - defer lock.Unlock() - actual = append(actual, req.Config.GetAttr("foo").AsString()) - return + providerFactory = func() (providers.Interface, error) { + p := testProvider("aws") + p.GetProviderSchemaResponse = getProviderSchemaResponse + p.ConfigureProviderFn = func(req providers.ConfigureProviderRequest) (resp providers.ConfigureProviderResponse) { + lock.Lock() + defer lock.Unlock() + actual = append(actual, req.Config.GetAttr("foo").AsString()) + return + } + return p, nil } + if _, diags := ctx.Apply(plan, m); diags.HasErrors() { t.Fatalf("apply errors: %s", diags.Err()) } From b0d10c985779955e0ff5bb4b4e04fecc1879de81 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Wed, 13 Oct 2021 08:34:23 +0000 Subject: [PATCH 159/644] build(deps): bump github.com/xanzy/ssh-agent from 0.2.1 to 0.3.1 Bumps [github.com/xanzy/ssh-agent](https://github.com/xanzy/ssh-agent) from 0.2.1 to 0.3.1. - [Release notes](https://github.com/xanzy/ssh-agent/releases) - [Commits](https://github.com/xanzy/ssh-agent/compare/v0.2.1...v0.3.1) --- updated-dependencies: - dependency-name: github.com/xanzy/ssh-agent dependency-type: direct:production update-type: version-update:semver-minor ... 
Signed-off-by: dependabot[bot] --- go.mod | 5 +++-- go.sum | 18 ++++++++++-------- 2 files changed, 13 insertions(+), 10 deletions(-) diff --git a/go.mod b/go.mod index 35b4b645a789..3dd2135813ff 100644 --- a/go.mod +++ b/go.mod @@ -76,13 +76,13 @@ require ( github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/tag v1.0.233 github.com/tencentyun/cos-go-sdk-v5 v0.7.29 github.com/tombuildsstuff/giovanni v0.15.1 - github.com/xanzy/ssh-agent v0.2.1 + github.com/xanzy/ssh-agent v0.3.1 github.com/xlab/treeprint v0.0.0-20161029104018-1d6e34225557 github.com/zclconf/go-cty v1.9.1 github.com/zclconf/go-cty-debug v0.0.0-20191215020915-b22d67c1ba0b github.com/zclconf/go-cty-yaml v1.0.2 go.etcd.io/etcd v0.5.0-alpha.5.0.20210428180535-15715dcf1ace - golang.org/x/crypto v0.0.0-20210322153248-0c34fe9e7dc2 + golang.org/x/crypto v0.0.0-20210711020723-a769d52b0f97 golang.org/x/mod v0.4.2 golang.org/x/net v0.0.0-20210614182718-04defd469f4e golang.org/x/oauth2 v0.0.0-20210313182246-cd4f82c27b84 @@ -117,6 +117,7 @@ require ( github.com/Masterminds/goutils v1.1.0 // indirect github.com/Masterminds/semver v1.5.0 // indirect github.com/Masterminds/sprig v2.22.0+incompatible // indirect + github.com/Microsoft/go-winio v0.5.0 // indirect github.com/abdullin/seq v0.0.0-20160510034733-d5467c17e7af // indirect github.com/antchfx/xpath v0.0.0-20190129040759-c8489ed3251e // indirect github.com/antchfx/xquery v0.0.0-20180515051857-ad5b8c7a47b0 // indirect diff --git a/go.sum b/go.sum index fa2a319b96f1..bc0ba56870d8 100644 --- a/go.sum +++ b/go.sum @@ -86,6 +86,8 @@ github.com/Masterminds/semver v1.5.0 h1:H65muMkzWKEuNDnfl9d70GUjFniHKHRbFPGBuZ3Q github.com/Masterminds/semver v1.5.0/go.mod h1:MB6lktGJrhw8PrUyiEoblNEGEQ+RzHPF078ddwwvV3Y= github.com/Masterminds/sprig v2.22.0+incompatible h1:z4yfnGrZ7netVz+0EDJ0Wi+5VZCSYp4Z0m2dk6cEM60= github.com/Masterminds/sprig v2.22.0+incompatible/go.mod h1:y6hNFY5UBTIWBxnzTeuNhlNS5hqE0NB0E6fgfo2Br3o= +github.com/Microsoft/go-winio v0.5.0 h1:Elr9Wn+sGKPlkaBvwu4mTrxtmOp3F3yV9qhaHbXGjwU= +github.com/Microsoft/go-winio v0.5.0/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84= github.com/NYTimes/gziphandler v0.0.0-20170623195520-56545f4a5d46/go.mod h1:3wb06e3pkSAbeQ52E9H9iFoQsEEwGN64994WTCIhntQ= github.com/PuerkitoBio/purell v1.0.0/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0= github.com/PuerkitoBio/urlesc v0.0.0-20160726150825-5bd2802263f2/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE= @@ -444,7 +446,6 @@ github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQL github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck= github.com/klauspost/compress v1.11.2 h1:MiK62aErc3gIiVEtyzKfeOHgW7atJb5g/KNX5m3c2nQ= github.com/klauspost/compress v1.11.2/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs= -github.com/konsorten/go-windows-terminal-sequences v1.0.1 h1:mweAR1A6xJ3oS2pRaGiHgQ4OO8tzTaLawm8vnODuwDk= github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc= github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= @@ -590,8 +591,9 @@ github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg github.com/sergi/go-diff v1.0.0 h1:Kpca3qRNrduNnOQeazBd0ysaKrUJiIuISHxogkT9RPQ= github.com/sergi/go-diff v1.0.0/go.mod h1:0CfEIISq7TuYL3j771MWULgwwjU+GofnZX9QAmXWZgo= github.com/sirupsen/logrus v1.2.0/go.mod 
h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo= -github.com/sirupsen/logrus v1.4.2 h1:SPIRibHv4MatM3XXNO2BJeFLZwZ2LvZgfQ5+UNI2im4= github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE= +github.com/sirupsen/logrus v1.7.0 h1:ShrD1U9pZB12TX0cVy0DtePoCH97K8EtX+mg7ZARUtM= +github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0= github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d h1:zE9ykElWQ6/NYmHa3jpm/yHnI4xSofP+UP6SpjHcSeM= github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc= github.com/smartystreets/goconvey v0.0.0-20180222194500-ef6db91d284a h1:JSvGDIbmil4Ui/dDdFBExb7/cmkNjyX5F97oglmvCDo= @@ -637,8 +639,8 @@ github.com/vmihailenco/msgpack/v4 v4.3.12 h1:07s4sz9IReOgdikxLTKNbBdqDMLsjPKXwvC github.com/vmihailenco/msgpack/v4 v4.3.12/go.mod h1:gborTTJjAo/GWTqqRjrLCn9pgNN+NXzzngzBKDPIqw4= github.com/vmihailenco/tagparser v0.1.1 h1:quXMXlA39OCbd2wAdTsGDlK9RkOk6Wuw+x37wVyIuWY= github.com/vmihailenco/tagparser v0.1.1/go.mod h1:OeAg3pn3UbLjkWt+rN9oFYB6u/cQgqMEUPoW2WPyhdI= -github.com/xanzy/ssh-agent v0.2.1 h1:TCbipTQL2JiiCprBWx9frJ2eJlCYT00NmctrHxVAr70= -github.com/xanzy/ssh-agent v0.2.1/go.mod h1:mLlQY/MoOhWBj+gOGMQkOeiEvkx+8pJSI+0Bx9h2kr4= +github.com/xanzy/ssh-agent v0.3.1 h1:AmzO1SSWxw73zxFZPRwaMN1MohDw8UyHnmuxyceTEGo= +github.com/xanzy/ssh-agent v0.3.1/go.mod h1:QIE4lCeL7nkC25x+yA3LBIYfwCc1TFziCtG7cBAac6w= github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2 h1:eY9dn8+vbi4tKz5Qo6v2eYzo7kUS51QINcR5jNpbZS8= github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU= github.com/xlab/treeprint v0.0.0-20161029104018-1d6e34225557 h1:Jpn2j6wHkC9wJv5iMfJhKqrZJx3TahFx+7sbZ7zQdxs= @@ -679,7 +681,6 @@ go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q= golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= golang.org/x/crypto v0.0.0-20181025213731-e84da0312774/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= golang.org/x/crypto v0.0.0-20181029021203-45a5f77698d3/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= -golang.org/x/crypto v0.0.0-20190219172222-a4c6cb3142f2/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= golang.org/x/crypto v0.0.0-20190222235706-ffb98f73852f/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= golang.org/x/crypto v0.0.0-20190426145343-a29dc8fdc734/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= @@ -693,8 +694,8 @@ golang.org/x/crypto v0.0.0-20200820211705-5c72a883971a/go.mod h1:LzIPMQfyMNhhGPh golang.org/x/crypto v0.0.0-20201002170205-7f63de1d35b0/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= golang.org/x/crypto v0.0.0-20201016220609-9e8e0b390897/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= golang.org/x/crypto v0.0.0-20210220033148-5ea612d1eb83/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I= -golang.org/x/crypto v0.0.0-20210322153248-0c34fe9e7dc2 h1:It14KIkyBFYkHkwZ7k45minvA9aorojkyjGk9KJ5B/w= -golang.org/x/crypto v0.0.0-20210322153248-0c34fe9e7dc2/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4= +golang.org/x/crypto v0.0.0-20210711020723-a769d52b0f97 h1:/UOmuWzQfxxo9UtlXMwuQU8CMgg1eZXqTRwkSQJWKOI= +golang.org/x/crypto v0.0.0-20210711020723-a769d52b0f97/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc= 
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8= @@ -811,7 +812,6 @@ golang.org/x/sys v0.0.0-20181026203630-95b1ffbd15a5/go.mod h1:STP8DvDyc/dI5b8T5h golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= -golang.org/x/sys v0.0.0-20190221075227-b4e8571b14e0/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= @@ -853,12 +853,14 @@ golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7w golang.org/x/sys v0.0.0-20201201145000-ef89a241ccb3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210104204734-6f8348627aad/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210119212857-b64e53b001e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210220050731-9a76102bfb43/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210305230114-8fe3ee5dd75b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210320140829-1e4c9ba3b0c4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c h1:F1jZWGFhYfh0Ci55sIpILtKKK8p3i2/krTr0H1rg74I= golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw= From 765bff1dd5dd1f906f959208f8b4d7bb2ca4870d Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Wed, 13 Oct 2021 10:53:27 -0700 Subject: [PATCH 160/644] Update CHANGELOG.md --- CHANGELOG.md | 1 + 1 file changed, 1 insertion(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index 15da0516165b..50108f71ad2d 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -19,6 +19,7 @@ ENHANCEMENTS: * config: Terraform now checks the syntax of and normalizes module source addresses (the `source` argument in `module` blocks) during configuration decoding rather than only at module installation time. 
This is largely just an internal refactoring, but a visible benefit of this change is that the `terraform init` messages about module downloading will now show the canonical module package address Terraform is downloading from, after interpreting the special shorthands for common cases like GitHub URLs. ([#28854](https://github.com/hashicorp/terraform/issues/28854)) * cli: Terraform will now report explicitly in the UI if it automatically moves a resource instance to a new address as a result of adding or removing the `count` argument from an existing resource. For example, if you previously had `resource "aws_subnet" "example"` _without_ `count`, you might have `aws_subnet.example` already bound to a remote object in your state. If you add `count = 1` to that resource then Terraform would previously silently rebind the object to `aws_subnet.example[0]` as part of planning, whereas now Terraform will mention that it did so explicitly in the plan description. ([#29605](https://github.com/hashicorp/terraform/issues/29605)) +* provisioner/remote-exec and provisioner/file: When using SSH agent authentication mode on Windows, Terraform can now detect and use [the Windows 10 built-in OpenSSH Client](https://devblogs.microsoft.com/powershell/using-the-openssh-beta-in-windows-10-fall-creators-update-and-windows-server-1709/)'s SSH Agent, when available, in addition to the existing support for the third-party solution [Pageant](https://documentation.help/PuTTY/pageant.html) that was already supported. [GH-29747] BUG FIXES: From 9b9b26a3cd9247c3c38aa2663c485150d3322af8 Mon Sep 17 00:00:00 2001 From: Megan Bang Date: Wed, 13 Oct 2021 13:51:07 -0500 Subject: [PATCH 161/644] update error message for invalid json --- internal/backend/remote-state/gcs/backend.go | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/internal/backend/remote-state/gcs/backend.go b/internal/backend/remote-state/gcs/backend.go index 1f157779967e..0478a95ab119 100644 --- a/internal/backend/remote-state/gcs/backend.go +++ b/internal/backend/remote-state/gcs/backend.go @@ -143,7 +143,7 @@ func (b *Backend) configure(ctx context.Context) error { } if !json.Valid([]byte(contents)) { - return fmt.Errorf("contents of credentials are invalid json") + return fmt.Errorf("the string provided in credentials is neither valid json nor a valid file path") } credOptions = append(credOptions, option.WithCredentialsJSON([]byte(contents))) From bee7403f3ef64a38c4d0730157b4154eb9a09a12 Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Wed, 13 Oct 2021 12:21:23 -0700 Subject: [PATCH 162/644] command/workspace_delete: Allow deleting a workspace with empty husks Previously we would reject attempts to delete a workspace if its state contained any resources at all, even if none of the resources had any resource instance objects associated with it. Nowadays there isn't any situation where the normal Terraform workflow will leave behind resource husks, and so this isn't as problematic as it might've been in the v0.12 era, but nonetheless what we actually care about for this check is whether there might be any remote objects that this state is tracking, and for that it's more precise to look for non-nil resource instance objects, rather than whole resources. This also includes some adjustments to our error messaging to give more information about the problem and to use terminology more consistent with how we currently talk about this situation in our documentation and elsewhere in the UI. 
We were also using the old State.HasResources method as part of some of our tests. I considered preserving it to avoid changing the behavior of those tests, but the new check seemed close enough to the intent of those tests that it wasn't worth maintaining this method that wouldn't be used in any main code anymore. I've therefore updated those tests to use the new HasResourceInstanceObjects method instead. --- internal/backend/local/backend_refresh.go | 4 +- internal/backend/remote/backend_apply_test.go | 6 +- internal/backend/testing.go | 12 +- internal/command/workspace_command.go | 9 -- internal/command/workspace_command_test.go | 13 +- internal/command/workspace_delete.go | 25 +++- internal/states/state.go | 90 ++++++++++++- internal/states/state_equal.go | 4 +- internal/states/state_test.go | 119 ++++++++++++++++++ internal/terraform/context_apply_test.go | 4 +- internal/terraform/node_data_destroy_test.go | 2 +- internal/terraform/transform_state.go | 4 +- 12 files changed, 255 insertions(+), 37 deletions(-) diff --git a/internal/backend/local/backend_refresh.go b/internal/backend/local/backend_refresh.go index 988a8b8f3759..ddbedaf19088 100644 --- a/internal/backend/local/backend_refresh.go +++ b/internal/backend/local/backend_refresh.go @@ -64,11 +64,11 @@ func (b *Local) opRefresh( // If we succeed then we'll overwrite this with the resulting state below, // but otherwise the resulting state is just the input state. runningOp.State = lr.InputState - if !runningOp.State.HasResources() { + if !runningOp.State.HasManagedResourceInstanceObjects() { diags = diags.Append(tfdiags.Sourceless( tfdiags.Warning, "Empty or non-existent state", - "There are currently no resources tracked in the state, so there is nothing to refresh.", + "There are currently no remote objects tracked in the state, so there is nothing to refresh.", )) } diff --git a/internal/backend/remote/backend_apply_test.go b/internal/backend/remote/backend_apply_test.go index 71c819b9e505..e15fe0bb37cf 100644 --- a/internal/backend/remote/backend_apply_test.go +++ b/internal/backend/remote/backend_apply_test.go @@ -1008,7 +1008,7 @@ func TestRemote_applyForceLocal(t *testing.T) { if output := done(t).Stdout(); !strings.Contains(output, "1 to add, 0 to change, 0 to destroy") { t.Fatalf("expected plan summary in output: %s", output) } - if !run.State.HasResources() { + if !run.State.HasManagedResourceInstanceObjects() { t.Fatalf("expected resources in state") } } @@ -1071,7 +1071,7 @@ func TestRemote_applyWorkspaceWithoutOperations(t *testing.T) { if output := done(t).Stdout(); !strings.Contains(output, "1 to add, 0 to change, 0 to destroy") { t.Fatalf("expected plan summary in output: %s", output) } - if !run.State.HasResources() { + if !run.State.HasManagedResourceInstanceObjects() { t.Fatalf("expected resources in state") } } @@ -1646,7 +1646,7 @@ func TestRemote_applyVersionCheck(t *testing.T) { output := b.CLI.(*cli.MockUi).OutputWriter.String() hasRemote := strings.Contains(output, "Running apply in the remote backend") hasSummary := strings.Contains(output, "1 added, 0 changed, 0 destroyed") - hasResources := run.State.HasResources() + hasResources := run.State.HasManagedResourceInstanceObjects() if !tc.forceLocal && tc.hasOperations { if !hasRemote { t.Errorf("missing remote backend header in output: %s", output) diff --git a/internal/backend/testing.go b/internal/backend/testing.go index 00a9c684e328..b4a76879b089 100644 --- a/internal/backend/testing.go +++ b/internal/backend/testing.go @@ -105,7 +105,7 @@ func 
TestBackendStates(t *testing.T, b Backend) { if err := foo.RefreshState(); err != nil { t.Fatalf("bad: %s", err) } - if v := foo.State(); v.HasResources() { + if v := foo.State(); v.HasManagedResourceInstanceObjects() { t.Fatalf("should be empty: %s", v) } @@ -116,7 +116,7 @@ func TestBackendStates(t *testing.T, b Backend) { if err := bar.RefreshState(); err != nil { t.Fatalf("bad: %s", err) } - if v := bar.State(); v.HasResources() { + if v := bar.State(); v.HasManagedResourceInstanceObjects() { t.Fatalf("should be empty: %s", v) } @@ -168,7 +168,7 @@ func TestBackendStates(t *testing.T, b Backend) { t.Fatal("error refreshing foo:", err) } fooState = foo.State() - if fooState.HasResources() { + if fooState.HasManagedResourceInstanceObjects() { t.Fatal("after writing a resource to bar, foo now has resources too") } @@ -181,7 +181,7 @@ func TestBackendStates(t *testing.T, b Backend) { t.Fatal("error refreshing foo:", err) } fooState = foo.State() - if fooState.HasResources() { + if fooState.HasManagedResourceInstanceObjects() { t.Fatal("after writing a resource to bar and re-reading foo, foo now has resources too") } @@ -194,7 +194,7 @@ func TestBackendStates(t *testing.T, b Backend) { t.Fatal("error refreshing bar:", err) } barState = bar.State() - if !barState.HasResources() { + if !barState.HasManagedResourceInstanceObjects() { t.Fatal("after writing a resource instance object to bar and re-reading it, the object has vanished") } } @@ -237,7 +237,7 @@ func TestBackendStates(t *testing.T, b Backend) { if err := foo.RefreshState(); err != nil { t.Fatalf("bad: %s", err) } - if v := foo.State(); v.HasResources() { + if v := foo.State(); v.HasManagedResourceInstanceObjects() { t.Fatalf("should be empty: %s", v) } // and delete it again diff --git a/internal/command/workspace_command.go b/internal/command/workspace_command.go index 993b2a878d6c..a0f5f542acbb 100644 --- a/internal/command/workspace_command.go +++ b/internal/command/workspace_command.go @@ -82,15 +82,6 @@ for this configuration. envDeleted = `[reset][green]Deleted workspace %q!` - envNotEmpty = ` -Workspace %[1]q is not empty. - -Deleting %[1]q can result in dangling resources: resources that -exist but are no longer manageable by Terraform. Please destroy -these resources first. If you want to delete this workspace -anyway and risk dangling resources, use the '-force' flag. -` - envWarnNotEmpty = `[reset][yellow]WARNING: %q was non-empty. 
The resources managed by the deleted workspace may still exist, but are no longer manageable by Terraform since the state has diff --git a/internal/command/workspace_command_test.go b/internal/command/workspace_command_test.go index 92a87c504e16..30096cb86889 100644 --- a/internal/command/workspace_command_test.go +++ b/internal/command/workspace_command_test.go @@ -391,10 +391,10 @@ func TestWorkspace_deleteWithState(t *testing.T) { // create a non-empty state originalState := &legacy.State{ Modules: []*legacy.ModuleState{ - &legacy.ModuleState{ + { Path: []string{"root"}, Resources: map[string]*legacy.ResourceState{ - "test_instance.foo": &legacy.ResourceState{ + "test_instance.foo": { Type: "test_instance", Primary: &legacy.InstanceState{ ID: "bar", @@ -414,7 +414,7 @@ func TestWorkspace_deleteWithState(t *testing.T) { t.Fatal(err) } - ui := new(cli.MockUi) + ui := cli.NewMockUi() view, _ := testView(t) delCmd := &WorkspaceDeleteCommand{ Meta: Meta{Ui: ui, View: view}, @@ -423,6 +423,13 @@ func TestWorkspace_deleteWithState(t *testing.T) { if code := delCmd.Run(args); code == 0 { t.Fatalf("expected failure without -force.\noutput: %s", ui.OutputWriter) } + gotStderr := ui.ErrorWriter.String() + if want, got := `Workspace "test" is currently tracking the following resource instances`, gotStderr; !strings.Contains(got, want) { + t.Errorf("missing expected error message\nwant substring: %s\ngot:\n%s", want, got) + } + if want, got := `- test_instance.foo`, gotStderr; !strings.Contains(got, want) { + t.Errorf("error message doesn't mention the remaining instance\nwant substring: %s\ngot:\n%s", want, got) + } ui = new(cli.MockUi) delCmd.Meta.Ui = ui diff --git a/internal/command/workspace_delete.go b/internal/command/workspace_delete.go index a29479e92554..5c826d9084e7 100644 --- a/internal/command/workspace_delete.go +++ b/internal/command/workspace_delete.go @@ -8,6 +8,7 @@ import ( "github.com/hashicorp/terraform/internal/command/arguments" "github.com/hashicorp/terraform/internal/command/clistate" "github.com/hashicorp/terraform/internal/command/views" + "github.com/hashicorp/terraform/internal/states" "github.com/hashicorp/terraform/internal/tfdiags" "github.com/mitchellh/cli" "github.com/posener/complete" @@ -124,12 +125,32 @@ func (c *WorkspaceDeleteCommand) Run(args []string) int { return 1 } - hasResources := stateMgr.State().HasResources() + hasResources := stateMgr.State().HasManagedResourceInstanceObjects() if hasResources && !force { + // We'll collect a list of what's being managed here as extra context + // for the message. + var buf strings.Builder + for _, obj := range stateMgr.State().AllResourceInstanceObjectAddrs() { + if obj.DeposedKey == states.NotDeposed { + fmt.Fprintf(&buf, "\n - %s", obj.Instance.String()) + } else { + fmt.Fprintf(&buf, "\n - %s (deposed object %s)", obj.Instance.String(), obj.DeposedKey) + } + } + // We need to release the lock before exit stateLocker.Unlock() - c.Ui.Error(fmt.Sprintf(strings.TrimSpace(envNotEmpty), workspace)) + + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Workspace is not empty", + fmt.Sprintf( + "Workspace %q is currently tracking the following resource instances:%s\n\nDeleting this workspace would cause Terraform to lose track of any associated remote objects, which would then require you to delete them manually outside of Terraform. 
You should destroy these objects with Terraform before deleting the workspace.\n\nIf you want to delete this workspace anyway, and have Terraform forget about these managed objects, use the -force option to disable this safety check.", + workspace, buf.String(), + ), + )) + c.showDiagnostics(diags) return 1 } diff --git a/internal/states/state.go b/internal/states/state.go index 1c972d662e0f..28d962786d0c 100644 --- a/internal/states/state.go +++ b/internal/states/state.go @@ -151,15 +151,27 @@ func (s *State) EnsureModule(addr addrs.ModuleInstance) *Module { return ms } -// HasResources returns true if there is at least one resource (of any mode) -// present in the receiving state. -func (s *State) HasResources() bool { +// HasManagedResourceInstanceObjects returns true if there is at least one +// resource instance object (current or deposed) associated with a managed +// resource in the receiving state. +// +// A true result would suggest that just discarding this state without first +// destroying these objects could leave "dangling" objects in remote systems, +// no longer tracked by any Terraform state. +func (s *State) HasManagedResourceInstanceObjects() bool { if s == nil { return false } for _, ms := range s.Modules { - if len(ms.Resources) > 0 { - return true + for _, rs := range ms.Resources { + if rs.Addr.Resource.Mode != addrs.ManagedResourceMode { + continue + } + for _, is := range rs.Instances { + if is.Current != nil || len(is.Deposed) != 0 { + return true + } + } } } return false @@ -187,6 +199,74 @@ func (s *State) Resources(addr addrs.ConfigResource) []*Resource { return ret } +// AllManagedResourceInstanceObjectAddrs returns a set of addresses for all of +// the leaf resource instance objects associated with managed resources that +// are tracked in this state. +// +// This result is the set of objects that would be effectively "forgotten" +// (like "terraform state rm") if this state were totally discarded, such as +// by deleting a workspace. This function is intended only for reporting +// context in error messages, such as when we reject deleting a "non-empty" +// workspace as detected by s.HasManagedResourceInstanceObjects. +// +// The ordering of the result is meaningless but consistent. DeposedKey will +// be NotDeposed (the zero value of DeposedKey) for any "current" objects. +// This method is guaranteed to return at least one item if +// s.HasManagedResourceInstanceObjects returns true for the same state, and +// to return a zero-length slice if it returns false. +func (s *State) AllResourceInstanceObjectAddrs() []struct { + Instance addrs.AbsResourceInstance + DeposedKey DeposedKey +} { + if s == nil { + return nil + } + + // We use an unnamed return type here just because we currently have no + // general need to return pairs of instance address and deposed key aside + // from this method, and this method itself is only of marginal value + // when producing some error messages. + // + // If that need ends up arising more in future then it might make sense to + // name this as addrs.AbsResourceInstanceObject, although that would require + // moving DeposedKey into the addrs package too. 
+ type ResourceInstanceObject = struct { + Instance addrs.AbsResourceInstance + DeposedKey DeposedKey + } + var ret []ResourceInstanceObject + + for _, ms := range s.Modules { + for _, rs := range ms.Resources { + if rs.Addr.Resource.Mode != addrs.ManagedResourceMode { + continue + } + + for instKey, is := range rs.Instances { + instAddr := rs.Addr.Instance(instKey) + if is.Current != nil { + ret = append(ret, ResourceInstanceObject{instAddr, NotDeposed}) + } + for deposedKey := range is.Deposed { + ret = append(ret, ResourceInstanceObject{instAddr, deposedKey}) + } + } + } + } + + sort.SliceStable(ret, func(i, j int) bool { + objI, objJ := ret[i], ret[j] + switch { + case !objI.Instance.Equal(objJ.Instance): + return objI.Instance.Less(objJ.Instance) + default: + return objI.DeposedKey < objJ.DeposedKey + } + }) + + return ret +} + // ResourceInstance returns the state for the resource instance with the given // address, or nil if no such resource is tracked in the state. func (s *State) ResourceInstance(addr addrs.AbsResourceInstance) *ResourceInstance { diff --git a/internal/states/state_equal.go b/internal/states/state_equal.go index 06658ef2684c..b37aba062768 100644 --- a/internal/states/state_equal.go +++ b/internal/states/state_equal.go @@ -34,10 +34,10 @@ func (s *State) ManagedResourcesEqual(other *State) bool { return true } if s == nil { - return !other.HasResources() + return !other.HasManagedResourceInstanceObjects() } if other == nil { - return !s.HasResources() + return !s.HasManagedResourceInstanceObjects() } // If we get here then both states are non-nil. diff --git a/internal/states/state_test.go b/internal/states/state_test.go index b4495c0cd04f..768772aebe44 100644 --- a/internal/states/state_test.go +++ b/internal/states/state_test.go @@ -294,6 +294,125 @@ func TestStateDeepCopy(t *testing.T) { } } +func TestStateHasResourceInstanceObjects(t *testing.T) { + providerConfig := addrs.AbsProviderConfig{ + Module: addrs.RootModule, + Provider: addrs.MustParseProviderSourceString("test/test"), + } + childModuleProviderConfig := addrs.AbsProviderConfig{ + Module: addrs.RootModule.Child("child"), + Provider: addrs.MustParseProviderSourceString("test/test"), + } + + tests := map[string]struct { + Setup func(ss *SyncState) + Want bool + }{ + "empty": { + func(ss *SyncState) {}, + false, + }, + "one current, ready object in root module": { + func(ss *SyncState) { + ss.SetResourceInstanceCurrent( + mustAbsResourceAddr("test.foo").Instance(addrs.NoKey), + &ResourceInstanceObjectSrc{ + AttrsJSON: []byte(`{}`), + Status: ObjectReady, + }, + providerConfig, + ) + }, + true, + }, + "one current, ready object in child module": { + func(ss *SyncState) { + ss.SetResourceInstanceCurrent( + mustAbsResourceAddr("module.child.test.foo").Instance(addrs.NoKey), + &ResourceInstanceObjectSrc{ + AttrsJSON: []byte(`{}`), + Status: ObjectReady, + }, + childModuleProviderConfig, + ) + }, + true, + }, + "one current, tainted object in root module": { + func(ss *SyncState) { + ss.SetResourceInstanceCurrent( + mustAbsResourceAddr("test.foo").Instance(addrs.NoKey), + &ResourceInstanceObjectSrc{ + AttrsJSON: []byte(`{}`), + Status: ObjectTainted, + }, + providerConfig, + ) + }, + true, + }, + "one deposed, ready object in root module": { + func(ss *SyncState) { + ss.SetResourceInstanceDeposed( + mustAbsResourceAddr("test.foo").Instance(addrs.NoKey), + DeposedKey("uhoh"), + &ResourceInstanceObjectSrc{ + AttrsJSON: []byte(`{}`), + Status: ObjectTainted, + }, + providerConfig, + ) + }, + true, + }, + "one 
empty resource husk in root module": { + func(ss *SyncState) { + // Current Terraform doesn't actually create resource husks + // as part of its everyday work, so this is a "should never + // happen" case but we'll test to make sure we're robust to + // it anyway, because this was a historical bug blocking + // "terraform workspace delete" and similar. + ss.SetResourceInstanceCurrent( + mustAbsResourceAddr("test.foo").Instance(addrs.NoKey), + &ResourceInstanceObjectSrc{ + AttrsJSON: []byte(`{}`), + Status: ObjectTainted, + }, + providerConfig, + ) + s := ss.Lock() + delete(s.Modules[""].Resources["test.foo"].Instances, addrs.NoKey) + ss.Unlock() + }, + false, + }, + "one current data resource object in root module": { + func(ss *SyncState) { + ss.SetResourceInstanceCurrent( + mustAbsResourceAddr("data.test.foo").Instance(addrs.NoKey), + &ResourceInstanceObjectSrc{ + AttrsJSON: []byte(`{}`), + Status: ObjectReady, + }, + providerConfig, + ) + }, + false, // data resources aren't managed resources, so they don't count + }, + } + + for name, test := range tests { + t.Run(name, func(t *testing.T) { + state := BuildState(test.Setup) + got := state.HasManagedResourceInstanceObjects() + if got != test.Want { + t.Errorf("wrong result\nstate content: (using legacy state string format; might not be comprehensive)\n%s\n\ngot: %t\nwant: %t", state, got, test.Want) + } + }) + } + +} + func TestState_MoveAbsResource(t *testing.T) { // Set up a starter state for the embedded tests, which should start from a copy of this state. state := NewState() diff --git a/internal/terraform/context_apply_test.go b/internal/terraform/context_apply_test.go index 4f6a7df89904..28c5c788b502 100644 --- a/internal/terraform/context_apply_test.go +++ b/internal/terraform/context_apply_test.go @@ -5743,7 +5743,7 @@ func TestContext2Apply_destroyModuleWithAttrsReferencingResource(t *testing.T) { } //Test that things were destroyed - if state.HasResources() { + if state.HasManagedResourceInstanceObjects() { t.Fatal("expected empty state, got:", state) } } @@ -8661,7 +8661,7 @@ func TestContext2Apply_providerWithLocals(t *testing.T) { t.Fatalf("err: %s", diags.Err()) } - if state.HasResources() { + if state.HasManagedResourceInstanceObjects() { t.Fatal("expected no state, got:", state) } diff --git a/internal/terraform/node_data_destroy_test.go b/internal/terraform/node_data_destroy_test.go index 7f06b4d4b737..f399ee4183c4 100644 --- a/internal/terraform/node_data_destroy_test.go +++ b/internal/terraform/node_data_destroy_test.go @@ -42,7 +42,7 @@ func TestNodeDataDestroyExecute(t *testing.T) { } // verify resource removed from state - if state.HasResources() { + if state.HasManagedResourceInstanceObjects() { t.Fatal("resources still in state after NodeDataDestroy.Execute") } } diff --git a/internal/terraform/transform_state.go b/internal/terraform/transform_state.go index 098768ca6f94..1ca060a88aad 100644 --- a/internal/terraform/transform_state.go +++ b/internal/terraform/transform_state.go @@ -26,8 +26,8 @@ type StateTransformer struct { } func (t *StateTransformer) Transform(g *Graph) error { - if !t.State.HasResources() { - log.Printf("[TRACE] StateTransformer: state is empty, so nothing to do") + if t.State == nil { + log.Printf("[TRACE] StateTransformer: state is nil, so nothing to do") return nil } From c76f54781a2daa3c87ed496f294a206d93ae8f2a Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Wed, 13 Oct 2021 13:56:47 -0700 Subject: [PATCH 163/644] Update CHANGELOG.md --- CHANGELOG.md | 3 ++- 1 file changed, 2 
insertions(+), 1 deletion(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 50108f71ad2d..cf33fafda12d 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -18,7 +18,8 @@ NEW FEATURES: ENHANCEMENTS: * config: Terraform now checks the syntax of and normalizes module source addresses (the `source` argument in `module` blocks) during configuration decoding rather than only at module installation time. This is largely just an internal refactoring, but a visible benefit of this change is that the `terraform init` messages about module downloading will now show the canonical module package address Terraform is downloading from, after interpreting the special shorthands for common cases like GitHub URLs. ([#28854](https://github.com/hashicorp/terraform/issues/28854)) -* cli: Terraform will now report explicitly in the UI if it automatically moves a resource instance to a new address as a result of adding or removing the `count` argument from an existing resource. For example, if you previously had `resource "aws_subnet" "example"` _without_ `count`, you might have `aws_subnet.example` already bound to a remote object in your state. If you add `count = 1` to that resource then Terraform would previously silently rebind the object to `aws_subnet.example[0]` as part of planning, whereas now Terraform will mention that it did so explicitly in the plan description. ([#29605](https://github.com/hashicorp/terraform/issues/29605)) +* `terraform plan` and `terraform apply`: Terraform will now report explicitly in the UI if it automatically moves a resource instance to a new address as a result of adding or removing the `count` argument from an existing resource. For example, if you previously had `resource "aws_subnet" "example"` _without_ `count`, you might have `aws_subnet.example` already bound to a remote object in your state. If you add `count = 1` to that resource then Terraform would previously silently rebind the object to `aws_subnet.example[0]` as part of planning, whereas now Terraform will mention that it did so explicitly in the plan description. ([#29605](https://github.com/hashicorp/terraform/issues/29605)) +* `terraform workspace delete`: will now allow deleting a workspace whose state contains only data resource instances and output values, without running `terraform destroy` first. Previously the presence of data resources would require using `-force` to override the safety check guarding against accidentally forgetting about remote objects, but a data resource is not responsible for the management of its associated remote object(s) anyway. [GH-29754] * provisioner/remote-exec and provisioner/file: When using SSH agent authentication mode on Windows, Terraform can now detect and use [the Windows 10 built-in OpenSSH Client](https://devblogs.microsoft.com/powershell/using-the-openssh-beta-in-windows-10-fall-creators-update-and-windows-server-1709/)'s SSH Agent, when available, in addition to the existing support for the third-party solution [Pageant](https://documentation.help/PuTTY/pageant.html) that was already supported. [GH-29747] BUG FIXES: From 5ffa0839f9fe18087b9c51b100acb2fcd15c848d Mon Sep 17 00:00:00 2001 From: James Bardin Date: Wed, 13 Oct 2021 17:07:26 -0400 Subject: [PATCH 164/644] only check serial when applying the first plan Ensure that we still check for a stale plan even when it was created with no previous state. Create separate errors for incorrect lineage vs incorrect serial. 
To prevent confusion when applying a first plan multiple times, only report it as a stale plan rather than different lineage. --- internal/backend/local/backend_local.go | 29 ++++++++++++++++++------- 1 file changed, 21 insertions(+), 8 deletions(-) diff --git a/internal/backend/local/backend_local.go b/internal/backend/local/backend_local.go index 209757e43375..6082bfdf6c61 100644 --- a/internal/backend/local/backend_local.go +++ b/internal/backend/local/backend_local.go @@ -290,14 +290,27 @@ func (b *Local) localRunForPlanFile(op *backend.Operation, pf *planfile.Reader, // has changed since the plan was created. (All of the "real-world" // state manager implementations support this, but simpler test backends // may not.) - if currentStateMeta.Lineage != "" && priorStateFile.Lineage != "" { - if priorStateFile.Serial != currentStateMeta.Serial || priorStateFile.Lineage != currentStateMeta.Lineage { - diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, - "Saved plan is stale", - "The given plan file can no longer be applied because the state was changed by another operation after the plan was created.", - )) - } + + // Because the plan always contains a state, even if it is empty, the + // first plan to be applied will have empty snapshot metadata. In this + // case we compare only the serial in order to provide a more correct + // error. + firstPlan := priorStateFile.Lineage == "" && priorStateFile.Serial == 0 + + switch { + case !firstPlan && priorStateFile.Lineage != currentStateMeta.Lineage: + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Saved plan does not match the given state", + "The given plan file can not be applied because it was created from a different state lineage.", + )) + + case priorStateFile.Serial != currentStateMeta.Serial: + diags = diags.Append(tfdiags.Sourceless( + tfdiags.Error, + "Saved plan is stale", + "The given plan file can no longer be applied because the state was changed by another operation after the plan was created.", + )) } } // When we're applying a saved plan, the input state is the "prior state" From 9c80574417f48d34313d7608ec56d48ee51f8bdd Mon Sep 17 00:00:00 2001 From: James Bardin Date: Wed, 13 Oct 2021 17:28:14 -0400 Subject: [PATCH 165/644] test planfile may need to have a specific lineage In order to test applying a plan from an existing state, we need to be able to inject the state meta into the planfile. 
--- internal/command/apply_test.go | 19 ++++++++++++++++--- internal/command/command_test.go | 10 ++++++++-- 2 files changed, 24 insertions(+), 5 deletions(-) diff --git a/internal/command/apply_test.go b/internal/command/apply_test.go index 750c6b6bf336..b2306623cb11 100644 --- a/internal/command/apply_test.go +++ b/internal/command/apply_test.go @@ -710,7 +710,6 @@ func TestApply_plan(t *testing.T) { } func TestApply_plan_backup(t *testing.T) { - planPath := applyFixturePlanFile(t) statePath := testTempFile(t) backupPath := testTempFile(t) @@ -724,11 +723,17 @@ func TestApply_plan_backup(t *testing.T) { } // create a state file that needs to be backed up - err := statemgr.NewFilesystem(statePath).WriteState(states.NewState()) + fs := statemgr.NewFilesystem(statePath) + fs.StateSnapshotMeta() + err := fs.WriteState(states.NewState()) if err != nil { t.Fatal(err) } + // the plan file must contain the metadata from the prior state to be + // backed up + planPath := applyFixturePlanFileMatchState(t, fs.StateSnapshotMeta()) + args := []string{ "-state", statePath, "-backup", backupPath, @@ -2280,6 +2285,13 @@ func applyFixtureProvider() *terraform.MockProvider { // a single change to create the test_instance.foo that is included in the // "apply" test fixture, returning the location of that plan file. func applyFixturePlanFile(t *testing.T) string { + return applyFixturePlanFileMatchState(t, statemgr.SnapshotMeta{}) +} + +// applyFixturePlanFileMatchState creates a planfile like applyFixturePlanFile, +// but inserts the state meta information if that plan must match a preexisting +// state. +func applyFixturePlanFileMatchState(t *testing.T, stateMeta statemgr.SnapshotMeta) string { _, snap := testModuleWithSnapshot(t, "apply") plannedVal := cty.ObjectVal(map[string]cty.Value{ "id": cty.UnknownVal(cty.String), @@ -2310,11 +2322,12 @@ func applyFixturePlanFile(t *testing.T) string { After: plannedValRaw, }, }) - return testPlanFile( + return testPlanFileMatchState( t, snap, states.NewState(), plan, + stateMeta, ) } diff --git a/internal/command/command_test.go b/internal/command/command_test.go index a9ba2a044e0a..4b6fcf311c77 100644 --- a/internal/command/command_test.go +++ b/internal/command/command_test.go @@ -229,15 +229,21 @@ func testPlan(t *testing.T) *plans.Plan { } func testPlanFile(t *testing.T, configSnap *configload.Snapshot, state *states.State, plan *plans.Plan) string { + return testPlanFileMatchState(t, configSnap, state, plan, statemgr.SnapshotMeta{}) +} + +func testPlanFileMatchState(t *testing.T, configSnap *configload.Snapshot, state *states.State, plan *plans.Plan, stateMeta statemgr.SnapshotMeta) string { t.Helper() stateFile := &statefile.File{ - Lineage: "", + Lineage: stateMeta.Lineage, + Serial: stateMeta.Serial, State: state, TerraformVersion: version.SemVer, } prevStateFile := &statefile.File{ - Lineage: "", + Lineage: stateMeta.Lineage, + Serial: stateMeta.Serial, State: state, // we just assume no changes detected during refresh TerraformVersion: version.SemVer, } From 88bddd714328ed9409577d085cf773ac00c13dc7 Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Wed, 13 Oct 2021 17:30:31 -0700 Subject: [PATCH 166/644] go.mod: go get go get golang.org/x/tools@v0.1.7 This also transitively upgrades golang.org/x/sys and golang.org/x/net, but there do not seem to be any significant changes compared to the commits we were previously using. 
--- go.mod | 6 +++--- go.sum | 10 +++++++--- 2 files changed, 10 insertions(+), 6 deletions(-) diff --git a/go.mod b/go.mod index 3dd2135813ff..e4a2e970cef4 100644 --- a/go.mod +++ b/go.mod @@ -84,12 +84,12 @@ require ( go.etcd.io/etcd v0.5.0-alpha.5.0.20210428180535-15715dcf1ace golang.org/x/crypto v0.0.0-20210711020723-a769d52b0f97 golang.org/x/mod v0.4.2 - golang.org/x/net v0.0.0-20210614182718-04defd469f4e + golang.org/x/net v0.0.0-20210805182204-aaa1db679c0d golang.org/x/oauth2 v0.0.0-20210313182246-cd4f82c27b84 - golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c + golang.org/x/sys v0.0.0-20210809222454-d867a43fc93e golang.org/x/term v0.0.0-20201210144234-2321bbc49cbf golang.org/x/text v0.3.6 - golang.org/x/tools v0.1.4 + golang.org/x/tools v0.1.7 google.golang.org/api v0.44.0-impersonate-preview google.golang.org/grpc v1.36.0 google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.1.0 diff --git a/go.sum b/go.sum index bc0ba56870d8..6588616d1360 100644 --- a/go.sum +++ b/go.sum @@ -650,6 +650,7 @@ github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9de github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k= +github.com/yuin/goldmark v1.4.0/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k= github.com/zclconf/go-cty v1.0.0/go.mod h1:xnAOWiHeOqg2nWS62VtQ7pbOu17FtxJNW8RLEih+O3s= github.com/zclconf/go-cty v1.1.0/go.mod h1:xnAOWiHeOqg2nWS62VtQ7pbOu17FtxJNW8RLEih+O3s= github.com/zclconf/go-cty v1.2.0/go.mod h1:hOPWgoHbaTUnI5k4D2ld+GRpFJSCe6bCM7m1q/N4PQ8= @@ -777,8 +778,9 @@ golang.org/x/net v0.0.0-20201209123823-ac852fbbde11/go.mod h1:m0MpNAwzfU5UDzcl9v golang.org/x/net v0.0.0-20210119194325-5f4716e94777/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM= -golang.org/x/net v0.0.0-20210614182718-04defd469f4e h1:XpT3nA5TvE525Ne3hInMh6+GETgn27Zfm9dxsThnX2Q= golang.org/x/net v0.0.0-20210614182718-04defd469f4e/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= +golang.org/x/net v0.0.0-20210805182204-aaa1db679c0d h1:20cMwl2fHAzkJMEA+8J4JgqBQcQGzbisXo31MIeenXI= +golang.org/x/net v0.0.0-20210805182204-aaa1db679c0d/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y= golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= golang.org/x/oauth2 v0.0.0-20190402181905-9f3314589c9a/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= @@ -861,8 +863,9 @@ golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7w golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c h1:F1jZWGFhYfh0Ci55sIpILtKKK8p3i2/krTr0H1rg74I= golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys 
v0.0.0-20210809222454-d867a43fc93e h1:WUoyKPm6nCo1BnNUvPGnFG3T5DUVem42yDJZZ4CNxMA= +golang.org/x/sys v0.0.0-20210809222454-d867a43fc93e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= golang.org/x/term v0.0.0-20201210144234-2321bbc49cbf h1:MZ2shdL+ZM/XzY3ZGOnh4Nlpnxz5GSOhOmtHo3iPU6M= @@ -935,8 +938,9 @@ golang.org/x/tools v0.0.0-20201201161351-ac6f37ff4c2a/go.mod h1:emZCQorbCU4vsT4f golang.org/x/tools v0.0.0-20201208233053-a543418bbed2/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= golang.org/x/tools v0.0.0-20210105154028-b0ab187a4818/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0= -golang.org/x/tools v0.1.4 h1:cVngSRcfgyZCzys3KYOpCFa+4dqX/Oub9tAq00ttGVs= golang.org/x/tools v0.1.4/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= +golang.org/x/tools v0.1.7 h1:6j8CgantCy3yc8JGBqkDLMKWqZ0RDU2g1HVgacojGWQ= +golang.org/x/tools v0.1.7/go.mod h1:LGqMHiF4EqQNHR1JncWGqT5BVaXmza+X+BDGol+dOxo= golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= From 8946d7ff206dfe91c5d2efef7c91ed04c327c27f Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Thu, 14 Oct 2021 00:52:36 +0000 Subject: [PATCH 167/644] build(deps): bump github.com/golang/protobuf from 1.5.0 to 1.5.2 Bumps [github.com/golang/protobuf](https://github.com/golang/protobuf) from 1.5.0 to 1.5.2. - [Release notes](https://github.com/golang/protobuf/releases) - [Commits](https://github.com/golang/protobuf/compare/v1.5.0...v1.5.2) --- updated-dependencies: - dependency-name: github.com/golang/protobuf dependency-type: direct:production update-type: version-update:semver-patch ... 
Signed-off-by: dependabot[bot] --- go.mod | 2 +- go.sum | 4 +++- 2 files changed, 4 insertions(+), 2 deletions(-) diff --git a/go.mod b/go.mod index e4a2e970cef4..eab2aac97c6d 100644 --- a/go.mod +++ b/go.mod @@ -23,7 +23,7 @@ require ( github.com/dylanmei/winrmtest v0.0.0-20190225150635-99b7fe2fddf1 github.com/go-test/deep v1.0.3 github.com/golang/mock v1.5.0 - github.com/golang/protobuf v1.5.0 + github.com/golang/protobuf v1.5.2 github.com/google/go-cmp v0.5.5 github.com/google/uuid v1.2.0 github.com/gophercloud/gophercloud v0.10.1-0.20200424014253-c3bfe50899e5 diff --git a/go.sum b/go.sum index 6588616d1360..e14e6617a6a8 100644 --- a/go.sum +++ b/go.sum @@ -259,8 +259,9 @@ github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvq github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8= github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI= github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI= -github.com/golang/protobuf v1.5.0 h1:LUVKkCeviFUMKqHa4tXIIij/lbhnMbP7Fn5wKdKkRh4= github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk= +github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw= +github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY= github.com/google/btree v0.0.0-20160524151835-7d79101e329e/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ= github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ= github.com/google/btree v1.0.0 h1:0udJVsspx3VBr5FwtLhQQtuAsVc79tTq0ocGIPAU6qo= @@ -1047,6 +1048,7 @@ google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpAD google.golang.org/protobuf v1.24.0/go.mod h1:r/3tXBNzIEhYS9I1OUVjXDlt8tc493IdKGjtUeSXeh4= google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c= google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw= +google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc= google.golang.org/protobuf v1.27.1 h1:SnqbnDw1V7RiZcXPx5MEeqPv2s79L9i7BJUlG/+RurQ= google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc= gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw= From 39779e70210a2e4fedfb5a6ef51ce1a084fd45f1 Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Thu, 14 Oct 2021 09:33:54 -0700 Subject: [PATCH 168/644] backend/remote-state/cos: Don't use github.com/likexian/gokit We don't use this library anywhere else in Terraform, and this backend was using it only for trivial helpers that are easy to express inline anyway. The new direct code is also type-checkable, whereas these helper functions seem to be written using reflection. This gives us one fewer dependency to worry about and makes the test code for this backend follow a similar assertions style as the rest of this codebase. 
--- go.mod | 1 - go.sum | 8 --- .../backend/remote-state/cos/backend_state.go | 11 ++- .../backend/remote-state/cos/backend_test.go | 67 +++++++++++++------ 4 files changed, 57 insertions(+), 30 deletions(-) diff --git a/go.mod b/go.mod index eab2aac97c6d..60b751726eff 100644 --- a/go.mod +++ b/go.mod @@ -51,7 +51,6 @@ require ( github.com/joyent/triton-go v0.0.0-20180313100802-d8f9c0314926 github.com/kardianos/osext v0.0.0-20190222173326-2bc1f35cddc0 github.com/lib/pq v1.10.3 - github.com/likexian/gokit v0.20.15 github.com/lusis/go-artifactory v0.0.0-20160115162124-7e4ce345df82 github.com/masterzen/winrm v0.0.0-20200615185753-c42b5136ff88 github.com/mattn/go-isatty v0.0.12 diff --git a/go.sum b/go.sum index e14e6617a6a8..7d2eaf9415b0 100644 --- a/go.sum +++ b/go.sum @@ -460,14 +460,6 @@ github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0 github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw= github.com/lib/pq v1.10.3 h1:v9QZf2Sn6AmjXtQeFpdoq/eaNtYP6IN+7lcrygsIAtg= github.com/lib/pq v1.10.3/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o= -github.com/likexian/gokit v0.0.0-20190309162924-0a377eecf7aa/go.mod h1:QdfYv6y6qPA9pbBA2qXtoT8BMKha6UyNbxWGWl/9Jfk= -github.com/likexian/gokit v0.0.0-20190418170008-ace88ad0983b/go.mod h1:KKqSnk/VVSW8kEyO2vVCXoanzEutKdlBAPohmGXkxCk= -github.com/likexian/gokit v0.0.0-20190501133040-e77ea8b19cdc/go.mod h1:3kvONayqCaj+UgrRZGpgfXzHdMYCAO0KAt4/8n0L57Y= -github.com/likexian/gokit v0.20.15 h1:DgtIqqTRFqtbiLJFzuRESwVrxWxfs8OlY6hnPYBa3BM= -github.com/likexian/gokit v0.20.15/go.mod h1:kn+nTv3tqh6yhor9BC4Lfiu58SmH8NmQ2PmEl+uM6nU= -github.com/likexian/simplejson-go v0.0.0-20190409170913-40473a74d76d/go.mod h1:Typ1BfnATYtZ/+/shXfFYLrovhFyuKvzwrdOnIDHlmg= -github.com/likexian/simplejson-go v0.0.0-20190419151922-c1f9f0b4f084/go.mod h1:U4O1vIJvIKwbMZKUJ62lppfdvkCdVd2nfMimHK81eec= -github.com/likexian/simplejson-go v0.0.0-20190502021454-d8787b4bfa0b/go.mod h1:3BWwtmKP9cXWwYCr5bkoVDEfLywacOv0s06OBEDpyt8= github.com/lusis/go-artifactory v0.0.0-20160115162124-7e4ce345df82 h1:wnfcqULT+N2seWf6y4yHzmi7GD2kNx4Ute0qArktD48= github.com/lusis/go-artifactory v0.0.0-20160115162124-7e4ce345df82/go.mod h1:y54tfGmO3NKssKveTEFFzH8C/akrSOy/iW9qEAUDV84= github.com/mailru/easyjson v0.0.0-20160728113105-d5b7844b561a/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc= diff --git a/internal/backend/remote-state/cos/backend_state.go b/internal/backend/remote-state/cos/backend_state.go index c9df2a6c1815..ab92cfb7c0e8 100644 --- a/internal/backend/remote-state/cos/backend_state.go +++ b/internal/backend/remote-state/cos/backend_state.go @@ -11,7 +11,6 @@ import ( "github.com/hashicorp/terraform/internal/states" "github.com/hashicorp/terraform/internal/states/remote" "github.com/hashicorp/terraform/internal/states/statemgr" - "github.com/likexian/gokit/assert" ) // Define file suffix @@ -88,7 +87,15 @@ func (b *Backend) StateMgr(name string) (statemgr.Full, error) { return nil, err } - if !assert.IsContains(ws, name) { + exists := false + for _, candidate := range ws { + if candidate == name { + exists = true + break + } + } + + if !exists { log.Printf("[DEBUG] workspace %v not exists", name) // take a lock on this state while we write it diff --git a/internal/backend/remote-state/cos/backend_test.go b/internal/backend/remote-state/cos/backend_test.go index f527d4a5a5cd..eb9038ff35f7 100644 --- a/internal/backend/remote-state/cos/backend_test.go +++ b/internal/backend/remote-state/cos/backend_test.go @@ -9,7 +9,6 @@ 
import ( "github.com/hashicorp/terraform/internal/backend" "github.com/hashicorp/terraform/internal/states/remote" - "github.com/likexian/gokit/assert" ) const ( @@ -38,12 +37,18 @@ func TestStateFile(t *testing.T) { } for _, c := range cases { - b := &Backend{ - prefix: c.prefix, - key: c.key, - } - assert.Equal(t, b.stateFile(c.stateName), c.wantStateFile) - assert.Equal(t, b.lockFile(c.stateName), c.wantLockFile) + t.Run(fmt.Sprintf("%s %s %s", c.prefix, c.key, c.stateName), func(t *testing.T) { + b := &Backend{ + prefix: c.prefix, + key: c.key, + } + if got, want := b.stateFile(c.stateName), c.wantStateFile; got != want { + t.Errorf("wrong state file name\ngot: %s\nwant: %s", got, want) + } + if got, want := b.lockFile(c.stateName), c.wantLockFile; got != want { + t.Errorf("wrong lock file name\ngot: %s\nwant: %s", got, want) + } + }) } } @@ -56,10 +61,14 @@ func TestRemoteClient(t *testing.T) { defer teardownBackend(t, be) ss, err := be.StateMgr(backend.DefaultStateName) - assert.Nil(t, err) + if err != nil { + t.Fatalf("unexpected error: %s", err) + } rs, ok := ss.(*remote.State) - assert.True(t, ok) + if !ok { + t.Fatalf("wrong state manager type\ngot: %T\nwant: %T", ss, rs) + } remote.TestClient(t, rs.Client) } @@ -74,10 +83,14 @@ func TestRemoteClientWithPrefix(t *testing.T) { defer teardownBackend(t, be) ss, err := be.StateMgr(backend.DefaultStateName) - assert.Nil(t, err) + if err != nil { + t.Fatalf("unexpected error: %s", err) + } rs, ok := ss.(*remote.State) - assert.True(t, ok) + if !ok { + t.Fatalf("wrong state manager type\ngot: %T\nwant: %T", ss, rs) + } remote.TestClient(t, rs.Client) } @@ -91,10 +104,14 @@ func TestRemoteClientWithEncryption(t *testing.T) { defer teardownBackend(t, be) ss, err := be.StateMgr(backend.DefaultStateName) - assert.Nil(t, err) + if err != nil { + t.Fatalf("unexpected error: %s", err) + } rs, ok := ss.(*remote.State) - assert.True(t, ok) + if !ok { + t.Fatalf("wrong state manager type\ngot: %T\nwant: %T", ss, rs) + } remote.TestClient(t, rs.Client) } @@ -122,10 +139,14 @@ func TestRemoteLocks(t *testing.T) { } c0, err := remoteClient() - assert.Nil(t, err) + if err != nil { + t.Fatalf("unexpected error: %s", err) + } c1, err := remoteClient() - assert.Nil(t, err) + if err != nil { + t.Fatalf("unexpected error: %s", err) + } remote.TestRemoteLocks(t, c0, c1) } @@ -203,10 +224,14 @@ func setupBackend(t *testing.T, bucket, prefix, key string, encrypt bool) backen be := b.(*Backend) c, err := be.client("tencentcloud") - assert.Nil(t, err) + if err != nil { + t.Fatalf("unexpected error: %s", err) + } err = c.putBucket() - assert.Nil(t, err) + if err != nil { + t.Fatalf("unexpected error: %s", err) + } return b } @@ -215,10 +240,14 @@ func teardownBackend(t *testing.T, b backend.Backend) { t.Helper() c, err := b.(*Backend).client("tencentcloud") - assert.Nil(t, err) + if err != nil { + t.Fatalf("unexpected error: %s", err) + } err = c.deleteBucket(true) - assert.Nil(t, err) + if err != nil { + t.Fatalf("unexpected error: %s", err) + } } func bucketName(t *testing.T) string { From 3c64b9b6048d5ad68564d708a0aa0e3ab1cc6436 Mon Sep 17 00:00:00 2001 From: James Bardin Date: Thu, 14 Oct 2021 14:43:48 -0400 Subject: [PATCH 169/644] update CHANGELOG.md --- CHANGELOG.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index cf33fafda12d..1bdc81a952ba 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -25,7 +25,9 @@ ENHANCEMENTS: BUG FIXES: * core: Fixed an issue where provider configuration input variables were not properly 
merging with values in configuration ([#29000](https://github.com/hashicorp/terraform/issues/29000)) +* core: Reduce scope of dependencies that may defer reading of data sources when using `depends_on` or directly referencing managed resources [GH-29682] * cli: Blocks using SchemaConfigModeAttr in the provider SDK can now represented in the plan json output ([#29522](https://github.com/hashicorp/terraform/issues/29522)) +* cli: Prevent applying a stale planfile when there was no previous state [GH-29755] ## Previous Releases From e111c103b80a45e3659833f7f070391becc472f0 Mon Sep 17 00:00:00 2001 From: Chris Griggs Date: Fri, 15 Oct 2021 10:55:07 -0700 Subject: [PATCH 170/644] [Website]Terraform Integration Program (#29763) * add new guide doc * update word doc * assign imange files * add fmting changes * formatting * some more changes * Fix Title * Update website/guides/terraform-integration-program.html.md We can remove, I was hoping to have "*Currently, pre-apply..." be in Bbold, but it looks like it doesnt render that way. So we can exclude the asterisk Co-authored-by: Jeff Escalante * Fix spacing and remove unused html paragraph * Update website/guides/terraform-integration-program.html.md Good changes, thanks for simplifying it. Co-authored-by: Laura Pacilio <83350965+laurapacilio@users.noreply.github.com> * Update website/guides/terraform-integration-program.html.md Approved Co-authored-by: Laura Pacilio <83350965+laurapacilio@users.noreply.github.com> * Update website/guides/terraform-integration-program.html.md Approved Co-authored-by: Laura Pacilio <83350965+laurapacilio@users.noreply.github.com> * Update website/guides/terraform-integration-program.html.md Approved Co-authored-by: Laura Pacilio <83350965+laurapacilio@users.noreply.github.com> * Update website/guides/terraform-integration-program.html.md Approved Co-authored-by: Laura Pacilio <83350965+laurapacilio@users.noreply.github.com> * Update website/guides/terraform-integration-program.html.md Approved Co-authored-by: Laura Pacilio <83350965+laurapacilio@users.noreply.github.com> * Update website/guides/terraform-integration-program.html.md Approved Co-authored-by: Laura Pacilio <83350965+laurapacilio@users.noreply.github.com> * Update website/guides/terraform-integration-program.html.md approved Co-authored-by: Laura Pacilio <83350965+laurapacilio@users.noreply.github.com> * Update website/guides/terraform-integration-program.html.md Approved Co-authored-by: Laura Pacilio <83350965+laurapacilio@users.noreply.github.com> * Update website/guides/terraform-integration-program.html.md Approved Co-authored-by: Laura Pacilio <83350965+laurapacilio@users.noreply.github.com> * Update website/guides/terraform-integration-program.html.md Approved Co-authored-by: Laura Pacilio <83350965+laurapacilio@users.noreply.github.com> * Update website/guides/terraform-integration-program.html.md Approved Co-authored-by: Laura Pacilio <83350965+laurapacilio@users.noreply.github.com> * Update website/guides/terraform-integration-program.html.md Agreed Co-authored-by: Laura Pacilio <83350965+laurapacilio@users.noreply.github.com> * Update website/guides/terraform-integration-program.html.md Agreed Co-authored-by: Laura Pacilio <83350965+laurapacilio@users.noreply.github.com> * Update website/guides/terraform-integration-program.html.md Approved Co-authored-by: Laura Pacilio <83350965+laurapacilio@users.noreply.github.com> * Update website/guides/terraform-integration-program.html.md Approved Co-authored-by: Laura Pacilio 
<83350965+laurapacilio@users.noreply.github.com> * Update website/guides/terraform-integration-program.html.md Approved Co-authored-by: Laura Pacilio <83350965+laurapacilio@users.noreply.github.com> * Update website/guides/terraform-integration-program.html.md Approved Co-authored-by: Laura Pacilio <83350965+laurapacilio@users.noreply.github.com> * Update website/guides/terraform-integration-program.html.md Agreed Co-authored-by: Laura Pacilio <83350965+laurapacilio@users.noreply.github.com> * Update website/guides/terraform-integration-program.html.md Approved Co-authored-by: Laura Pacilio <83350965+laurapacilio@users.noreply.github.com> * Update website/guides/terraform-integration-program.html.md Approved Co-authored-by: Laura Pacilio <83350965+laurapacilio@users.noreply.github.com> * Update website/guides/terraform-integration-program.html.md Approved Co-authored-by: Laura Pacilio <83350965+laurapacilio@users.noreply.github.com> * Update website/guides/terraform-integration-program.html.md Approved Co-authored-by: Laura Pacilio <83350965+laurapacilio@users.noreply.github.com> * Update website/guides/terraform-integration-program.html.md Approved Co-authored-by: Laura Pacilio <83350965+laurapacilio@users.noreply.github.com> * Update website/guides/terraform-integration-program.html.md Approved Co-authored-by: Laura Pacilio <83350965+laurapacilio@users.noreply.github.com> * Update website/guides/terraform-integration-program.html.md Approved Co-authored-by: Laura Pacilio <83350965+laurapacilio@users.noreply.github.com> * Adding suggested changes from PR. Removing example sentence. (Internal request) * Move note section above badge * Add spacing * Update website/guides/terraform-integration-program.html.md Approved Co-authored-by: Jeff Escalante Co-authored-by: Jeff Escalante Co-authored-by: Laura Pacilio <83350965+laurapacilio@users.noreply.github.com> --- .../terraform-integration-program.html.md | 235 ++++++++++++++++++ 1 file changed, 235 insertions(+) create mode 100644 website/guides/terraform-integration-program.html.md diff --git a/website/guides/terraform-integration-program.html.md b/website/guides/terraform-integration-program.html.md new file mode 100644 index 000000000000..56dea778e79e --- /dev/null +++ b/website/guides/terraform-integration-program.html.md @@ -0,0 +1,235 @@ +--- +layout: "extend" +page_title: "Terraform Integration Program" +sidebar_current: "guides-terraform-integration-program" +description: The Terraform Integration Program allows prospect partners to create and publish Terraform integrations that have been verified by HashiCorp. +--- + +# Terraform Integration Program + +The Terraform Integration Program facilitates prospect partners in creating and publishing Terraform integrations that have been verified by HashiCorp. + +## Terraform Editions + +Terraform is an infrastructure as code (IaC) tool that allows you to build, change, and version infrastructure safely and efficiently. This includes low-level components such as compute instances, storage, and networking, as well as high-level components such as DNS entries, SaaS features, etc. Terraform can manage both existing service providers and custom in-house solutions. + +HashiCorp offers three editions of Terraform: Open Source, Terraform Cloud, and Terraform Enterprise. + +- [Terraform Open Source](https://www.terraform.io/) provides a consistent CLI workflow to manage hundreds of cloud services. Terraform codifies cloud APIs into declarative configuration files. 
+- [Terraform Cloud (TFC)](https://www.terraform.io/cloud) is a free to use, self-service SaaS platform that extends the capabilities of the open source Terraform CLI. It adds automation and collaboration features, and performs Terraform functionality remotely, making it ideal for collaborative and production environments. Terraform Cloud is available as a hosted service at https://app.terraform.io. Small teams can sign up for free to connect Terraform to version control, share variables, run Terraform in a stable remote environment, and securely store remote state. Paid tiers allow you to add more than five users, create teams with different levels of permissions, enforce policies before creating infrastructure, and collaborate more effectively. +- [Terraform Enterprise (TFE)](https://www.terraform.io/docs/enterprise/index.html) is our self-hosted distribution of Terraform Cloud with advanced security and compliance features. It offers enterprises a private instance that includes the advanced features available in Terraform Cloud. + +## Types of Terraform Integrations + +The Terraform ecosystem is designed to enable users to apply Terraform across different use cases and environments. The Terraform Integration Program currently supports both workflow and integration partners (details below). Some partners can be both, depending on their use cases. + +- **Workflow Partners** build integrations for Terraform Cloud and/or Terraform Enterprise. Ideally, these partners are seeking to enable customers to use their existing platform within a Terraform Run. + +- **Infrastructure Partners** empower customers to leverage Terraform to manage resources exposed by their platform APIs. These are accessible to users of all Terraform editions. + +Our Workflow Partners typically have the following use cases: + +- **Code Scanning:** These partners provide tooling to review infrastructure as code configurations to prevent errors or security issues. +- **Cost Estimation:** These partners drive cost estimation of new deployment based on historical deployments. +- **Monitoring:** These partners provide performance visibility. +- **Zero Trust Security:** These partners help users create configurations to verify connections prior to providing access to an organization’s systems. +- **Audit:** These partners focus on maintaining code formatting, preventing security threats, and performing additional code analysis. +- **ITSM (Information Technology Service Management):** These partners focus on implementation, deployment, and delivery of IT workflows. +- **SSO (Single Sign On):** These partners focus on authentication for end users to securely sign on. +- **CI/CD:** These partners focus on continuous integration and continuous delivery/deployment. +- **VCS:** These partners focus on tracking and managing software code changes. + +Most workflow partners integrate with the Terraform workflow itself. Run tasks allow Terraform Cloud to execute tasks in external systems at specific points in the Terraform Cloud run lifecycle. This offers much more extensibility to Terraform Cloud customers, enabling them to integrate your services into the Terraform Cloud workflow. The beta release of this feature allows users to add and execute these tasks during the new pre-apply stage which exists in between the plan and apply stages. Eventually, we will open the entire workflow to Terraform Cloud users, including the pre-plan and post apply stages.
Reference the [Terraform Cloud Integrations documentation](https://www.terraform.io/guides/terraform-integration-program.html#terraform-cloud-integrations) for more details. + +![Integration program diagram](/assets/images/docs/terraform-integration-program-diagram.png) + +Our Infrastructure Partners typically have the following use cases: + +- **Public Cloud:** These are large-scale, global cloud providers that offer a range of services including IaaS, SaaS, and PaaS. +- **Container Orchestration:** These partners help with container provisioning and deployment. +- **IaaS (Infrastructure-as-a-Service):** These are infrastructure and IaaS providers that offer solutions such as storage, networking, and virtualization. +- **Security & Authentication:** These are partners with authentication and security monitoring platforms. +- **Asset Management:** These partners offer asset management of key organization and IT resources, including software licenses, hardware assets, and cloud resources. +- **CI/CD:** These partners focus on continuous integration and continuous delivery/deployment. +- **Logging & Monitoring:** These partners offer the capability to configure and manage services such as loggers, metric tools, and monitoring services. +- **Utility:** These partners offer helper functionality, such as random value generation, file creation, http interactions, and time-based resources. +- **Cloud Automation:** These partners offer specialized cloud infrastructure automation management capabilities such as configuration management. +- **Data Management:** These partners focus on data center storage, backup, and recovery solutions. +- **Networking:** These partners integrate with network-specific hardware and virtualized products such as routing, switching, firewalls, and SD-WAN solutions. +- **VCS (Version Control Systems):** These partners focus on VCS (Version Control System) projects, teams, and repositories from within Terraform. +- **Comms & Messaging:** These partners integrate with communication, email, and messaging platforms. +- **Database:** These partners offer capabilities to provision and configure your database resources. +- **PaaS (Platform-as-a-Service):** These are platform and PaaS providers that offer a range of hardware, software, and application development tools. This category includes smaller-scale providers and those with more specialized offerings. +- **Web Services:** These partners focus on web hosting, web performance, CDN and DNS services. + +Infrastructure partners integrate by building and publishing a plugin called a Terraform [provider](https://www.terraform.io/docs/language/providers/index.html). Providers are executable binaries written in Go that communicate with Terraform Core over an RPC interface. The provider acts as a translation layer for transactions with external APIs, such as a public cloud service (AWS, GCP, Azure), a PaaS service (Heroku), a SaaS service (DNSimple, CloudFlare), or on-prem resources (vSphere). Providers work across Terraform OSS, Terraform Cloud and Terraform Enterprise. Refer to the [Terraform Provider Integrations documentation](https://www.terraform.io/guides/terraform-integration-program.html#terraform-provider-integrations) for more detail. + + + +## Terraform Provider Integrations + +You can follow the five steps. below to develop your provider alongside HashiCorp. This ensures that you can publish new versions with Terraform quickly and efficiently. 

![Provider Development Process](/assets/images/docs/provider-program-steps.png)

1. **Prepare**: Develop integration using included resources
2. **Publish**: Publish provider to the Registry or plugin documentation
3. **Apply**: Apply to Technology Partnership Program
4. **Verify**: Verify integration with HashiCorp Alliances team
5. **Support**: Vendor provides ongoing maintenance and support

We encourage you to follow the tasks associated with each step fully to streamline the development process and minimize rework.

All providers integrate into and operate with Terraform exactly the same way. The table below is intended to help users understand who develops and maintains a particular provider.

| Tier | Description | Namespace |
|------|-------------|-----------|
| Official | Official providers are owned and maintained by HashiCorp. | `hashicorp` |
| Verified | Verified providers are owned and maintained by third-party technology partners. Providers in this tier indicate HashiCorp has verified the authenticity of the provider’s publisher, and that the partner is a member of the HashiCorp Technology Partner Program. | Third-party organization, e.g. `cisco/aci` |
| Community | Community providers are published to the Terraform Registry by individual maintainers, groups of maintainers, or other members of the Terraform community. | Maintainer’s individual or organization account, e.g. `cyrilgdn/postgresql` |
| Archived | Archived providers are Official or Verified providers that are no longer maintained by HashiCorp or the community. This may occur if an API is deprecated or interest was low. | `hashicorp` or third-party |
    + + +### 1. Prepare +To get started with the Terraform provider development, we recommend reviewing and following the articles listed below. +#### Provider Development Kit + +a) Writing custom providers [guide](https://www.terraform.io/guides/writing-custom-terraform-providers.html) + +b) Creating a Terraform Provider for Just About Anything: [video](https://www.youtube.com/watch?v=noxwUVet5RE) + +c) Sample provider developed by [partner](http://container-solutions.com/write-terraform-provider-part-1/) + +d) Example provider for reference: [AWS](https://github.com/terraform-providers/terraform-provider-aws), [OPC](https://github.com/terraform-providers/terraform-provider-opc) + +e) Contributing to Terraform [guidelines](https://github.com/hashicorp/terraform/blob/master/.github/CONTRIBUTING.md) + +f) HashiCorp developer [forum](https://discuss.hashicorp.com/c/terraform-providers/tf-plugin-sdk/43) + +Please submit questions or suggestions about the Terraform SDK and provider development to the HashiCorp Terraform plugin SDK forum. If you are new to provider development and would like assistance, you can leverage one of the following development agencies that have developed Terraform providers in the past: + +| Partner | Email | Website | +|--------------------|:-----------------------------|:---------------------| +| Crest Data Systems | malhar@crestdatasys.com | www.crestdatasys.com | +| DigitalOnUs | hashicorp@digitalonus.com | www.digitalonus.com | +| Akava | bd@akava.io | www.akava.io | +| OpenCredo | hashicorp@opencredo.com | www.opencredo.com | + +#### Provider License + +All Terraform providers listed as Verified must contain one of the following open source licenses: + +- CDDL 1.0, 2.0 +- CPL 1.0 +- Eclipse Public License (EPL) 1.0 +- MPL 1.0, 1.1, 2.0 +- PSL 2.0 +- Ruby's Licensing +- AFL 2.1, 3.0 +- Apache License 2.0 +- Artistic License 1.0, 2.0 +- Apache Software License (ASL) 1.1 +- Boost Software License +- BSD, BSD 3-clause, "BSD-new" +- CC-BY +- Microsoft Public License (MS-PL) +- MIT +### 2. Publish + +After your provider is complete and ready to release, you can publish it the integration to the [Terraform Registry](https://registry.terraform.io/). This makes it publicly available for all Terraform users. + +Follow the [Terraform Registry publishing documentation](https://www.terraform.io/docs/registry/providers/publishing.html) and review the [provider publishing learn guide](https://learn.hashicorp.com/tutorials/terraform/provider-release-publish?in=terraform/providers). If your company has multiple products with separate providers, we recommend publishing them under the same Github organization to help with discoverability. + +Once completed, your provider will be visible in the Terraform Registry and available to use in Terraform. Please confirm that everything looks correct and that documentation is rendering properly. + +### 3. Apply + +After your provider is published, connect with HashiCorp Alliances to onboard your integration to the HashiCorp technology ecosystem or [apply to become a technology partner](https://www.hashicorp.com/ecosystem/become-a-partner/#technology). + +### 4. Verify + +Work with your HashiCorp Alliances representative to verify the plugin within the Registry and become listed as an HashiCorp technology partner integration on HashiCorp website. + +### 5. Support + +Getting a new provider built and published to the Terraform Registry is just the first step towards enabling your users with a quality Terraform integration. 
Once a verified provider has been published, ongoing effort is required to maintain it.

HashiCorp Terraform has an extremely wide community of users and contributors, and we encourage everyone to report issues, however small, as well as help resolve them when possible. We expect that all verified provider publishers will continue to maintain the provider and address any issues users report in a timely manner. This includes resolving all critical issues within 48 hours and all other issues within 5 business days. HashiCorp reserves the right to remove verified status from any integration that is no longer being maintained.

Vendors who choose not to support their provider and prefer to make it a community-supported provider will no longer be listed as Verified.

## Terraform Cloud Integrations

Run tasks allow Terraform Cloud to execute tasks in external systems at specific points in the Terraform Cloud run lifecycle. The beta release of this feature allows users to add and execute these tasks during the new pre-apply stage that exists between the plan and apply stages. Tasks are executed by sending an API payload to the external system. This payload contains a collection of run-related information and a callback URL which the external system can use to send updates back to Terraform Cloud.

The external system can then use this run information and respond back to Terraform Cloud with a passed or failed status. Terraform Cloud uses this status response to determine if a run should proceed, based on the task's enforcement settings within a workspace.

Partners who successfully complete the Terraform Cloud Integration Checklist obtain a Terraform Cloud badge. This signifies HashiCorp has verified the integration and the partner is a member of the HashiCorp Technology Partner Program.

- Note: Currently, pre-apply is the only integration phase available. As of September 2021, run tasks are available only as a beta feature and are subject to change. Not all customers will see this functionality in their Terraform Cloud organization, because it is currently enabled by default only for Terraform Cloud for Business customers. Customers who are interested in run tasks but are not yet on that tier can [sign up here](https://docs.google.com/forms/d/e/1FAIpQLSf3JJIkU05bKWov2wXa9c-QV524WNaHuGIk7xjHnwl5ceGw2A/viewform).

![TFC Badge](/assets/images/docs/tfc-badge.png)

The above badge will help drive visibility for the partner as well as provide better differentiation for joint customers. This badge will be available for partners to use on their digital properties (as per guidelines in the technology partner guide that partners receive when they join HashiCorp’s technology partner program).

The Terraform Cloud Integration program has the following five steps.

![RunTask Program Process](/assets/images/docs/runtask-program-steps.png)

1. **Engage**: Sign up for the Technology Partner Program
2. **Develop & Test**: Understand and build using the API integration for run tasks
3. **Review**: Review integration with HashiCorp Alliances team
4. **Release**: Provide documentation for your integration
5. **Support**: Vendor provides ongoing maintenance and support

### 1. Engage

For partners who are new to working with HashiCorp, we recommend [signing up for our Technology Partner Program](https://www.hashicorp.com/go/tech-partner). 
To understand more about the program, check out our “[Become a Partner](https://www.hashicorp.com/partners/become-a-partner)” page.

### 2. Develop & Test

Partners should build an integration using the [Run Task APIs in Terraform Cloud](https://www.terraform.io/docs/cloud/api/run-tasks.html). To better understand how run tasks enhance the workflow, see the diagram below and check out our [announcement of Terraform Cloud run tasks](https://www.hashicorp.com/blog/terraform-cloud-run-tasks-beta-now-available). [Snyk](https://docs.snyk.io/features/integrations/ci-cd-integrations/integrating-snyk-with-terraform-cloud), for example, created an integration to detect configuration anomalies in code while reducing risk to the infrastructure. For additional API resources, [click here](https://www.terraform.io/docs/cloud/api/index.html). A minimal sketch of the request/callback exchange is included at the end of this guide.
**Currently, pre-apply is the only integration phase available.**

![RunTask Diagram](/assets/images/docs/runtask-diagram.png)

### 3. Review

Schedule time with your Partner Alliances manager to review your integration. The review should include enabling the integration on the partner’s platform and Terraform Cloud, explaining the use case for the integration, and a live demonstration of the functionality. If you are unable to engage with your Partner Alliances manager, you can also reach out to [technologypartners@hashicorp.com](mailto:technologypartners@hashicorp.com).

### 4. Release

We add new partners to the [Terraform Run Task page](https://www.terraform.io/docs/cloud/integrations/run-tasks/index.html#run-tasks-technology-partners) after the integration review and documentation are complete. On this page, you will provide a two-line summary about your integration(s). If you have multiple integrations, we highly recommend creating a summary that highlights all potential integration options.

You must provide documentation that helps users get started with your integration. You also need to provide documentation for our support team, including points of contact, email address, FAQ and/or best practices. We want to ensure that end users and internal HashiCorp support can reach the right contacts when working with customers.

### 5. Support

At HashiCorp, we view the release step as the beginning of the journey. Getting the integration built is just the first step in enabling users to leverage it against their infrastructure. Ongoing effort is required to support and maintain it after you complete the initial development.

We expect that partners will create a mechanism to track and resolve all critical issues as soon as possible (ideally within 48 hours) and all other issues within 5 business days. This is a requirement given the critical nature of Terraform Cloud to a customer’s operation. If you choose not to support your integration, we cannot consider it Verified and will not list it on the Terraform documentation website.

-> Contact us at technologypartners@hashicorp.com with any questions or feedback. 
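
As a rough, non-authoritative illustration of the request/callback exchange described in the Terraform Cloud Integrations section above, the Go sketch below shows the shape of a minimal run task service. Everything in it is an assumption to be verified against the Run Task API documentation: the endpoint path, the request field names (`access_token`, `task_result_callback_url`, `run_id`), and the `task-results` callback body are illustrative only, not HashiCorp-provided code.

```go
// Minimal run task sketch; all payload field names are illustrative assumptions.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// runTaskRequest holds only the fields this sketch needs from the payload
// Terraform Cloud sends when a run reaches the pre-apply stage.
type runTaskRequest struct {
	AccessToken string `json:"access_token"`             // short-lived token for the callback (assumed name)
	CallbackURL string `json:"task_result_callback_url"` // where to report the result (assumed name)
	RunID       string `json:"run_id"`
}

// sendResult reports a "passed" or "failed" status back to Terraform Cloud by
// PATCHing the callback URL included in the original request.
func sendResult(req runTaskRequest, status, message string) error {
	body, err := json.Marshal(map[string]interface{}{
		"data": map[string]interface{}{
			"type": "task-results",
			"attributes": map[string]string{
				"status":  status,
				"message": message,
			},
		},
	})
	if err != nil {
		return err
	}
	patch, err := http.NewRequest(http.MethodPatch, req.CallbackURL, bytes.NewReader(body))
	if err != nil {
		return err
	}
	patch.Header.Set("Content-Type", "application/vnd.api+json")
	patch.Header.Set("Authorization", "Bearer "+req.AccessToken)
	resp, err := http.DefaultClient.Do(patch)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("callback returned %s", resp.Status)
	}
	return nil
}

func main() {
	http.HandleFunc("/terraform-run-task", func(w http.ResponseWriter, r *http.Request) {
		var req runTaskRequest
		if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
			http.Error(w, "bad payload", http.StatusBadRequest)
			return
		}
		// Acknowledge immediately; the evaluation and callback run asynchronously.
		w.WriteHeader(http.StatusOK)
		go func() {
			// Replace with a real check (policy scan, cost estimate, ...).
			if err := sendResult(req, "passed", "no issues found"); err != nil {
				log.Printf("reporting result for run %s failed: %s", req.RunID, err)
			}
		}()
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

The key design point is that the service acknowledges Terraform Cloud's request immediately and reports `passed` or `failed` later through the callback URL; Terraform Cloud then uses that status, together with the workspace's enforcement settings, to decide whether the run proceeds.
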
From 7d3074df46b3d3351424a3b44df590fdcbbc6aad Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Fri, 15 Oct 2021 17:01:32 -0700 Subject: [PATCH 171/644] go.mod: go get github.com/mitchellh/go-wordwrap@v1.0.1 --- go.mod | 2 +- go.sum | 3 ++- 2 files changed, 3 insertions(+), 2 deletions(-) diff --git a/go.mod b/go.mod index 60b751726eff..646309e2a559 100644 --- a/go.mod +++ b/go.mod @@ -60,7 +60,7 @@ require ( github.com/mitchellh/copystructure v1.2.0 github.com/mitchellh/go-homedir v1.1.0 github.com/mitchellh/go-linereader v0.0.0-20190213213312-1b945b3263eb - github.com/mitchellh/go-wordwrap v1.0.0 + github.com/mitchellh/go-wordwrap v1.0.1 github.com/mitchellh/gox v1.0.1 github.com/mitchellh/mapstructure v1.1.2 github.com/mitchellh/panicwrap v1.0.0 diff --git a/go.sum b/go.sum index 7d2eaf9415b0..bd8b487fba33 100644 --- a/go.sum +++ b/go.sum @@ -505,8 +505,9 @@ github.com/mitchellh/go-testing-interface v0.0.0-20171004221916-a61a99592b77/go. github.com/mitchellh/go-testing-interface v1.0.0 h1:fzU/JVNcaqHQEcVFAKeR41fkiLdIPrefOvVG1VZ96U0= github.com/mitchellh/go-testing-interface v1.0.0/go.mod h1:kRemZodwjscx+RGhAo8eIhFbs2+BFgRtFPeD/KE+zxI= github.com/mitchellh/go-wordwrap v0.0.0-20150314170334-ad45545899c7/go.mod h1:ZXFpozHsX6DPmq2I0TCekCxypsnAUbP2oI0UX1GXzOo= -github.com/mitchellh/go-wordwrap v1.0.0 h1:6GlHJ/LTGMrIJbwgdqdl2eEH8o+Exx/0m8ir9Gns0u4= github.com/mitchellh/go-wordwrap v1.0.0/go.mod h1:ZXFpozHsX6DPmq2I0TCekCxypsnAUbP2oI0UX1GXzOo= +github.com/mitchellh/go-wordwrap v1.0.1 h1:TLuKupo69TCn6TQSyGxwI1EblZZEsQ0vMlAFQflz0v0= +github.com/mitchellh/go-wordwrap v1.0.1/go.mod h1:R62XHJLzvMFRBbcrT7m7WgmE1eOyTSsCt+hzestvNj0= github.com/mitchellh/gox v1.0.1 h1:x0jD3dcHk9a9xPSDN6YEL4xL6Qz0dvNYm8yZqui5chI= github.com/mitchellh/gox v1.0.1/go.mod h1:ED6BioOGXMswlXa2zxfh/xdd5QhwYliBFn9V18Ap4z4= github.com/mitchellh/iochan v1.0.0 h1:C+X3KsSTLFVBr/tK1eYN/vs4rJcvsiLU338UhYPJWeY= From f32702c5c2369d59f279863cd2815f00add2bd23 Mon Sep 17 00:00:00 2001 From: Alex Khaerov Date: Mon, 18 Oct 2021 12:32:57 +0800 Subject: [PATCH 172/644] Support deprecated assume_role block --- internal/backend/remote-state/oss/backend.go | 140 +++++++++++++----- .../backend/remote-state/oss/backend_state.go | 17 +-- internal/backend/remote-state/oss/client.go | 30 ++-- 3 files changed, 127 insertions(+), 60 deletions(-) diff --git a/internal/backend/remote-state/oss/backend.go b/internal/backend/remote-state/oss/backend.go index 5a2b2880ce61..c4131060e84d 100644 --- a/internal/backend/remote-state/oss/backend.go +++ b/internal/backend/remote-state/oss/backend.go @@ -24,32 +24,84 @@ import ( "github.com/aliyun/aliyun-oss-go-sdk/oss" "github.com/aliyun/aliyun-tablestore-go-sdk/tablestore" "github.com/hashicorp/go-cleanhttp" + "github.com/jmespath/go-jmespath" + "github.com/mitchellh/go-homedir" + "github.com/hashicorp/terraform/internal/backend" "github.com/hashicorp/terraform/internal/legacy/helper/schema" "github.com/hashicorp/terraform/version" - "github.com/jmespath/go-jmespath" - "github.com/mitchellh/go-homedir" ) +// deprecated in favor to flatten parameters +func deprecatedAssumeRoleSchema() *schema.Schema { + return &schema.Schema{ + Type: schema.TypeSet, + Optional: true, + ConflictsWith: []string{"assume_role_role_arn", "assume_role_session_name", "assume_role_policy", "assume_role_session_expiration"}, + MaxItems: 1, + Deprecated: "use flatten assume_role_* instead", + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "role_arn": { + Type: schema.TypeString, + Required: true, + Description: "The ARN of a 
RAM role to assume prior to making API calls.", + DefaultFunc: schema.EnvDefaultFunc("ALICLOUD_ASSUME_ROLE_ARN", ""), + }, + "session_name": { + Type: schema.TypeString, + Optional: true, + Description: "The session name to use when assuming the role.", + DefaultFunc: schema.EnvDefaultFunc("ALICLOUD_ASSUME_ROLE_SESSION_NAME", ""), + }, + "policy": { + Type: schema.TypeString, + Optional: true, + Description: "The permissions applied when assuming a role. You cannot use this policy to grant permissions which exceed those of the role that is being assumed.", + }, + "session_expiration": { + Type: schema.TypeInt, + Optional: true, + Description: "The time after which the established session for assuming role expires.", + ValidateFunc: func(v interface{}, k string) ([]string, []error) { + min := 900 + max := 3600 + value, ok := v.(int) + if !ok { + return nil, []error{fmt.Errorf("expected type of %s to be int", k)} + } + + if value < min || value > max { + return nil, []error{fmt.Errorf("expected %s to be in the range (%d - %d), got %d", k, min, max, v)} + } + + return nil, nil + }, + }, + }, + }, + } +} + // New creates a new backend for OSS remote state. func New() backend.Backend { s := &schema.Backend{ Schema: map[string]*schema.Schema{ - "access_key": &schema.Schema{ + "access_key": { Type: schema.TypeString, Optional: true, Description: "Alibaba Cloud Access Key ID", DefaultFunc: schema.EnvDefaultFunc("ALICLOUD_ACCESS_KEY", os.Getenv("ALICLOUD_ACCESS_KEY_ID")), }, - "secret_key": &schema.Schema{ + "secret_key": { Type: schema.TypeString, Optional: true, Description: "Alibaba Cloud Access Secret Key", DefaultFunc: schema.EnvDefaultFunc("ALICLOUD_SECRET_KEY", os.Getenv("ALICLOUD_ACCESS_KEY_SECRET")), }, - "security_token": &schema.Schema{ + "security_token": { Type: schema.TypeString, Optional: true, Description: "Alibaba Cloud Security Token", @@ -63,7 +115,7 @@ func New() backend.Backend { Description: "The RAM Role Name attached on a ECS instance for API operations. You can retrieve this from the 'Access Control' section of the Alibaba Cloud console.", }, - "region": &schema.Schema{ + "region": { Type: schema.TypeString, Optional: true, Description: "The region of the OSS bucket.", @@ -82,13 +134,13 @@ func New() backend.Backend { DefaultFunc: schema.EnvDefaultFunc("ALICLOUD_OSS_ENDPOINT", os.Getenv("OSS_ENDPOINT")), }, - "bucket": &schema.Schema{ + "bucket": { Type: schema.TypeString, Required: true, Description: "The name of the OSS bucket", }, - "prefix": &schema.Schema{ + "prefix": { Type: schema.TypeString, Optional: true, Description: "The directory where state files will be saved inside the bucket", @@ -102,7 +154,7 @@ func New() backend.Backend { }, }, - "key": &schema.Schema{ + "key": { Type: schema.TypeString, Optional: true, Description: "The path of the state file inside the bucket", @@ -122,14 +174,14 @@ func New() backend.Backend { Default: "", }, - "encrypt": &schema.Schema{ + "encrypt": { Type: schema.TypeBool, Optional: true, Description: "Whether to enable server side encryption of the state file", Default: false, }, - "acl": &schema.Schema{ + "acl": { Type: schema.TypeString, Optional: true, Description: "Object ACL to be applied to the state file", @@ -158,27 +210,32 @@ func New() backend.Backend { Description: "This is the Alibaba Cloud profile name as set in the shared credentials file. 
It can also be sourced from the `ALICLOUD_PROFILE` environment variable.", DefaultFunc: schema.EnvDefaultFunc("ALICLOUD_PROFILE", ""), }, + "assume_role": deprecatedAssumeRoleSchema(), "assume_role_role_arn": { - Type: schema.TypeString, - Required: true, - Description: "The ARN of a RAM role to assume prior to making API calls.", - DefaultFunc: schema.EnvDefaultFunc("ALICLOUD_ASSUME_ROLE_ARN", ""), + Type: schema.TypeString, + Optional: true, + ConflictsWith: []string{"assume_role"}, + Description: "The ARN of a RAM role to assume prior to making API calls.", + DefaultFunc: schema.EnvDefaultFunc("ALICLOUD_ASSUME_ROLE_ARN", ""), }, "assume_role_session_name": { - Type: schema.TypeString, - Optional: true, - Description: "The session name to use when assuming the role.", - DefaultFunc: schema.EnvDefaultFunc("ALICLOUD_ASSUME_ROLE_SESSION_NAME", ""), + Type: schema.TypeString, + Optional: true, + ConflictsWith: []string{"assume_role"}, + Description: "The session name to use when assuming the role.", + DefaultFunc: schema.EnvDefaultFunc("ALICLOUD_ASSUME_ROLE_SESSION_NAME", ""), }, "assume_role_policy": { - Type: schema.TypeString, - Optional: true, - Description: "The permissions applied when assuming a role. You cannot use this policy to grant permissions which exceed those of the role that is being assumed.", + Type: schema.TypeString, + Optional: true, + ConflictsWith: []string{"assume_role"}, + Description: "The permissions applied when assuming a role. You cannot use this policy to grant permissions which exceed those of the role that is being assumed.", }, "assume_role_session_expiration": { - Type: schema.TypeInt, - Optional: true, - Description: "The time after which the established session for assuming role expires.", + Type: schema.TypeInt, + Optional: true, + ConflictsWith: []string{"assume_role"}, + Description: "The time after which the established session for assuming role expires.", ValidateFunc: func(v interface{}, k string) ([]string, []error) { min := 900 max := 3600 @@ -214,7 +271,6 @@ type Backend struct { stateKey string serverSideEncryption bool acl string - endpoint string otsEndpoint string otsTable string } @@ -260,13 +316,29 @@ func (b *Backend) configure(ctx context.Context) error { sessionExpiration = (int)(expiredSeconds.(float64)) } - roleArn = d.Get("assume_role_role_arn").(string) - sessionName = d.Get("assume_role_session_name").(string) + if v, ok := d.GetOk("assume_role"); ok { + // deprecated assume_role block + for _, v := range v.(*schema.Set).List() { + assumeRole := v.(map[string]interface{}) + if assumeRole["role_arn"].(string) != "" { + roleArn = assumeRole["role_arn"].(string) + } + if assumeRole["session_name"].(string) != "" { + sessionName = assumeRole["session_name"].(string) + } + policy = assumeRole["policy"].(string) + sessionExpiration = assumeRole["session_expiration"].(int) + } + } else { + roleArn = d.Get("assume_role_role_arn").(string) + sessionName = d.Get("assume_role_session_name").(string) + policy = d.Get("assume_role_policy").(string) + sessionExpiration = d.Get("assume_role_session_expiration").(int) + } + if sessionName == "" { sessionName = "terraform" } - policy = d.Get("assume_role_policy").(string) - sessionExpiration = d.Get("assume_role_session_expiration").(int) if sessionExpiration == 0 { if v := os.Getenv("ALICLOUD_ASSUME_ROLE_SESSION_EXPIRATION"); v != "" { if expiredSeconds, err := strconv.Atoi(v); err == nil { @@ -346,13 +418,13 @@ func (b *Backend) getOSSEndpointByRegion(access_key, secret_key, security_token, 
locationClient, err := location.NewClientWithOptions(region, getSdkConfig(), credentials.NewStsTokenCredential(access_key, secret_key, security_token)) if err != nil { - return nil, fmt.Errorf("Unable to initialize the location client: %#v", err) + return nil, fmt.Errorf("unable to initialize the location client: %#v", err) } locationClient.AppendUserAgent(TerraformUA, TerraformVersion) endpointsResponse, err := locationClient.DescribeEndpoints(args) if err != nil { - return nil, fmt.Errorf("Describe oss endpoint using region: %#v got an error: %#v.", region, err) + return nil, fmt.Errorf("describe oss endpoint using region: %#v got an error: %#v", region, err) } return endpointsResponse, nil } @@ -442,7 +514,7 @@ func (a *Invoker) Run(f func() error) error { catcher.RetryCount-- if catcher.RetryCount <= 0 { - return fmt.Errorf("Retry timeout and got an error: %#v.", err) + return fmt.Errorf("retry timeout and got an error: %#v", err) } else { time.Sleep(time.Duration(catcher.RetryWaitSeconds) * time.Second) return a.Run(f) @@ -552,7 +624,7 @@ func getAuthCredentialByEcsRoleName(ecsRoleName string) (accessKey, secretKey, t response := responses.NewCommonResponse() err = responses.Unmarshal(response, httpResponse, "") if err != nil { - err = fmt.Errorf("Unmarshal Ecs sts token response err : %s", err.Error()) + err = fmt.Errorf("unmarshal Ecs sts token response err : %s", err.Error()) return } diff --git a/internal/backend/remote-state/oss/backend_state.go b/internal/backend/remote-state/oss/backend_state.go index d91ed6c5c9bd..d08e1d133897 100644 --- a/internal/backend/remote-state/oss/backend_state.go +++ b/internal/backend/remote-state/oss/backend_state.go @@ -3,19 +3,18 @@ package oss import ( "errors" "fmt" + "log" + "path" "sort" "strings" "github.com/aliyun/aliyun-oss-go-sdk/oss" + "github.com/aliyun/aliyun-tablestore-go-sdk/tablestore" + "github.com/hashicorp/terraform/internal/backend" "github.com/hashicorp/terraform/internal/states" "github.com/hashicorp/terraform/internal/states/remote" "github.com/hashicorp/terraform/internal/states/statemgr" - - "log" - "path" - - "github.com/aliyun/aliyun-tablestore-go-sdk/tablestore" ) const ( @@ -43,7 +42,7 @@ func (b *Backend) remoteClient(name string) (*RemoteClient, error) { TableName: b.otsTable, }) if err != nil { - return client, fmt.Errorf("Error describing table store %s: %#v", b.otsTable, err) + return client, fmt.Errorf("error describing table store %s: %#v", b.otsTable, err) } } @@ -53,7 +52,7 @@ func (b *Backend) remoteClient(name string) (*RemoteClient, error) { func (b *Backend) Workspaces() ([]string, error) { bucket, err := b.ossClient.Bucket(b.bucketName) if err != nil { - return []string{""}, fmt.Errorf("Error getting bucket: %#v", err) + return []string{""}, fmt.Errorf("error getting bucket: %#v", err) } var options []oss.Option @@ -85,7 +84,7 @@ func (b *Backend) Workspaces() ([]string, error) { } else { options = append(options, oss.Marker(lastObj)) } - resp, err = bucket.ListObjects(options...) + bucket.ListObjects(options...) 
} else { break } @@ -135,7 +134,7 @@ func (b *Backend) StateMgr(name string) (statemgr.Full, error) { lockInfo.Operation = "init" lockId, err := client.Lock(lockInfo) if err != nil { - return nil, fmt.Errorf("Failed to lock OSS state: %s", err) + return nil, fmt.Errorf("failed to lock OSS state: %s", err) } // Local helper function so we can call it multiple places diff --git a/internal/backend/remote-state/oss/client.go b/internal/backend/remote-state/oss/client.go index ccf19576a200..78d835ae13eb 100644 --- a/internal/backend/remote-state/oss/client.go +++ b/internal/backend/remote-state/oss/client.go @@ -3,22 +3,21 @@ package oss import ( "bytes" "crypto/md5" + "encoding/hex" "encoding/json" "fmt" "io" - - "encoding/hex" "log" - "sync" "time" "github.com/aliyun/aliyun-oss-go-sdk/oss" "github.com/aliyun/aliyun-tablestore-go-sdk/tablestore" "github.com/hashicorp/go-multierror" uuid "github.com/hashicorp/go-uuid" + "github.com/pkg/errors" + "github.com/hashicorp/terraform/internal/states/remote" "github.com/hashicorp/terraform/internal/states/statemgr" - "github.com/pkg/errors" ) const ( @@ -48,8 +47,6 @@ type RemoteClient struct { lockFile string serverSideEncryption bool acl string - info *statemgr.LockInfo - mu sync.Mutex otsTable string } @@ -99,7 +96,7 @@ func (c *RemoteClient) Get() (payload *remote.Payload, err error) { func (c *RemoteClient) Put(data []byte) error { bucket, err := c.ossClient.Bucket(c.bucketName) if err != nil { - return fmt.Errorf("Error getting bucket: %#v", err) + return fmt.Errorf("error getting bucket: %#v", err) } body := bytes.NewReader(data) @@ -116,7 +113,7 @@ func (c *RemoteClient) Put(data []byte) error { if body != nil { if err := bucket.PutObject(c.stateFile, body, options...); err != nil { - return fmt.Errorf("Failed to upload state %s: %#v", c.stateFile, err) + return fmt.Errorf("failed to upload state %s: %#v", c.stateFile, err) } } @@ -124,7 +121,7 @@ func (c *RemoteClient) Put(data []byte) error { if err := c.putMD5(sum[:]); err != nil { // if this errors out, we unfortunately have to error out altogether, // since the next Get will inevitably fail. 
- return fmt.Errorf("Failed to store state MD5: %s", err) + return fmt.Errorf("failed to store state MD5: %s", err) } return nil } @@ -132,13 +129,13 @@ func (c *RemoteClient) Put(data []byte) error { func (c *RemoteClient) Delete() error { bucket, err := c.ossClient.Bucket(c.bucketName) if err != nil { - return fmt.Errorf("Error getting bucket %s: %#v", c.bucketName, err) + return fmt.Errorf("error getting bucket %s: %#v", c.bucketName, err) } log.Printf("[DEBUG] Deleting remote state from OSS: %#v", c.stateFile) if err := bucket.DeleteObject(c.stateFile); err != nil { - return fmt.Errorf("Error deleting state %s: %#v", c.stateFile, err) + return fmt.Errorf("error deleting state %s: %#v", c.stateFile, err) } if err := c.deleteMD5(); err != nil { @@ -413,11 +410,11 @@ func (c *RemoteClient) lockPath() string { func (c *RemoteClient) getObj() (*remote.Payload, error) { bucket, err := c.ossClient.Bucket(c.bucketName) if err != nil { - return nil, fmt.Errorf("Error getting bucket %s: %#v", c.bucketName, err) + return nil, fmt.Errorf("error getting bucket %s: %#v", c.bucketName, err) } if exist, err := bucket.IsObjectExist(c.stateFile); err != nil { - return nil, fmt.Errorf("Estimating object %s is exist got an error: %#v", c.stateFile, err) + return nil, fmt.Errorf("estimating object %s is exist got an error: %#v", c.stateFile, err) } else if !exist { return nil, nil } @@ -425,12 +422,12 @@ func (c *RemoteClient) getObj() (*remote.Payload, error) { var options []oss.Option output, err := bucket.GetObject(c.stateFile, options...) if err != nil { - return nil, fmt.Errorf("Error getting object: %#v", err) + return nil, fmt.Errorf("error getting object: %#v", err) } buf := bytes.NewBuffer(nil) if _, err := io.Copy(buf, output); err != nil { - return nil, fmt.Errorf("Failed to read remote state: %s", err) + return nil, fmt.Errorf("failed to read remote state: %s", err) } sum := md5.Sum(buf.Bytes()) payload := &remote.Payload{ @@ -452,5 +449,4 @@ This may be caused by unusually long delays in OSS processing a previous state update. Please wait for a minute or two and try again. 
If this problem persists, and neither OSS nor TableStore are experiencing an outage, you may need to manually verify the remote state and update the Digest value stored in the -TableStore table to the following value: %x -` +TableStore table to the following value: %x` From 14f366dbf43131081e40ab156f991a2ac364da3b Mon Sep 17 00:00:00 2001 From: Alex Khaerov Date: Mon, 18 Oct 2021 12:54:40 +0800 Subject: [PATCH 173/644] Update documentation --- internal/backend/remote-state/oss/backend.go | 4 +- .../language/settings/backends/oss.html.md | 60 +++++++++++-------- 2 files changed, 38 insertions(+), 26 deletions(-) diff --git a/internal/backend/remote-state/oss/backend.go b/internal/backend/remote-state/oss/backend.go index c4131060e84d..236a5e7857c2 100644 --- a/internal/backend/remote-state/oss/backend.go +++ b/internal/backend/remote-state/oss/backend.go @@ -32,14 +32,14 @@ import ( "github.com/hashicorp/terraform/version" ) -// deprecated in favor to flatten parameters +// Deprecated in favor of flattening assume_role_* options func deprecatedAssumeRoleSchema() *schema.Schema { return &schema.Schema{ Type: schema.TypeSet, Optional: true, ConflictsWith: []string{"assume_role_role_arn", "assume_role_session_name", "assume_role_policy", "assume_role_session_expiration"}, MaxItems: 1, - Deprecated: "use flatten assume_role_* instead", + Deprecated: "use assume_role_* options instead", Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "role_arn": { diff --git a/website/docs/language/settings/backends/oss.html.md b/website/docs/language/settings/backends/oss.html.md index 515095ceb533..92e0d3ab0520 100644 --- a/website/docs/language/settings/backends/oss.html.md +++ b/website/docs/language/settings/backends/oss.html.md @@ -77,29 +77,41 @@ data "terraform_remote_state" "network" { The following configuration options or environment variables are supported: - * `access_key` - (Optional) Alibaba Cloud access key. It supports environment variables `ALICLOUD_ACCESS_KEY` and `ALICLOUD_ACCESS_KEY_ID`. - * `secret_key` - (Optional) Alibaba Cloud secret access key. It supports environment variables `ALICLOUD_SECRET_KEY` and `ALICLOUD_ACCESS_KEY_SECRET`. - * `security_token` - (Optional) STS access token. It supports environment variable `ALICLOUD_SECURITY_TOKEN`. - * `ecs_role_name` - (Optional, Available in 0.12.14+) The RAM Role Name attached on a ECS instance for API operations. You can retrieve this from the 'Access Control' section of the Alibaba Cloud console. - * `region` - (Optional) The region of the OSS bucket. It supports environment variables `ALICLOUD_REGION` and `ALICLOUD_DEFAULT_REGION`. - * `endpoint` - (Optional) A custom endpoint for the OSS API. It supports environment variables `ALICLOUD_OSS_ENDPOINT` and `OSS_ENDPOINT`. - * `bucket` - (Required) The name of the OSS bucket. - * `prefix` - (Opeional) The path directory of the state file will be stored. Default to "env:". - * `key` - (Optional) The name of the state file. Defaults to `terraform.tfstate`. - * `tablestore_endpoint` / `ALICLOUD_TABLESTORE_ENDPOINT` - (Optional) A custom endpoint for the TableStore API. - * `tablestore_table` - (Optional) A TableStore table for state locking and consistency. The table must have a primary key named `LockID` of type `String`. - * `encrypt` - (Optional) Whether to enable server side - encryption of the state file. If it is true, OSS will use 'AES256' encryption algorithm to encrypt state file. 
- * `acl` - (Optional) [Object - ACL](https://www.alibabacloud.com/help/doc-detail/52284.htm) - to be applied to the state file. - * `shared_credentials_file` - (Optional, Available in 0.12.8+) This is the path to the shared credentials file. It can also be sourced from the `ALICLOUD_SHARED_CREDENTIALS_FILE` environment variable. If this is not set and a profile is specified, `~/.aliyun/config.json` will be used. - * `profile` - (Optional, Available in 0.12.8+) This is the Alibaba Cloud profile name as set in the shared credentials file. It can also be sourced from the `ALICLOUD_PROFILE` environment variable. -* `assume_role_role_arn` - (Optional, Available in 0.12.6+) The ARN of the role to assume. If ARN is set to an empty string, it does not perform role switching. It supports environment variable `ALICLOUD_ASSUME_ROLE_ARN`. +* `access_key` - (Optional) Alibaba Cloud access key. It supports environment variables `ALICLOUD_ACCESS_KEY` and `ALICLOUD_ACCESS_KEY_ID`. +* `secret_key` - (Optional) Alibaba Cloud secret access key. It supports environment variables `ALICLOUD_SECRET_KEY` and `ALICLOUD_ACCESS_KEY_SECRET`. +* `security_token` - (Optional) STS access token. It supports environment variable `ALICLOUD_SECURITY_TOKEN`. +* `ecs_role_name` - (Optional, Available in 0.12.14+) The RAM Role Name attached on a ECS instance for API operations. You can retrieve this from the 'Access Control' section of the Alibaba Cloud console. +* `region` - (Optional) The region of the OSS bucket. It supports environment variables `ALICLOUD_REGION` and `ALICLOUD_DEFAULT_REGION`. +* `endpoint` - (Optional) A custom endpoint for the OSS API. It supports environment variables `ALICLOUD_OSS_ENDPOINT` and `OSS_ENDPOINT`. +* `bucket` - (Required) The name of the OSS bucket. +* `prefix` - (Opeional) The path directory of the state file will be stored. Default to "env:". +* `key` - (Optional) The name of the state file. Defaults to `terraform.tfstate`. +* `tablestore_endpoint` / `ALICLOUD_TABLESTORE_ENDPOINT` - (Optional) A custom endpoint for the TableStore API. +* `tablestore_table` - (Optional) A TableStore table for state locking and consistency. The table must have a primary key named `LockID` of type `String`. +* `encrypt` - (Optional) Whether to enable server side + encryption of the state file. If it is true, OSS will use 'AES256' encryption algorithm to encrypt state file. +* `acl` - (Optional) [Object + ACL](https://www.alibabacloud.com/help/doc-detail/52284.htm) + to be applied to the state file. +* `shared_credentials_file` - (Optional, Available in 0.12.8+) This is the path to the shared credentials file. It can also be sourced from the `ALICLOUD_SHARED_CREDENTIALS_FILE` environment variable. If this is not set and a profile is specified, `~/.aliyun/config.json` will be used. +* `profile` - (Optional, Available in 0.12.8+) This is the Alibaba Cloud profile name as set in the shared credentials file. It can also be sourced from the `ALICLOUD_PROFILE` environment variable. +* `assume_role_role_arn` - (Optional, Available in 1.1.0+) The ARN of the role to assume. If ARN is set to an empty string, it does not perform role switching. It supports the environment variable `ALICLOUD_ASSUME_ROLE_ARN`. Terraform executes configuration on account with provided credentials. -* `assume_role_policy` - (Optional, Available in 0.12.6+) A more restrictive policy to apply to the temporary credentials. This gives you a way to further restrict the permissions for the resulting temporary - security credentials. 
You cannot use this policy to grant permissions which exceed those of the role that is being assumed. -* `assume_role_session_name` - (Optional, Available in 0.12.6+) The session name to use when assuming the role. If omitted, 'terraform' is passed to the AssumeRole call as session name. It supports environment variable `ALICLOUD_ASSUME_ROLE_SESSION_NAME`. -* `assume_role_session_expiration` - (Optional, Available in 0.12.6+) The time after which the established session for assuming role expires. Valid value range: [900-3600] seconds. Default to 3600 (in this case Alibaba Cloud use own default value). It supports environment variable `ALICLOUD_ASSUME_ROLE_SESSION_EXPIRATION`. +* `assume_role_policy` - (Optional, Available in 1.1.0+ A more restrictive policy to apply to the temporary credentials. This gives you a way to further restrict the permissions for the resulting temporary security credentials. You cannot use this policy to grant permissions that exceed those of the role that is being assumed. +* `assume_role_session_name` - (Optional, Available in 1.1.0+) The session name to use when assuming the role. If omitted, 'terraform' is passed to the AssumeRole call as session name. It supports environment variable `ALICLOUD_ASSUME_ROLE_SESSION_NAME`. +* `assume_role_session_expiration` - (Optional, Available in 1.1.0+ The time after which the established session for assuming role expires. Valid value range: [900-3600] seconds. Default to 3600 (in this case Alibaba Cloud uses its own default value). It supports environment variable `ALICLOUD_ASSUME_ROLE_SESSION_EXPIRATION`. --> **Note:** If you want to store state in the custom OSS endpoint, you can specify a environment variable `OSS_ENDPOINT`, like "oss-cn-beijing-internal.aliyuncs.com" +* `assume_role` - (**Deprecated as of 1.1.0+**, Available in 0.12.6+) If provided with a role ARN, will attempt to assume this role using the supplied credentials. + + **Deprecated in favor of flattening assume_role_\* options** + + * `role_arn` - (Required) The ARN of the role to assume. If ARN is set to an empty string, it does not perform role switching. It supports the environment variable `ALICLOUD_ASSUME_ROLE_ARN`. + Terraform executes configuration on account with provided credentials. + + * `policy` - (Optional) A more restrictive policy to apply to the temporary credentials. This gives you a way to further restrict the permissions for the resulting temporary security credentials. You cannot use this policy to grant permissions that exceed those of the role that is being assumed. + + * `session_name` - (Optional) The session name to use when assuming the role. If omitted, 'terraform' is passed to the AssumeRole call as session name. It supports environment variable `ALICLOUD_ASSUME_ROLE_SESSION_NAME`. + + * `session_expiration` - (Optional) The time after which the established session for assuming role expires. Valid value range: [900-3600] seconds. Default to 3600 (in this case Alibaba Cloud uses its own default value). It supports environment variable `ALICLOUD_ASSUME_ROLE_SESSION_EXPIRATION`. + +-> **Note:** If you want to store state in the custom OSS endpoint, you can specify an environment variable `OSS_ENDPOINT`, like "oss-cn-beijing-internal.aliyuncs.com" From fb58f9e6d2e9609590af1417d090f9c0aeb29f7b Mon Sep 17 00:00:00 2001 From: Alisdair McDiarmid Date: Mon, 18 Oct 2021 16:54:31 -0400 Subject: [PATCH 174/644] cli: Fix flaky init cancel test There is a race between the MockSource and ShutdownCh which sometimes causes this test to fail. 
Add a HangingSource implementation of Source which hangs until the context is cancelled, so that there is always time for a user-initiated shutdown to trigger the cancellation code path under test. --- internal/command/init_test.go | 14 +++++------- internal/getproviders/hanging_source.go | 29 +++++++++++++++++++++++++ 2 files changed, 35 insertions(+), 8 deletions(-) create mode 100644 internal/getproviders/hanging_source.go diff --git a/internal/command/init_test.go b/internal/command/init_test.go index 2d96b27b4403..74d4b0f64d36 100644 --- a/internal/command/init_test.go +++ b/internal/command/init_test.go @@ -1372,14 +1372,12 @@ func TestInit_cancel(t *testing.T) { defer os.RemoveAll(td) defer testChdir(t, td)() - providerSource, closeSrc := newMockProviderSource(t, map[string][]string{ - "test": {"1.2.3", "1.2.4"}, - "test-beta": {"1.2.4"}, - "source": {"1.2.2", "1.2.3", "1.2.1"}, - }) - defer closeSrc() + // Use a provider source implementation which is designed to hang indefinitely, + // to avoid a race between the closed shutdown channel and the provider source + // operations. + providerSource := &getproviders.HangingSource{} - // our shutdown channel is pre-closed so init will exit as soon as it + // Our shutdown channel is pre-closed so init will exit as soon as it // starts a cancelable portion of the process. shutdownCh := make(chan struct{}) close(shutdownCh) @@ -1401,7 +1399,7 @@ func TestInit_cancel(t *testing.T) { args := []string{} if code := c.Run(args); code == 0 { - t.Fatalf("succeeded; wanted error") + t.Fatalf("succeeded; wanted error\n%s", ui.OutputWriter.String()) } // Currently the first operation that is cancelable is provider // installation, so our error message comes from there. If we diff --git a/internal/getproviders/hanging_source.go b/internal/getproviders/hanging_source.go new file mode 100644 index 000000000000..388b617013f2 --- /dev/null +++ b/internal/getproviders/hanging_source.go @@ -0,0 +1,29 @@ +package getproviders + +import ( + "context" + + "github.com/hashicorp/terraform/internal/addrs" +) + +// HangingSource is an implementation of Source which hangs until the given +// context is cancelled. This is useful only for unit tests of user-controlled +// cancels. +type HangingSource struct { +} + +var _ Source = (*HangingSource)(nil) + +func (s *HangingSource) AvailableVersions(ctx context.Context, provider addrs.Provider) (VersionList, Warnings, error) { + <-ctx.Done() + return nil, nil, nil +} + +func (s *HangingSource) PackageMeta(ctx context.Context, provider addrs.Provider, version Version, target Platform) (PackageMeta, error) { + <-ctx.Done() + return PackageMeta{}, nil +} + +func (s *HangingSource) ForDisplay(provider addrs.Provider) string { + return "hanging source" +} From c587384dfffd98eaadbdba9f406a0ed59aed6faf Mon Sep 17 00:00:00 2001 From: Alisdair McDiarmid Date: Mon, 18 Oct 2021 14:41:04 -0400 Subject: [PATCH 175/644] cli: Restore -lock and -lock-timeout init flags The -lock and -lock-timeout flags were removed prior to the release of 1.0 as they were thought to have no effect. This is not true in the case of state migrations when changing backends. This commit restores these flags, and adds test coverage for locking during backend state migration. Also update the help output describing other boolean flags, showing the argument as the user would type it rather than the default behavior. 
--- internal/command/init.go | 27 +++++++--- internal/command/init_test.go | 54 +++++++++++++++++++ .../.terraform/terraform.tfstate | 22 ++++++++ .../input.config | 1 + .../init-backend-migrate-while-locked/main.tf | 5 ++ 5 files changed, 102 insertions(+), 7 deletions(-) create mode 100644 internal/command/testdata/init-backend-migrate-while-locked/.terraform/terraform.tfstate create mode 100644 internal/command/testdata/init-backend-migrate-while-locked/input.config create mode 100644 internal/command/testdata/init-backend-migrate-while-locked/main.tf diff --git a/internal/command/init.go b/internal/command/init.go index 551a4d0b8a5a..8083b7513d32 100644 --- a/internal/command/init.go +++ b/internal/command/init.go @@ -44,6 +44,8 @@ func (c *InitCommand) Run(args []string) int { cmdFlags.StringVar(&flagFromModule, "from-module", "", "copy the source of the given module into the directory before init") cmdFlags.BoolVar(&flagGet, "get", true, "") cmdFlags.BoolVar(&c.forceInitCopy, "force-copy", false, "suppress prompts about copying state data") + cmdFlags.BoolVar(&c.Meta.stateLock, "lock", true, "lock state") + cmdFlags.DurationVar(&c.Meta.stateLockTimeout, "lock-timeout", 0, "lock timeout") cmdFlags.BoolVar(&c.reconfigure, "reconfigure", false, "reconfigure") cmdFlags.BoolVar(&c.migrateState, "migrate-state", false, "migrate state") cmdFlags.BoolVar(&flagUpgrade, "upgrade", false, "") @@ -932,6 +934,8 @@ func (c *InitCommand) AutocompleteFlags() complete.Flags { "-from-module": completePredictModuleSource, "-get": completePredictBoolean, "-input": completePredictBoolean, + "-lock": completePredictBoolean, + "-lock-timeout": complete.PredictAnything, "-no-color": complete.PredictNothing, "-plugin-dir": complete.PredictDirs(""), "-reconfigure": complete.PredictNothing, @@ -959,7 +963,8 @@ Usage: terraform [global options] init [options] Options: - -backend=true Configure the backend for this configuration. + -backend=false Disable backend initialization for this configuration + and use the previously initialized backend instead. -backend-config=path This can be either a path to an HCL file with key/value assignments (same format as terraform.tfvars) or a @@ -975,10 +980,17 @@ Options: -from-module=SOURCE Copy the contents of the given module into the target directory before initialization. - -get=true Download any modules for this configuration. + -get=false Disable downloading modules for this configuration. - -input=true Ask for input if necessary. If false, will error if - input was required. + -input=false Disable prompting for missing backend configuration + values. This will result in an error if the backend + configuration is not fully specified. + + -lock=false Don't hold a state lock during backend migration. + This is dangerous if others might concurrently run + commands against the same workspace. + + -lock-timeout=0s Duration to retry a state lock. -no-color If specified, output won't contain any color. @@ -993,9 +1005,10 @@ Options: -migrate-state Reconfigure the backend, and attempt to migrate any existing state. - -upgrade=false If installing modules (-get) or plugins, ignore - previously-downloaded objects and install the - latest version allowed within configured constraints. + -upgrade Install the latest module and provider versions + allowed within configured constraints, overriding the + default behavior of selecting exactly the version + recorded in the dependency lockfile. -lockfile=MODE Set a dependency lockfile mode. Currently only "readonly" is valid. 
diff --git a/internal/command/init_test.go b/internal/command/init_test.go index 2d96b27b4403..c50e97ac9018 100644 --- a/internal/command/init_test.go +++ b/internal/command/init_test.go @@ -562,6 +562,60 @@ func TestInit_backendConfigFileChange(t *testing.T) { } } +func TestInit_backendMigrateWhileLocked(t *testing.T) { + // Create a temporary working directory that is empty + td := tempDir(t) + testCopyDir(t, testFixturePath("init-backend-migrate-while-locked"), td) + defer os.RemoveAll(td) + defer testChdir(t, td)() + + providerSource, close := newMockProviderSource(t, map[string][]string{ + "hashicorp/test": {"1.2.3"}, + }) + defer close() + + ui := new(cli.MockUi) + view, _ := testView(t) + c := &InitCommand{ + Meta: Meta{ + testingOverrides: metaOverridesForProvider(testProvider()), + ProviderSource: providerSource, + Ui: ui, + View: view, + }, + } + + // Create some state, so the backend has something to migrate from + f, err := os.Create("local-state.tfstate") + if err != nil { + t.Fatalf("err: %s", err) + } + err = writeStateForTesting(testState(), f) + f.Close() + if err != nil { + t.Fatalf("err: %s", err) + } + + // Lock the source state + unlock, err := testLockState(testDataDir, "local-state.tfstate") + if err != nil { + t.Fatal(err) + } + defer unlock() + + // Attempt to migrate + args := []string{"-backend-config", "input.config", "-migrate-state", "-force-copy"} + if code := c.Run(args); code == 0 { + t.Fatalf("expected nonzero exit code: %s", ui.OutputWriter.String()) + } + + // Disabling locking should work + args = []string{"-backend-config", "input.config", "-migrate-state", "-force-copy", "-lock=false"} + if code := c.Run(args); code != 0 { + t.Fatalf("expected zero exit code, got %d: %s", code, ui.ErrorWriter.String()) + } +} + func TestInit_backendConfigKV(t *testing.T) { // Create a temporary working directory that is empty td := tempDir(t) diff --git a/internal/command/testdata/init-backend-migrate-while-locked/.terraform/terraform.tfstate b/internal/command/testdata/init-backend-migrate-while-locked/.terraform/terraform.tfstate new file mode 100644 index 000000000000..073bd7a82237 --- /dev/null +++ b/internal/command/testdata/init-backend-migrate-while-locked/.terraform/terraform.tfstate @@ -0,0 +1,22 @@ +{ + "version": 3, + "serial": 0, + "lineage": "666f9301-7e65-4b19-ae23-71184bb19b03", + "backend": { + "type": "local", + "config": { + "path": "local-state.tfstate" + }, + "hash": 9073424445967744180 + }, + "modules": [ + { + "path": [ + "root" + ], + "outputs": {}, + "resources": {}, + "depends_on": [] + } + ] +} diff --git a/internal/command/testdata/init-backend-migrate-while-locked/input.config b/internal/command/testdata/init-backend-migrate-while-locked/input.config new file mode 100644 index 000000000000..6cd14f4a3d03 --- /dev/null +++ b/internal/command/testdata/init-backend-migrate-while-locked/input.config @@ -0,0 +1 @@ +path = "hello" diff --git a/internal/command/testdata/init-backend-migrate-while-locked/main.tf b/internal/command/testdata/init-backend-migrate-while-locked/main.tf new file mode 100644 index 000000000000..bea8e789f8bb --- /dev/null +++ b/internal/command/testdata/init-backend-migrate-while-locked/main.tf @@ -0,0 +1,5 @@ +terraform { + backend "local" { + path = "local-state.tfstate" + } +} From 7afaea4cf209e3598f9e17475643c99b961c23c2 Mon Sep 17 00:00:00 2001 From: xiaozhu36 Date: Wed, 20 Oct 2021 16:05:00 +0800 Subject: [PATCH 176/644] backend/oss: Fixes the nil pointer panic error when missing access key or secret key --- 
internal/backend/remote-state/oss/backend.go | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/internal/backend/remote-state/oss/backend.go b/internal/backend/remote-state/oss/backend.go index de08af37d68f..79534ed22f9a 100644 --- a/internal/backend/remote-state/oss/backend.go +++ b/internal/backend/remote-state/oss/backend.go @@ -319,7 +319,10 @@ func (b *Backend) configure(ctx context.Context) error { } if endpoint == "" { - endpointsResponse, _ := b.getOSSEndpointByRegion(accessKey, secretKey, securityToken, region) + endpointsResponse, err := b.getOSSEndpointByRegion(accessKey, secretKey, securityToken, region) + if err != nil { + return err + } for _, endpointItem := range endpointsResponse.Endpoints.Endpoint { if endpointItem.Type == "openAPI" { endpoint = endpointItem.Endpoint From 5b266dd5ca739b1438e2093aa644d3f02fcf1015 Mon Sep 17 00:00:00 2001 From: Martin Atkins Date: Mon, 18 Oct 2021 15:08:35 -0700 Subject: [PATCH 177/644] command: Remove the experimental "terraform add" command We introduced this experiment to gather feedback, and the feedback we saw led to us deciding to do another round of design work before we move forward with something to meet this use-case. In addition to being experimental, this has only been included in alpha releases so far, and so on both counts it is not protected by the Terraform v1.0 Compatibility Promises. --- commands.go | 6 - internal/command/add.go | 369 ------ internal/command/add_test.go | 685 ----------- internal/command/arguments/add.go | 110 -- internal/command/arguments/add_test.go | 146 --- internal/command/command_test.go | 8 - internal/command/testdata/add/basic/main.tf | 14 - internal/command/testdata/add/module/main.tf | 17 - .../testdata/add/module/module/main.tf | 9 - internal/command/views/add.go | 562 --------- internal/command/views/add_test.go | 1018 ----------------- website/docs/cli/commands/add.html.md | 81 -- website/docs/cli/commands/index.html.md | 1 - website/layouts/docs.erb | 8 - 14 files changed, 3034 deletions(-) delete mode 100644 internal/command/add.go delete mode 100644 internal/command/add_test.go delete mode 100644 internal/command/arguments/add.go delete mode 100644 internal/command/arguments/add_test.go delete mode 100644 internal/command/testdata/add/basic/main.tf delete mode 100644 internal/command/testdata/add/module/main.tf delete mode 100644 internal/command/testdata/add/module/module/main.tf delete mode 100644 internal/command/views/add.go delete mode 100644 internal/command/views/add_test.go delete mode 100644 website/docs/cli/commands/add.html.md diff --git a/commands.go b/commands.go index 2c1cb90ee1ae..41c39066dd9f 100644 --- a/commands.go +++ b/commands.go @@ -109,12 +109,6 @@ func initCommands( // that to match. 
Commands = map[string]cli.CommandFactory{ - "add": func() (cli.Command, error) { - return &command.AddCommand{ - Meta: meta, - }, nil - }, - "apply": func() (cli.Command, error) { return &command.ApplyCommand{ Meta: meta, diff --git a/internal/command/add.go b/internal/command/add.go deleted file mode 100644 index 69f91b77304e..000000000000 --- a/internal/command/add.go +++ /dev/null @@ -1,369 +0,0 @@ -package command - -import ( - "fmt" - "os" - "path/filepath" - "strings" - - "github.com/hashicorp/hcl/v2" - "github.com/hashicorp/terraform/internal/addrs" - "github.com/hashicorp/terraform/internal/backend" - "github.com/hashicorp/terraform/internal/command/arguments" - "github.com/hashicorp/terraform/internal/command/views" - "github.com/hashicorp/terraform/internal/configs" - "github.com/hashicorp/terraform/internal/states" - "github.com/hashicorp/terraform/internal/tfdiags" - "github.com/zclconf/go-cty/cty" -) - -// AddCommand is a Command implementation that generates resource configuration templates. -type AddCommand struct { - Meta -} - -func (c *AddCommand) Run(rawArgs []string) int { - // Parse and apply global view arguments - common, rawArgs := arguments.ParseView(rawArgs) - c.View.Configure(common) - - args, diags := arguments.ParseAdd(rawArgs) - view := views.NewAdd(args.ViewType, c.View, args) - if diags.HasErrors() { - view.Diagnostics(diags) - return 1 - } - - // In case the output configuration path is specified, we should ensure the - // target resource address doesn't exist in the module tree indicated by - // the existing configuration files. - if args.OutPath != "" { - // Ensure the directory to the path exists and is accessible. - outDir := filepath.Dir(args.OutPath) - if _, err := os.Stat(outDir); os.IsNotExist(err) { - diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, - "The out path doesn't exist or is not accessible", - err.Error(), - )) - view.Diagnostics(diags) - return 1 - } - - config, loadDiags := c.loadConfig(outDir) - diags = diags.Append(loadDiags) - if diags.HasErrors() { - view.Diagnostics(diags) - return 1 - } - - if config != nil && config.Module != nil { - if rs, ok := config.Module.ManagedResources[args.Addr.ContainingResource().Config().String()]; ok { - diags = diags.Append(&hcl.Diagnostic{ - Severity: hcl.DiagError, - Summary: "Resource already in configuration", - Detail: fmt.Sprintf("The resource %s is already in this configuration at %s. Resource names must be unique per type in each module.", args.Addr, rs.DeclRange), - Subject: &rs.DeclRange, - }) - c.View.Diagnostics(diags) - return 1 - } - } - } - - // Check for user-supplied plugin path - var err error - if c.pluginPath, err = c.loadPluginPath(); err != nil { - diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, - "Error loading plugin path", - err.Error(), - )) - view.Diagnostics(diags) - return 1 - } - - // Apply the state arguments to the meta object here because they are later - // used when initializing the backend. 
- c.Meta.applyStateArguments(args.State) - - // Load the backend - b, backendDiags := c.Backend(nil) - diags = diags.Append(backendDiags) - if backendDiags.HasErrors() { - view.Diagnostics(diags) - return 1 - } - - // We require a local backend - local, ok := b.(backend.Local) - if !ok { - diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, - "Unsupported backend", - ErrUnsupportedLocalOp, - )) - view.Diagnostics(diags) - return 1 - } - - // This is a read-only command (until -import is implemented) - c.ignoreRemoteBackendVersionConflict(b) - - cwd, err := os.Getwd() - if err != nil { - diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, - "Error determining current working directory", - err.Error(), - )) - view.Diagnostics(diags) - return 1 - } - - // Build the operation - opReq := c.Operation(b) - opReq.AllowUnsetVariables = true - opReq.ConfigDir = cwd - opReq.ConfigLoader, err = c.initConfigLoader() - if err != nil { - diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, - "Error initializing config loader", - err.Error(), - )) - view.Diagnostics(diags) - return 1 - } - - // Get the context - lr, _, ctxDiags := local.LocalRun(opReq) - diags = diags.Append(ctxDiags) - if ctxDiags.HasErrors() { - view.Diagnostics(diags) - return 1 - } - - // Successfully creating the context can result in a lock, so ensure we release it - defer func() { - diags := opReq.StateLocker.Unlock() - if diags.HasErrors() { - c.showDiagnostics(diags) - } - }() - - // load the configuration to verify that the resource address doesn't - // already exist in the config. - var module *configs.Module - if args.Addr.Module.IsRoot() { - module = lr.Config.Module - } else { - // This is weird, but users can potentially specify non-existant module names - cfg := lr.Config.Root.Descendent(args.Addr.Module.Module()) - if cfg != nil { - module = cfg.Module - } - } - - // Get the schemas from the context - schemas, moreDiags := lr.Core.Schemas(lr.Config, lr.InputState) - diags = diags.Append(moreDiags) - if moreDiags.HasErrors() { - view.Diagnostics(diags) - return 1 - } - - // Determine the correct provider config address. The provider-related - // variables may get updated below - absProviderConfig := args.Provider - var providerLocalName string - rs := args.Addr.Resource.Resource - - // If we are getting the values from state, get the AbsProviderConfig - // directly from state as well. - var resource *states.Resource - if args.FromState { - resource, moreDiags = c.getResource(b, args.Addr.ContainingResource()) - if moreDiags.HasErrors() { - diags = diags.Append(moreDiags) - c.View.Diagnostics(diags) - return 1 - } - absProviderConfig = &resource.ProviderConfig - } - - if absProviderConfig == nil { - ip := rs.ImpliedProvider() - if module != nil { - provider := module.ImpliedProviderForUnqualifiedType(ip) - providerLocalName = module.LocalNameForProvider(provider) - absProviderConfig = &addrs.AbsProviderConfig{ - Provider: provider, - Module: args.Addr.Module.Module(), - } - } else { - // lacking any configuration to query, we'll go with a default provider. 
- absProviderConfig = &addrs.AbsProviderConfig{ - Provider: addrs.NewDefaultProvider(ip), - } - providerLocalName = ip - } - } else { - if module != nil { - providerLocalName = module.LocalNameForProvider(absProviderConfig.Provider) - } else { - providerLocalName = absProviderConfig.Provider.Type - } - } - - localProviderConfig := addrs.LocalProviderConfig{ - LocalName: providerLocalName, - Alias: absProviderConfig.Alias, - } - - // Get the schemas from the context - if _, exists := schemas.Providers[absProviderConfig.Provider]; !exists { - diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, - "Missing schema for provider", - fmt.Sprintf("No schema found for provider %s. Please verify that this provider exists in the configuration.", absProviderConfig.Provider.String()), - )) - c.View.Diagnostics(diags) - return 1 - } - - // Get the schema for the resource - schema, schemaVersion := schemas.ResourceTypeConfig(absProviderConfig.Provider, rs.Mode, rs.Type) - if schema == nil { - diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, - "Missing resource schema from provider", - fmt.Sprintf("No resource schema found for %s.", rs.Type), - )) - c.View.Diagnostics(diags) - return 1 - } - - stateVal := cty.NilVal - // Now that we have the schema, we can decode the previously-acquired resource state - if args.FromState { - ri := resource.Instance(args.Addr.Resource.Key) - if ri.Current == nil { - diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, - "No state for resource", - fmt.Sprintf("There is no state found for the resource %s, so add cannot populate values.", rs.String()), - )) - c.View.Diagnostics(diags) - return 1 - } - - rio, err := ri.Current.Decode(schema.ImpliedType()) - if err != nil { - diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, - "Error decoding state", - fmt.Sprintf("Error decoding state for resource %s: %s", rs.String(), err.Error()), - )) - c.View.Diagnostics(diags) - return 1 - } - - if ri.Current.SchemaVersion != schemaVersion { - diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, - "Schema version mismatch", - fmt.Sprintf("schema version %d for %s in state does not match version %d from the provider", ri.Current.SchemaVersion, rs.String(), schemaVersion), - )) - c.View.Diagnostics(diags) - return 1 - } - - stateVal = rio.Value - } - - diags = diags.Append(view.Resource(args.Addr, schema, localProviderConfig, stateVal)) - c.View.Diagnostics(diags) - if diags.HasErrors() { - return 1 - } - return 0 -} - -func (c *AddCommand) Help() string { - helpText := ` -Usage: terraform [global options] add [options] ADDRESS - - Generates a blank resource template. With no additional options, Terraform - will write the result to standard output. - -Options: - - -from-state Fill the template with values from an existing resource - instance tracked in the state. By default, Terraform will - emit only placeholder values based on the resource type. - - -out=string Write the template to a file, instead of to standard - output. - - -optional Include optional arguments. By default, the result will - include only required arguments. - - -provider=provider Override the provider configuration for the resource, - using the absolute provider configuration address syntax. - - This is incompatible with -from-state, because in that - case Terraform will use the provider configuration already - selected in the state. 
-` - return strings.TrimSpace(helpText) -} - -func (c *AddCommand) Synopsis() string { - return "Generate a resource configuration template" -} - -func (c *AddCommand) getResource(b backend.Enhanced, addr addrs.AbsResource) (*states.Resource, tfdiags.Diagnostics) { - var diags tfdiags.Diagnostics - // Get the state - env, err := c.Workspace() - if err != nil { - diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, - "Error selecting workspace", - err.Error(), - )) - return nil, diags - } - - stateMgr, err := b.StateMgr(env) - if err != nil { - diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, - "Error loading state", - fmt.Sprintf(errStateLoadingState, err), - )) - return nil, diags - } - - if err := stateMgr.RefreshState(); err != nil { - diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, - "Error refreshing state", - err.Error(), - )) - return nil, diags - } - - state := stateMgr.State() - if state == nil { - diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, - "No state", - "There is no state found for the current workspace, so add cannot populate values.", - )) - return nil, diags - } - - return state.Resource(addr), nil -} diff --git a/internal/command/add_test.go b/internal/command/add_test.go deleted file mode 100644 index 7afe5e754bc6..000000000000 --- a/internal/command/add_test.go +++ /dev/null @@ -1,685 +0,0 @@ -package command - -import ( - "fmt" - "os" - "path/filepath" - "strings" - "testing" - - "github.com/google/go-cmp/cmp" - "github.com/hashicorp/terraform/internal/addrs" - "github.com/hashicorp/terraform/internal/configs/configschema" - "github.com/hashicorp/terraform/internal/providers" - "github.com/hashicorp/terraform/internal/states" - "github.com/mitchellh/cli" - "github.com/zclconf/go-cty/cty" -) - -// simple test cases with a simple resource schema -func TestAdd_basic(t *testing.T) { - td := tempDir(t) - testCopyDir(t, testFixturePath("add/basic"), td) - defer os.RemoveAll(td) - defer testChdir(t, td)() - - p := testProvider() - p.GetProviderSchemaResponse = &providers.GetProviderSchemaResponse{ - ResourceTypes: map[string]providers.Schema{ - "test_instance": { - Block: &configschema.Block{ - Attributes: map[string]*configschema.Attribute{ - "id": {Type: cty.String, Optional: true, Computed: true}, - "ami": {Type: cty.String, Optional: true, Description: "the ami to use"}, - "value": {Type: cty.String, Required: true, Description: "a value of a thing"}, - }, - }, - }, - }, - } - - overrides := &testingOverrides{ - Providers: map[addrs.Provider]providers.Factory{ - addrs.NewDefaultProvider("test"): providers.FactoryFixed(p), - addrs.NewProvider("registry.terraform.io", "happycorp", "test"): providers.FactoryFixed(p), - }, - } - - t.Run("basic", func(t *testing.T) { - view, done := testView(t) - c := &AddCommand{ - Meta: Meta{ - testingOverrides: overrides, - View: view, - }, - } - args := []string{"test_instance.new"} - code := c.Run(args) - output := done(t) - if code != 0 { - fmt.Println(output.Stderr()) - t.Fatalf("wrong exit status. Got %d, want 0", code) - } - expected := `# NOTE: The "terraform add" command is currently experimental and offers only a -# starting point for your resource configuration, with some limitations. -# -# The behavior of this command may change in future based on feedback, possibly -# in incompatible ways. We don't recommend building automation around this -# command at this time. If you have feedback about this command, please open -# a feature request issue in the Terraform GitHub repository. 
-resource "test_instance" "new" { - value = null # REQUIRED string -} -` - - if !cmp.Equal(output.Stdout(), expected) { - t.Fatalf("wrong output:\n%s", cmp.Diff(expected, output.Stdout())) - } - }) - - t.Run("basic to file", func(t *testing.T) { - view, done := testView(t) - c := &AddCommand{ - Meta: Meta{ - testingOverrides: overrides, - View: view, - }, - } - outPath := "add.tf" - args := []string{fmt.Sprintf("-out=%s", outPath), "test_instance.new"} - code := c.Run(args) - output := done(t) - if code != 0 { - fmt.Println(output.Stderr()) - t.Fatalf("wrong exit status. Got %d, want 0", code) - } - expected := `# NOTE: The "terraform add" command is currently experimental and offers only a -# starting point for your resource configuration, with some limitations. -# -# The behavior of this command may change in future based on feedback, possibly -# in incompatible ways. We don't recommend building automation around this -# command at this time. If you have feedback about this command, please open -# a feature request issue in the Terraform GitHub repository. -resource "test_instance" "new" { - value = null # REQUIRED string -} -` - result, err := os.ReadFile(outPath) - if err != nil { - t.Fatalf("error reading result file %s: %s", outPath, err.Error()) - } - // While the entire directory will get removed once the whole test suite - // is done, we remove this lest it gets in the way of another (not yet - // written) test. - os.Remove(outPath) - - if !cmp.Equal(expected, string(result)) { - t.Fatalf("wrong output:\n%s", cmp.Diff(expected, string(result))) - } - }) - - t.Run("basic to existing file", func(t *testing.T) { - view, done := testView(t) - c := &AddCommand{ - Meta: Meta{ - testingOverrides: overrides, - View: view, - }, - } - outPath := "add.tf" - args := []string{fmt.Sprintf("-out=%s", outPath), "test_instance.new"} - c.Run(args) - args = []string{fmt.Sprintf("-out=%s", outPath), "test_instance.new2"} - code := c.Run(args) - output := done(t) - if code != 0 { - fmt.Println(output.Stderr()) - t.Fatalf("wrong exit status. Got %d, want 0", code) - } - expected := `# NOTE: The "terraform add" command is currently experimental and offers only a -# starting point for your resource configuration, with some limitations. -# -# The behavior of this command may change in future based on feedback, possibly -# in incompatible ways. We don't recommend building automation around this -# command at this time. If you have feedback about this command, please open -# a feature request issue in the Terraform GitHub repository. -resource "test_instance" "new" { - value = null # REQUIRED string -} -# NOTE: The "terraform add" command is currently experimental and offers only a -# starting point for your resource configuration, with some limitations. -# -# The behavior of this command may change in future based on feedback, possibly -# in incompatible ways. We don't recommend building automation around this -# command at this time. If you have feedback about this command, please open -# a feature request issue in the Terraform GitHub repository. -resource "test_instance" "new2" { - value = null # REQUIRED string -} -` - result, err := os.ReadFile(outPath) - if err != nil { - t.Fatalf("error reading result file %s: %s", outPath, err.Error()) - } - // While the entire directory will get removed once the whole test suite - // is done, we remove this lest it gets in the way of another (not yet - // written) test. 
- os.Remove(outPath) - - if !cmp.Equal(expected, string(result)) { - t.Fatalf("wrong output:\n%s", cmp.Diff(expected, string(result))) - } - }) - - t.Run("optionals", func(t *testing.T) { - view, done := testView(t) - c := &AddCommand{ - Meta: Meta{ - testingOverrides: overrides, - View: view, - }, - } - args := []string{"-optional", "test_instance.new"} - code := c.Run(args) - if code != 0 { - t.Fatalf("wrong exit status. Got %d, want 0", code) - } - output := done(t) - expected := `# NOTE: The "terraform add" command is currently experimental and offers only a -# starting point for your resource configuration, with some limitations. -# -# The behavior of this command may change in future based on feedback, possibly -# in incompatible ways. We don't recommend building automation around this -# command at this time. If you have feedback about this command, please open -# a feature request issue in the Terraform GitHub repository. -resource "test_instance" "new" { - ami = null # OPTIONAL string - id = null # OPTIONAL string - value = null # REQUIRED string -} -` - - if !cmp.Equal(output.Stdout(), expected) { - t.Fatalf("wrong output:\n%s", cmp.Diff(expected, output.Stdout())) - } - }) - - t.Run("alternate provider for resource", func(t *testing.T) { - view, done := testView(t) - c := &AddCommand{ - Meta: Meta{ - testingOverrides: overrides, - View: view, - }, - } - args := []string{"-provider=provider[\"registry.terraform.io/happycorp/test\"].alias", "test_instance.new"} - code := c.Run(args) - output := done(t) - if code != 0 { - t.Fatalf("wrong exit status. Got %d, want 0", code) - } - - // The provider happycorp/test has a localname "othertest" in the provider configuration. - expected := `# NOTE: The "terraform add" command is currently experimental and offers only a -# starting point for your resource configuration, with some limitations. -# -# The behavior of this command may change in future based on feedback, possibly -# in incompatible ways. We don't recommend building automation around this -# command at this time. If you have feedback about this command, please open -# a feature request issue in the Terraform GitHub repository. -resource "test_instance" "new" { - provider = othertest.alias - - value = null # REQUIRED string -} -` - - if !cmp.Equal(output.Stdout(), expected) { - t.Fatalf("wrong output:\n%s", cmp.Diff(expected, output.Stdout())) - } - }) - - t.Run("resource exists error", func(t *testing.T) { - view, done := testView(t) - c := &AddCommand{ - Meta: Meta{ - testingOverrides: overrides, - View: view, - }, - } - outPath := "add.tf" - args := []string{fmt.Sprintf("-out=%s", outPath), "test_instance.exists"} - code := c.Run(args) - if code != 1 { - t.Fatalf("wrong exit status. Got %d, want 0", code) - } - - output := done(t) - if !strings.Contains(output.Stderr(), "The resource test_instance.exists is already in this configuration") { - t.Fatalf("missing expected error message: %s", output.Stderr()) - } - }) - - t.Run("output existing resource to stdout", func(t *testing.T) { - view, done := testView(t) - c := &AddCommand{ - Meta: Meta{ - testingOverrides: overrides, - View: view, - }, - } - args := []string{"test_instance.exists"} - code := c.Run(args) - output := done(t) - if code != 0 { - fmt.Println(output.Stderr()) - t.Fatalf("wrong exit status. Got %d, want 0", code) - } - expected := `# NOTE: The "terraform add" command is currently experimental and offers only a -# starting point for your resource configuration, with some limitations. 
-# -# The behavior of this command may change in future based on feedback, possibly -# in incompatible ways. We don't recommend building automation around this -# command at this time. If you have feedback about this command, please open -# a feature request issue in the Terraform GitHub repository. -resource "test_instance" "exists" { - value = null # REQUIRED string -} -` - - if !cmp.Equal(output.Stdout(), expected) { - t.Fatalf("wrong output:\n%s", cmp.Diff(expected, output.Stdout())) - } - }) - - t.Run("provider not in configuration", func(t *testing.T) { - view, done := testView(t) - c := &AddCommand{ - Meta: Meta{ - testingOverrides: overrides, - View: view, - }, - } - args := []string{"toast_instance.new"} - code := c.Run(args) - if code != 1 { - t.Fatalf("wrong exit status. Got %d, want 0", code) - } - - output := done(t) - if !strings.Contains(output.Stderr(), "No schema found for provider registry.terraform.io/hashicorp/toast.") { - t.Fatalf("missing expected error message: %s", output.Stderr()) - } - }) - - t.Run("no schema for resource", func(t *testing.T) { - view, done := testView(t) - c := &AddCommand{ - Meta: Meta{ - testingOverrides: overrides, - View: view, - }, - } - args := []string{"test_pet.meow"} - code := c.Run(args) - if code != 1 { - t.Fatalf("wrong exit status. Got %d, want 0", code) - } - - output := done(t) - if !strings.Contains(output.Stderr(), "No resource schema found for test_pet.") { - t.Fatalf("missing expected error message: %s", output.Stderr()) - } - }) -} - -func TestAdd(t *testing.T) { - td := tempDir(t) - testCopyDir(t, testFixturePath("add/module"), td) - defer os.RemoveAll(td) - defer testChdir(t, td)() - - // a simple hashicorp/test provider, and a more complex happycorp/test provider - p := testProvider() - p.GetProviderSchemaResponse = &providers.GetProviderSchemaResponse{ - ResourceTypes: map[string]providers.Schema{ - "test_instance": { - Block: &configschema.Block{ - Attributes: map[string]*configschema.Attribute{ - "id": {Type: cty.String, Required: true}, - }, - }, - }, - }, - } - - happycorp := testProvider() - happycorp.GetProviderSchemaResponse = &providers.GetProviderSchemaResponse{ - ResourceTypes: map[string]providers.Schema{ - "test_instance": { - Block: &configschema.Block{ - Attributes: map[string]*configschema.Attribute{ - "id": {Type: cty.String, Optional: true, Computed: true}, - "ami": {Type: cty.String, Optional: true, Description: "the ami to use"}, - "value": {Type: cty.String, Required: true, Description: "a value of a thing"}, - "disks": { - NestedType: &configschema.Object{ - Nesting: configschema.NestingList, - Attributes: map[string]*configschema.Attribute{ - "size": {Type: cty.String, Optional: true}, - "mount_point": {Type: cty.String, Required: true}, - }, - }, - Optional: true, - }, - }, - BlockTypes: map[string]*configschema.NestedBlock{ - "network_interface": { - Nesting: configschema.NestingList, - MinItems: 1, - Block: configschema.Block{ - Attributes: map[string]*configschema.Attribute{ - "device_index": {Type: cty.String, Optional: true}, - "description": {Type: cty.String, Optional: true}, - }, - }, - }, - }, - }, - }, - }, - } - providerSource, psClose := newMockProviderSource(t, map[string][]string{ - "registry.terraform.io/happycorp/test": {"1.0.0"}, - "registry.terraform.io/hashicorp/test": {"1.0.0"}, - }) - defer psClose() - - overrides := &testingOverrides{ - Providers: map[addrs.Provider]providers.Factory{ - addrs.NewProvider("registry.terraform.io", "happycorp", "test"): 
providers.FactoryFixed(happycorp), - addrs.NewDefaultProvider("test"): providers.FactoryFixed(p), - }, - } - - // the test fixture uses a module, so we need to run init. - m := Meta{ - testingOverrides: overrides, - ProviderSource: providerSource, - Ui: new(cli.MockUi), - } - - init := &InitCommand{ - Meta: m, - } - - code := init.Run([]string{}) - if code != 0 { - t.Fatal("init failed") - } - - t.Run("optional", func(t *testing.T) { - view, done := testView(t) - c := &AddCommand{ - Meta: Meta{ - testingOverrides: overrides, - View: view, - }, - } - args := []string{"-optional", "test_instance.new"} - code := c.Run(args) - output := done(t) - if code != 0 { - t.Fatalf("wrong exit status. Got %d, want 0", code) - } - - expected := `# NOTE: The "terraform add" command is currently experimental and offers only a -# starting point for your resource configuration, with some limitations. -# -# The behavior of this command may change in future based on feedback, possibly -# in incompatible ways. We don't recommend building automation around this -# command at this time. If you have feedback about this command, please open -# a feature request issue in the Terraform GitHub repository. -resource "test_instance" "new" { - ami = null # OPTIONAL string - disks = [{ # OPTIONAL list of object - mount_point = null # REQUIRED string - size = null # OPTIONAL string - }] - id = null # OPTIONAL string - value = null # REQUIRED string - network_interface { # REQUIRED block - description = null # OPTIONAL string - device_index = null # OPTIONAL string - } -} -` - - if !cmp.Equal(output.Stdout(), expected) { - t.Fatalf("wrong output:\n%s", cmp.Diff(expected, output.Stdout())) - } - - }) - - t.Run("chooses correct provider for root module", func(t *testing.T) { - // in the root module of this test fixture, "test" is the local name for "happycorp/test" - view, done := testView(t) - c := &AddCommand{ - Meta: Meta{ - testingOverrides: overrides, - View: view, - }, - } - args := []string{"test_instance.new"} - code := c.Run(args) - output := done(t) - if code != 0 { - t.Fatalf("wrong exit status. Got %d, want 0", code) - } - - expected := `# NOTE: The "terraform add" command is currently experimental and offers only a -# starting point for your resource configuration, with some limitations. -# -# The behavior of this command may change in future based on feedback, possibly -# in incompatible ways. We don't recommend building automation around this -# command at this time. If you have feedback about this command, please open -# a feature request issue in the Terraform GitHub repository. -resource "test_instance" "new" { - value = null # REQUIRED string - network_interface { # REQUIRED block - } -} -` - - if !cmp.Equal(output.Stdout(), expected) { - t.Fatalf("wrong output:\n%s", cmp.Diff(expected, output.Stdout())) - } - }) - - t.Run("chooses correct provider for child module", func(t *testing.T) { - // in the child module of this test fixture, "test" is a default "hashicorp/test" provider - view, done := testView(t) - c := &AddCommand{ - Meta: Meta{ - testingOverrides: overrides, - View: view, - }, - } - args := []string{"module.child.test_instance.new"} - code := c.Run(args) - output := done(t) - if code != 0 { - t.Fatalf("wrong exit status. Got %d, want 0", code) - } - - expected := `# NOTE: The "terraform add" command is currently experimental and offers only a -# starting point for your resource configuration, with some limitations. 
-# -# The behavior of this command may change in future based on feedback, possibly -# in incompatible ways. We don't recommend building automation around this -# command at this time. If you have feedback about this command, please open -# a feature request issue in the Terraform GitHub repository. -resource "test_instance" "new" { - id = null # REQUIRED string -} -` - - if !cmp.Equal(output.Stdout(), expected) { - t.Fatalf("wrong output:\n%s", cmp.Diff(expected, output.Stdout())) - } - }) - - t.Run("chooses correct provider for an unknown module", func(t *testing.T) { - // it's weird but ok to use a new/unknown module name; terraform will - // fall back on default providers (unless a -provider argument is - // supplied) - view, done := testView(t) - c := &AddCommand{ - Meta: Meta{ - testingOverrides: overrides, - View: view, - }, - } - args := []string{"module.madeup.test_instance.new"} - code := c.Run(args) - output := done(t) - if code != 0 { - t.Fatalf("wrong exit status. Got %d, want 0", code) - } - - expected := `# NOTE: The "terraform add" command is currently experimental and offers only a -# starting point for your resource configuration, with some limitations. -# -# The behavior of this command may change in future based on feedback, possibly -# in incompatible ways. We don't recommend building automation around this -# command at this time. If you have feedback about this command, please open -# a feature request issue in the Terraform GitHub repository. -resource "test_instance" "new" { - id = null # REQUIRED string -} -` - - if !cmp.Equal(output.Stdout(), expected) { - t.Fatalf("wrong output:\n%s", cmp.Diff(expected, output.Stdout())) - } - }) -} - -func TestAdd_from_state(t *testing.T) { - td := tempDir(t) - testCopyDir(t, testFixturePath("add/basic"), td) - defer os.RemoveAll(td) - defer testChdir(t, td)() - - // write some state - testState := states.BuildState(func(s *states.SyncState) { - s.SetResourceInstanceCurrent( - addrs.Resource{ - Mode: addrs.ManagedResourceMode, - Type: "test_instance", - Name: "new", - }.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), - &states.ResourceInstanceObjectSrc{ - AttrsJSON: []byte("{\"id\":\"bar\",\"ami\":\"ami-123456\",\"disks\":[{\"mount_point\":\"diska\",\"size\":null}],\"value\":\"bloop\"}"), - Status: states.ObjectReady, - Dependencies: []addrs.ConfigResource{}, - }, - mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`), - ) - }) - f, err := os.Create("terraform.tfstate") - if err != nil { - t.Fatalf("failed to create temporary state file: %s", err) - } - defer f.Close() - err = writeStateForTesting(testState, f) - if err != nil { - t.Fatalf("failed to write state file: %s", err) - } - - p := testProvider() - p.GetProviderSchemaResponse = &providers.GetProviderSchemaResponse{ - ResourceTypes: map[string]providers.Schema{ - "test_instance": { - Block: &configschema.Block{ - Attributes: map[string]*configschema.Attribute{ - "id": {Type: cty.String, Optional: true, Computed: true}, - "ami": {Type: cty.String, Optional: true, Description: "the ami to use"}, - "value": {Type: cty.String, Required: true, Description: "a value of a thing"}, - "disks": { - NestedType: &configschema.Object{ - Nesting: configschema.NestingList, - Attributes: map[string]*configschema.Attribute{ - "size": {Type: cty.String, Optional: true}, - "mount_point": {Type: cty.String, Required: true}, - }, - }, - Optional: true, - }, - }, - BlockTypes: map[string]*configschema.NestedBlock{ - "network_interface": { - Nesting: 
configschema.NestingList, - MinItems: 1, - Block: configschema.Block{ - Attributes: map[string]*configschema.Attribute{ - "device_index": {Type: cty.String, Optional: true}, - "description": {Type: cty.String, Optional: true}, - }, - }, - }, - }, - }, - }, - }, - } - overrides := &testingOverrides{ - Providers: map[addrs.Provider]providers.Factory{ - addrs.NewDefaultProvider("test"): providers.FactoryFixed(p), - addrs.NewProvider("registry.terraform.io", "happycorp", "test"): providers.FactoryFixed(p), - }, - } - view, done := testView(t) - c := &AddCommand{ - Meta: Meta{ - testingOverrides: overrides, - View: view, - }, - } - - args := []string{"-from-state", "test_instance.new"} - code := c.Run(args) - output := done(t) - if code != 0 { - fmt.Println(output.Stderr()) - t.Fatalf("wrong exit status. Got %d, want 0", code) - } - - expected := `# NOTE: The "terraform add" command is currently experimental and offers only a -# starting point for your resource configuration, with some limitations. -# -# The behavior of this command may change in future based on feedback, possibly -# in incompatible ways. We don't recommend building automation around this -# command at this time. If you have feedback about this command, please open -# a feature request issue in the Terraform GitHub repository. -resource "test_instance" "new" { - ami = "ami-123456" - disks = [ - { - mount_point = "diska" - size = null - }, - ] - id = "bar" - value = "bloop" -} -` - - if !cmp.Equal(output.Stdout(), expected) { - t.Fatalf("wrong output:\n%s", cmp.Diff(expected, output.Stdout())) - } - - if _, err := os.Stat(filepath.Join(td, ".terraform.tfstate.lock.info")); !os.IsNotExist(err) { - t.Fatal("state left locked after add") - } -} diff --git a/internal/command/arguments/add.go b/internal/command/arguments/add.go deleted file mode 100644 index 18fa59f1ba41..000000000000 --- a/internal/command/arguments/add.go +++ /dev/null @@ -1,110 +0,0 @@ -package arguments - -import ( - "fmt" - - "github.com/hashicorp/terraform/internal/addrs" - "github.com/hashicorp/terraform/internal/tfdiags" -) - -// Add represents the command-line arguments for the Add command. -type Add struct { - // Addr specifies which resource to generate configuration for. - Addr addrs.AbsResourceInstance - - // FromState specifies that the configuration should be populated with - // values from state. - FromState bool - - // OutPath contains an optional path to store the generated configuration. - OutPath string - - // Optional specifies whether or not to include optional attributes in the - // generated configuration. Defaults to false. - Optional bool - - // Provider specifies the provider for the target. - Provider *addrs.AbsProviderConfig - - // State from the common extended flags. - State *State - - // ViewType specifies which output format to use. ViewHuman is currently the - // only supported view type. 
- ViewType ViewType -} - -func ParseAdd(args []string) (*Add, tfdiags.Diagnostics) { - add := &Add{State: &State{}, ViewType: ViewHuman} - - var diags tfdiags.Diagnostics - var provider string - - cmdFlags := extendedFlagSet("add", add.State, nil, nil) - cmdFlags.BoolVar(&add.FromState, "from-state", false, "fill attribute values from a resource already managed by terraform") - cmdFlags.BoolVar(&add.Optional, "optional", false, "include optional attributes") - cmdFlags.StringVar(&add.OutPath, "out", "", "out") - cmdFlags.StringVar(&provider, "provider", "", "provider") - - if err := cmdFlags.Parse(args); err != nil { - diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, - "Failed to parse command-line flags", - err.Error(), - )) - return add, diags - } - - args = cmdFlags.Args() - if len(args) != 1 { - //var adj string - adj := "few" - if len(args) > 1 { - adj = "many" - } - diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, - fmt.Sprintf("Too %s command line arguments", adj), - "Expected exactly one positional argument, giving the address of the resource to generate configuration for.", - )) - return add, diags - } - - // parse address from the argument - addr, addrDiags := addrs.ParseAbsResourceInstanceStr(args[0]) - if addrDiags.HasErrors() { - diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, - fmt.Sprintf("Error parsing resource address: %s", args[0]), - "This command requires that the address argument specifies one resource instance.", - )) - return add, diags - } - add.Addr = addr - - if provider != "" { - if add.FromState { - diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, - "Incompatible command-line options", - "Cannot use both -from-state and -provider. The provider will be determined from the resource's state.", - )) - return add, diags - } - - absProvider, providerDiags := addrs.ParseAbsProviderConfigStr(provider) - if providerDiags.HasErrors() { - // The diagnostics returned from ParseAbsProviderConfigStr are - // not always clear, so we wrap them in a single customized diagnostic. 
- diags = diags.Append(tfdiags.Sourceless( - tfdiags.Error, - fmt.Sprintf("Invalid provider string: %s", provider), - providerDiags.Err().Error(), - )) - return add, diags - } - add.Provider = &absProvider - } - - return add, diags -} diff --git a/internal/command/arguments/add_test.go b/internal/command/arguments/add_test.go deleted file mode 100644 index bc63255cd2a3..000000000000 --- a/internal/command/arguments/add_test.go +++ /dev/null @@ -1,146 +0,0 @@ -package arguments - -import ( - "testing" - - "github.com/google/go-cmp/cmp" - "github.com/hashicorp/terraform/internal/addrs" - "github.com/hashicorp/terraform/internal/tfdiags" -) - -func TestParseAdd(t *testing.T) { - tests := map[string]struct { - args []string - want *Add - wantError string - }{ - "defaults": { - []string{"test_foo.bar"}, - &Add{ - Addr: mustResourceInstanceAddr("test_foo.bar"), - State: &State{Lock: true}, - ViewType: ViewHuman, - }, - ``, - }, - "some flags": { - []string{"-optional=true", "test_foo.bar"}, - &Add{ - Addr: mustResourceInstanceAddr("test_foo.bar"), - State: &State{Lock: true}, - Optional: true, - ViewType: ViewHuman, - }, - ``, - }, - "-from-state": { - []string{"-from-state", "module.foo.test_foo.baz"}, - &Add{ - Addr: mustResourceInstanceAddr("module.foo.test_foo.baz"), - State: &State{Lock: true}, - ViewType: ViewHuman, - FromState: true, - }, - ``, - }, - "-provider": { - []string{"-provider=provider[\"example.com/happycorp/test\"]", "test_foo.bar"}, - &Add{ - Addr: mustResourceInstanceAddr("test_foo.bar"), - State: &State{Lock: true}, - ViewType: ViewHuman, - Provider: &addrs.AbsProviderConfig{ - Provider: addrs.NewProvider("example.com", "happycorp", "test"), - }, - }, - ``, - }, - "state options from extended flag set": { - []string{"-state=local.tfstate", "test_foo.bar"}, - &Add{ - Addr: mustResourceInstanceAddr("test_foo.bar"), - State: &State{Lock: true, StatePath: "local.tfstate"}, - ViewType: ViewHuman, - }, - ``, - }, - - // Error cases - "missing required argument": { - nil, - &Add{ - ViewType: ViewHuman, - State: &State{Lock: true}, - }, - `Too few command line arguments`, - }, - "too many arguments": { - []string{"-from-state", "resource_foo.bar", "module.foo.resource_foo.baz"}, - &Add{ - ViewType: ViewHuman, - State: &State{Lock: true}, - FromState: true, - }, - `Too many command line arguments`, - }, - "invalid target address": { - []string{"definitely-not_a-VALID-resource"}, - &Add{ - ViewType: ViewHuman, - State: &State{Lock: true}, - }, - `Error parsing resource address: definitely-not_a-VALID-resource`, - }, - "invalid provider flag": { - []string{"-provider=/this/isn't/quite/correct", "resource_foo.bar"}, - &Add{ - Addr: mustResourceInstanceAddr("resource_foo.bar"), - ViewType: ViewHuman, - State: &State{Lock: true}, - }, - `Invalid provider string: /this/isn't/quite/correct`, - }, - "incompatible options": { - []string{"-from-state", "-provider=provider[\"example.com/happycorp/test\"]", "test_compute.bar"}, - &Add{ViewType: ViewHuman, - Addr: mustResourceInstanceAddr("test_compute.bar"), - State: &State{Lock: true}, - FromState: true, - }, - `Incompatible command-line options`, - }, - } - - for name, test := range tests { - t.Run(name, func(t *testing.T) { - got, diags := ParseAdd(test.args) - if test.wantError != "" { - if len(diags) != 1 { - t.Fatalf("got %d diagnostics; want exactly 1\n", len(diags)) - } - if diags[0].Severity() != tfdiags.Error { - t.Fatalf("got a warning; want an error\n%s", diags.ErrWithWarnings()) - } - if desc := diags[0].Description(); 
desc.Summary != test.wantError { - t.Fatalf("wrong error\ngot: %s\nwant: %s", desc.Summary, test.wantError) - } - } else { - if len(diags) != 0 { - t.Fatalf("got %d diagnostics; want none\n%s", len(diags), diags.Err().Error()) - } - } - - if diff := cmp.Diff(test.want, got); diff != "" { - t.Errorf("unexpected result\n%s", diff) - } - }) - } -} - -func mustResourceInstanceAddr(s string) addrs.AbsResourceInstance { - addr, diags := addrs.ParseAbsResourceInstanceStr(s) - if diags.HasErrors() { - panic(diags.Err()) - } - return addr -} diff --git a/internal/command/command_test.go b/internal/command/command_test.go index 4b6fcf311c77..5fab71f1fc64 100644 --- a/internal/command/command_test.go +++ b/internal/command/command_test.go @@ -1003,14 +1003,6 @@ func mustResourceAddr(s string) addrs.ConfigResource { return addr.Config() } -func mustProviderConfig(s string) addrs.AbsProviderConfig { - p, diags := addrs.ParseAbsProviderConfigStr(s) - if diags.HasErrors() { - panic(diags.Err()) - } - return p -} - // This map from provider type name to namespace is used by the fake registry // when called via LookupLegacyProvider. Providers not in this map will return // a 404 Not Found error. diff --git a/internal/command/testdata/add/basic/main.tf b/internal/command/testdata/add/basic/main.tf deleted file mode 100644 index ec661dbd9025..000000000000 --- a/internal/command/testdata/add/basic/main.tf +++ /dev/null @@ -1,14 +0,0 @@ -terraform { - required_providers { - test = { - source = "hashicorp/test" - } - othertest = { - source = "happycorp/test" - } - } -} - -resource "test_instance" "exists" { - // I exist! -} \ No newline at end of file diff --git a/internal/command/testdata/add/module/main.tf b/internal/command/testdata/add/module/main.tf deleted file mode 100644 index 11fca99237f5..000000000000 --- a/internal/command/testdata/add/module/main.tf +++ /dev/null @@ -1,17 +0,0 @@ -terraform { - required_providers { - // This is deliberately odd, so we can test that the correct happycorp - // provider is selected for any test_ resource added for this module - test = { - source = "happycorp/test" - } - } -} - -resource "test_instance" "exists" { - // I exist! -} - -module "child" { - source = "./module" -} \ No newline at end of file diff --git a/internal/command/testdata/add/module/module/main.tf b/internal/command/testdata/add/module/module/main.tf deleted file mode 100644 index 55210f20f31f..000000000000 --- a/internal/command/testdata/add/module/module/main.tf +++ /dev/null @@ -1,9 +0,0 @@ -terraform { - required_providers { - test = { - source = "hashicorp/test" - } - } -} - -resource "test_instance" "exists" {} \ No newline at end of file diff --git a/internal/command/views/add.go b/internal/command/views/add.go deleted file mode 100644 index c009fb463e7c..000000000000 --- a/internal/command/views/add.go +++ /dev/null @@ -1,562 +0,0 @@ -package views - -import ( - "fmt" - "os" - "sort" - "strings" - - "github.com/hashicorp/hcl/v2/hclwrite" - "github.com/hashicorp/terraform/internal/addrs" - "github.com/hashicorp/terraform/internal/command/arguments" - "github.com/hashicorp/terraform/internal/configs/configschema" - "github.com/hashicorp/terraform/internal/lang/marks" - "github.com/hashicorp/terraform/internal/tfdiags" - "github.com/zclconf/go-cty/cty" -) - -// Add is the view interface for the "terraform add" command. 
-type Add interface { - Resource(addrs.AbsResourceInstance, *configschema.Block, addrs.LocalProviderConfig, cty.Value) error - Diagnostics(tfdiags.Diagnostics) -} - -// NewAdd returns an initialized Validate implementation. At this time, -// ViewHuman is the only implemented view type. -func NewAdd(vt arguments.ViewType, view *View, args *arguments.Add) Add { - return &addHuman{ - view: view, - optional: args.Optional, - outPath: args.OutPath, - } -} - -type addHuman struct { - view *View - optional bool - outPath string -} - -func (v *addHuman) Resource(addr addrs.AbsResourceInstance, schema *configschema.Block, pc addrs.LocalProviderConfig, stateVal cty.Value) error { - var buf strings.Builder - - buf.WriteString(`# NOTE: The "terraform add" command is currently experimental and offers only a -# starting point for your resource configuration, with some limitations. -# -# The behavior of this command may change in future based on feedback, possibly -# in incompatible ways. We don't recommend building automation around this -# command at this time. If you have feedback about this command, please open -# a feature request issue in the Terraform GitHub repository. -`) - - buf.WriteString(fmt.Sprintf("resource %q %q {\n", addr.Resource.Resource.Type, addr.Resource.Resource.Name)) - - if pc.LocalName != addr.Resource.Resource.ImpliedProvider() || pc.Alias != "" { - buf.WriteString(strings.Repeat(" ", 2)) - buf.WriteString(fmt.Sprintf("provider = %s\n\n", pc.StringCompact())) - } - - if stateVal.RawEquals(cty.NilVal) { - if err := v.writeConfigAttributes(&buf, schema.Attributes, 2); err != nil { - return err - } - if err := v.writeConfigBlocks(&buf, schema.BlockTypes, 2); err != nil { - return err - } - } else { - if err := v.writeConfigAttributesFromExisting(&buf, stateVal, schema.Attributes, 2); err != nil { - return err - } - if err := v.writeConfigBlocksFromExisting(&buf, stateVal, schema.BlockTypes, 2); err != nil { - return err - } - } - - buf.WriteString("}") - - // The output better be valid HCL which can be parsed and formatted. - formatted := hclwrite.Format([]byte(buf.String())) - - var err error - if v.outPath == "" { - _, err = v.view.streams.Println(string(formatted)) - return err - } else { - // The Println call above adds this final newline automatically; we add it manually here. - formatted = append(formatted, '\n') - - f, err := os.OpenFile(v.outPath, os.O_APPEND|os.O_WRONLY|os.O_CREATE, 0600) - if err != nil { - return err - } - defer f.Close() - _, err = f.Write(formatted) - return err - } -} - -func (v *addHuman) Diagnostics(diags tfdiags.Diagnostics) { - v.view.Diagnostics(diags) -} - -func (v *addHuman) writeConfigAttributes(buf *strings.Builder, attrs map[string]*configschema.Attribute, indent int) error { - if len(attrs) == 0 { - return nil - } - - // Get a list of sorted attribute names so the output will be consistent between runs. 
- keys := make([]string, 0, len(attrs)) - for k := range attrs { - keys = append(keys, k) - } - sort.Strings(keys) - - for i := range keys { - name := keys[i] - attrS := attrs[name] - if attrS.NestedType != nil { - if err := v.writeConfigNestedTypeAttribute(buf, name, attrS, indent); err != nil { - return err - } - continue - } - if attrS.Required { - buf.WriteString(strings.Repeat(" ", indent)) - buf.WriteString(fmt.Sprintf("%s = ", name)) - tok := hclwrite.TokensForValue(attrS.EmptyValue()) - if _, err := tok.WriteTo(buf); err != nil { - return err - } - writeAttrTypeConstraint(buf, attrS) - } else if attrS.Optional && v.optional { - buf.WriteString(strings.Repeat(" ", indent)) - buf.WriteString(fmt.Sprintf("%s = ", name)) - tok := hclwrite.TokensForValue(attrS.EmptyValue()) - if _, err := tok.WriteTo(buf); err != nil { - return err - } - writeAttrTypeConstraint(buf, attrS) - } - } - return nil -} - -func (v *addHuman) writeConfigAttributesFromExisting(buf *strings.Builder, stateVal cty.Value, attrs map[string]*configschema.Attribute, indent int) error { - if len(attrs) == 0 { - return nil - } - - // Get a list of sorted attribute names so the output will be consistent between runs. - keys := make([]string, 0, len(attrs)) - for k := range attrs { - keys = append(keys, k) - } - sort.Strings(keys) - - for i := range keys { - name := keys[i] - attrS := attrs[name] - if attrS.NestedType != nil { - if err := v.writeConfigNestedTypeAttributeFromExisting(buf, name, attrS, stateVal, indent); err != nil { - return err - } - continue - } - - // Exclude computed-only attributes - if attrS.Required || attrS.Optional { - buf.WriteString(strings.Repeat(" ", indent)) - buf.WriteString(fmt.Sprintf("%s = ", name)) - - var val cty.Value - if stateVal.Type().HasAttribute(name) { - val = stateVal.GetAttr(name) - } else { - val = attrS.EmptyValue() - } - if attrS.Sensitive || val.HasMark(marks.Sensitive) { - buf.WriteString("null # sensitive") - } else { - val, _ = val.Unmark() - tok := hclwrite.TokensForValue(val) - if _, err := tok.WriteTo(buf); err != nil { - return err - } - } - - buf.WriteString("\n") - } - } - return nil -} - -func (v *addHuman) writeConfigBlocks(buf *strings.Builder, blocks map[string]*configschema.NestedBlock, indent int) error { - if len(blocks) == 0 { - return nil - } - - // Get a list of sorted block names so the output will be consistent between runs. 
- names := make([]string, 0, len(blocks)) - for k := range blocks { - names = append(names, k) - } - sort.Strings(names) - - for i := range names { - name := names[i] - blockS := blocks[name] - if err := v.writeConfigNestedBlock(buf, name, blockS, indent); err != nil { - return err - } - } - return nil -} - -func (v *addHuman) writeConfigNestedBlock(buf *strings.Builder, name string, schema *configschema.NestedBlock, indent int) error { - if !v.optional && schema.MinItems == 0 { - return nil - } - - switch schema.Nesting { - case configschema.NestingSingle, configschema.NestingGroup: - buf.WriteString(strings.Repeat(" ", indent)) - buf.WriteString(fmt.Sprintf("%s {", name)) - writeBlockTypeConstraint(buf, schema) - if err := v.writeConfigAttributes(buf, schema.Attributes, indent+2); err != nil { - return err - } - if err := v.writeConfigBlocks(buf, schema.BlockTypes, indent+2); err != nil { - return err - } - buf.WriteString("}\n") - return nil - case configschema.NestingList, configschema.NestingSet: - buf.WriteString(strings.Repeat(" ", indent)) - buf.WriteString(fmt.Sprintf("%s {", name)) - writeBlockTypeConstraint(buf, schema) - if err := v.writeConfigAttributes(buf, schema.Attributes, indent+2); err != nil { - return err - } - if err := v.writeConfigBlocks(buf, schema.BlockTypes, indent+2); err != nil { - return err - } - buf.WriteString("}\n") - return nil - case configschema.NestingMap: - buf.WriteString(strings.Repeat(" ", indent)) - // we use an arbitrary placeholder key (block label) "key" - buf.WriteString(fmt.Sprintf("%s \"key\" {", name)) - writeBlockTypeConstraint(buf, schema) - if err := v.writeConfigAttributes(buf, schema.Attributes, indent+2); err != nil { - return err - } - if err := v.writeConfigBlocks(buf, schema.BlockTypes, indent+2); err != nil { - return err - } - buf.WriteString(strings.Repeat(" ", indent)) - buf.WriteString("}\n") - return nil - default: - // This should not happen, the above should be exhaustive. 
- return fmt.Errorf("unsupported NestingMode %s", schema.Nesting.String()) - } -} - -func (v *addHuman) writeConfigNestedTypeAttribute(buf *strings.Builder, name string, schema *configschema.Attribute, indent int) error { - if !schema.Required && !v.optional { - return nil - } - - buf.WriteString(strings.Repeat(" ", indent)) - buf.WriteString(fmt.Sprintf("%s = ", name)) - - switch schema.NestedType.Nesting { - case configschema.NestingSingle: - buf.WriteString("{") - writeAttrTypeConstraint(buf, schema) - if err := v.writeConfigAttributes(buf, schema.NestedType.Attributes, indent+2); err != nil { - return err - } - buf.WriteString(strings.Repeat(" ", indent)) - buf.WriteString("}\n") - return nil - case configschema.NestingList, configschema.NestingSet: - buf.WriteString("[{") - writeAttrTypeConstraint(buf, schema) - if err := v.writeConfigAttributes(buf, schema.NestedType.Attributes, indent+2); err != nil { - return err - } - buf.WriteString(strings.Repeat(" ", indent)) - buf.WriteString("}]\n") - return nil - case configschema.NestingMap: - buf.WriteString("{") - writeAttrTypeConstraint(buf, schema) - buf.WriteString(strings.Repeat(" ", indent+2)) - // we use an arbitrary placeholder key "key" - buf.WriteString("key = {\n") - if err := v.writeConfigAttributes(buf, schema.NestedType.Attributes, indent+4); err != nil { - return err - } - buf.WriteString(strings.Repeat(" ", indent+2)) - buf.WriteString("}\n") - buf.WriteString(strings.Repeat(" ", indent)) - buf.WriteString("}\n") - return nil - default: - // This should not happen, the above should be exhaustive. - return fmt.Errorf("unsupported NestingMode %s", schema.NestedType.Nesting.String()) - } -} - -func (v *addHuman) writeConfigBlocksFromExisting(buf *strings.Builder, stateVal cty.Value, blocks map[string]*configschema.NestedBlock, indent int) error { - if len(blocks) == 0 { - return nil - } - - // Get a list of sorted block names so the output will be consistent between runs. - names := make([]string, 0, len(blocks)) - for k := range blocks { - names = append(names, k) - } - sort.Strings(names) - - for _, name := range names { - blockS := blocks[name] - // This shouldn't happen in real usage; state always has all values (set - // to null as needed), but it protects against panics in tests (and any - // really weird and unlikely cases). - if !stateVal.Type().HasAttribute(name) { - continue - } - blockVal := stateVal.GetAttr(name) - if err := v.writeConfigNestedBlockFromExisting(buf, name, blockS, blockVal, indent); err != nil { - return err - } - } - - return nil -} - -func (v *addHuman) writeConfigNestedTypeAttributeFromExisting(buf *strings.Builder, name string, schema *configschema.Attribute, stateVal cty.Value, indent int) error { - switch schema.NestedType.Nesting { - case configschema.NestingSingle: - if schema.Sensitive || stateVal.HasMark(marks.Sensitive) { - buf.WriteString(strings.Repeat(" ", indent)) - buf.WriteString(fmt.Sprintf("%s = {} # sensitive\n", name)) - return nil - } - buf.WriteString(strings.Repeat(" ", indent)) - buf.WriteString(fmt.Sprintf("%s = {\n", name)) - - // This shouldn't happen in real usage; state always has all values (set - // to null as needed), but it protects against panics in tests (and any - // really weird and unlikely cases). 
- if !stateVal.Type().HasAttribute(name) { - return nil - } - nestedVal := stateVal.GetAttr(name) - if err := v.writeConfigAttributesFromExisting(buf, nestedVal, schema.NestedType.Attributes, indent+2); err != nil { - return err - } - buf.WriteString("}\n") - return nil - - case configschema.NestingList, configschema.NestingSet: - buf.WriteString(strings.Repeat(" ", indent)) - buf.WriteString(fmt.Sprintf("%s = [", name)) - - if schema.Sensitive || stateVal.HasMark(marks.Sensitive) { - buf.WriteString("] # sensitive\n") - return nil - } - - buf.WriteString("\n") - - listVals := ctyCollectionValues(stateVal.GetAttr(name)) - for i := range listVals { - buf.WriteString(strings.Repeat(" ", indent+2)) - - // The entire element is marked. - if listVals[i].HasMark(marks.Sensitive) { - buf.WriteString("{}, # sensitive\n") - continue - } - - buf.WriteString("{\n") - if err := v.writeConfigAttributesFromExisting(buf, listVals[i], schema.NestedType.Attributes, indent+4); err != nil { - return err - } - buf.WriteString(strings.Repeat(" ", indent+2)) - buf.WriteString("},\n") - } - buf.WriteString(strings.Repeat(" ", indent)) - buf.WriteString("]\n") - return nil - - case configschema.NestingMap: - buf.WriteString(strings.Repeat(" ", indent)) - buf.WriteString(fmt.Sprintf("%s = {", name)) - - if schema.Sensitive || stateVal.HasMark(marks.Sensitive) { - buf.WriteString(" } # sensitive\n") - return nil - } - - buf.WriteString("\n") - - vals := stateVal.GetAttr(name).AsValueMap() - keys := make([]string, 0, len(vals)) - for key := range vals { - keys = append(keys, key) - } - sort.Strings(keys) - for _, key := range keys { - buf.WriteString(strings.Repeat(" ", indent+2)) - buf.WriteString(fmt.Sprintf("%s = {", key)) - - // This entire value is marked - if vals[key].HasMark(marks.Sensitive) { - buf.WriteString("} # sensitive\n") - continue - } - - buf.WriteString("\n") - if err := v.writeConfigAttributesFromExisting(buf, vals[key], schema.NestedType.Attributes, indent+4); err != nil { - return err - } - buf.WriteString(strings.Repeat(" ", indent+2)) - buf.WriteString("}\n") - } - buf.WriteString(strings.Repeat(" ", indent)) - buf.WriteString("}\n") - return nil - - default: - // This should not happen, the above should be exhaustive. 
- return fmt.Errorf("unsupported NestingMode %s", schema.NestedType.Nesting.String()) - } -} - -func (v *addHuman) writeConfigNestedBlockFromExisting(buf *strings.Builder, name string, schema *configschema.NestedBlock, stateVal cty.Value, indent int) error { - switch schema.Nesting { - case configschema.NestingSingle, configschema.NestingGroup: - buf.WriteString(strings.Repeat(" ", indent)) - buf.WriteString(fmt.Sprintf("%s {", name)) - - // If the entire value is marked, don't print any nested attributes - if stateVal.HasMark(marks.Sensitive) { - buf.WriteString("} # sensitive\n") - return nil - } - buf.WriteString("\n") - if err := v.writeConfigAttributesFromExisting(buf, stateVal, schema.Attributes, indent+2); err != nil { - return err - } - if err := v.writeConfigBlocksFromExisting(buf, stateVal, schema.BlockTypes, indent+2); err != nil { - return err - } - buf.WriteString("}\n") - return nil - case configschema.NestingList, configschema.NestingSet: - if stateVal.HasMark(marks.Sensitive) { - buf.WriteString(strings.Repeat(" ", indent)) - buf.WriteString(fmt.Sprintf("%s {} # sensitive\n", name)) - return nil - } - listVals := ctyCollectionValues(stateVal) - for i := range listVals { - buf.WriteString(strings.Repeat(" ", indent)) - buf.WriteString(fmt.Sprintf("%s {\n", name)) - if err := v.writeConfigAttributesFromExisting(buf, listVals[i], schema.Attributes, indent+2); err != nil { - return err - } - if err := v.writeConfigBlocksFromExisting(buf, listVals[i], schema.BlockTypes, indent+2); err != nil { - return err - } - buf.WriteString("}\n") - } - return nil - case configschema.NestingMap: - // If the entire value is marked, don't print any nested attributes - if stateVal.HasMark(marks.Sensitive) { - buf.WriteString(fmt.Sprintf("%s {} # sensitive\n", name)) - return nil - } - - vals := stateVal.AsValueMap() - keys := make([]string, 0, len(vals)) - for key := range vals { - keys = append(keys, key) - } - sort.Strings(keys) - for _, key := range keys { - buf.WriteString(strings.Repeat(" ", indent)) - buf.WriteString(fmt.Sprintf("%s %q {", name, key)) - // This entire map element is marked - if vals[key].HasMark(marks.Sensitive) { - buf.WriteString("} # sensitive\n") - return nil - } - buf.WriteString("\n") - - if err := v.writeConfigAttributesFromExisting(buf, vals[key], schema.Attributes, indent+2); err != nil { - return err - } - if err := v.writeConfigBlocksFromExisting(buf, vals[key], schema.BlockTypes, indent+2); err != nil { - return err - } - buf.WriteString(strings.Repeat(" ", indent)) - buf.WriteString("}\n") - } - return nil - default: - // This should not happen, the above should be exhaustive. 
- return fmt.Errorf("unsupported NestingMode %s", schema.Nesting.String()) - } -} - -func writeAttrTypeConstraint(buf *strings.Builder, schema *configschema.Attribute) { - if schema.Required { - buf.WriteString(" # REQUIRED ") - } else { - buf.WriteString(" # OPTIONAL ") - } - - if schema.NestedType != nil { - buf.WriteString(fmt.Sprintf("%s\n", schema.NestedType.ImpliedType().FriendlyName())) - } else { - buf.WriteString(fmt.Sprintf("%s\n", schema.Type.FriendlyName())) - } -} - -func writeBlockTypeConstraint(buf *strings.Builder, schema *configschema.NestedBlock) { - if schema.MinItems > 0 { - buf.WriteString(" # REQUIRED block\n") - } else { - buf.WriteString(" # OPTIONAL block\n") - } -} - -// copied from command/format/diff -func ctyCollectionValues(val cty.Value) []cty.Value { - if !val.IsKnown() || val.IsNull() { - return nil - } - - var len int - if val.IsMarked() { - val, _ = val.Unmark() - len = val.LengthInt() - } else { - len = val.LengthInt() - } - - ret := make([]cty.Value, 0, len) - for it := val.ElementIterator(); it.Next(); { - _, value := it.Element() - ret = append(ret, value) - } - - return ret -} diff --git a/internal/command/views/add_test.go b/internal/command/views/add_test.go deleted file mode 100644 index c0986e4daa41..000000000000 --- a/internal/command/views/add_test.go +++ /dev/null @@ -1,1018 +0,0 @@ -package views - -import ( - "strings" - "testing" - - "github.com/google/go-cmp/cmp" - "github.com/hashicorp/terraform/internal/addrs" - "github.com/hashicorp/terraform/internal/configs/configschema" - "github.com/hashicorp/terraform/internal/lang/marks" - "github.com/hashicorp/terraform/internal/terminal" - "github.com/zclconf/go-cty/cty" -) - -// The output is tested in greater detail in other tests; this suite focuses on -// details specific to the Resource function. -func TestAddResource(t *testing.T) { - t.Run("config only", func(t *testing.T) { - streams, done := terminal.StreamsForTesting(t) - v := addHuman{view: NewView(streams), optional: true} - err := v.Resource( - mustResourceInstanceAddr("test_instance.foo"), - addTestSchemaSensitive(configschema.NestingSingle), - addrs.NewDefaultLocalProviderConfig("mytest"), cty.NilVal, - ) - if err != nil { - t.Fatal(err.Error()) - } - - expected := `# NOTE: The "terraform add" command is currently experimental and offers only a -# starting point for your resource configuration, with some limitations. -# -# The behavior of this command may change in future based on feedback, possibly -# in incompatible ways. We don't recommend building automation around this -# command at this time. If you have feedback about this command, please open -# a feature request issue in the Terraform GitHub repository. 
-resource "test_instance" "foo" { - provider = mytest - - ami = null # OPTIONAL string - disks = { # OPTIONAL object - mount_point = null # OPTIONAL string - size = null # OPTIONAL string - } - id = null # OPTIONAL string - root_block_device { # OPTIONAL block - volume_type = null # OPTIONAL string - } -} -` - output := done(t) - if output.Stdout() != expected { - t.Errorf("wrong result: %s", cmp.Diff(expected, output.Stdout())) - } - }) - - t.Run("from state", func(t *testing.T) { - streams, done := terminal.StreamsForTesting(t) - v := addHuman{view: NewView(streams), optional: true} - val := cty.ObjectVal(map[string]cty.Value{ - "ami": cty.StringVal("ami-123456789"), - "disks": cty.ObjectVal(map[string]cty.Value{ - "mount_point": cty.StringVal("/mnt/foo"), - "size": cty.StringVal("50GB"), - }), - }) - - err := v.Resource( - mustResourceInstanceAddr("test_instance.foo"), - addTestSchemaSensitive(configschema.NestingSingle), - addrs.NewDefaultLocalProviderConfig("mytest"), val, - ) - if err != nil { - t.Fatal(err.Error()) - } - - expected := `# NOTE: The "terraform add" command is currently experimental and offers only a -# starting point for your resource configuration, with some limitations. -# -# The behavior of this command may change in future based on feedback, possibly -# in incompatible ways. We don't recommend building automation around this -# command at this time. If you have feedback about this command, please open -# a feature request issue in the Terraform GitHub repository. -resource "test_instance" "foo" { - provider = mytest - - ami = "ami-123456789" - disks = {} # sensitive - id = null -} -` - output := done(t) - if output.Stdout() != expected { - t.Errorf("wrong result: %s", cmp.Diff(expected, output.Stdout())) - } - }) - -} - -func TestAdd_writeConfigAttributes(t *testing.T) { - tests := map[string]struct { - attrs map[string]*configschema.Attribute - expected string - }{ - "empty returns nil": { - map[string]*configschema.Attribute{}, - "", - }, - "attributes": { - map[string]*configschema.Attribute{ - "ami": { - Type: cty.Number, - Required: true, - }, - "boot_disk": { - Type: cty.String, - Optional: true, - }, - "password": { - Type: cty.String, - Optional: true, - Sensitive: true, // sensitivity is ignored when printing blank templates - }, - }, - `ami = null # REQUIRED number -boot_disk = null # OPTIONAL string -password = null # OPTIONAL string -`, - }, - "attributes with nested types": { - map[string]*configschema.Attribute{ - "ami": { - Type: cty.Number, - Required: true, - }, - "disks": { - NestedType: &configschema.Object{ - Nesting: configschema.NestingSingle, - Attributes: map[string]*configschema.Attribute{ - "size": { - Type: cty.Number, - Optional: true, - }, - "mount_point": { - Type: cty.String, - Required: true, - }, - }, - }, - Optional: true, - }, - }, - `ami = null # REQUIRED number -disks = { # OPTIONAL object - mount_point = null # REQUIRED string - size = null # OPTIONAL number -} -`, - }, - } - - v := addHuman{optional: true} - - for name, test := range tests { - t.Run(name, func(t *testing.T) { - var buf strings.Builder - if err := v.writeConfigAttributes(&buf, test.attrs, 0); err != nil { - t.Errorf("unexpected error") - } - if buf.String() != test.expected { - t.Errorf("wrong result: %s", cmp.Diff(test.expected, buf.String())) - } - }) - } -} - -func TestAdd_writeConfigAttributesFromExisting(t *testing.T) { - attrs := map[string]*configschema.Attribute{ - "ami": { - Type: cty.Number, - Required: true, - }, - "boot_disk": { - Type: cty.String, 
- Optional: true, - }, - "password": { - Type: cty.String, - Optional: true, - Sensitive: true, - }, - "disks": { - NestedType: &configschema.Object{ - Nesting: configschema.NestingSingle, - Attributes: map[string]*configschema.Attribute{ - "size": { - Type: cty.Number, - Optional: true, - }, - "mount_point": { - Type: cty.String, - Required: true, - }, - }, - }, - Optional: true, - }, - } - - tests := map[string]struct { - attrs map[string]*configschema.Attribute - val cty.Value - expected string - }{ - "empty returns nil": { - map[string]*configschema.Attribute{}, - cty.NilVal, - "", - }, - "mixed attributes": { - attrs, - cty.ObjectVal(map[string]cty.Value{ - "ami": cty.NumberIntVal(123456), - "boot_disk": cty.NullVal(cty.String), - "password": cty.StringVal("i am secret"), - "disks": cty.ObjectVal(map[string]cty.Value{ - "size": cty.NumberIntVal(50), - "mount_point": cty.NullVal(cty.String), - }), - }), - `ami = 123456 -boot_disk = null -disks = { - mount_point = null - size = 50 -} -password = null # sensitive -`, - }, - } - - v := addHuman{optional: true} - - for name, test := range tests { - t.Run(name, func(t *testing.T) { - var buf strings.Builder - if err := v.writeConfigAttributesFromExisting(&buf, test.val, test.attrs, 0); err != nil { - t.Errorf("unexpected error") - } - if buf.String() != test.expected { - t.Errorf("wrong result: %s", cmp.Diff(test.expected, buf.String())) - } - }) - } -} - -func TestAdd_writeConfigBlocks(t *testing.T) { - t.Run("NestingSingle", func(t *testing.T) { - v := addHuman{optional: true} - schema := addTestSchema(configschema.NestingSingle) - var buf strings.Builder - v.writeConfigBlocks(&buf, schema.BlockTypes, 0) - - expected := `network_rules { # REQUIRED block - ip_address = null # OPTIONAL string -} -root_block_device { # OPTIONAL block - volume_type = null # OPTIONAL string -} -` - - if !cmp.Equal(buf.String(), expected) { - t.Errorf("wrong output:\n%s", cmp.Diff(expected, buf.String())) - } - }) - - t.Run("NestingList", func(t *testing.T) { - v := addHuman{optional: true} - schema := addTestSchema(configschema.NestingList) - var buf strings.Builder - v.writeConfigBlocks(&buf, schema.BlockTypes, 0) - - expected := `network_rules { # REQUIRED block - ip_address = null # OPTIONAL string -} -root_block_device { # OPTIONAL block - volume_type = null # OPTIONAL string -} -` - - if !cmp.Equal(buf.String(), expected) { - t.Fatalf("wrong output:\n%s", cmp.Diff(expected, buf.String())) - } - }) - - t.Run("NestingSet", func(t *testing.T) { - v := addHuman{optional: true} - schema := addTestSchema(configschema.NestingSet) - var buf strings.Builder - v.writeConfigBlocks(&buf, schema.BlockTypes, 0) - - expected := `network_rules { # REQUIRED block - ip_address = null # OPTIONAL string -} -root_block_device { # OPTIONAL block - volume_type = null # OPTIONAL string -} -` - - if !cmp.Equal(buf.String(), expected) { - t.Fatalf("wrong output:\n%s", cmp.Diff(expected, buf.String())) - } - }) - - t.Run("NestingMap", func(t *testing.T) { - v := addHuman{optional: true} - schema := addTestSchema(configschema.NestingMap) - var buf strings.Builder - v.writeConfigBlocks(&buf, schema.BlockTypes, 0) - - expected := `network_rules "key" { # REQUIRED block - ip_address = null # OPTIONAL string -} -root_block_device "key" { # OPTIONAL block - volume_type = null # OPTIONAL string -} -` - - if !cmp.Equal(buf.String(), expected) { - t.Fatalf("wrong output:\n%s", cmp.Diff(expected, buf.String())) - } - }) -} - -func TestAdd_writeConfigBlocksFromExisting(t *testing.T) { - - 
t.Run("NestingSingle", func(t *testing.T) { - v := addHuman{optional: true} - val := cty.ObjectVal(map[string]cty.Value{ - "root_block_device": cty.ObjectVal(map[string]cty.Value{ - "volume_type": cty.StringVal("foo"), - }), - }) - schema := addTestSchema(configschema.NestingSingle) - var buf strings.Builder - v.writeConfigBlocksFromExisting(&buf, val, schema.BlockTypes, 0) - - expected := `root_block_device { - volume_type = "foo" -} -` - - if !cmp.Equal(buf.String(), expected) { - t.Errorf("wrong output:\n%s", cmp.Diff(expected, buf.String())) - } - }) - - t.Run("NestingSingle_marked_attr", func(t *testing.T) { - v := addHuman{optional: true} - val := cty.ObjectVal(map[string]cty.Value{ - "root_block_device": cty.ObjectVal(map[string]cty.Value{ - "volume_type": cty.StringVal("foo").Mark(marks.Sensitive), - }), - }) - schema := addTestSchema(configschema.NestingSingle) - var buf strings.Builder - v.writeConfigBlocksFromExisting(&buf, val, schema.BlockTypes, 0) - - expected := `root_block_device { - volume_type = null # sensitive -} -` - - if !cmp.Equal(buf.String(), expected) { - t.Errorf("wrong output:\n%s", cmp.Diff(expected, buf.String())) - } - }) - - t.Run("NestingSingle_entirely_marked", func(t *testing.T) { - v := addHuman{optional: true} - val := cty.ObjectVal(map[string]cty.Value{ - "root_block_device": cty.ObjectVal(map[string]cty.Value{ - "volume_type": cty.StringVal("foo"), - }), - }).Mark(marks.Sensitive) - schema := addTestSchema(configschema.NestingSingle) - var buf strings.Builder - v.writeConfigBlocksFromExisting(&buf, val, schema.BlockTypes, 0) - - expected := `root_block_device {} # sensitive -` - - if !cmp.Equal(buf.String(), expected) { - t.Errorf("wrong output:\n%s", cmp.Diff(expected, buf.String())) - } - }) - - t.Run("NestingList", func(t *testing.T) { - v := addHuman{optional: true} - val := cty.ObjectVal(map[string]cty.Value{ - "root_block_device": cty.ListVal([]cty.Value{ - cty.ObjectVal(map[string]cty.Value{ - "volume_type": cty.StringVal("foo"), - }), - cty.ObjectVal(map[string]cty.Value{ - "volume_type": cty.StringVal("bar"), - }), - }), - }) - schema := addTestSchema(configschema.NestingList) - var buf strings.Builder - v.writeConfigBlocksFromExisting(&buf, val, schema.BlockTypes, 0) - - expected := `root_block_device { - volume_type = "foo" -} -root_block_device { - volume_type = "bar" -} -` - - if !cmp.Equal(buf.String(), expected) { - t.Fatalf("wrong output:\n%s", cmp.Diff(expected, buf.String())) - } - }) - - t.Run("NestingList_marked_attr", func(t *testing.T) { - v := addHuman{optional: true} - val := cty.ObjectVal(map[string]cty.Value{ - "root_block_device": cty.ListVal([]cty.Value{ - cty.ObjectVal(map[string]cty.Value{ - "volume_type": cty.StringVal("foo").Mark(marks.Sensitive), - }), - cty.ObjectVal(map[string]cty.Value{ - "volume_type": cty.StringVal("bar"), - }), - }), - }) - schema := addTestSchema(configschema.NestingList) - var buf strings.Builder - v.writeConfigBlocksFromExisting(&buf, val, schema.BlockTypes, 0) - - expected := `root_block_device { - volume_type = null # sensitive -} -root_block_device { - volume_type = "bar" -} -` - - if !cmp.Equal(buf.String(), expected) { - t.Fatalf("wrong output:\n%s", cmp.Diff(expected, buf.String())) - } - }) - - t.Run("NestingList_entirely_marked", func(t *testing.T) { - v := addHuman{optional: true} - val := cty.ObjectVal(map[string]cty.Value{ - "root_block_device": cty.ListVal([]cty.Value{ - cty.ObjectVal(map[string]cty.Value{ - "volume_type": cty.StringVal("foo"), - }), - 
cty.ObjectVal(map[string]cty.Value{ - "volume_type": cty.StringVal("bar"), - }), - }).Mark(marks.Sensitive), - }) - schema := addTestSchema(configschema.NestingList) - var buf strings.Builder - v.writeConfigBlocksFromExisting(&buf, val, schema.BlockTypes, 0) - - expected := `root_block_device {} # sensitive -` - - if !cmp.Equal(buf.String(), expected) { - t.Fatalf("wrong output:\n%s", cmp.Diff(expected, buf.String())) - } - }) - - t.Run("NestingSet", func(t *testing.T) { - v := addHuman{optional: true} - val := cty.ObjectVal(map[string]cty.Value{ - "root_block_device": cty.SetVal([]cty.Value{ - cty.ObjectVal(map[string]cty.Value{ - "volume_type": cty.StringVal("foo"), - }), - cty.ObjectVal(map[string]cty.Value{ - "volume_type": cty.StringVal("bar"), - }), - }), - }) - schema := addTestSchema(configschema.NestingSet) - var buf strings.Builder - v.writeConfigBlocksFromExisting(&buf, val, schema.BlockTypes, 0) - - expected := `root_block_device { - volume_type = "bar" -} -root_block_device { - volume_type = "foo" -} -` - - if !cmp.Equal(buf.String(), expected) { - t.Fatalf("wrong output:\n%s", cmp.Diff(expected, buf.String())) - } - }) - - t.Run("NestingSet_marked", func(t *testing.T) { - v := addHuman{optional: true} - // In cty.Sets, the entire set ends up marked if any element is marked. - val := cty.ObjectVal(map[string]cty.Value{ - "root_block_device": cty.SetVal([]cty.Value{ - cty.ObjectVal(map[string]cty.Value{ - "volume_type": cty.StringVal("foo"), - }), - cty.ObjectVal(map[string]cty.Value{ - "volume_type": cty.StringVal("bar"), - }), - }).Mark(marks.Sensitive), - }) - schema := addTestSchema(configschema.NestingSet) - var buf strings.Builder - v.writeConfigBlocksFromExisting(&buf, val, schema.BlockTypes, 0) - - // When the entire set of blocks is sensitive, we only print one block. 
- expected := `root_block_device {} # sensitive -` - - if !cmp.Equal(buf.String(), expected) { - t.Fatalf("wrong output:\n%s", cmp.Diff(expected, buf.String())) - } - }) - - t.Run("NestingMap", func(t *testing.T) { - v := addHuman{optional: true} - val := cty.ObjectVal(map[string]cty.Value{ - "root_block_device": cty.MapVal(map[string]cty.Value{ - "1": cty.ObjectVal(map[string]cty.Value{ - "volume_type": cty.StringVal("foo"), - }), - "2": cty.ObjectVal(map[string]cty.Value{ - "volume_type": cty.StringVal("bar"), - }), - }), - }) - schema := addTestSchema(configschema.NestingMap) - var buf strings.Builder - v.writeConfigBlocksFromExisting(&buf, val, schema.BlockTypes, 0) - - expected := `root_block_device "1" { - volume_type = "foo" -} -root_block_device "2" { - volume_type = "bar" -} -` - - if !cmp.Equal(buf.String(), expected) { - t.Fatalf("wrong output:\n%s", cmp.Diff(expected, buf.String())) - } - }) - - t.Run("NestingMap_marked", func(t *testing.T) { - v := addHuman{optional: true} - val := cty.ObjectVal(map[string]cty.Value{ - "root_block_device": cty.MapVal(map[string]cty.Value{ - "1": cty.ObjectVal(map[string]cty.Value{ - "volume_type": cty.StringVal("foo").Mark(marks.Sensitive), - }), - "2": cty.ObjectVal(map[string]cty.Value{ - "volume_type": cty.StringVal("bar"), - }), - }), - }) - schema := addTestSchema(configschema.NestingMap) - var buf strings.Builder - v.writeConfigBlocksFromExisting(&buf, val, schema.BlockTypes, 0) - - expected := `root_block_device "1" { - volume_type = null # sensitive -} -root_block_device "2" { - volume_type = "bar" -} -` - - if !cmp.Equal(buf.String(), expected) { - t.Fatalf("wrong output:\n%s", cmp.Diff(expected, buf.String())) - } - }) - - t.Run("NestingMap_entirely_marked", func(t *testing.T) { - v := addHuman{optional: true} - val := cty.ObjectVal(map[string]cty.Value{ - "root_block_device": cty.MapVal(map[string]cty.Value{ - "1": cty.ObjectVal(map[string]cty.Value{ - "volume_type": cty.StringVal("foo"), - }), - "2": cty.ObjectVal(map[string]cty.Value{ - "volume_type": cty.StringVal("bar"), - }), - }).Mark(marks.Sensitive), - }) - schema := addTestSchema(configschema.NestingMap) - var buf strings.Builder - v.writeConfigBlocksFromExisting(&buf, val, schema.BlockTypes, 0) - - expected := `root_block_device {} # sensitive -` - - if !cmp.Equal(buf.String(), expected) { - t.Fatalf("wrong output:\n%s", cmp.Diff(expected, buf.String())) - } - }) - - t.Run("NestingMap_marked_elem", func(t *testing.T) { - v := addHuman{optional: true} - val := cty.ObjectVal(map[string]cty.Value{ - "root_block_device": cty.MapVal(map[string]cty.Value{ - "1": cty.ObjectVal(map[string]cty.Value{ - "volume_type": cty.StringVal("foo"), - }), - "2": cty.ObjectVal(map[string]cty.Value{ - "volume_type": cty.StringVal("bar"), - }).Mark(marks.Sensitive), - }), - }) - schema := addTestSchema(configschema.NestingMap) - var buf strings.Builder - v.writeConfigBlocksFromExisting(&buf, val, schema.BlockTypes, 0) - - expected := `root_block_device "1" { - volume_type = "foo" -} -root_block_device "2" {} # sensitive -` - - if !cmp.Equal(buf.String(), expected) { - t.Fatalf("wrong output:\n%s", cmp.Diff(expected, buf.String())) - } - }) -} - -func TestAdd_writeConfigNestedTypeAttribute(t *testing.T) { - t.Run("NestingSingle", func(t *testing.T) { - v := addHuman{optional: true} - schema := addTestSchema(configschema.NestingSingle) - var buf strings.Builder - v.writeConfigNestedTypeAttribute(&buf, "disks", schema.Attributes["disks"], 0) - - expected := `disks = { # OPTIONAL object - 
mount_point = null # OPTIONAL string - size = null # OPTIONAL string -} -` - - if !cmp.Equal(buf.String(), expected) { - t.Fatalf("wrong output:\n%s", cmp.Diff(expected, buf.String())) - } - }) - - t.Run("NestingList", func(t *testing.T) { - v := addHuman{optional: true} - schema := addTestSchema(configschema.NestingList) - var buf strings.Builder - v.writeConfigNestedTypeAttribute(&buf, "disks", schema.Attributes["disks"], 0) - - expected := `disks = [{ # OPTIONAL list of object - mount_point = null # OPTIONAL string - size = null # OPTIONAL string -}] -` - - if !cmp.Equal(buf.String(), expected) { - t.Fatalf("wrong output:\n%s", cmp.Diff(expected, buf.String())) - } - }) - - t.Run("NestingSet", func(t *testing.T) { - v := addHuman{optional: true} - schema := addTestSchema(configschema.NestingSet) - var buf strings.Builder - v.writeConfigNestedTypeAttribute(&buf, "disks", schema.Attributes["disks"], 0) - - expected := `disks = [{ # OPTIONAL set of object - mount_point = null # OPTIONAL string - size = null # OPTIONAL string -}] -` - - if !cmp.Equal(buf.String(), expected) { - t.Fatalf("wrong output:\n%s", cmp.Diff(expected, buf.String())) - } - }) - - t.Run("NestingMap", func(t *testing.T) { - v := addHuman{optional: true} - schema := addTestSchema(configschema.NestingMap) - var buf strings.Builder - v.writeConfigNestedTypeAttribute(&buf, "disks", schema.Attributes["disks"], 0) - - expected := `disks = { # OPTIONAL map of object - key = { - mount_point = null # OPTIONAL string - size = null # OPTIONAL string - } -} -` - - if !cmp.Equal(buf.String(), expected) { - t.Fatalf("wrong output:\n%s", cmp.Diff(expected, buf.String())) - } - }) -} - -func TestAdd_WriteConfigNestedTypeAttributeFromExisting(t *testing.T) { - t.Run("NestingSingle", func(t *testing.T) { - v := addHuman{optional: true} - val := cty.ObjectVal(map[string]cty.Value{ - "disks": cty.ObjectVal(map[string]cty.Value{ - "mount_point": cty.StringVal("/mnt/foo"), - "size": cty.StringVal("50GB"), - }), - }) - schema := addTestSchema(configschema.NestingSingle) - var buf strings.Builder - v.writeConfigNestedTypeAttributeFromExisting(&buf, "disks", schema.Attributes["disks"], val, 0) - - expected := `disks = { - mount_point = "/mnt/foo" - size = "50GB" -} -` - - if !cmp.Equal(buf.String(), expected) { - t.Fatalf("wrong output:\n%s", cmp.Diff(expected, buf.String())) - } - }) - - t.Run("NestingSingle_sensitive", func(t *testing.T) { - v := addHuman{optional: true} - val := cty.ObjectVal(map[string]cty.Value{ - "disks": cty.ObjectVal(map[string]cty.Value{ - "mount_point": cty.StringVal("/mnt/foo"), - "size": cty.StringVal("50GB"), - }), - }) - schema := addTestSchemaSensitive(configschema.NestingSingle) - var buf strings.Builder - v.writeConfigNestedTypeAttributeFromExisting(&buf, "disks", schema.Attributes["disks"], val, 0) - - expected := `disks = {} # sensitive -` - - if !cmp.Equal(buf.String(), expected) { - t.Fatalf("wrong output:\n%s", cmp.Diff(expected, buf.String())) - } - }) - - t.Run("NestingList", func(t *testing.T) { - v := addHuman{optional: true} - val := cty.ObjectVal(map[string]cty.Value{ - "disks": cty.ListVal([]cty.Value{ - cty.ObjectVal(map[string]cty.Value{ - "mount_point": cty.StringVal("/mnt/foo"), - "size": cty.StringVal("50GB"), - }), - cty.ObjectVal(map[string]cty.Value{ - "mount_point": cty.StringVal("/mnt/bar"), - "size": cty.StringVal("250GB"), - }), - }), - }) - - schema := addTestSchema(configschema.NestingList) - var buf strings.Builder - v.writeConfigNestedTypeAttributeFromExisting(&buf, "disks", 
schema.Attributes["disks"], val, 0) - - expected := `disks = [ - { - mount_point = "/mnt/foo" - size = "50GB" - }, - { - mount_point = "/mnt/bar" - size = "250GB" - }, -] -` - - if !cmp.Equal(buf.String(), expected) { - t.Fatalf("wrong output:\n%s", cmp.Diff(expected, buf.String())) - } - }) - - t.Run("NestingList - marked", func(t *testing.T) { - v := addHuman{optional: true} - val := cty.ObjectVal(map[string]cty.Value{ - "disks": cty.ListVal([]cty.Value{ - cty.ObjectVal(map[string]cty.Value{ - "mount_point": cty.StringVal("/mnt/foo"), - "size": cty.StringVal("50GB").Mark(marks.Sensitive), - }), - // This is an odd example, where the entire element is marked. - cty.ObjectVal(map[string]cty.Value{ - "mount_point": cty.StringVal("/mnt/bar"), - "size": cty.StringVal("250GB"), - }).Mark(marks.Sensitive), - }), - }) - - schema := addTestSchema(configschema.NestingList) - var buf strings.Builder - v.writeConfigNestedTypeAttributeFromExisting(&buf, "disks", schema.Attributes["disks"], val, 0) - - expected := `disks = [ - { - mount_point = "/mnt/foo" - size = null # sensitive - }, - {}, # sensitive -] -` - - if !cmp.Equal(buf.String(), expected) { - t.Fatalf("wrong output:\n%s", cmp.Diff(expected, buf.String())) - } - }) - - t.Run("NestingList - entirely marked", func(t *testing.T) { - v := addHuman{optional: true} - val := cty.ObjectVal(map[string]cty.Value{ - "disks": cty.ListVal([]cty.Value{ - cty.ObjectVal(map[string]cty.Value{ - "mount_point": cty.StringVal("/mnt/foo"), - "size": cty.StringVal("50GB"), - }), - // This is an odd example, where the entire element is marked. - cty.ObjectVal(map[string]cty.Value{ - "mount_point": cty.StringVal("/mnt/bar"), - "size": cty.StringVal("250GB"), - }), - }), - }).Mark(marks.Sensitive) - - schema := addTestSchema(configschema.NestingList) - var buf strings.Builder - v.writeConfigNestedTypeAttributeFromExisting(&buf, "disks", schema.Attributes["disks"], val, 0) - - expected := `disks = [] # sensitive -` - - if !cmp.Equal(buf.String(), expected) { - t.Fatalf("wrong output:\n%s", cmp.Diff(expected, buf.String())) - } - }) - - t.Run("NestingMap", func(t *testing.T) { - v := addHuman{optional: true} - val := cty.ObjectVal(map[string]cty.Value{ - "disks": cty.MapVal(map[string]cty.Value{ - "foo": cty.ObjectVal(map[string]cty.Value{ - "mount_point": cty.StringVal("/mnt/foo"), - "size": cty.StringVal("50GB"), - }), - "bar": cty.ObjectVal(map[string]cty.Value{ - "mount_point": cty.StringVal("/mnt/bar"), - "size": cty.StringVal("250GB"), - }), - }), - }) - schema := addTestSchema(configschema.NestingMap) - var buf strings.Builder - v.writeConfigNestedTypeAttributeFromExisting(&buf, "disks", schema.Attributes["disks"], val, 0) - - expected := `disks = { - bar = { - mount_point = "/mnt/bar" - size = "250GB" - } - foo = { - mount_point = "/mnt/foo" - size = "50GB" - } -} -` - - if !cmp.Equal(buf.String(), expected) { - t.Fatalf("wrong output:\n%s", cmp.Diff(expected, buf.String())) - } - }) - - t.Run("NestingMap - marked", func(t *testing.T) { - v := addHuman{optional: true} - val := cty.ObjectVal(map[string]cty.Value{ - "disks": cty.MapVal(map[string]cty.Value{ - "foo": cty.ObjectVal(map[string]cty.Value{ - "mount_point": cty.StringVal("/mnt/foo"), - "size": cty.StringVal("50GB").Mark(marks.Sensitive), - }), - "bar": cty.ObjectVal(map[string]cty.Value{ - "mount_point": cty.StringVal("/mnt/bar"), - "size": cty.StringVal("250GB"), - }).Mark(marks.Sensitive), - }), - }) - schema := addTestSchema(configschema.NestingMap) - var buf strings.Builder - 
v.writeConfigNestedTypeAttributeFromExisting(&buf, "disks", schema.Attributes["disks"], val, 0) - - expected := `disks = { - bar = {} # sensitive - foo = { - mount_point = "/mnt/foo" - size = null # sensitive - } -} -` - - if !cmp.Equal(buf.String(), expected) { - t.Fatalf("wrong output:\n%s", cmp.Diff(expected, buf.String())) - } - }) -} - -func addTestSchema(nesting configschema.NestingMode) *configschema.Block { - return &configschema.Block{ - Attributes: map[string]*configschema.Attribute{ - "id": {Type: cty.String, Optional: true, Computed: true}, - // Attributes which are neither optional nor required should not print. - "uuid": {Type: cty.String, Computed: true}, - "ami": {Type: cty.String, Optional: true}, - "disks": { - NestedType: &configschema.Object{ - Attributes: map[string]*configschema.Attribute{ - "mount_point": {Type: cty.String, Optional: true}, - "size": {Type: cty.String, Optional: true}, - }, - Nesting: nesting, - }, - }, - }, - BlockTypes: map[string]*configschema.NestedBlock{ - "root_block_device": { - Block: configschema.Block{ - Attributes: map[string]*configschema.Attribute{ - "volume_type": { - Type: cty.String, - Optional: true, - Computed: true, - }, - }, - }, - Nesting: nesting, - }, - "network_rules": { - Block: configschema.Block{ - Attributes: map[string]*configschema.Attribute{ - "ip_address": { - Type: cty.String, - Optional: true, - Computed: true, - }, - }, - }, - Nesting: nesting, - MinItems: 1, - }, - }, - } -} - -// addTestSchemaSensitive returns a schema with a sensitive NestedType and a -// NestedBlock with sensitive attributes. -func addTestSchemaSensitive(nesting configschema.NestingMode) *configschema.Block { - return &configschema.Block{ - Attributes: map[string]*configschema.Attribute{ - "id": {Type: cty.String, Optional: true, Computed: true}, - // Attributes which are neither optional nor required should not print. - "uuid": {Type: cty.String, Computed: true}, - "ami": {Type: cty.String, Optional: true}, - "disks": { - NestedType: &configschema.Object{ - Attributes: map[string]*configschema.Attribute{ - "mount_point": {Type: cty.String, Optional: true}, - "size": {Type: cty.String, Optional: true}, - }, - Nesting: nesting, - }, - Sensitive: true, - }, - }, - BlockTypes: map[string]*configschema.NestedBlock{ - "root_block_device": { - Block: configschema.Block{ - Attributes: map[string]*configschema.Attribute{ - "volume_type": { - Type: cty.String, - Optional: true, - Computed: true, - Sensitive: true, - }, - }, - }, - Nesting: nesting, - }, - }, - } -} - -func mustResourceInstanceAddr(s string) addrs.AbsResourceInstance { - addr, diags := addrs.ParseAbsResourceInstanceStr(s) - if diags.HasErrors() { - panic(diags.Err()) - } - return addr -} diff --git a/website/docs/cli/commands/add.html.md b/website/docs/cli/commands/add.html.md deleted file mode 100644 index 73364f06deee..000000000000 --- a/website/docs/cli/commands/add.html.md +++ /dev/null @@ -1,81 +0,0 @@ ---- -layout: "docs" -page_title: "Command: add" -sidebar_current: "docs-commands-add" -description: |- - The `terraform add` command generates resource configuration templates. ---- - -# Command: add - -The `terraform add` command generates a starting point for the configuration -of a particular resource. - -~> **Warning:** This command is currently experimental. Its exact behavior and -command line arguments are likely to change in future releases based on -feedback. 
We don't recommend building automation around the current design of -this command, but it's safe to use directly in a development environment -setting. - -By default, Terraform will include only the subset of arguments that are marked -by the provider as being required, and will use `null` as a placeholder for -their values. You can then replace `null` with suitable expressions in order -to make the arguments valid. - -If you use the `-optional` option then Terraform will also include arguments -that the provider declares as optional. You can then either write a suitable -expression for each argument or remove the arguments you wish to leave unset. - -If you use the `-from-state` option then Terraform will instead generate a -configuration containing expressions which will produce the same values as -the corresponding resource instance object already tracked in the Terraform -state, if for example you've previously imported the object using -[`terraform import`](import.html). - --> **Note:** If you use `-from-state`, the result will not include expressions -for any values which are marked as sensitive in the state. If you want to -see those, you can inspect the state data directly using -`terraform state show ADDRESS`. - -## Usage - -Usage: `terraform add [options] ADDRESS` - -This command requires an address that points to a resource which does not -already exist in the configuration. Addresses are in -[resource addressing format](/docs/cli/state/resource-addressing.html). - -This command accepts the following options: - -* `-from-state` - Fill the template with values from an existing resource - instance already tracked in the state. By default, Terraform will emit only - placeholder values based on the resource type. - -* `-optional` - Include optional arguments. By default, the result will - include only required arguments. - -* `-out=FILENAME` - Write the template to a file, instead of to standard - output. - -* `-provider=provider` - Override the provider configuration for the resource, -using the absolute provider configuration address syntax. - - Absolute provider configuration syntax uses the full source address of - the provider, rather than a local name declared in the relevant module. - For example, to select the aliased provider configuration "us-east-1" - of the official AWS provider, use: - - ``` - -provider='provider["hashicorp/aws"].us-east-1' - ``` - - or, if you are using the Windows command prompt, use Windows-style escaping - for the quotes inside the address: - - ``` - -provider=provider[\"hashicorp/aws\"].us-east-1 - ``` - - This is incompatible with `-from-state`, because in that case Terraform - will use the provider configuration already selected in the state, which - is the provider configuration that most recently managed the object. 
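For reference, the options documented above combine into invocations like the ones below. This is a minimal sketch of the command this patch removes; the `docker_container.web` address and the `web.tf` filename are hypothetical placeholders, not values taken from the patch.

```
# Emit a starting-point configuration for a resource that is not yet in
# the configuration, including optional arguments, and write it to a file
# (hypothetical address and filename):
terraform add -optional -out=web.tf docker_container.web

# Generate expressions matching an object already tracked in state,
# for example after running "terraform import":
terraform add -from-state docker_container.web
```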
diff --git a/website/docs/cli/commands/index.html.md b/website/docs/cli/commands/index.html.md index 1b01d60d1339..6fb5919a77ba 100644 --- a/website/docs/cli/commands/index.html.md +++ b/website/docs/cli/commands/index.html.md @@ -39,7 +39,6 @@ Main commands: destroy Destroy previously-created infrastructure All other commands: - add Generate a resource configuration template console Try Terraform expressions at an interactive command prompt fmt Reformat your configuration in the standard style force-unlock Release a stuck lock on the current workspace diff --git a/website/layouts/docs.erb b/website/layouts/docs.erb index da4f0f6c9176..602a490e565d 100644 --- a/website/layouts/docs.erb +++ b/website/layouts/docs.erb @@ -74,10 +74,6 @@ Overview -
[docs.erb sidebar hunks: this hunk and the following @@ -361,10 +357,6 @@ hunk remove the "add" navigation entry
from the CLI commands sidebar, once from the main command list next to the "console" entry and once from the
"Alphabetical List of Commands" section; the surrounding HTML list markup is not preserved here.]